Think Big, Think Global (Cloud Next '19)
We're really glad that all of you could make it; hope you're enjoying Cloud Next 2019. Now, over the past few years you've heard us talk a lot about how Google Cloud is built on top of our global network infrastructure. This year we wanted to take the opportunity to dive deeper into this aspect of the global network design, talk about how it enables features in global virtual networks, or VPCs, and hopefully encourage you to think big and to think global when you're planning your own application deployments. We have a 50-minute talk today. My name is Neha Pattan and I'm a software engineer at Google. I'm going to be joined by Marshall Vale, who is a product manager on the Cloud DNS team, and by Ed Hammond, who is the senior enterprise architect on the Cardinal Health team; he's going to be sharing with us the inside scoop on how Cardinal Health deploys VPCs.

So this is basically what cloud consists of: cloud is divided into regions, which further get subdivided into zones. A region is a geographic location on a certain continent where the round-trip time, or RTT, from one VM to another is typically under one millisecond. A region is typically divided into three or four zones, and a zone is a geographic location within a region which has its own fully isolated and independent failure domain. So no two machines that are in different zones or in different regions share the same fate when it comes to failure; they're definitely in different data centers. In GCP, at this time, we have 19 regions and 58 zones.

So how does Google's network infrastructure power this? Google's network infrastructure basically consists of three main types of networks. The first is the data center network; this is what connects all the machines in a data center together. The second is a software-defined private WAN that we call B4, which connects all the data centers together. And the
third is an SDN-based public WAN for user-facing traffic in our network. So a machine basically gets connected from the internet via the public WAN, and gets connected to other machines in other data centers over the private WAN. So when you send a packet from your virtual machine running in cloud, let's say in North America, to a GCS bucket in Asia, for example, the packet doesn't leave the network backbone; it basically doesn't traverse the internet, it traverses Google's network.

Now, one of the things that I would like to mention here is that, in addition to the peering routers at the edge of our network, we also deploy network load balancers and layer 7 reverse proxies. The layer 7 reverse proxies basically help us terminate the user's TCP or SSL connection at a location closest to the user. This is really important because, as you know, establishing an HTTPS connection requires two network round trips between the client and the server, and so it's really important to reduce the RTT between the client and the server. We do this by bringing the server closer to the client: we basically terminate the end user's TCP or SSL connection at a location that is closest to the end user.

So this is basically what comprises the global Google network: with 134 network edge locations, presence in over 200 countries and territories, and content delivered through Google Cloud CDN, you get access to the same functionality that also powers all of Google's services, like Search, Maps, Gmail, and YouTube, each of which is used by over a billion users worldwide.

This is a snapshot of our footprint in GCP. As I mentioned before, we have 19 regions. We have also announced two new regions that will be turning up before the end of this year: we are turning up regions in Japan and South Korea this year.
Before the end of next year we will also be turning up two more regions, in Salt Lake City in the USA and in Jakarta in Indonesia. As I mentioned before, we have 134 POPs, or points of presence, and 96 CDN locations. Our data centers are connected to each other through hundreds of thousands of miles of fiber-optic cable. We also have 13 subsea cable investments. We have 81 interconnect locations; these are basically sites where you can physically peer with us directly in order to use either Dedicated or Partner Interconnect.

So why is all of this important to you? Why do you care that the network is actually global? Now, if you were to deploy your applications, you would do so by deploying the applications to multiple VMs in a single zone, in order to protect yourself against single-VM or single-machine failures. In order to protect yourself against single-zone failures, you can replicate your application across multiple zones in the region. This basically gives you added redundancy, and so increases the availability of your application. In order to further protect yourself against regional outages, so things like natural disasters or other such rare events, you can then replicate your application across multiple regions, and this is how you get a global deployment.

Now, what is critical here is that with global deployments you are assured that all the traffic stays on Google's network backbone and does not leave Google's network backbone. So you get access to all of your compute and storage resources globally and privately, which means that you don't need to assign public IPs to your VMs, and you don't need to set up expensive VPN or peering connections in order to stitch regional VPCs together. Now, another way of designing for robustness is to use load balancing. Due
to the global nature of Google's network, we are able to offer global load balancing, so that you get a global IP address assigned to your load-balanced application. So a user in Taiwan sees the same global IP address for your application as a user in Texas, and they both get routed to the closest healthy backends. The user in Texas may get routed to the closest healthy backend, which may be running in Portland, Oregon. But if there are no healthy backends running in any region in North America, they may get routed to the next closest healthy backend, which may be across the Atlantic. The important thing here is that the end user's TCP or SSL connection will get terminated close to the location of the user before getting encrypted and then routed over the Google network backbone to the data center in Europe. We use the same DDoS protection system in cloud as we do for the rest of Google, and so you benefit from the same perimeter security that the rest of Google's services have.
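The setup described here, a single anycast IP fronting backends in multiple regions, could be sketched with gcloud along the following lines. This is a minimal illustration, not the speakers' exact deployment; all resource names are hypothetical, and it assumes an instance group already exists in each region.

```shell
# Hypothetical names throughout; sketch of a global external HTTP load balancer.
gcloud compute addresses create web-ip --global          # single global anycast IP
gcloud compute health-checks create http web-hc --port=80
gcloud compute backend-services create web-bs \
    --global --protocol=HTTP --health-checks=web-hc
# Backends in two regions; traffic is routed to the closest healthy one.
gcloud compute backend-services add-backend web-bs --global \
    --instance-group=ig-us --instance-group-region=us-west1
gcloud compute backend-services add-backend web-bs --global \
    --instance-group=ig-eu --instance-group-region=europe-west1
gcloud compute url-maps create web-map --default-service=web-bs
gcloud compute target-http-proxies create web-proxy --url-map=web-map
gcloud compute forwarding-rules create web-fr \
    --global --address=web-ip --target-http-proxy=web-proxy --ports=80
```

Both the Taiwan and the Texas user would then connect to the one address behind `web-fr`, with proximity-based routing handled by the network.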
We are able to assign a single global IP address to our load-balanced applications using stabilized anycast. Anycast, as many of you would know, is an addressing and routing methodology that allows you to route datagrams from a single sender to the receiver that is closest to the sender, among a set of multiple receivers that are all programmed to serve traffic and to receive traffic on the same IP address. We use stabilized anycast in order to preserve the TCP session despite BGP instability. So if an end user's ISP is recalculating the routes that they are announcing, then there may be a certain amount of instability in the TCP sessions that the end user experiences. As a result of this, the end user's request may get routed to a network load balancer in a location that is not closest to the end user. When this happens, the network load balancer in that location figures out that the client IP is actually closer to a different location, and then forwards that request to that location. So this is how we preserve TCP session stability despite BGP instability.

So that's all about the physical network; how does this enable features in the virtual network? As you know, virtual networks, or VPCs, in Google Cloud are global in nature. You can create your network once, and you can associate network policies once with your network. So things like firewall rules and routing policies can be applied once to your network, and these policies will work seamlessly as you expand to multiple regions and place your compute resources in new regions. If you're an enterprise network administrator, then you can use Shared VPC in order to centrally create and manage your network, while allowing full autonomy to your developers and allowing your organization to scale to hundreds or even thousands of developers or development teams. So let's take a look at an example now.
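The create-once, expand-anywhere pattern just described can be sketched with gcloud. This is a hedged illustration with hypothetical names: one custom-mode VPC, regional subnets carved out as you grow, and a single firewall rule that applies network-wide.

```shell
# Hypothetical names; a single global VPC created once, with regional
# subnets added as the deployment expands into new regions.
gcloud compute networks create my-global-vpc --subnet-mode=custom
gcloud compute networks subnets create subnet-us \
    --network=my-global-vpc --region=us-central1 --range=10.0.0.0/20
gcloud compute networks subnets create subnet-asia \
    --network=my-global-vpc --region=asia-east1 --range=10.0.16.0/20
# One firewall rule, defined once, covers VMs in every region of this VPC.
gcloud compute firewall-rules create allow-internal \
    --network=my-global-vpc --allow=tcp,udp,icmp --source-ranges=10.0.0.0/16
```

Note there is no per-region network object to manage: the VPC itself is global, and only subnets are regional.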
Let's say in your organization you would like to create two types of applications: one is a web application that is internet facing, and the other is a billing application that is not facing the internet. With Shared VPC, you can create a load balancer in your web app project. For reasons that we discussed earlier, you will want to create redundancy, so you create compute resources in multiple regions and then set these as backends of your load balancer. You can then create internal load-balanced applications inside the billing project, and you can use your billing VMs as backends to these internal load balancers; thus you can create multiple tiers of applications in your Shared VPC. You can then associate firewall rules with the VPC in order to restrict traffic.

Due to the global nature of the network, you also get access to Google-managed API services privately, which means that you don't need to assign public IP addresses to your VMs in order to access API services like Cloud Storage, machine learning, Bigtable, Spanner, and many others. This functionality of privately accessing Google-managed API services is now extended to on-prem, so that you are able to access Google-managed API services privately from your on-prem network via VPN or Interconnect.

In order to secure your VPC, you can use network-layer firewall rules. Network-layer firewall rules are stateful and connection tracked. You can create allow or deny firewall rules, in either the ingress or the egress direction, by specifying source IP ranges or destination IP ranges. You can also use tags or service accounts in order to easily group the resources that you are applying firewall rules to.

Now, another really important feature is the ability to enable and disable firewall rules; this is something that we have launched in the past year. So
if you're troubleshooting your network and you would like to find out the root cause of an issue by temporarily disabling a firewall rule and seeing what effect it has on the network, then now you don't have to delete the firewall rule and recreate it; you can simply disable and then re-enable the firewall rule.
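As a quick sketch with a hypothetical rule name, the disable/re-enable cycle looks like this in gcloud:

```shell
# Hypothetical rule; create it once, then toggle enforcement while
# troubleshooting, without ever deleting and recreating the rule.
gcloud compute firewall-rules create allow-ssh \
    --network=my-vpc --allow=tcp:22 --source-ranges=203.0.113.0/24
gcloud compute firewall-rules update allow-ssh --disabled      # rule kept, not enforced
gcloud compute firewall-rules update allow-ssh --no-disabled   # enforced again
```

The rule's definition, tags, and logging settings survive the toggle, which is exactly what makes this safer than delete-and-recreate during an incident.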
Another really important feature that we have launched to GA over the past year is firewall logging. Firewall logging basically allows you to see reports of connection records that get created every time a firewall rule gets applied to a connection. Firewall logs are not sampled, unlike VPC flow logs, but there is a limit on the number of connection records that get exported in a five-second interval, and this limit depends on your machine size. Firewall logs get exported to the Shared VPC host project, so that the security administrator of your organization can view the firewall rules and verify that the firewall rules in the organization are administered and are getting applied correctly. You can export firewall logs to Stackdriver, Cloud Pub/Sub, Cloud Storage, or BigQuery.

Now, as I mentioned, firewall logs consist of connection records, where a connection record gets created every time a firewall rule gets applied. So for VM-to-VM traffic, there would be a connection record that gets created for the egress rule on the sender VM, and for the ingress rule that gets applied on the receiver VM. This is also true for VM-to-VM traffic in the VPC even if the VMs belong to multiple service projects, and for traffic that is entering or leaving your VPC, whether to go to a peered VPC, to go to the internet, or to go to your on-prem through a VPN connection.

Now, if you would like to apply security policies at the edge of our network, then you can do so by creating security policies using Cloud Armor, and what this allows you to do is to specify the rules that should get applied at the edge. So this is perimeter security that gets applied to your load-balanced applications. So let's say you create a load-balanced application; you can now associate security policies created using Cloud Armor with the load balancer, and this
ensures that the traffic that is permitted onto the load-balanced application will conform to the rules that you've specified. So your VMs basically don't see traffic from the senders that you have blacklisted. You can specify Cloud Armor rules using IP blacklists or whitelists; this functionality is generally available now. You can also specify geo-specific rules, or you can use a flexible rules language in order to customize the rules that you would like to specify.

So what does a typical deployment look like? In a typical deployment, you would create an HTTP or HTTPS load balancer; this is a global load-balanced application, and you automatically get defense against DDoS attacks. This is because we implement the DDoS protection at the edge, and so any traffic that is entering your network gets DDoS defense for free. If you would like to apply custom rules for your load-balanced application, then you can use Cloud Armor, and you can associate the Cloud Armor security policy with your load balancer. If you would like to allow access to your load-balanced application only for users that have been granted this access using IAM policies, then you can use a product called Cloud IAP; that's short for Identity-Aware Proxy. What the Identity-Aware Proxy does is check the end user's credentials against IAM policy: it will check whether the end user has been granted IAM access to this load-balanced application, and if they have, then it will allow the traffic to come in. So traffic will basically enter your VPC and be received by the VMs only if it is allowed both by Cloud Armor as well as by the Identity-Aware Proxy. Within the VPC, you can use network-layer firewall rules to specify the
security on your VMs, and you can specify them so that you are allowing traffic only from the load balancer proxies, and you don't have to open your VMs, or the ports on the VMs, to the internet.

Another really important aspect of security design is ensuring that you mitigate the risk of data exfiltration. You can do this now using VPC Service Controls, where you define a service perimeter, or a security perimeter, outside of which your data should not be accessible, or should not be allowed to be copied. With the functionality of allowing private access to Google services from your on-prem network via VPN or Interconnect, the definition of your security perimeter can also be extended to your on-prem.

So that brings us to the next question: how do you access on-prem? There are multiple ways of connecting your virtual network running in the cloud to your on-prem network. You can do so using VPN, and you can either configure VPN using static routes, or you can use Cloud Router in order to dynamically exchange routes using BGP. Or you can use Interconnect, in which case you are directly peering with us; you can use either Dedicated Interconnect, where you control the peering, or Partner Interconnect, where a partner is peering with us and you are paying the partner for the bandwidth that you use. You can create VPN connections or Interconnect attachments in the Shared VPC host project, and the virtual machines in all the service projects that are attached to the shared network will then get access to your on-prem via the VPN connection or the Interconnect.

We are really excited to announce now the option to create highly available VPN connections. With
highly available VPN, you can create two interfaces on your VPN gateway, and you can connect the peer gateway on your on-prem to these two different interfaces. Each of these interfaces gets a different IP address, and this allows redundancy in your connection to your on-prem network. You can either use highly available VPN in active-active mode, in which case you are advertising the routes from your on-prem using the same MED, that is, the same priority, and you would need to use the same base priority as well when advertising those routes to your VPC; in this case, both the tunnels will be used, and the traffic will be ECMP-hashed over both the tunnels. Or you can use it in active-passive mode, where one of the tunnels is used while it is up, and when we figure out that the connectivity is down, then we fall back to the other tunnel.
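A minimal HA VPN setup along these lines might be sketched as follows. All names, addresses, and the shared secret are hypothetical placeholders, and the per-tunnel BGP sessions on the Cloud Router are omitted for brevity.

```shell
# Hypothetical names; an HA VPN gateway with two interfaces, a Cloud Router
# for dynamic (BGP) route exchange, and one tunnel per interface.
gcloud compute vpn-gateways create ha-gw \
    --network=my-vpc --region=us-central1
gcloud compute routers create my-router \
    --network=my-vpc --region=us-central1 --asn=65001
gcloud compute external-vpn-gateways create peer-gw \
    --interfaces=0=203.0.113.10,1=203.0.113.11          # on-prem gateway IPs
gcloud compute vpn-tunnels create tunnel-0 \
    --region=us-central1 --vpn-gateway=ha-gw --interface=0 \
    --peer-external-gateway=peer-gw --peer-external-gateway-interface=0 \
    --router=my-router --ike-version=2 --shared-secret=EXAMPLE_SECRET
gcloud compute vpn-tunnels create tunnel-1 \
    --region=us-central1 --vpn-gateway=ha-gw --interface=1 \
    --peer-external-gateway=peer-gw --peer-external-gateway-interface=1 \
    --router=my-router --ike-version=2 --shared-secret=EXAMPLE_SECRET
# BGP sessions for each tunnel are then configured on my-router; advertising
# identical priorities on both sessions yields the active-active (ECMP) mode.
```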
This basically increases the availability of your VPN connection to your on-prem, so that in a single region you get a four-nines (99.99%) availability for connecting to your on-prem network.

Now, the next thing that we have launched to GA over the past year is Cloud NAT. Cloud NAT is a feature that a lot of our enterprise customers have been asking for and are really excited to use. Cloud NAT is basically a managed NAT solution that allows you to configure access to the internet for your virtual machines without having to give your virtual machines public IPs. We implement outbound NAT; we don't implement inbound NAT, and so this increases the security of your VPC by ensuring that connections cannot be initiated from a malicious user on the internet to your virtual machines running in the cloud. Cloud NAT basically scales seamlessly across VMs and across connections, by handling both static as well as dynamic IP allocation. You can configure NAT, as well, in the Shared VPC host project, in the regions where you have virtual machines that need connectivity to the internet. So let's imagine that you have a package server that you would like to download packages from onto the virtual machines running in your VPC before you can bootstrap them. Now, after you configure the NAT gateways in your Shared VPC host project, the VMs in all the service projects that are attached to the shared VPC will be able to access the package server using the IPs that are managed by the NAT gateway.

One thing that is really important to mention here is that NAT is a control-plane component; it's not a proxy-based solution, and so you get the same bandwidth and performance for your internet connection as you would if you were to give public IPs to your VMs. So NAT basically scales really well: you need only a single NAT gateway, because
it's not a proxy-based solution, in order to manage the NAT IPs for thousands of VMs in that region in your VPC.

Now, if you assigned public IP addresses to your VMs primarily for getting connectivity to the internet, but you also had the added advantage of being able to SSH to your VMs using those public IPs, then you lose that capability when you remove the public IPs and switch to Cloud NAT, because connections cannot be initiated from outside the VPC when you are using Cloud NAT. This is where Cloud IAP comes in; Cloud IAP is something we discussed briefly in the context of global load-balanced applications. Cloud IAP is now extended to have functionality for TCP forwarding, and what this allows you to do is to specify IAM policies on who has SSH access to your VMs. When a request comes in, the proxy will basically check whether the user has been granted IAM permission for SSH access to the VM, and if that check passes, then the SSH connection will be wrapped inside HTTPS and, using TCP forwarding, sent to the remote instance running in your VPC.

Now, coming to the next topic: how do you access managed services? Managed services can be accessed by using VPC peering. So if you want to get full mesh connectivity between two VPCs that are running in cloud, whether they are in different organizations or the same organization, then you can use VPC peering. We're really excited to announce the general availability of private services access. Private services access is a managed solution that allows you to get a private connection to a managed service. The other really cool thing it does is that it allows you, as a service consumer, to specify a global IP range and hand this off to the service provider, so
that all the subnetworks in the service provider's VPC get carved out of this global IP range, and so as a consumer you are able to better plan for the IP ranges that are used for your managed services.

One of the really important features that we have added to VPC peering is the ability to access peered VPCs from your on-prem network. So if you have a VPC and you are accessing a managed service through VPC peering, and you have a VPN connection from your on-prem to your VPC, then you are also able to access the managed services from the on-prem, via VPN or Interconnect, through your VPC. The other feature that we have launched, which is now available in beta, is the ability to control custom route exchanges. So let's take a look at all of these in more detail.

Let's say you have two VPCs: the one on the left is the consumer VPC and the one on the right is a producer VPC, and these two are in different organizations. Once you peer these two VPCs, you get full mesh connectivity between all the virtual machines in both these VPCs, and so you are able to access the managed-service VMs from virtual machines in different regions in your consumer VPC, as well as in different service projects. So basically you get full mesh connectivity.

Now let's imagine an example where the consumer VPC has added custom routes to its routing table. In the consumer VPC's routing table, you can see the default local routes for the subnets that are created in the consumer VPC, and you can see the peer route for the subnet that is created in the peer VPC. You also see two additional static routes: one of them is a static route to the VPN tunnel, so that's basically your route going to the consumer's on-prem, and there's another route that is configured to next-hop to a VM; let's say you have an appliance that is running on that VM. On the peer side, you're
able to see the default local route to the subnet in the peer VPC, and you're also able to see the peer routes to the subnets that are added in the consumer VPC. However, by default, you cannot see the custom routes that get added in the consumer VPC, and so the route to the VPN as well as the route to the VM appliance are not visible in the routing table of the producer VPC by default.

With the ability to exchange custom routes over VPC peering, these routes will now be visible in the producer VPC. So if you enable export of custom routes on the consumer, and you enable import of custom routes on the producer, the producer's routing table will be populated with the static routes that are defined on the consumer VPC. So here you can see that the 10.4/16 route, which goes to the consumer's on-prem via VPN, as well as the static route 10.5/16, which goes to the VM appliance, are now visible in the routing table of the producer VPC. You can disable this by either disabling export of custom routes, or by disabling import of custom routes on the receiving VPC. Both export of custom routes and import of custom routes are disabled by default, and so in order to exchange custom routes you need to enable export on the sender VPC and enable import on the receiver VPC.

Now, the next topic I'll be talking about is that of VPC flow logs. VPC flow logs are basically a feature that we announced and launched last year, and we have enhanced this product in order to give you more control over the flow-log size that gets generated. The default aggregation interval for VPC flow logs is 5 seconds, but now you can configure this and change it to be anywhere between five seconds and up to 15 minutes. You can also configure the flow report sampling: by default the sampling is 50%, and you can now configure it to be anywhere between 0 and 100%. So
this basically allows you to control the flow-log size that gets generated. We also add certain metadata to flow logs, and there is now an option to exclude this metadata.

And with that, I would like to invite Marshall onstage to share with us a few recent announcements on Cloud DNS.

Thank you, Neha. So, my name is Marshall Vale; I am the product manager for Cloud DNS here at Google Cloud. One of the key elements of connecting your resources together in your VPC is, of course, DNS, or the Domain Name System. Today I'm going to give you a summary of the types of capabilities that Cloud DNS provides for your VPC, along with a couple of exciting new announcements. Of course, it all starts with Cloud DNS private zones. Private
zones allow internal DNS resolution for your private network. Now, it's important to keep your internal resolution on your private networks, because that helps exclude external parties from discovering information about the resources on your private networks. Private zones can be attached to a single network or to multiple networks in your VPC. Private zones also support what's called split horizon, which allows you to have overlapping public and private zones; so, for example, you may have a portal that looks different for your employees than it does for your customers. Private zones also support IAM policies, so you can delegate administration capabilities, edit or view capabilities, for your zones. I'm pleased to announce that private zones are now in general availability.

I'm also very excited to announce, here at Next, the availability of Stackdriver logging and monitoring for your private zones. This allows query logs, and counts about responses, to be logged to Stackdriver. From there you can, of course, store it long-term in Stackdriver, but you can also use Stackdriver's Pub/Sub capabilities to send that log to other storage locations, such as BigQuery, but also on-prem for your own storage and analysis tools. The query logs that are recorded are very similar to the BIND logs you're familiar with: they store information such as qname or rdata, but also GCP-specific Cloud DNS things such as project ID or DNS policies. As for the metrics and the monitoring, those record information such as SERVFAIL or NXDOMAIN counts. This is all really important because it helps you debug your DNS situations in your VPC, but it's also important for security analysis, for threats in your system. I'm pleased to announce that here at Next it's now available in public beta. So
I've been talking about Cloud DNS services within your VPC, but you also may need to connect the DNS services in your VPC to other locations, such as on-prem or even another VPC. For connecting to on-prem, we have a capability called DNS forwarding. This allows bidirectional DNS resolution between your GCP resources and your on-prem resources. DNS outbound forwarding allows your GCP resources to resolve hostnames using an on-prem authoritative server, such as BIND or Active Directory. DNS inbound forwarding does the opposite: it allows your on-prem resources to use Cloud DNS as your authoritative server. You can learn a little bit more about DNS forwarding, and of course the variety of hybrid connection options, in the NET204 session. DNS forwarding is currently in public beta.

I'm also very excited to announce the availability of DNS peering. This allows you to do cross-VPC DNS resolution across multiple VPCs. Let's look at a couple of scenarios. First, say you're a SaaS provider and you want to connect your VPC with your consumer's VPC. The consumer would create a special DNS peering zone that connects to the network in the service provider's VPC, and the provider would have their own zone that the final resolution would happen from. Or you might want to combine this with DNS forwarding to create an architecture where multiple VPCs can use an on-prem authoritative resolver. There, you would make a single VPC that acts as a hub VPC doing DNS forwarding, and you would have multiple spoke VPCs that use DNS peering to connect into the hub VPC. DNS peering even supports resolution for your internal addresses. Here at Next, DNS peering is now available in public beta.

So there you can see a summary of the Cloud DNS services for your private zones, and the wide variety of flexibility that they support in your VPC architectures. So
with that, I'll pass it back to Neha to give you a summary of what you've heard today.

Thanks, Marshall. So, to summarize, there are three main takeaways that we would like you to focus on from our presentation. The first is that VPCs are global in nature. They are built on top of a global network backbone, and you are assured that the traffic basically never leaves the private network backbone. You get access to all of your compute and storage resources globally and privately, so you don't have to assign public IPs to your virtual machines in order to access managed Google services like Cloud Storage. You're able to configure your global network once, you are able to associate network policies with this network, and you are also able to centrally create and manage the network in your organization while allowing your organization
to scale to hundreds or even thousands of developers or development teams.

Second, security comes first. With Cloud VPC, you can apply network-layer firewall rules for specifying security within your VPC; these firewall rules are stateful and connection tracked. You can specify Cloud Armor security policies in order to apply rules that get enforced at the edge of our network for your load-balanced applications, and you can also use VPC Service Controls in order to mitigate the risk of data exfiltration.

Third, VPC features are integrated with Stackdriver logging, and now, with VPC flow logs and firewall logging, it is really easy to monitor VPCs.

With that, I would like to invite Ed onstage to share with us Cardinal Health's story on how they deploy and use VPCs. Thanks.

I should start by saying this is a really great session; I've actually watched it a couple of times, so for those on video, you might want to go over it a couple of times. At Cardinal Health, we're a Fortune 500 company, and we have used a lot of the features that you heard about today. We're a global enterprise, and so we have services running all over; primarily, our first deployments in GCP have been within the United States, but we have plans and designs to fully go global, and we provision a lot of that in our networks.

Our business objectives in adopting cloud are largely to be quick to market, to be able to adopt different technologies very quickly, and to be very cost effective. Obviously, in all our businesses we want to try and save money, and so we're always interested in trying to be the most effective with every dollar we spend. The other thing we have to do is make sure that we're protecting all our healthcare information; there are lots of regulations in various different countries about healthcare information, so protecting our customer information is very key to us. As far as having
Agility, and being, able to be flexible and, speed, the market we, want to try and move the. Our. Shared, services, traditional, structure. More, into a DevOps, model, where. Development, teams can, provision. Their own services. But. We still need to make sure it's secure, so. We need to provide techniques, and and capabilities. Where they can be. Very agile very flexible, without. Having to say mother mae-eye if you will and. But. Still keep, all those guardrails in place so they don't do something that would be compromising. To our customer data or to the enterprise and of, course I mentioned the cost effectiveness, obviously. That's a key, aspect, so. What we chose to do early on was. We, created to host projects, so we have to host projects, that, basically do all the network things. So that what that does is for our application, teams they don't have to think about all those Network pieces, all the interconnects, and all that kind of stuff is hidden, from them and then. We have hundreds of service projects, that sit on top of that and they, use these, to host projects, these.
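The two-host-project model Ed describes maps onto GCP's Shared VPC. A rough sketch of the provisioning steps, with hypothetical project IDs, subnet, and group names:

```shell
# Mark the network-owning project as a Shared VPC host project.
gcloud compute shared-vpc enable net-host-prod

# Attach a team's service project so it can consume the host's networks.
gcloud compute shared-vpc associated-projects add app-team-a \
    --host-project=net-host-prod

# Expose only a selected subnet to that team via subnet-level IAM,
# so they never see the rest of the network.
gcloud compute networks subnets add-iam-policy-binding prod-subnet \
    --region=us-central1 \
    --member=group:team-a-devs@example.com \
    --role=roles/compute.networkUser
```

The subnet-level binding is what lets a central network team show each development team only the subnets they are meant to use.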
We have one for production and one for non-production, and each of those has multiple different VPCs. Those service projects are then allocated, or authorized, to be used by various different development teams, and those development teams are then able to utilize the network.

One thing you'll find, if you're in the network world and you've talked to developers for any length of time, is that when you start talking about routes and BGP and firewalls, their eyes glaze over and they don't want to hear about it. I don't know about you, but I've run into a lot of that. So we try to make it very, very easy for our consumers, our consumers being the development staff who are actually helping the business. In that regard there's a lot of need for training and documentation, and it's a continual process: our company is very large and we have lots of people coming and going all the time, so we're constantly doing this education over and over again. Documentation, training videos, in-house webcasts, and that kind of thing are very key to our strategy there. We also have to make sure we have all the governance controls in place, both proactive and reactive; that's really critical to success in a highly regulated industry such as ours.

So the value of the host projects, again, is that I have a small number of network experts, a very small number of experts who know how all this networking works: the interconnects, the VPN tunnels, the VPC peering, how all that gets put together, all the plumbing that's behind the scenes. We try to make it simple and easy to consume. We have a few centralized appliances that we use; I gave a presentation earlier this week about how we use some centralized appliances to do some of the more traditional security perimeter work that you might expect. In a large enterprise that's been around for a long period of time and is moving to the cloud, we have a lot of technologies that we have to bring with us. We also at times have the need to create a separate subnet; say, for instance, a service project has a very specific need. We can create a subnet just for them and allow just that one service project access to that subnet, so nobody else even knows about it. That's pretty powerful, and we've used it a couple of times.

As far as firewall rule management, again, this is one of those governance controls. We use firewall rules logging extensively; that was a great feature that we looked forward to, and we have been using it since the day it was released to us.

We also use service accounts. The goal here is that I have a VM instance and, say, another VM instance in my own service project; we allow the target and the source to be the same, and we allow ingress and egress rules for those so they can talk to each other. It's wide open, so I have a collection, if you will, within my service project. If I want to go between two service projects, then I go service account to service account. What that does is give us pretty good segmentation across the different projects and pretty tight controls in that regard.

When we have a standard pattern, like a north-south pattern, for example an internal web server, the standard is ports 80 and 443 allowed in from the RFC 1918 address space. If that's the case, then we allow them to put on a network tag that has been pre-approved by security, and you apply that network tag only to the VM instances where it applies. So not everybody gets it, but now, as a DevOps team, I don't have to go to the network team for a firewall rule; I just apply that tag and I get those firewall rules. In some cases we have network tags that apply both at ingress and egress, and to routing rules as well; for egress, for example, we use that to get outside through some security appliances for certain use cases. So we do that based on routing and firewall rules too. And then we also have Cloud Functions that actually monitor some of these firewall rules.
That way we don't do anything really dangerous; this is the reactive side. If somebody for some reason creates a firewall rule that was not supposed to be there, we shut it down within seconds. I'm guilty of doing that myself: I created a firewall rule with 0.0.0.0/0 for a very specific use case, and the function caught me and closed it down. I re-enabled it a couple of times, thinking, "Oh, I know it's wrong," because I hadn't added myself to the exclude list. So that was a good test; it worked exactly the way we wanted it to.

The value of the service projects, again, is for the DevOps teams. A DevOps team can actually build all their resources there; as you know, a project is a boundary for authorization as well as for costing, and in our case we do costing based on projects. With that, we can delegate the accountability down to a DevOps team and let them move on their own.

Again, we advertise, or rather we grant permission to, only selected subnets for these projects. So rather than seeing all the VPCs that we built and all the subnets that we built, they see only selected subnets. Say, for instance, your project wants to run in Europe and only in Europe: I'm not going to show you any of the subnets that are in the United States, and the same goes for the United States or Asia. That's very, very powerful for us, because then the developers don't have to start thinking about all these network things. It's us-central1 or us-east1; that's a pretty simple thing, and developers can deal with that.

We also have an organizational policy that prohibits the creation of any instances with an external IP address; obviously the Cloud NAT capability was a powerful feature in that regard as well.

Okay, so to summarize, showing on this picture: we have multiple different VPCs, the global networks X, Y, and Z.
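The external-IP prohibition and the Cloud NAT egress path that Ed mentions could be configured roughly as follows; the organization ID, network, and region are hypothetical:

```shell
# Org-level policy: deny external IPs on all VM instances.
cat > no-external-ip.yaml <<'EOF'
constraint: constraints/compute.vmExternalIpAccess
listPolicy:
  allValues: DENY
EOF
gcloud resource-manager org-policies set-policy no-external-ip.yaml \
    --organization=123456789012

# Cloud NAT then gives those internal-only instances outbound internet
# access without any public IPs on the VMs themselves.
gcloud compute routers create nat-router \
    --network=prod-vpc --region=us-central1
gcloud compute routers nats create nat-all \
    --router=nat-router --region=us-central1 \
    --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges
```

The policy is proactive (the instance can never be created with an external IP), which complements the reactive Cloud Function check on firewall rules.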
Those VPCs are provisioned in the host project. Within any service project, I may expose only one or two subnets. In this case, service project B and service project A are communicating with each other from different regions; they can be global over the Google backbone, and that all works. Of the firewall rules that you see on there, I have an illustration of three. The first one is the internal web server: that's a network-tag-based rule, so if I put that tag on my server, I can accept 80 and 443 traffic, at my own discretion as a development or DevOps person. The middle one is service account to service account: if you're in that same service account, you have ingress and egress to all your peers, and that's an "allow any" kind of rule. And the last one is service project to service project. We're using service account to service account as much as we can, and that gives us that point-to-point communication.

Well, that concludes my
portion of the presentation; I'll turn it back over to Neha.

So, we might not have too much time for questions, but we'll be available outside this room for Q&A. Thanks so much, everyone, for coming, and see you again next year.