Cloud OnAir: Google Cloud Networking 101
Hey everyone, welcome to this session hosted by Cloud OnAir, which hosts live webinars every Tuesday about Google Cloud. My name is Stephanie Wong, a cloud customer engineer, and today I'm super excited to introduce you all to Ryan Przybyl, who is a networking specialist and customer engineer here at Google Cloud as well. So Ryan, did you want to introduce yourself really quickly? Sure. As Stephanie said, I'm a networking specialist for Google Cloud, so I spend most of my time talking to customers about all things cloud networking. Awesome. So just to remind all the live viewers, you can ask live Q&A questions throughout the segment on the Cloud OnAir platform, and we will have Googlers help answer them, and we'll also have some time at the end for Q&A as well. So without further ado, let's get into it. I'm super excited today. You'll be talking about networking, which is such an important, foundational concept to understand as people begin to think about moving their workloads to the cloud, especially when they're trying to map their on-premises network topology into something like a software-defined network on Google Cloud. So Ryan, take it away. Sure. So networking is obviously a very broad topic; it's not something I can cover in the next 30 minutes, so I want to lay out a quick roadmap of what you can expect today and the follow-on sessions we have planned. Today I'm going to talk about the VPC construct, or virtual private cloud. I'm going to talk about a concept Google has developed called shared VPC, and I'm also going to talk about how, once you build these VPCs, you actually connect to them from your on-prem or data center locations. In the next session we'll cover routing and VPC peering. Next we'll move into firewalls and security. We also plan to have a session on load balancers, all the flavors that Google has, and how to use them in the cloud environment, and then we'll cover other services.
If there's something networking-related that you want us to cover, please let us know and we'll figure out how to integrate it into these topics. So with that, let's jump in. If anybody has done any work in the cloud, this probably looks very familiar: this is what I call a traditional VPC. If you've used AWS or other cloud providers, this is how it's typically built. What I'm showing here is two VPCs, each built with two different subnets. As you know, when you build VPCs in this manner, the way that VMs in, say, us-west talk to VMs in us-east is typically through a VPN gateway. There's no way to have direct communication between, say, a web frontend sitting in us-west and a web backend sitting in us-east other than going through that VPN connection. So Google thought about this and said: okay, how can we do this differently? How can we simplify it? And we came up with Google's version of the virtual private cloud. In Google's world, the VPC, the container in which the subnets live, is actually a global construct. So instead of having to build a VPC in us-west, a VPC in us-east, and a VPC in, say, EMEA, you build one VPC and then put subnets in the different regions within that VPC. Now, as you see here, there's no VPN required for a VM in us-west to talk to a VM in us-east; this is all handled by the routing that Google has under the hood, so it's nothing you have to do. You can control this through a lot of means, like firewall rules, which we'll talk about in other sessions, but from a conceptual standpoint it really simplifies the environment as you're deploying things. So just to highlight that again: you're saying that we don't need a VPN for traffic between regions?
How does Google do this? We do it through Google's underlying network. This is the same network we use to run our search engine, to run YouTube, to run Gmail; all that infrastructure Google has built for its own use, we actually use as part of cloud. We use that same network infrastructure to move traffic from, let's say, a VM running in us-west1 to a VM running in EMEA. Wow, pretty revolutionary. Yeah. So why is this concept important? One, it simplifies the network. As I said, you have one global private networking space, with regional segmentation to put your subnets in. I talked about the VPNs: if you look at the picture I started with, it's very simple when you're dealing with two VPCs and, say, a couple of projects, but imagine
expanding that model to hundreds of projects, all running hundreds of VPNs. Now you've got massive numbers of VPNs that you're building, and massive numbers of VPN gateways that you're trying to stitch together. When you do it with Google, you don't need all of that infrastructure, so it really simplifies the network, which your network engineers and network engineering organization will love. The same thing applies to routers. What I didn't show there is that in each of those VPCs you actually have routers to dynamically advertise routes. In the traditional environment, where you're building a VPC in the west, you're going to put one, two, maybe even more routers in that environment; you're going to put more routers in the VPC in the east, more routers in EMEA; and you're going to use those across the VPN tunnels to provide dynamic advertisements for your subnets. So you can quickly see you've got a lot of sprawl in terms of the routers and the BGP sessions you're managing. It tends to grow and grow on you. With Google's construct, you don't need to do that; you can put a couple of Cloud Routers in place. If you're trying to stitch together multiple VPCs, you can use just a couple of Cloud Routers to do it, have these very large global routing domains, and not have router sprawl with hundreds of routers. So it really simplifies that network infrastructure component. Second, it simplifies management. Think about it: if you have all these VPN gateways and all these VPN tunnels, when something breaks you have to go and figure out what is broken and where. You're looking at potentially different cloud routers, different VPN tunnels, different VPN gateways; you're looking at the A side and the Z side, because
you have all this stuff stitched together in a very complex mesh. With the Google environment, it's much simpler: when something isn't functioning, there are only a few things to really look at to dissect what is working and what isn't. So it really simplifies that operational management. The other part is the security policy. If you've ever spun up VPCs, you know that the security policy is typically constrained to that VPC. Again, if you have a VPC in the west, a VPC in the east, and a VPC in EMEA, you're managing separate security policies for all those VPCs. So if you're a security administrator, and your goal is to unify your policies, to have a very uniform policy applied across many things, with maybe slight tweaks in each of them, it
becomes more difficult to manage that uniformity, because you're having to copy and manage, you know, 12, 15, 20 of these things. In the Google environment, because that VPC is global, you can really manage one security policy, so it simplifies the management of that as well. The other part is flexibility. You can build exactly what I showed in that first slide: a regional VPC construct, where you just drop a subnet in one region, build a second VPC, drop a subnet in another region, and connect those with VPNs, just like you would in a very traditional model. Or you can use the Google model, where you have this global construct and drop different subnets in different regions within that VPC. And you can actually combine them: you can have some workloads using that global VPC, where you have a lot of subnets, and also use specific VPCs for specific applications that are constrained to just one region. Google is really giving you that flexibility. Every business is different; every business is unique. Your needs may be different from your competitor's needs, or another business's down the street, so we've given you the flexibility to look at how you want to use cloud networking and apply it in Google's cloud in a lot of different fashions. You mentioned the use case for having individual subnets in each region, so is there a use case for not using that global VPC construct? Here's a good use case for not using the global VPC. I was working with a customer a few weeks ago that had very specific requirements: the applications they run have to live only in a particular region. It's not necessarily something like GDPR; these are actual contractual obligations they make with their customers. So while
we presented the idea that they could, in theory, run a global VPC and use firewall rules to constrain what can talk to what, they really didn't like that idea, and they wanted to create these constrained network domains. What we ended up with is building a VPC for EMEA and only putting subnets there, and then for other customers in APAC, we built a completely different VPC and only put subnets for APAC in it. This gave them that very traditional way of doing things, but you are sort of losing the benefit of the simplification. Yeah, what you'll find in networking in the cloud in general is that you're always making trade-offs. That's an example where, for their particular business, they had requirements they had to meet for their customers in terms of contractual obligations. They wanted to manage that expectation and be able to put it in front of their customers, but they did lose some of the flexibility here. Now, one of the things they did do is use the global construct for a lot of common infrastructure. So they could deploy an application in a VPC that lived in just one region, but
there's a lot of core infrastructure for which they actually used a global VPC, and they tied each of those regional constructs back to that common core infrastructure VPC. So again, there's that flexibility; it's not an either/or situation. You can use all of these constructs together. Yeah, they're not mutually exclusive. Yep. So these things are really simple to set up, and I want to take you through a quick demo just to show you exactly how easy it is. Okay, so for those of you who have used the Google console before, this is the home page. I've created a project called cloud-on-air-networking-101, and this is the project I'm in. What I'm going to do is go down to VPC network and click on VPC networks. Right now there's nothing in here; I've actually deleted the default network. What happens when you create a project is you get a default VPC created with a bunch of default subnets. One of the best practices is to get rid of that, just wipe it out. In most cases you're trying to integrate a VPC in the cloud with some on-prem environment, so you have a whole IP addressing scheme already set up. The defaults generally are not designed to integrate with what you probably already have, so there's likely some overlapping space in there. So generally I tell people the best practice is just to get rid of the default VPC. So I'm going to go in and create a VPC. I'm going to call this demo-network. You have custom and automatic subnet modes. Automatic is what I was describing before, where it provisions a whole bunch of 10-dot space for you; it's typically not used. You're typically going to come in here and use custom subnets. So I'm going to create network-1. Here's
Where you select the region so this is where that global construct, comes into play right I didn't create VPC just in one region now I'm gonna drop just the subnet, into, one of these regions, so. Let's, put this one in US central one. Let's. Pick an, address block. Okay. There's a couple other options in here so, private, Google access this is what enables your VMs, to actually, talk to Google API is without, having to have a public IP address on them so, a lot of customers, don't, want any public IP addresses, on their VMs but they still want to access, Google services, right so this, could be bigquery, this could be you know Google. Cloud Storage bucket scale all the services that Google. Has. To offer so, if you, if you don't have this turned on you actually have to have a public IP address because most of those services you're probably familiar with rely, on public api's, right, so. A best practice is to turn this on if. You were not to use that option do you have to have a public IP for the VM that you deploy yes, if you were to not use that option you have to have a public IP address on there to say like access your storage bucket or access bigquery or access other things okay, and for. The forward slash 24, address range, is. It possible for you to use overlapping, IP ranges if you were to create two subnets in one VPC so, you can't, have overlapping IPs in one VPC, but let's say I created two separate VPC use I could actually create exactly, duplicate, of V, pcs I can have the same addresses, and you, know V PCA as I did BBC be now, that, could cause problems for you as you go to interconnect, these V PC so if V PCA you need to talk to V BCB and has overlapping space now, you've got a problem you can't do that right because it's gonna not going to know you, know where you want to route this day so, generally, speaking you.
not going to have overlapping IP space. The same thing applies to your on-prem environments or your data centers: you're going to use separate IP space, typically 10-dot space, which is what we see in the cloud environment. Those same best practices apply here too. Okay, so I'm going to click Done on that one, and I'm going to add another subnet. This one I'm going to drop in us-west1. And again, I'm going to turn on private access, and I'm going to turn on flow logs. Flow logs are another best practice: they're what enable logging of the actual flows from your VMs. If you want to capture those flows, you have to enable flow logging in this environment. It's a best practice to turn it on so you capture all those flows; they get pushed into Stackdriver and you can use them for troubleshooting, for security investigations, things like that. I'll talk about this a little more next time, but I want to touch on regional versus global routing. In this environment, you can set up your routers to advertise only the routes for subnets in the region where the router lives. For example, I created one subnet in us-central1; if I put a router in us-central1 and had Regional checked, it would only advertise the subnets in us-central1. Generally speaking, what I see is people using Global. What that really means is I could put a Cloud Router anywhere in this environment and it's going to advertise all the subnets in this environment out to wherever you're advertising, whether that's your on-prem environment or your data center. I won't quite call it a best practice, since there are use cases for regional routing, but generally speaking most customers use global routing.
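That regional-versus-global choice is a property of the VPC, and it can be changed after the fact. As a minimal sketch, assuming a network named demo-network like the one in this demo:

```shell
# Switch an existing VPC's dynamic routing mode so Cloud Routers
# advertise subnets from all regions, not just their own region.
gcloud compute networks update demo-network \
    --bgp-routing-mode=global
```

Setting it back to `--bgp-routing-mode=regional` restores the per-region advertisement behavior described above.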
So now I click Create and it's going to create my subnets for me. Are there any limitations on the number of VPCs you can have in a project, or the number of VMs in a VPC or in a subnet? Let's take it one at a time. For the number of VPCs in a project: we have the concept of quotas and the concept of limits. Touching on those real quick, quotas are generally something that can be increased; limits are generally hard and fixed. We have a quota of five VPCs per project, but since that's a quota, you can go and request an increase to be able to put more VPCs in that project. You also asked about the number of VMs in a subnet. Every VM is going to have an IP address in that subnet, so in this case I created /24 subnets; at one IP per VM, that dictates how many VMs you can put in this particular subnet. If I were to make it a /20, I could obviously put more VMs into that subnet. Okay, yeah. So there we go: while we were talking, it created my two subnets for me. Literally, it's that simple. You just go in there, populate whatever subnets you want in this VPC, drop them into whatever regions you want them in, and it auto-creates everything. Then I can go and create Compute Engine environments, create whatever I need, using this IP space.
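The console steps in this demo have gcloud equivalents. Here's a hedged sketch, assuming an authenticated gcloud CLI and the same (otherwise arbitrary) names and address ranges used in the demo:

```shell
# Create a custom-mode VPC: no auto-provisioned subnets, as in the demo.
gcloud compute networks create demo-network \
    --subnet-mode=custom

# Subnet in us-central1, with the two best practices from the demo:
# Private Google Access and VPC Flow Logs enabled.
gcloud compute networks subnets create network-1 \
    --network=demo-network \
    --region=us-central1 \
    --range=10.0.1.0/24 \
    --enable-private-ip-google-access \
    --enable-flow-logs

# A second subnet in us-west1, in the same global VPC.
gcloud compute networks subnets create network-2 \
    --network=demo-network \
    --region=us-west1 \
    --range=10.0.2.0/24 \
    --enable-private-ip-google-access \
    --enable-flow-logs
```

Because the VPC is global, no VPN or peering is needed between the two subnets; VMs in network-1 and network-2 can reach each other subject to firewall rules.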
So let me go back to the presentation here. Okay, I want to expand on the concept of the VPC. This is something else Google has pioneered; we call it shared VPC. As I talked about before, you build this global VPC construct to simplify your network and your operational management. Shared VPC allows you to take that simplification even further. What I'm showing here is a blue box, which is the VPC that was created, with various subnets within it. I could have more blue boxes in this gray box, and the gray box is the project. When you use shared VPC, projects become two types. You have host projects, which is where your network resources are actually going to live; when I say network resources, I'm talking about your Cloud Routers, your VPN gateways, your subnets. Things like that are going to live in the VPCs in that host project. On the right over here, I've got a few example projects: a recommendation project, a personalization project, an analytics project. These are what we call service projects. In the traditional model, I would spin these up as projects and put VPCs in them: I'd have VPCs in the recommendation project, VPCs in the personalization project, VPCs in the analytics project; I'd have VPN gateways, all the VPN tunnels, all the cloud routers, all this stuff. This is that sprawl I was talking about. The global construct allowed you to eliminate some of that; shared VPC is going to allow you to eliminate even more of it, to simplify your network even further. So in this example, I've got all these subnets that I've created, again in one VPC. Now,
what I'm doing is not actually creating any VPCs in these projects. The recommendation project and the personalization project don't have any VPCs living within the projects themselves. What I'm doing is sharing the network infrastructure from the blue box, that shared VPC, into those projects. The users of those projects aren't creating any network, and they're not using a network local to that project; they're using the network from the shared project. The way to think about it is: your users, say a development team spinning up VMs and doing operations in these projects, are touching the VMs in the small gray boxes, like the recommendation project; maybe that's your production environment. The people operating in the blue box are typically your network engineering staff and your network security team. They're managing all of that network infrastructure, and they're the only people really operating within that environment. So it gives you a very clean separation: you don't have a bunch of hands in there that could potentially break things or mess things up; it's just your network engineers and your security teams, and they're pushing that infrastructure out so other people can use it. From an IAM perspective, this is giving you that segregation of duties, correct? Correct. When you use IAM roles, you're going to have network administrators, network users, all of that. Most of your network or infrastructure administrators are going to be operating in that blue box, whereas your users, the people consuming the network infrastructure by building VMs and other resources, are
going to be in the recommendation project. They're not going to have the ability to do anything but, say, drop a VM into that particular subnet. They can't change the subnets, they can't create more subnets; they can't do any of that. They can only create
VMs within the subnets that you've given to them. That aligns with the idea of least privilege as a best practice. Yep. On the right side of this drawing, I'm also showing that all those VMs created off that shared infrastructure are accessing various APIs; I'm showing a machine learning API and an analytics (BigQuery) API. Those VMs are still able to do that because, remember, we set up private Google access in the blue box VPC when we created it, so all of that carries down to the service projects. The other thing that's really nice about this: we talked about your security domain and your unified policy. In this case, I'm writing one policy in that blue box, and when I extend all of those subnets down to the service projects, the security policy goes with them. So now, instead of having to build all these individual VPCs across all these different projects, you've been able to centralize your VPCs and centralize your security policy, and you're able to scale to hundreds of projects: take that infrastructure, push it down to those projects, and apply that security policy to them. Your security policy may have specific rules that apply to just the recommendation project, or just the personalization project, or just the analytics project, but you still only have one policy, written in that blue box, and you're applying certain parts of it to the analytics project or the personalization project or the recommendation project. Again, when you think about unifying everything and only having to look at one policy, it really simplifies things. Yeah, especially from a security perspective and a management perspective, and it adds to that scalability. Yep.
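That least-privilege split can be expressed in IAM. As a sketch (the member and subnet names here are hypothetical), granting a service-project developer the Compute Network User role on a single shared subnet lets them attach VMs to it without any ability to modify the network itself:

```shell
# Allow a developer to use (but not modify) one shared subnet.
# Run in the host project; network admin roles stay with the
# networking team at the host-project level.
gcloud compute networks subnets add-iam-policy-binding network-1 \
    --region=us-central1 \
    --member="user:dev@example.com" \
    --role="roles/compute.networkUser"
```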
So really, this continues that theme, and if you take anything away from today's talk, it's really about: how do I simplify my network, how do I simplify the management and the operational day-to-day functions, and how can I use the flexibility that Google gives me, whether I choose to work in a very traditional manner or decide to use some of the functionality Google has created? That flexibility is key and is something we're really happy we can provide. So again, simplifying the network: you're now going to deploy that VPC across many projects. You could have a single project with a shared VPC, another single project with a shared VPC, but this allows you to have just one project and push that shared VPC to hundreds of them. As we talked about before, there are fewer networking elements, because I'm building the routers and the subnets in the host project itself, in those VPCs; I don't have that continued sprawl. And again, there's the simplified management: you have one security policy, and you can manage it across hundreds of projects. Is there any upper limit on the number of service projects you can have under one host project? There's no hard limit that we've seen. We've tested up to 5,000 and it runs fine. The quota is 100, so you can do 100 with no issues, without having to come to Google and request anything special; if you want to go beyond 100 projects, you have to put in a quota increase. But again, in terms of limits, we've tested up to 5,000 and had no issues. So we don't really know how big this thing could potentially scale. All right, well, if you have 5,000 projects, then you're in good hands.
Like before, let me quickly show you how easy it is to set up the shared VPC aspect of things. Okay, so I'm back in the project where I built my two subnets. Under VPC network, I'm going to go down to Shared VPC. This is the menu for setting up a shared VPC. In this first part, because I'm in the cloud-on-air-networking-101 project, I'm going to say yes, I want to use this as a host project. Here, I
can check whether I want to show only shared subnets or all subnets, and over here I'm going to attach projects to it. So I'm going to go ahead and attach a project. I've created a second project, cloud-on-air-101-shared-vpc; this is the recipient, the service project. I defined the host project in that first tab, and now I'm defining the service project, so I'll go ahead and click on that. Here you have the option to share all subnets. I created two subnets, one in us-central1 and one in us-west1, and I could just share all subnets by default. But most customers we work with are going to share individual subnets. Maybe their service projects are a production project, a development project, and a testing project, and they have different subnets carved out, so they don't want to share all subnets with production, all subnets with testing, and all subnets with dev. What they're going to do is click that they want to share individual subnets. So let's say I was using this as a production environment: I created my two subnets, but I only want to share one of them, so I'm just going to click on that and save it. So you can really get granular there if you don't want to share all subnets? Yeah, you can get to whatever level of granularity you want. Again, if you want simplification, you can share all subnets; if you want to specify things granularly, you can do that too. It goes back to that flexibility. The way we've engineered this is to be really flexible, because as I said earlier, every business is different, and every networking construct we build with different customers looks slightly different, because they all have different use cases. Okay, so now I can go into my service project.
If I go into my service project and click on VPC networks, you can see there are actually no VPC networks created within the service project. I haven't done what I did in that very first step, where I actually built a VPC; I didn't do any of that. All I did was go in and share
a subnet with it. So now you're going to see networks that are shared with this project. Here you can see the two networks I specified; both networks are showing up, but I've actually only shared one of them. If you go back to the other environment and into the Shared VPC menu, it says network-1, 3 users; those 3 users are all my own user identities. Even though network-1 and network-2 were both showing up under the service project, I actually shared only one, which is why you only see 3 users there. If you drill down into the service project, you'll see both networks, but there are 0 users on the one I haven't shared, because no users can provision any resources in it. The only network, or really the only subnet, they can provision resources in is this one. Again, I could have created 20 subnets and shared 15 of them, or I could have created 20 and shared all of them. It depends on your specific needs and how you're architecting the VPC components of the network. So the number of users you see here is the number of users in the service project that you've shared that subnet with? Correct. In that service project I have 3 users; they happen to all be my own identities, but you could have a whole ton of users in that project; your whole development team could be in there. So when you share that resource, it might say you're sharing it with, say, 250 users who are authorized to create resources in that project. So again, it's very simple to set up and doesn't take a lot of time. It's really designed to be easy and simple. Okay, so
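The same shared VPC setup can be scripted with gcloud. A minimal sketch, assuming project IDs like the (hypothetical) ones in the demo and an account with the Shared VPC Admin role:

```shell
# Mark the networking project as a Shared VPC host project.
gcloud compute shared-vpc enable cloud-on-air-networking-101

# Attach a service project to the host project.
gcloud compute shared-vpc associated-projects add \
    cloud-on-air-101-shared-vpc \
    --host-project=cloud-on-air-networking-101

# Verify which service projects are attached to the host.
gcloud compute shared-vpc list-associated-resources cloud-on-air-networking-101
```

Which subnets a given service-project user can actually deploy into is then controlled by `compute.networkUser` IAM bindings, either on the whole host project (share all subnets) or per subnet (share individual subnets).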
now let me talk about connecting to the VPC construct itself. You've built this construct, whether as a bunch of separate VPCs, using Google's global VPC construct, or using the shared VPC construct; it's all stuff you've built in Google Cloud's environment. But you still need to connect to it from your on-prem environment or your data center. This is usually the next logical step: how do I get connectivity to all the stuff that I've built? The easiest way to think about this is in this cruciform pattern. On the left side you have layers of the OSI model, so there are layer 2 options versus layer 3 options. Across the top, I have dedicated options, meaning I'm directly connecting to Google's edge, versus shared options, which means I'm connecting to, typically, somebody else's network, which is multiplexing a bunch of customers together and pushing all of them up to Google; there's some other provider in the middle in that environment. And then you've got VPN, which I've put over those top two, because you can't use VPN with the layer 2 networking
options, but you're typically going to use VPN over the top of those layer 3 options, because most people, as I said earlier, are building private address space in there. Let's start with those top options, because that's typically how customers start interacting with Google Cloud. There are two ways to do a layer 3 interconnect; when I say layer 3 interconnect, I basically mean connecting with our edge at layer 3. In the dedicated case, in the upper left-hand corner, you're actually connecting to a peering router on Google's infrastructure, and you're getting all of Google's netblocks advertised to you. If you go the shared route, this is basically using your ISP. Google is connected to most ISPs globally; we're providing all of Google's netblocks to those ISPs, and those ISPs are then advertising them down to you. Now, the big thing to note here is that there is no SLA around either of these products. The layer 3 interconnects have no SLA, so if you're planning on using this sort of connectivity option, make sure you have some redundancy built into it; connecting into one peering router on Google directly is probably not the best architecture if you're concerned about high availability. As I said, customers are typically building 10-dot space in their cloud environments. That's private address space, so you can't access it directly from a publicly routed edge. The way you do that is to build a VPN over the top. So whether you're using your ISP to connect to Google or connecting directly to our layer 3 edge, you're
Going to typically build a VPN over the top so this is where you're going to go and build VPN, gateways within the V pcs that you set up you're gonna build tunnels to VPN gateways that you have on pram or in your data center and that's, how you're gonna tunnel across the public infrastructure, all those, private, routes right. I'll. Talk a little bit more next time on the routing and how this stuff can be built but. Suffice. To say you're. Gonna want to set up some sort of dynamic routing setup where your advertising, subnets. Across multiple VPNs, or multiple, interconnect, methods to give you sort of a high availability connection. Right because again there's, no SLA with with these actual services yeah right now, the one benefit to the, layer 3 service is right now it's it's free of charge you can you know interconnect. To Google's layer, 3 edge for, no cost right I should. Note that if you are doing any dedicated connections. With us whether it's layer 3 or layer 2 everything's.
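The VPN-gateway-plus-tunnel setup described here can be sketched with gcloud. This is a minimal sketch, not a definitive recipe: the network and gateway names, peer address, ASNs, and shared secret below are all hypothetical placeholders, and it assumes the HA VPN flavor with a Cloud Router handling the dynamic routing mentioned above.

```shell
# VPN gateway in the VPC, plus a Cloud Router for dynamic (BGP) routing.
gcloud compute vpn-gateways create on-prem-gw \
    --network=my-vpc --region=us-east1
gcloud compute routers create my-router \
    --network=my-vpc --region=us-east1 --asn=65001

# An external peer gateway resource representing the on-prem VPN device.
gcloud compute external-vpn-gateways create datacenter-gw \
    --interfaces=0=203.0.113.10

# A tunnel from the cloud gateway to the on-prem device.
gcloud compute vpn-tunnels create tunnel-0 \
    --region=us-east1 \
    --vpn-gateway=on-prem-gw --interface=0 \
    --peer-external-gateway=datacenter-gw \
    --peer-external-gateway-interface=0 \
    --router=my-router --ike-version=2 \
    --shared-secret=REPLACE_WITH_SECRET

# BGP session so subnets get advertised dynamically across the tunnel.
gcloud compute routers add-interface my-router \
    --region=us-east1 --interface-name=if-tunnel-0 \
    --vpn-tunnel=tunnel-0 --ip-address=169.254.0.1 --mask-length=30
gcloud compute routers add-bgp-peer my-router \
    --region=us-east1 --interface=if-tunnel-0 \
    --peer-name=on-prem-peer --peer-ip-address=169.254.0.2 \
    --peer-asn=65002
```

For the redundancy Ryan describes, you would repeat the tunnel and BGP-peer steps on the gateway's second interface so either path can carry the advertised subnets.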
So if you only need, say, 500 Mb or a gig or something like that, typically we're going to push you over to the shared side, through a partner interconnect or your ISP or something like that. So let's move down the stack to the layer 2 options; these tend to be, I would say, more preferred these days. This is really about building a layer 2 connection: these are VLANs being built to connect your environment to Google's environment. With the dedicated interconnect, instead of connecting to a peering router you're actually connecting to a peering fabric. This is our layer 2 device, just like our peering routers are layer 3 devices, that sits on the edge of our network. You build a physical interconnect to that, and then from there you provision VLANs from that edge device to Cloud Routers that you build in the VPC construct. The same thing happens on the partner side; in this case you're going to connect to a partner, and Google has a whole list of partners out there. A good example, a very popular one that we work with a lot, is Equinix Cloud Exchange; lots of people have a presence in Equinix. In this case you're actually connecting to Equinix's fabric, and Equinix's fabric is connecting directly to our peering fabrics, but you're still going through the same process where VLANs have to get provisioned, so I still think of that as layer 2 connectivity. But again, you and a whole bunch of other customers are connecting to Equinix's fabric, Equinix is doing VLAN segregation and sending a whole ton of VLANs to our peering fabric, so they're multiplexing a bunch of customers on that connection. So when is it appropriate for somebody to use partner interconnect versus dedicated? There are a couple of use cases here, two real considerations. If you think you're going to use the full 10 Gbps of connectivity, it makes more sense to go with the dedicated method than the shared methodology, because with shared you're competing with other customers for bandwidth; you can't control who is actually using that bandwidth at any given time. So if you and I are both connected to Equinix and you're sending a massive file at the same time I'm trying to send a massive file, we're basically competing for bandwidth.
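The VLAN provisioning flow described above can be sketched with gcloud. The interconnect, router, network, and region names below are hypothetical placeholders for resources you would already have set up, and this is a sketch rather than a complete runbook.

```shell
# Cloud Router that the VLAN attachment will terminate on.
gcloud compute routers create edge-router \
    --network=my-vpc --region=us-east4 --asn=65010

# Dedicated interconnect: a VLAN attachment riding the physical circuit.
gcloud compute interconnects attachments dedicated create my-attachment \
    --interconnect=my-interconnect \
    --router=edge-router \
    --region=us-east4

# Partner interconnect variant: create the attachment, then hand the
# generated pairing key to the partner (e.g. Equinix) to complete the VLAN.
gcloud compute interconnects attachments partner create my-partner-attachment \
    --router=edge-router --region=us-east4 \
    --edge-availability-domain=availability-domain-1
```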
Now, we try to engineer with all of our partners to prevent that from happening; we do a lot of capacity planning so that we should never be above roughly 50% utilization on those interconnects, so the chances of that happening are slim. But there's also a cost component: if you're going to use the full 10 Gbps port, it tends to be more cost-effective to go with the dedicated side than the partner side, so there are a couple of trade-offs there. Typically, if you want something less than the full 10 Gbps, there's some break point; it's going to depend on the partner and where the cost break-even is. Where the layer 3 dedicated interconnects were free, the layer 2 dedicated interconnects have a charge: $1,700 a month per port. So if you're looking at a partner version, Equinix is going to charge you something and Google is going to charge you something; at some point it makes sense just to go to a dedicated architecture. Yeah, are there any industry best practices that you've heard of? What I see with a lot of customers is that they start by turning up VPNs and then move to dedicated interconnect or partner interconnect. Now, the great thing is that, just like the VPC constructs, these are not mutually exclusive, and a best practice I definitely recommend is using the VPNs in addition to the dedicated interconnects. You can start with VPN, then seamlessly hot-cut over to a dedicated interconnect or a partner interconnect, and your traffic will just flip over. Then you can leave the VPN as a backup: if something were ever to happen with your dedicated interconnect or your partner interconnect, the traffic will automatically fail over to the VPN. Since you've already turned it up anyway and it's up and running, why not just leave it in place? Yeah. Now, the layer 2 options actually do have an SLA. For partner interconnect, the SLA is going to depend on the partner that you choose. For dedicated interconnect, the architecture you use dictates the SLA you get: if you connect to two peering fabrics in one metro, we'll give you a 99.9% uptime SLA; if you connect to two separate peering fabrics in two separate metros, so you have four connections at that point, the SLA goes up to 99.99%. So again, depending on the architecture you choose for interconnecting to Google, there's going to be an SLA behind it, for both the dedicated interconnect and the partner interconnects, and it can be appropriate for hybrid customers as well. Yeah, there are a lot of ways you can mix and match things in this environment.
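The "leave the VPN up as a backup" pattern can be sketched via route priority on the Cloud Router. This is a hedged sketch, with hypothetical router, interface, and peer names: the VPN-facing BGP session advertises routes with a worse (numerically higher) priority than the interconnect-facing session, so the far side prefers the interconnect path and only falls back to the VPN tunnel when it goes down.

```shell
# Preferred path: interconnect-facing BGP peer at the default priority.
gcloud compute routers add-bgp-peer edge-router \
    --region=us-east4 --interface=if-attachment \
    --peer-name=interconnect-peer --peer-ip-address=169.254.10.2 \
    --peer-asn=65002 --advertised-route-priority=100

# Backup path: VPN-facing BGP peer advertised with a worse (higher) value,
# so it only carries traffic when the interconnect path is withdrawn.
gcloud compute routers add-bgp-peer edge-router \
    --region=us-east4 --interface=if-tunnel-0 \
    --peer-name=vpn-backup-peer --peer-ip-address=169.254.0.2 \
    --peer-asn=65002 --advertised-route-priority=200
```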
Where this typically becomes more challenging is a hybrid environment, with some stuff in, say, AWS and some stuff in Google. I can't necessarily connect a dedicated interconnect from Google to the AWS version of this product; you actually have to have something in the middle. So a lot of customers will deploy a cabinet with a router or something like that in Equinix, run the connections into that router, and hairpin the traffic themselves. That actually gives them a cost advantage versus just trying to move traffic from AWS to Google over the public interconnects that Google has with AWS, which is going to cost you a lot more money. So if you're moving a lot of data back and forth, it's definitely worth looking at an architecture where you bring our dedicated interconnect service and AWS's equivalent service together and land them on a router somewhere. So that's a quick overview of all the ways you can interconnect and how these things can be used in conjunction with each other. With that, that's a wrap for today; we've covered a bunch of the foundational stuff. Thank you so much, Ryan, that was super helpful. Everyone, we're going to be back in less than a minute for live Q&A. All right, let's get started with the first question. The first one is: can I have a shared VPC that spans across multiple regions, and if so, how is that traffic billed? Okay, so remember that the VPC container automatically spans across multiple regions. So the first part of that question is yes, you can definitely have a shared VPC, because the shared VPC is just taking that VPC construct that exists across all regions and applying it in a model where it lives in a host project and I'm sharing it to other projects. So absolutely, you can do that. As for how the traffic is billed: traffic is billed at a per-project level. When I'm sharing those resources to another project and that project is actually using them, the traffic is billed to the project I'm sharing to, where the VMs run, because the VMs aren't actually running in the host project, they're running in all those service projects, and the VMs are what's creating all the traffic. So everything gets billed within those service projects and not the host project itself. Okay, great. In one of your diagrams you show two VMs in a VPC, with different subnets in two different regions, being able to talk to each other; does this mean that a single VPC is synonymous with a single broadcast domain? I wouldn't necessarily think of it in terms of a broadcast domain, because a broadcast domain is really a layer 2 construct. Typically, what separates layer 2 broadcast domains is layer 3 routers. In this case it's not really a layer 2 broadcast domain; you can almost think of it as one big flat routed environment. I'll talk about routing a bit more in the next Cloud OnAir that we do, but really you can think of every host as its own router, because we program routes at the VM level; it's almost like a /32 routing domain. There is no layer 2 broadcast domain in this environment, so hopefully that answers the question. Can I enforce security policies or other network-related policies on communication between components in the same VPC, maybe for security purposes? So yes, you can: within a VPC you're going to apply firewall rules.
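As a concrete sketch of this: since ingress within a VPC is denied by default, even SSH to a VM won't work until a rule explicitly allows it. The network name, source range, and tag below are hypothetical placeholders.

```shell
# Allow SSH (TCP 22) only from a trusted range, only to tagged VMs.
# Everything not matched by an allow rule stays blocked by default.
gcloud compute firewall-rules create allow-ssh-from-corp \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=203.0.113.0/24 \
    --target-tags=ssh-enabled
```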
For example, I could have a whole bunch of subnets in there, and by default we're actually going to lock everything down. Google generally takes a trust-no-one policy; this is the way we operate our own internal network, and we apply the same security principle in the cloud. So, for example, if you build a cloud VPC, build a VM, and go try to SSH to it, you actually can't: you have to open up firewall rules to enable the specific ports, to enable TCP, ICMP, SSH; all that stuff has to be opened up, because everything is locked down by default. So when you set up a VPC and you want to control what can talk to what, you may build five different subnets and specify that only this subnet can talk to only this other subnet. Yeah, so you can create that granularity. I talked about one customer we met with that has very specific regional requirements for their application; we presented that option to them and said you can get very granular, so you could build one big shared VPC and build a lot of firewall rules around it to say that things in EMEA can't talk to subnets in the US or in APAC. Again, they decided to go a different direction, but we like being able to offer that flexibility. You can definitely get very granular, and you can define things via ports, via protocols, via subnets; the firewall rules are very granular in terms of the VPC. So maybe you don't want to allow UDP traffic: okay, then you don't open it up to UDP traffic. Or maybe you don't want TCP traffic; you're not going to have a lot of communication then, but you could actually lock that off. So you have the capability to get very granular in how you write these firewall rules to regulate what can talk to what within
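The subnet-to-subnet granularity described here can be sketched the same way; the names and CIDR ranges below are hypothetical, and the rule relies on the implicit default deny for everything it doesn't match.

```shell
# Allow only the frontend subnet's range to reach backend-tagged VMs
# on one port; traffic from any other subnet stays implicitly denied.
gcloud compute firewall-rules create allow-frontend-to-backend \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:8080 \
    --source-ranges=10.10.1.0/24 \
    --target-tags=backend
```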
that VPC construct. The same thing happens when you talk across multiple VPCs: you're going to have separate security policies in each. You may connect two VPCs together, like I showed initially in that very traditional model, but if you don't open up the security policies, it won't work; you may have written security policies in one VPC that don't allow any external IP addresses in, so you may have linked the VPCs together with a VPN, but the security policies on both sides are not allowing you to communicate. Right, so this goes back to that complexity-and-simplifying point: if you have everything in one VPC, you can see one security policy that you're managing, versus trying to manage one security policy over here and another security policy over there. Yeah. But to answer the question: yes, it's absolutely doable, and you can be very granular with this; it depends on how your requirements go. All right, we might have time for one quick last one: is it possible to create a mirror port in the Google network, from one server to another server? So, port mirroring is something that we're working on; it's not generally available yet, but it's something we want to enable as a feature. It is a feature request that we have in front of our product team: we want to enable port mirroring, specifically at large scale, because typically when you get asked for it, a lot of the time it's around security and various other things. So it's something we've asked our product team to look at and put on the roadmap, but I don't have a specific date on when port mirroring is going to be available. No worries. All right, well, thank you, Ryan, I think that's about all the time we have. Thank you, everyone, for joining us today. Please stick around; our next session is Protecting Your Workforce with Secure Endpoints. So thank you so much once again, Ryan. You bet.