A Practical Framework for Adopting Service Mesh & Building Global Applications (Cloud Next '19 UK)

[Video plays]

So, hi folks, welcome to our session. It's really wonderful to be here in London with all of you. I'm Prajakta, one of the product leads for Cloud Networking, and I drive a variety of products in my portfolio. Really excited to be here. And thanks so much for wrapping up your day here with us. My name is Mike Columbus; I'm on a network specialist team that helps our cloud customers with all things networking.

What we're going to talk about today is service mesh. Some of you may be familiar with service mesh, some of you may not. At a very high, abstract level, a good way to think of a service mesh is that it takes what an application does, the application logic, the business logic, and separates that from the application networking.

So, who is a service mesh for? Let's say you've got a large deployment of microservices, and you have applications that need to be multi-cluster, multi-region, and sometimes multi-cloud as well. You have multiple languages in use, maybe simply because your developers code in different languages. Your environment itself is heterogeneous: you could have VMs, you could have containers, or a mix of both, plus things like bare metal. You also want an easier way to control how your apps communicate with each other. For example, you want to segment your applications, or ensure that when application A talks to application B, the communication is secure. And a lot of you roll out code almost every day, so you want modern traffic management: things like canaries, blue-green deployments, and advanced capabilities such as mirroring, circuit breaking, and so on. But you don't want the toil of actually going and configuring all of this by hand. You want to do all of it as easily and as uniformly as possible, with enough visibility into what's going on with your services.

This is a slide from one of our Kubernetes leads; he presented it at KubeCon just a few days ago. What does a service mesh really give you? It gives you, or should give you, a bunch of things. One is that it virtualizes the service, which means the service has a name and the service has an IP, which also means you need a way to discover these services. It also gives you client-side load balancing. For those of you who built or used load balancers six or seven years ago, all the load balancing used to be server-side: your client hits the load balancer, and the server side decides where the traffic should go. Here, in a service mesh, it's the client that decides which instance of your service to go to; that is client-side load balancing. Then you want traffic management; this could be traffic splitting, but also more advanced traffic control facilities. And you want endpoint management, because you don't want your services to health-check each other to see which ones are healthy. The other thing is, we often talk about access control, we talk about identity, and we talk about encryption and zero trust, so what's the difference? Access control is who can send and receive. Identity is who is sending and receiving. And encryption and zero trust is: if I got something, why do I believe it? That is what your service mesh provides.

Now, let's go create a service mesh. Let's say each of you had a VM-based monolith. It's got a bunch of code, a lot of your modules, but it's also got a bunch of networking code. Let's take the code and chop it up into microservices; that's what you see on the screen. The question now is: where do we put the networking code that used to live in your monolith? What we do is take all of that networking code and encapsulate it into what we call the service proxy. This service proxy is going to do everything networking, which means your application and your business logic don't need any networking code at all. This model is called a sidecar, because, as you can see, the proxy sits right next to your application.

Now, think about what we just did: we created a data plane that has services and proxies. The moment you have a proxy, and a very popular one is the Envoy proxy, which originally came from Lyft and is open source, you have a programmable data plane. But that alone does not make it a service fabric. You need some intelligence in the control plane to take all of these discrete elements and build a service fabric, and that's why you have the service mesh control plane: it's what brings all of this together into a service fabric.
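To make the client-side load balancing idea concrete, here is a minimal Python sketch, with made-up endpoint data, of the decision the client's sidecar makes. This is an illustration of the concept, not how Envoy actually implements it.

```python
import random

# Hypothetical endpoint registry that a control plane would push to each sidecar.
endpoints = [
    {"addr": "10.0.1.5:8080", "healthy": True},
    {"addr": "10.0.1.6:8080", "healthy": False},
    {"addr": "10.0.2.7:8080", "healthy": True},
]

def pick_endpoint(endpoints):
    """Client-side load balancing: the *client's* proxy chooses the destination
    instance from the healthy set, instead of a server-side load balancer."""
    healthy = [e for e in endpoints if e["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy endpoints")
    return random.choice(healthy)["addr"]
```

The point of the sketch is only that selection happens on the client side, using endpoint health the control plane already knows about.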

So now we've created the mesh. We have our services, we have a sidecar next to each of them, and we have the service mesh control plane controlling all of it. But this is all what's called inside the mesh. You also need to guard the entry point to your mesh, because maybe you want to defend it against DDoS attacks, or you want to put a WAF in front, or you want the same traffic management and routing capabilities, like canaries and traffic splitting, at your edge. That is what you see labeled as the ingress. You'll hear terms like ingress, ingress gateway, and edge proxy; those all refer to this piece. So we put that in place.

Now let's take a closer look at the service mesh control plane. A well-defined service mesh control plane should be modular, and the typical modules you'll see are related to traffic and configuration, and to security. The traffic module is generally also a configuration module, which means that if you configure a traffic policy or a security policy, it's this control plane that goes and plumbs it into the proxies sitting in your data plane.

So if you think about it, we've talked about the mechanics of what your service mesh enables. What it really enables you to do is manage the flow of traffic coming into your mesh, within your mesh, and going out of your mesh. It also gives you full control over the security aspects of service A communicating with service B, for all of the services in your mesh. And the third thing, which is very important, is that it gives you full visibility into what's going on in your mesh, and this is at the service level: not at the infrastructure level, not at the container level, not at your hardware or virtual network. At the service level, it gives you deep insights into what's happening and what kind of policies you should be putting in place to ensure your services behave the way you expect.

Actually, there's a very interesting nuance I wanted to point out. If you notice, what we did was separate the application from application networking. The moment you do that, as a developer all you need to worry about is your app, and there's an ops person, or your security admin, who worries about the policy. So you've actually decoupled development from operations. You've also provided a consistent way of doing this, because the service mesh is not tied to any particular type of compute: it works for VMs, it works for containers, it works for bare metal, and it could work for whatever comes along next. A key property of any managed service mesh is that it should support all of these, because without that you cannot really manage a heterogeneous environment.

So, what are the three popular service mesh options we see on Google Cloud? The first one is open-source Istio. How many of you here are familiar with open-source Istio? That's a big chunk of people. Then there are two offerings from Google Cloud which are managed by Google: one of them is Traffic Director, and the other is Anthos Service Mesh. We'll talk about both in the next few slides.

With open-source Istio, take what we described about the service mesh: in your data plane you have your services, and a sidecar proxy like Envoy sits next to them. The control plane for the traffic and configuration part has an open-source component called Pilot, and next to it is a component called Citadel. Citadel does all of the certificate management, so a good way to think of it is as a certificate authority, because when you want mutual TLS, you need certificates and tokens propagated to your data plane so the services can communicate with each other securely.

All of this is great, but what else do we need to bring Istio to enterprises? When we started talking to our customers, here's some of what they said. First: we don't want to manage the service mesh control plane; that's not a core competency. So we wanted to deliver a fully managed control plane. Second: we love the openness of it, because the APIs between the control plane and the data plane are open; they're called the xDS v2 APIs when you use the Envoy proxy. So we said, let's preserve that openness, and let's also offer services and support with an SLA on top. And the last thing is that a lot of our customers want multi-cluster, multi-region services. Today you have to stitch a lot of this together yourself: you have to put in ingress gateways and assemble the whole solution. That's when we decided to look at managed options. The first thing we wanted to tackle is observability: how do you get deep visibility into your mesh? That's when we thought, let's create a service graph that gives you this deep visibility, and that's where Anthos Service Mesh started. It's now in beta.
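At its core, a service graph like the one we're describing is an aggregation of observed service-to-service calls into a topology. Here is a tiny Python sketch of that aggregation, using invented call records; real mesh telemetry carries far more detail.

```python
from collections import defaultdict

# Hypothetical (caller, callee) records that mesh telemetry might emit.
calls = [("web", "cart"), ("web", "cart"), ("cart", "payment"), ("web", "payment")]

def build_service_graph(calls):
    """Aggregate observed calls into a topology: caller -> callee -> call count.
    This edge list is what a service-graph UI draws."""
    graph = defaultdict(lambda: defaultdict(int))
    for src, dst in calls:
        graph[src][dst] += 1
    return {src: dict(dsts) for src, dsts in graph.items()}
```

The resulting adjacency structure is exactly the "topology graph of services" idea: nodes are services, edges are who talks to whom and how much.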

Anthos Service Mesh is basically open-source Istio's Pilot, plus a managed certificate authority, plus a service graph. This is what the service graph looks like: it gives you a neat graph of all of your services, almost like a topology graph, just of services, and it shows you the characteristics of each service. This is where we started building out a managed offering, and the initial focus was on building this topology graph and giving you visibility.

Then we said: okay, what about the rest of the pieces? Where is the managed control plane? Where are all of the advanced capabilities? And there's a series of capabilities at Google that don't exist elsewhere; how do we offer those to service mesh customers? That is where we started with Traffic Director. If you think of Traffic Director, think of all the things we spoke about. First, you want Google Cloud to manage your service mesh control plane, so that's the first thing we did. Then we bundled an enterprise-grade SLA and support on top of that. And the last thing, which a lot of customers love Traffic Director for, is that we wanted them to have global load balancing for internal microservices, and to do multi-region, multi-cluster container and VM services. In fact, Matt, the creator of the Envoy proxy, has been involved in this project from the early days, and he's been a huge source of support, including on the feature set we should build.

The thing that differentiates Traffic Director from other service mesh solutions is this notion of a global service mesh. It's not just open, it's not just managed, it's also global. What does that really mean? First of all, it's a mesh; notice I didn't say VMs or containers. It provides a global service mesh for VMs and containers, whether that's self-managed Kubernetes, self-managed Docker, or GKE, which is managed Kubernetes. And many of you here run internet-facing workloads on our global load balancer: you put your backend instances wherever you want, we give you a single anycast IP, and we do cross-region failovers and overflows and so on. Imagine bringing all of that to internal-facing microservices. To give you a closer look at global load balancing with Traffic Director and what a global service mesh looks like, I'd like to invite Mike up.

Thanks, Prajakta. All right, so to really understand Traffic Director, and before I show you the demo I have set up, I think it's super important, as a techie, to understand the data model, how this is configured, to really drive home how it looks on Google Cloud Platform. The first thing that's configured when you set up Traffic Director is a global forwarding rule. Think of this as identifying your service: it consists of a port, an IP, and a protocol. If you want to do purely L7 host- or path-based rules, you can also have an all-zeroes default forwarding rule that catches everything, so when you redirect traffic to your sidecar proxy you can rely purely on L7 routing rules. Once we have our global forwarding rule defined, we then have a target HTTP proxy. Think of this as a logical config pointer that logically represents all the Envoy sidecar proxies, or whatever sidecar proxies you're using in the mesh; it's basically identifying where all of this configuration will be programmed. This references a URL map, tied to your target HTTP proxy, where you define rule matches and the actions to take on them. The URL map points to a backend service. This is where you configure your health checks and things like affinity settings, circuit breaking, and outlier detection within your service mesh. Backend services then point to either managed instance groups or network endpoint groups. These can be globally deployed: a backend service can reference network endpoint groups in any of our regions, which are really just port/IP pairs, a way to natively target containers on our platform, or managed instance groups for VMs.

Okay, so here's a quick example of the environment I have set up today, called Cloud Shop. It consists of a frontend, or web service, which is running Docker containers on VMs; then we have our cart service.
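The configuration chain just described (forwarding rule, target proxy, URL map, backend service, backends) can be sketched as a lookup walk. This is a toy Python model to show the shape of the data model; all resource names and field names here are illustrative, not real API fields.

```python
# Toy model of the Traffic Director config chain. Names are made up.
config = {
    "forwarding_rule": {"ip": "0.0.0.0", "port": 80, "target_proxy": "td-proxy"},
    "target_proxies": {"td-proxy": {"url_map": "cloudshop-map"}},
    "url_maps": {
        "cloudshop-map": {
            "rules": [{"host": "cart", "backend_service": "cart-svc"}],
            "default_service": "web-svc",
        }
    },
    "backend_services": {
        "cart-svc": {"health_check": "http-hc", "backends": ["neg-us", "neg-asia"]},
        "web-svc": {"health_check": "http-hc", "backends": ["mig-us"]},
    },
}

def resolve(config, host):
    """Walk forwarding rule -> target proxy -> URL map -> backend service -> backends."""
    proxy = config["target_proxies"][config["forwarding_rule"]["target_proxy"]]
    url_map = config["url_maps"][proxy["url_map"]]
    service = url_map["default_service"]
    for rule in url_map["rules"]:
        if rule["host"] == host:
            service = rule["backend_service"]
            break
    return service, config["backend_services"][service]["backends"]
```

So a request for host `cart` resolves through the chain to the `cart-svc` backend service and its globally deployed backends, and anything else falls through to the default service.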

The cart service is running in managed instance groups, and finally we have our payment service, which is running in GKE; its backends consist of network endpoint groups.

The other thing to consider is that within the mesh, the forwarding rules apply: that's how you identify a service and associate specific routing policy with it. But we need to get onto the mesh in the first place, so for this environment we're using our global load balancer, which has backends deployed in US Central and Asia. It essentially routes clients to the closest, healthiest backend with available serving capacity. So me, in California, would route to the US Central environment, provided there was serving capacity. Similarly, someone in Singapore would route, and let me correct myself, to the web service in asia-southeast1, if it was available. From there, the sidecar proxies within the mesh target the closest, healthiest available backends: web to cart, cart to payment, throughout the mesh as a service chain. So again we see affinity based on location. This is global routing out of the box: you don't really have to do anything to have this localized service selection take place. That's because Google knows where all its endpoints live, knows where all its zones live, and can snapshot that into configuration to know exactly how to weight traffic throughout your service mesh, which is really powerful.

In this slide, what we see is that our cart service in us-central1 failed. I'm going to show this live, but what happens with Traffic Director is that the web service in US Central knows that service failed, because we're providing managed health checks, and it can seamlessly point that next hop at available serving capacity in Asia, with no involvement from you, and move back once the service comes back up and is healthy again. So, if we can switch to the demo: we're going to buy some things.

All right, the first thing I want to do is talk a little bit about the environment that's built. I'm going to switch to a different project here. This is a live environment, and I'll talk through its different components. The first thing is the web service: you can see we have managed instance groups of one. I wanted it simple for a demo, but of course these things could be autoscaled. We have one managed instance group in Asia, one in Central. Similarly, we have our cart service, again in Asia and Central, and then we have our payment cluster, which is a GKE cluster of one node. If I jump over to our Kubernetes environment, we can see a payment service running on both the Asia cluster and the US Central cluster. From a load balancing perspective, and I won't go too deep into this, you can see we have a global load balancer deployed. It consists of one backend service, with the backends being in Asia and US Central. And if we jump to Traffic Director, we can see we've identified our cart service, which consists of the VMs globally, and we also have our payment service, which consists of the network endpoint groups that were deployed for the Kubernetes service. There's an annotation on the YAML, which didn't want to show up for me today for some reason, that spins up the controller which keeps track of all the network endpoint groups and our endpoint pods.

So we have our environment configured. What we'll do now is jump to our website and buy some things. This domain is live; it's actually using a Google-managed cert on our global load balancer. The one thing I did, because I can't have all of you stopping the service, is put it behind Identity-Aware Proxy, so you need a Google identity to get through, but I'll open it up later if folks want to play around with it.
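The "closest, healthiest backend with available serving capacity" routing described above can be sketched in a few lines. Everything here is illustrative: the proximity table, the load and capacity fields, and the region names stand in for signals Google's infrastructure actually tracks.

```python
# Hypothetical regional backends; "load"/"capacity" stand in for real
# utilization signals.
backends = {
    "us-central1":     {"healthy": True, "capacity": 100, "load": 20},
    "asia-southeast1": {"healthy": True, "capacity": 100, "load": 10},
}

# Assumed proximity order per client location (closest region first).
PROXIMITY = {
    "california": ["us-central1", "asia-southeast1"],
    "singapore":  ["asia-southeast1", "us-central1"],
}

def route(client_location, backends):
    """Send the client to the closest healthy backend with spare serving
    capacity, failing over to the next-closest region if needed."""
    for region in PROXIMITY[client_location]:
        b = backends[region]
        if b["healthy"] and b["load"] < b["capacity"]:
            return region
    raise RuntimeError("no serving capacity in any region")
```

With both regions healthy, California lands in us-central1 and Singapore in asia-southeast1; mark us-central1 unhealthy and the Californian client transparently fails over to Asia, which is exactly the demo scenario.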

So here, what we can see is that, accessing the web service from Europe, we got routed to the closest, healthiest backend, which was us-central1. Then we can buy some sunglasses and add them to the cart; you can see the cart service is in us-central1, and when we confirm the purchase, we can see it was served by the payment service. But what would happen if we stopped the cart service? So what I'll do is stop the cart service here, and if we go into Traffic Director, eventually we'll see the service become unhealthy, because the health checks for it will fail. And there you go: one of the cart services is down. Let's buy something else. Let's go back into our frontend and make this a little interactive: does anyone have a preference for what we buy? Shout it out. Radio? Okay, I'm going to get the vintage Bluetooth radio. We can see this is served by the web service, but when we add it to the cart, you'll notice, and I'll talk more about what happened in a moment, that the sidecar proxy knew there was no available serving capacity in the closest zone, so it failed over across the world to Asia. And then, because we're now on the Asia cart service, it picks the payment service with the healthiest available serving capacity, which again is local.

You'll notice we don't have any delivery charge here; that's a teaser for a future demo. Let's start the cart service back up and switch back to the slides; I want to talk a little bit about what happened there.

Okay, so the first thing is: how does Traffic Director, how do these Envoy proxies, know where these endpoints live? For managed instance groups, we have what's called the backend manager, which keeps track of whether each VM, each endpoint, is healthy and available, and that talks to Traffic Director. Traffic Director can then go and inform all those Envoy sidecar proxies: hey, you have serving capacity here, it's healthy, and here's how to get to it.
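That push model, central health checking with updates fanned out to every sidecar, can be sketched as a tiny publish/subscribe loop. This is a conceptual toy, not the xDS protocol; class and method names are invented.

```python
class SidecarProxy:
    """A proxy that just remembers the latest healthy endpoint set pushed to it."""
    def __init__(self):
        self.healthy_endpoints = []

    def on_update(self, endpoints):
        self.healthy_endpoints = list(endpoints)

class ControlPlane:
    """Toy control plane: health-checks endpoints centrally and pushes the
    current healthy set to every subscribed sidecar, so services never have
    to health-check each other."""
    def __init__(self):
        self.proxies = []
        self.health = {}

    def subscribe(self, proxy):
        self.proxies.append(proxy)

    def report_health(self, endpoint, healthy):
        self.health[endpoint] = healthy
        update = sorted(e for e, ok in self.health.items() if ok)
        for proxy in self.proxies:
            proxy.on_update(update)
```

When the cart endpoint in Central fails its health check, the control plane recomputes the healthy set and every subscribed proxy learns about it on the next push, with no per-service health-checking traffic inside the mesh.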

For containers, which use network endpoint groups, it's fairly similar: when you have a NEG annotation on your services, a controller gets spun up that talks to Traffic Director and similarly tells the Envoy proxies where things live and whether they're healthy.

So when we failed the cart service in Central, what happened? We're health-checking these endpoints as a service; that's reported to Traffic Director; the cart service failed in Central, and we pushed that out to all the proxies within the mesh, saying: hey, you can't get here.

But from a data plane perspective, what happened? I think this is really interesting, so try to follow as best you can. We're going to take the view of the web service: we came in through the global load balancer and hit the web service at the top, which is running with Envoy installed as a sidecar proxy. The web service resolves the cart service, because web wants to talk to cart. The cart service has a VIP that's globally significant, but it doesn't have to be a VIP from your VPC; it could really be any address. So the web service tries to talk to the cart service, resolves it, and sends traffic to it. On this VM we have netfilter configured, which redirects that traffic, as part of the OUTPUT chain, to the Envoy sidecar proxy. The proxy then intercepts the request, applies the policy that was configured, and sends the traffic on its way. But notice that the sidecar proxy knew the cart service was unavailable in Central, so it sends the traffic directly to the endpoint in Asia. The client, the web service, has no idea where the traffic is actually going or where it's being served.

So with that, I'll hand it back over to Prajakta to talk about advanced traffic control.

Thank you, Mike. So, I'm going to deep dive into some of the advanced traffic control. We probably won't have enough time to go through every feature, but I'll give you the highlights. Before I do that, I'm excited to announce that this is GA today, and I wanted to give a shout-out to our engineering teams in Boston, well, Cambridge, and in Sunnyvale: wherever they are, thank you for shipping this feature.

So what can you do with this? Think of the DevOps work where you want to change the flow of traffic, but you want to do it in a policy-driven way; that's essentially what you get with these routing and traffic policies. Think of the policies as falling into two buckets: one is routing rules, and the other is traffic policies. Routing rules are things like traffic splitting, traffic steering, timeouts and retries, and fault injection: things that let you ship your code very easily. The other bucket is traffic policies: how you want to load balance your traffic, plus things like outlier detection and circuit breaking, which we'll talk about in a bit.

Let's first look at the kinds of things you can do with Traffic Director. This is the data model Mike showed you earlier. Under the URL map you can see a bunch of routing rules: these are the things you want to match your traffic on, and then you can take actions. The actions could be things like rewrite my URL, redirect from HTTP to HTTPS, or split my traffic, and it could be more than one of these: mirror my traffic, inject a fault so I can do testing, and so on. As you can see, everything is plugged into this data model, and you'll notice as we talk about more and more products that the data model is very uniform across them. The most popular feature I get asked for is traffic splitting: you've got traffic, you've got two versions of a service, and you want to say, send 99% to one and 1% to the other. That's a very popular ask.
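The "match a rule, then take an action" shape of the URL map can be illustrated with a first-match-wins evaluator. The rule and action formats below are invented for the sketch; real URL maps express this through their own API fields.

```python
# Illustrative routing rules in the spirit of a URL map: match conditions
# paired with actions. Field names are made up for the sketch.
rules = [
    {"match": {"path_prefix": "/api"},               "action": ("route", "api-svc")},
    {"match": {"header": ("user-agent", "Android")}, "action": ("route", "svc-b1")},
    {"match": {"header": ("user-agent", "iPhone")},  "action": ("route", "svc-b2")},
    {"match": {"path_prefix": "/old"},               "action": ("redirect", "/new")},
]

def apply_rules(rules, path, headers, default=("route", "web-svc")):
    """First-match-wins evaluation: check each rule's match condition in order
    and return its action; fall through to the default service."""
    for rule in rules:
        m = rule["match"]
        if "path_prefix" in m and path.startswith(m["path_prefix"]):
            return rule["action"]
        if "header" in m and headers.get(m["header"][0]) == m["header"][1]:
            return rule["action"]
    return default
```

The same structure covers the traffic-steering example from the slides: an Android user agent lands on one version of the service, an iPhone user agent on another, and unmatched traffic takes the default.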

You can do that easily with Traffic Director; it's purely a policy. Then you can do traffic steering: things like, if the user agent is foo, go here; if the user agent is bar, go to the other service. In the diagram it looks like service A is deciding which one to go to, but the plumbing for this actually comes from Traffic Director into service A, and that's how service A sees that, in this case, if the user agent is Android it goes to service B1, and if it's iPhone it goes to service B2.

Another one is fault injection. If you want to see what would happen to your service if there were a fault, say an abort or a delay, you can simulate those conditions by configuring these parameters and having them play out as you test your code.

Mirroring is my personal favorite. You create a shadow, or mirror, service, and any traffic coming to a service is mirrored to the shadow service. This is extremely helpful if you want to test, say, a service that hasn't yet shipped to production: you're not touching your production service, you're simply mirroring its traffic to the shadow service. The second thing you can do is, imagine there's some sort of issue in production: you can mirror the traffic to another shadow service and debug the problem there. And there are several other use cases for mirroring as well, including audits and so on.

The second half is traffic policies. If you think of how Traffic Director works, and what we spoke about: first there's a global policy, which selects the optimal region for the incoming traffic. So if you're coming from California, you'd land on the instances in the US Central region, say. Once that's done, you specify the load balancing policy, and using that, the proxy figures out how to pick one of the destination instances: things like round robin, least request, ring hash, and so on. You can also layer in affinity, so if you want to stick traffic to a particular instance you can do that as well, based on cookies, based on IPs, and so on.

This is the most interesting one, which most people are not familiar with: circuit breakers and outlier detection. Here, imagine a service saying: I can accept, let's say, X max connections and so many requests, and if you send me more, here is what you should do to yourself. That's basically what circuit breaking is.
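The circuit-breaking idea just described can be sketched in a few lines. The key nuance, covered next, is that although service B declares the limits, it's the proxy next to the calling service that enforces them. This is a conceptual toy, not Envoy's circuit breaker implementation.

```python
class CircuitBreaker:
    """Client-side enforcement sketch: the called service *declares* its limits
    (e.g. max in-flight requests), and the caller's proxy enforces them by
    shedding excess requests locally instead of overloading the callee."""
    def __init__(self, max_in_flight):
        self.max_in_flight = max_in_flight
        self.in_flight = 0

    def try_acquire(self):
        """Return True if a request may be sent; False means shed it locally."""
        if self.in_flight >= self.max_in_flight:
            return False
        self.in_flight += 1
        return True

    def release(self):
        """Call when a request completes, freeing a slot."""
        self.in_flight -= 1
```

Because the check happens before the request ever leaves the caller, an overloaded service in a long chain stops receiving excess traffic instead of collapsing under it.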

The difference from typical implementations is that the enforcement of this policy happens at service A, not service B. Service B specifies how it wants to be talked to, and service A, or to be more specific, the proxy next to service A, enforces that policy. This prevents your services from getting overloaded, especially when you have a chain of services. It's again a very popular feature; most people don't use it enough, but it can be very handy.

With that, I want to invite Mike to give a demo of traffic splitting.

All right, thank you. So we're going to switch back to the demo laptop. We're going to buy some more things here, and what I want to do is introduce a new service to the environment; we'll call it the delivery service. When a user wants to check out, we're going to calculate a delivery cost. It's not real, but it shows up, so it's good for a demo.

Let me reconnect here. What we're going to do is deploy the service, add the backend service, add the backends to that service, and then configure traffic splitting so that half of the traffic going to the payment service will calculate the delivery cost, and the other half will not. I have a script here, and I want to talk through what I'm doing. The first thing is to get credentials for our Kubernetes cluster, which should already work, so I went right to the deployment. What this is doing is deploying two files, in both Asia and US Central, on those Kubernetes clusters; that's going to be our delivery service. We can see it was created in Asia, and we'll create it on the payment cluster in US Central.

Next, I'll run this and then we'll talk about what we did. When we made the deployment, we added the shipping deployment to Kubernetes, and one of the things, if I can get this to show up here, is that when those pods were created, they had a network endpoint group controller annotation on them. So what we're doing here is creating the delivery backend service. When we do traffic splitting through the URL map, it splits across backend services, so that is kind of like the path. What we're going to do is create the delivery service, health-check it, and then add the two network endpoint groups, the one in Central and the one in Asia, to that backend service. And that's what we just did; I ran that shell script. In the final step we're actually going to do the splitting, so let me run this and I'll talk about how it works.

So we're basically going to take the current URL map, which sends everything to the payment service, and say: hey, send half to our new delivery service. I can show you what this looks like: we have a match rule which matches everything, and then we're doing weighted backend services, 50 to the payment service and the other 50 to the delivery service. And again, those are global backend services, so we get all the goodness of global routing. Everything ran, so let's jump back into Traffic Director and make sure everything's healthy. We can see we now have three services; one is partially healthy, and it does take a minute for everything to come up, but we can see our delivery service was populated. If we drill into the delivery service while it's spinning up, you can see we have our two network endpoint groups for the delivery service, and we're health-checking it. From a URL map perspective, let's jump into the payment map. We can take a look, and this might be a little small, but here's our URL map splitting 50/50. We could also annotate on here: hey, also mirror this traffic. There are all sorts of things you can do here, but we'll keep it pretty simple for now. Let's jump back and make sure everything's healthy.

All right, everything's healthy, so now we're going to buy more things, and I'll need help from the audience figuring out what to buy. Again, we're going through the same website; the only difference now is that we have traffic splitting on the payment service. So what are we going to buy? Anything? Shout it out. Teapot? An expensive teapot. So we'll add the teapot to our cart, again served by the cart service in us-central1, and we'll confirm the purchase.
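The 50/50 weighted split behind this demo boils down to a weighted random pick over backend services. Here's a minimal Python sketch of that selection; the service names and the `rng` hook are illustrative, and real weighted backend services are configured declaratively, not in code.

```python
import random

# Hypothetical weighted backend services, as in the 50/50 URL map split.
weighted_backends = [("payment-svc", 50), ("delivery-svc", 50)]

def pick_weighted(weighted_backends, rng=random.random):
    """Pick a backend service in proportion to its weight: draw a point on
    [0, total_weight) and walk the list until the point falls in a bucket."""
    total = sum(weight for _, weight in weighted_backends)
    point = rng() * total
    for name, weight in weighted_backends:
        if point < weight:
            return name
        point -= weight
    return weighted_backends[-1][0]  # guard against floating-point edge cases
```

The same mechanism covers the 99%/1% canary case from earlier in the talk: just change the weights.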
We can see that the delivery cost was actually calculated. If we refresh this a few times, you can see the delivery service go away, the delivery cost go away. Let's refresh a few more times. It should be roughly half, but sometimes you get a little unlucky; it'll take a couple more tries. It's like a flip of a coin. There we go: we have the payment, and the delivery cost calculated, which again was that other deployment in Kubernetes. And we didn't put in any real data here, it's just a demo, so that's what we show.

Very good. So if we can switch back to the slides. Okay, great. We're now going to give you a sneak peek at a bunch of features that exist alongside this. We probably won't have enough time to deep dive, but we'll point you to our website, where there's more information about each feature. One of the things we spoke about was authorization, authentication, and encryption, and that is what we mean when we speak of service-to-service security. Again, if you look at the data model, you'll notice it's the backend service where you do a bunch of settings to enable these policies, but the policies are specified in the control plane and plumbed to the data plane by Traffic Director.

So how do you actually do the cert management, and where are the tokens coming from? If you're playing with open-source Istio, all of this is done through a component called Citadel, and that is what actually provides the certs, or handles the CSRs, to be more specific. Now, when you come to us and say you want a managed control plane, you also want a managed CA, and that is why we recently launched the beta of the managed CA.
For any production workloads, we recommend you use the managed version, and it will take away the toil of lifecycle management.

The other feature is observability, and here there are actually multiple options for you. One is Envoy itself: it allows you to pull out signals, and you can pipe them into any favorite tool of your choice, such as Stackdriver. The second was the service graph we showed you, the topology graph in Anthos Service Mesh. And the last one is essentially what one of our partners did, using an open-source product called Apache SkyWalking. This was put together for a customer by Tetrate, one of our partners, and you can see it's a different form of service graph, where they have information about services and you can drill down into the services. So this is another open flavor of the service graph that you saw before.
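On the "pull signals out of Envoy" option: Envoy's admin endpoint exposes counters as plain `name: value` text at `/stats`. A minimal sketch of scraping request counts out of that format before shipping them to a backend of your choice; the cluster names here are illustrative, and real output also contains histogram lines this simple parser skips:

```python
def parse_envoy_stats(text):
    """Parse Envoy's plain-text /stats output into a dict of integer counters."""
    stats = {}
    for line in text.splitlines():
        name, _, value = line.partition(": ")
        if value.strip().isdigit():  # skip histograms and non-numeric stats
            stats[name] = int(value)
    return stats

# A hand-written sample in the /stats text format (values are made up).
sample = """\
cluster.payment-service.upstream_rq_total: 128
cluster.payment-service.upstream_rq_5xx: 3
cluster.delivery-service.upstream_rq_total: 130"""

stats = parse_envoy_stats(sample)
error_rate = (stats["cluster.payment-service.upstream_rq_5xx"]
              / stats["cluster.payment-service.upstream_rq_total"])
print(f"payment 5xx rate: {error_rate:.1%}")
```

From here, piping the parsed counters into Stackdriver, Prometheus, or any other tool is just an export step; the sidecar already did the hard part of counting.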

I wanted to call Mike up to talk about a flavor that you may not have heard about. Let's have Mike describe it.

Thanks. All right, so we announced the L7 internal load balancer at Google Next in Tokyo, around the summer timeframe, and it really serves two purposes. The first is probably the most common: I want an internal managed service from Google that provides L7 rules and routing rules, the ability to do things at layer 7; the predecessor, our other internal load balancer, was L3/L4. The other is: how do I bring all the features that Envoy and Istio, and Traffic Director, bring to the table when I can't install a sidecar proxy for a brownfield workload? How do I get there? That's one of the key things, which I'll actually show later, that the L7 ILB provides.

So let's imagine our environment again. We have our web service and a shopping cart service, but let's say the team that owns that service can't install the Envoy sidecar proxy for whatever reason, maybe it's a legacy appliance, yet wants the ability to split traffic across the payment services. We can enable that advanced traffic management and advanced traffic control by inserting the L7 internal load balancer into this architecture as a middle proxy.

Under the covers, how does this all work? When you deploy the L7 load balancer, the first thing you need to do is deploy a proxy address pool from within the VPC; a /24 is recommended, but you can go as small a network range as a /26. Google then runs an elastic pool of Envoy proxies in a middle-proxy design. This follows the same data structures as the other load balancers, and it's abstracted from you: as the shopping cart wants to send traffic to the payment service, that elastic pool of Envoy proxies is managed on your behalf as a managed service.
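The sizing trade-off for that proxy address pool is easy to see with Python's `ipaddress` module; the subnet ranges below are hypothetical, chosen only to show the address counts behind the "/24 recommended, /26 minimum" guidance:

```python
import ipaddress

# Hypothetical proxy-pool ranges carved out of the VPC.
recommended = ipaddress.ip_network("10.129.0.0/24")
minimum = ipaddress.ip_network("10.129.1.0/26")

# A /24 leaves far more addresses for the managed Envoy pool to
# scale into than the smallest allowed /26.
print(recommended.num_addresses)  # 256
print(minimum.num_addresses)      # 64
```

Since the proxy pool scales elastically, the wider range simply gives the managed Envoys more headroom before the subnet becomes the bottleneck.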

Traffic will then be seamlessly routed across these proxies. Your routing policy, basically your configured policy, applies at these proxies, and the traffic then gets sent to the payment service. There's all sorts of functionality in this product, but it's a middle-proxy design, and one of the things I want to show, just to reinforce the consistency across products, is the L7 ILB in that exact architecture I just showed. So if we could switch back to the demo.

All right, I'm going to jump to a different project and do a quick walkthrough of what's in it. We have the same thing: the web service, the cart service, and the payment service. Because the L7 internal load balancer is regional, these are only deployed in us-central1. The other difference is that there's no sidecar proxy on the cart service. The cart service basically exists for others to target, but there's an L7 internal load balancer that the cart service is going to use to send traffic to the payment service. So let's jump over to load balancing and I can show you what this looks like. Again, we have our global load balancer that gets traffic to the web tier, but the difference is we have this payment internal load balancer with two regional backends; I already pre-configured the splitting across the delivery and payment backends, and we can take a look at what it looks like. It looks exactly the same as the Traffic Director config, except we're targeting the two backends in us-central1, which are regional backends. So I have another store; this is going to be the ILB demo, and I'll show you it working. We'll buy a clock, add it to our cart, and confirm the purchase. What you'll notice is that we're splitting, and traffic is flowing similarly to the Traffic Director deployment, except this is going through Envoys that are decoupled from the cart service, and Google manages a horizontal pool of these load-balancing proxies. So I think that's it; I'll hand it back to Prajakta to wrap us up.

Thank you, Mike. I wanted to take some time to bring together everything we spoke about, and I know it was a whirlwind tour. If you think of the different capabilities we covered: we spoke about a GCP-managed control plane. We spoke about the API that exists between the control plane and the data plane, and whatever service mesh product you use, make sure that API is open, because if it is not, you're not going to be able to switch your control planes, and you're not even going to be able to switch from the open-source versions to whatever the provided managed versions are. The next thing is how you configure it: right now, Traffic Director supports configuration via GCP APIs, and we also support configuration via CRDs through a product called Config Connector, and we'll tell you when the next set of configuration APIs is coming. And I don't know if you realized this, but we spoke about service discovery without showing you an explicit service discovery mechanism anywhere; it was hidden in a lot of what we described. When you put your endpoints inside a network endpoint group, or a managed instance group alongside a network endpoint group, there is built-in service discovery, so you don't need to go and create a separate service discovery mechanism: when you configure the data model, service discovery is built into it, and you get it automatically.

Then there's the one that is a huge differentiator, which is multi-cluster if you're using containers, and the global routing, where for any given service you can put your instances anywhere you want across the globe; and you saw the cross-region failover and overflow. There is no other solution in the market that does this for internal microservices, so that would be one big reason to give this a try. We spoke about the managed CA, which interoperates with Traffic Director, so you can do mutual TLS between your services. We also spoke about the various ways you can do observability: you can just use Envoy and the programmability and observability it offers, or you can use open tools like Apache SkyWalking.
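Going back to the point about discovery being hidden in the data model: a toy resolver makes it concrete. Once endpoints are registered in a service's group, resolving a service name to addresses falls out for free, with no separate discovery system to run. All names and addresses below are hypothetical:

```python
class ServiceRegistry:
    """Toy control plane: group membership doubles as service discovery."""

    def __init__(self):
        self._groups = {}

    def register(self, service, endpoint):
        # Adding an endpoint to the service's group *is* the discovery
        # step; clients never need a second registration path.
        self._groups.setdefault(service, []).append(endpoint)

    def resolve(self, service):
        return self._groups.get(service, [])

registry = ServiceRegistry()
registry.register("payment-service", "10.0.0.2:80")
registry.register("payment-service", "10.0.0.3:80")
print(registry.resolve("payment-service"))
```

In the real system the control plane pushes this mapping down to every sidecar, so the lookup happens at the client, which is also what makes client-side load balancing possible.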

The other one is basically your control-plane observability. We didn't speak much about it, but you can also visualize and monitor how Traffic Director itself is working. We support VMs and we support containers, containers of all flavors: self-managed Docker, open-source Kubernetes, and GKE, which is our managed Kubernetes. We support all the modern protocols, HTTP, HTTP/2, gRPC, and we will be enhancing these to support more, including things like TCP proxying.

And then the last thing: what do we plan to add that doesn't exist yet? That's why this diagram shows some things in blue. In 2020, Traffic Director will get bundled in; you can think of it as Pilot in the land of Anthos Service Mesh, and it'll be the traffic and configuration control plane. What will come from the Anthos Service Mesh side is, one, config via Istio APIs, the Kubernetes-style APIs that count in Kubernetes land. The other thing: you saw that beautiful topology graph in Anthos Service Mesh; we will have Traffic Director work with that as well. And the last one: we have several customers who want the same capabilities for endpoints that are not just in GCP but sitting on-premises and in other clouds, and we will add support for that too. The reason we created the abstraction of the network endpoint group is that we want to be able to put all kinds of endpoints in it, and that is how we will support things that are not in GCP: you specify an endpoint in there, whether it's an IP:port pair or a domain name, and that becomes an endpoint that doesn't sit inside Google Cloud.

We spoke a lot about service mesh today, but once you do service management in Google Cloud, you have a lot more tools at your disposal. If you just care about load balancing, a way to think about it is that we have several load balancing flavors, and we added two more constructs. One of them is the layer-7 internal load balancer: you don't need to know it's Traffic Director and Envoy proxies, because it looks like any middle-proxy load balancer to you, but that is how we built it under the hood. Traffic Director is basically a new way of doing load balancing for your microservices, bringing in the client-side aspect, because all of the other flavors are mostly server-side load balancing. And obviously, all of our load balancing flavors work with the compute shown here: Compute Engine, which is our managed compute VMs, Kubernetes Engine, storage, and so on.

The other way to look at it: let's say you care more about multi-cloud modern services and you're trying to modernize your applications. This is where you again have a slew of choices, but they're all brought together into a platform we call Anthos. You can think of Anthos as an opinionated stack which lets you do Kubernetes orchestration, service mesh, serverless with Cloud Run, and then things like configuration, telemetry, and so on. If you notice, there is an open-source counterpart for each managed piece we have, so should you choose to do your testing with the open-source variant, you can. Generally, when you deploy production traffic, you'd prefer the managed versions, for two reasons: one, you can use your developers in a much better way than building out or managing these control planes; and two, you get services and support, so if something goes wrong, you have somebody to talk to.

Protocols that are going to be extremely important in the world of microservices include gRPC. You will see a lot of announcements from us on that end, possibly early next year, especially things related to gRPC and service mesh; we believe it's going to play a very critical role in providing some of the capabilities that service meshes need. One interesting thing to think about: we spoke about three service mesh flavors, where you put an edge proxy, a sidecar proxy, or a middle proxy, and these products are going to keep evolving to solve use cases. So if you need a different variant of service mesh, ask us, ask your service mesh provider, because we feel the innovation has just started and there's a lot more that needs to be built.

And then there are a few links in here for you. I would say the best way to get started with service mesh, if you haven't already, is literally to fire up a Google Cloud project. You saw the UIs for traffic control and for Traffic Director; just go to the Pantheon UI, configure a simple service, and maybe try to replicate some of the demos Mike did. He's going to open-source many of these, like that shop you saw; you can change the products if you didn't like them, but he's going to open-source that as well. We'd love for you to try it and let us know what you'd like us to add, in Traffic Director and for service mesh in general. We hope you're able to take all of these tools and go modernize your services. Thank you for staying with us until the end of the session.


2019-12-10
