Practical Guide to Modernizing Applications for Containers (Cloud Next '19)
Hey, folks. My name's James Duke; I'm in our Strategic Cloud Engineering group in our professional services organization. And I'm Cheyenne; I'm also a strategic cloud engineer in Google Cloud Professional Services. We're going to talk to you today about a practical guide to modernizing applications for containers.

If folks have questions, there is a Dory, so please feel free to use it; if you open the Next app, you should have a link to it. Fire questions in there and we'll come back at the end and address them. We'll also take live questions; we have a microphone.

As we said, it's the end of the day, so we'll keep this a fairly casual experience. We're just going to have a chat, and you're watching our chat as we talk through this process a little bit.

So, what are our goals? Our goals are really three things. First, we want to understand what modern applications look like: what are some of the terms or buzzwords you hear associated with modern applications, like containers, microservices, cloud native, and so on, and why they're important. Second, what do they mean for your business? Why would you want to build applications that are cloud native, or that run as containers, and so forth? And then the real meat of our session is the how: how can you get your application migrated over so that it uses some of the design patterns we'll discuss. Those are the three main goals of this session. In the first two we'll level-set a little bit: we'll talk about the definitions of some of these things, their pros and cons, and then we'll talk about what a journey to a modern application looks like today.

So let's start with the what. What is a container?
Yes. You hear a lot of buzzwords associated with containers: they're lightweight, portable, fast, and so on. So let's look at what this really means. Containers in themselves are really about two technologies. First, it's a way of packaging your application and all of its dependencies into a single image. The second capability is the ability to run that image as an isolated set of processes. Those are really the two main things containers provide: they let you package everything your application needs into one single image, and then they provide a mechanism to run that image in an isolated manner. The key term here is process: containers run as processes. Just like when you start something on your computer, it runs as a process in the background, and containers run the same way.
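To make that packaging half concrete: the single image is typically described in a Dockerfile. Here's a minimal sketch; the base image, file names, and entry point are placeholders for illustration, not anything named in the talk.

```dockerfile
# Base image supplies the kernel-facing libraries we explicitly choose
FROM python:3.7-slim

WORKDIR /app

# Bake the application's pinned dependencies into the image...
COPY requirements.txt .
RUN pip install -r requirements.txt

# ...along with the application code itself
COPY . .

# The container runs as a single foreground process
CMD ["python", "server.py"]
```

Build it once with `docker build -t my-app .` and the resulting image carries everything the app needs, wherever it runs.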
Containers run the same way, and a container is a group of processes that work together to provide some functionality you need. So, who here has built and deployed a container? Show of hands. Okay, not bad; about half, I would say. Who's used some kind of container orchestration system, Kubernetes or the like? Okay, about a third. That's good; that helps us level-set for where we go through this. For those of you who've deployed one, you'll be very familiar with this: it gives you this nice packaging, where you get the application and you get the dependencies. For those who are less familiar, this may be a new concept, but the idea is that I now only depend on an environment to execute the container. I don't have to care about bringing in libraries, I don't have to care about bringing in dependencies, because I've packaged them all up into my container.

So let's look at how containers and some other ways of running your application compare. You might be familiar with this kind of diagram, which shows what running applications on a shared host looks like. Basically, the host provides the kernel, and it also provides the libraries that your applications need; all of the applications running on that host share that kernel and those libraries. The challenge with this kind of setup is that, since the libraries are shared, you have to be really careful about which version of which library is installed on that VM. For that reason, a lot of the time people actually run one application per host: they run one application, bring all the dependencies it needs, configure the host as the right kind of environment for that application, and run it there. In many ways that's very convenient, because we now have a kind of logical grouping of our applications.
The problem is that there are only minimal boundaries between those applications. You can do some stuff with sandboxing, but generally speaking, in this model those applications are all in a shared space, so I have minimal security boundaries between them. That doesn't help me when those applications should be isolated from one another.

If we go up the stack a little bit, the next thing we might think about is VMs. Why don't we take those applications and their dependencies, and fit multiple single-host application servers onto one physical machine? People talk about bin packing: we're now more effectively using that machine's resources, because we have more things running than we were able to before. The trade-off, of course, is that the libraries are not common, and not just the libraries, the kernel too. On a given physical machine I'm now running multiple kernels, and there's an overhead for having done that. Each kernel will use its own set of resources, CPU and memory, and its boot-up time is a factor: every time I boot a kernel there is a time penalty in doing so. Modern kernels, especially in the Linux space, can boot very quickly, but there's still a penalty because you still have to do it; it can take a matter of minutes for a VM to boot up. It does provide a lot of isolation, as we were saying, but the cost is that it's not very flexible: if you need to resize a VM, if you need to give it more resources, you essentially have to rebuild that VM.

So that's where people went to container technology. Google was one of the pioneers in that technology; all of our workloads internally run predominantly as containers. Containers provide isolation within one host. Instead of breaking up that host into individual VMs, we use some of the mechanisms available in the Linux kernel to isolate one set of processes from another set of processes; each of those sets is a container. The great thing is that they share the same kernel, so once the host has booted up and the kernel has loaded, there's no more delay in launching these containers. Starting one of these containers is like running a process when you type in a command on your laptop: the application comes up quickly, and it doesn't have to wait for your machine to boot up. So those are some of the advantages of using containers. It totally removes the kernel boot time I was talking about for the VMs: it uses the same kernel, the kernel is already running, you just spin up another process. And the reason we're doubling down on that is what it means for cloud-native computing.
One of the most important parts of cloud native is scalability. To me, when someone says they're going to build a cloud-native application, that means it's going to be very scalable. VMs can scale well, but imagine a scenario: say, an unnamed company who may or may not have released a new product this morning. Imagine that your press release generated a bunch of traffic to your application. You need to be able to respond very quickly. If you have to wait for VMs, for kernels, to boot up, even if it's only a minute or two, that may result in a poor experience for your users, and that's especially concerning if it's the first time they're experiencing your system. There's nothing worse than sitting on a loading screen the first time you ever use a product; it turns you away from using it. With containers that's not a problem. Within reason (it depends on how many containers you can fit on a given kernel), it allows you to scale much, much faster, because you don't have that boot-time penalty.

One drawback with containers is that, even though there's less dependency on the host, there's still a dependency on the kernel. Since the kernel is shared, you have to target a particular kernel. The good thing is that these days the Linux kernel is pretty standard, so a lot of applications target the Linux kernel and can run as containers.

So with that in mind, let's revisit some of those buzzwords and see if you have a better appreciation of what they mean. As we described, containers are lightweight, and it's true, because they don't have the overhead of creating a full virtual machine. When you create a VM, you have to provide the host OS image, libraries, and everything you need to run your application. In a container you only put in what you need: you're able to specify just the bare minimum set of things the application needs to do its job, so you can end up with containers that are very small from an actual disk-space perspective.

And they are portable, because you're bringing all your dependencies with you. Any libraries you need, you bring with you; all you need on the host is really the kernel and some runtime that can run your container, and container runtimes are pretty standard these days and easy to install on any host. That's what makes your containers portable. And it's not just portable from one VM to another; it's also portable across environments. You can run the same container on-prem and on any cloud provider as well. Did you catch the keynote this morning? So you saw our leaders talking about Anthos and what Anthos brings to the table. Anthos fundamentally builds on the construct of a container.
In a world where you have that hybrid, multi-cloud environment, once your application is containerized, moving it between those clouds or between those environments is very straightforward. If you have a platform where you can orchestrate all of your security, your networking, your monitoring, your service mesh, and your configuration management across those different environments, the question is not "can I move it?"; the question is "where would be the optimal place for that particular workload or application?" So when we say portable, that's really what we're talking about. It's not moving it from one VM to another; it's much broader than that. It's: how do I put this application as close to my user as possible? Or maybe it's something as simple as one particular platform being more cost-effective for that type of workload, whatever it might be. It gives you the power to choose to run your application wherever you want. It's that old phrase: build once, run anywhere.

And then, they start fast. As we said, since there's no overhead of starting a VM, your containers start the way processes would on an ordinary host. I shouldn't even say "boot": they start in a matter of seconds, and as soon as your process is ready to take traffic, that's all it takes.

So let's quickly do a demo and see how containers look, from the host's point of view as well as from inside the container itself. Here I have a Google Cloud VM, and I'm going to run a container image that I've created and pushed to one of the container registries we have. I'll use the docker run command to run this container, and once I run that, it has started the container over here. There was no boot; and since I had downloaded the image earlier on this VM, it didn't even take time to pull, so it started right away. If I look at what images are running, this is the image that's running; it's been up 18 seconds, since I just started it. Now let's look at what the process tree looks like from the VM. As you can see, this is the host; I'm looking at all the processes that are running on it. Part of that tree, you'll see, is containerd, which is our container runtime, and the nginx process that I just started as a container. So this is just one set of processes among the other processes running on that host. There's nothing special about these processes from the host's point of view, other than that they're grouped together and have certain boundaries associated with them.

Next, I can look inside that container: I can attach myself to it and see what the process tree looks like from within. I can start another command inside that container by using the exec command. You see the command prompt has changed here, and now I am in the context of that container. This may look very similar to a VM; when you log in to a VM you have a certain command prompt, and so on. If I look at the process tree within the container, you'll see it's a very limited set of processes. It doesn't even see the containerd process that's its parent, or the containerd shim; it only sees the nginx processes that I started, and since I'm running the ps command from the shell I just started, you see those as well. From the container's point of view, it has the whole host to itself: it doesn't see any other processes and it cannot interact with any other processes. That's where the isolation comes in. But from the point of view of the host, you can see pretty much everything that's running. That's the difference. There's no magic here; it's just processes that have been put in this sandbox we call a container. In actual fact, the Linux kernel has no concept of a "container"; it's a purely user-space construct, just processes sandboxed from one another. The reason we're going into this detail is, as you can see: do you have root on this machine? Are you root here? "I might be, I'm not sure." So I can look inside all of that process tree. That is to say: if you have sufficient permissions on the host, then any containers running there are potentially visible to you. So when we talk about security, about how to secure a cluster and how to run secure clusters, it's not just about the containers or the namespaces; it's also about operating-system-level access.

All right, we can switch back to the slides. Let's look at some of the other terminology we talked about: what are microservices? The things you see associated with microservices are that they're loosely coupled, they're fine-grained, and they provide some minimal, useful set of functionality.
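(A quick aside before we dig into microservices: for anyone who wants to reproduce that demo, the commands were roughly the following. This is a sketch that assumes Docker is installed and uses the public `nginx` image, since the exact image used on stage wasn't named.)

```shell
# Start nginx detached; no kernel boot is involved, so it's up almost instantly
docker run -d --name web nginx

# Host view: docker lists the running container...
docker ps

# ...and the host's ordinary process tree shows nginx parented under
# containerd-shim -- just processes, grouped and sandboxed
ps -ef --forest

# Attach a shell inside the container's namespaces; running `ps -ef` in there
# shows only nginx and the shell itself, none of the host's other processes
docker exec -it web /bin/sh
```

Anyway, back to microservices.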
They make you more agile, they should be testable on their own, and so on. So: who's built or deployed microservices?

Okay, about a quarter, I would say. For those of you who've done it: what was the course of thinking, what was the reasoning? Someone shout out: why go microservices? Yes: extract business logic from your monolith. Excellent. We've got one more: break up the monolith to improve scalability. Also excellent. All of these things are absolutely the value of microservices, so let's dig into those two and use them to explain the buzzword.

First: separate the business logic from the monolith. In a moment we'll talk about why to use these things, but the short version is: these are tools, and tools should be used to solve business problems. Technology just by itself, while very cool (and obviously a passion of mine), doesn't solve real-world problems by being cool technology; it solves real-world problems by being applied in the right way. To the point of extracting the business logic: having everything tied up in one thing makes it very difficult for you to focus on what's important for your business, or for your customers' businesses, because there's so much other stuff wrapped around it in this giant, sprawling, in some cases decades-old piece of software. It becomes very difficult to say, "okay, it's now 2019, I need to do some things because the market has changed, my competitors have changed, I have a new idea," when any change comes with all the other cruft.

The same is true of the scaling piece. If I have a monolith and I suddenly hit the success-disaster scenario I described earlier, where I get a hundred or a thousand times the traffic I usually do, then to scale I have to either absorb it in the existing model or create more instances of the monolith and manage traffic between those instances. That can be a very painful thing to do, because often those monoliths are not designed to spin up quickly; they have a bunch of dependencies, and in some cases they run on specific hardware. There are all sorts of reasons why that can become very difficult. But if you break each part of that monolith out into its own microservice, I can now scale each one of those services independently, and I can modify and update each one independently to change the business logic or add new features.

In summary, it makes us more agile, because it allows us to respond to changing market conditions, and indeed to changing technology conditions as well. And testing is the same: once you have a smaller piece of functionality to test, your releases are quicker, because you don't have to worry about testing all the features built into this big monolith. So I hope with these you have a better understanding of why the industry has been moving toward microservices.

Next, let's look at cloud native. Cloud native means built for the cloud, or born in the cloud, if you will. Typically you hear that if you want to build for the cloud, you have to use containers and a microservices architecture, and what it will give you is quicker releases and scalability. Some of these reasons we saw earlier: when we looked at containers, you saw that containers are easy to deploy from one place to another. You don't have to prepare the infrastructure as much to run your container, because you're bringing all the bits you need with you. You're building the bits, and the infrastructure is just providing the compute and other resources you need. But it is important to note that the three things are different.
A containerized application is not necessarily the same thing as a microservices architecture; they often go hand in hand, but it's not necessarily true. You could do both of those things and still not be cloud native, or you could be cloud native and do neither of them. The three are not necessarily the same. We do typically see them come together, though: from what we've seen with our customers, and in the broader IT and computing industry, a cloud-native application typically displays a microservices architecture and is usually containerized, especially when we look at something like Anthos, which naturally leads toward a containerized, microservices model. All three together is where we see the future of our field going.

So that's the what. Now, the why: why should we use containers? I see containers and microservices and all of these as enabling technologies. They do not necessarily bring benefit in themselves; they enable you to reap the benefits of the modern infrastructure that's available in the cloud. That's how you should think about it: this is not the end in itself, it's a means to an end. You get all of the benefits we talked about from containers, the portability and the fast scaling up and down, and it opens up a lot of options for you: you can run the same container in a VM, in a platform like Kubernetes, or even on serverless platforms like Cloud Run.

Similarly, microservices and containers together provide some of the benefits we talked about. You can scale up your application more quickly, but again, your application still has to be built in such a way that it can be scaled up easily.
We'll see that when we look at how to think about it as you're designing your application, or as you're converting your monolith into a microservices or modern architecture. So, what are the options for where we can run these containers? And quicker release cycles come basically from the fact that it's not one big application you're releasing; your microservices are smaller, so it's easier to manage the releases. If you think about it, when you start a project it's easy to iterate on the code; as the codebase grows bigger, it becomes challenging. Keeping to microservices forces you to keep each codebase smaller, as you separate your business logic into individual repos with individual release cycles.

So what that gives you is that once you've built the image the right way, you can run it on many different types of infrastructure. Here are some examples: once you've built your image, just hello-world in this example, with one command you can pick the runtime you want and deploy the same image on any of these products that Google Cloud Platform offers. Very simple: four different commands, and you have the same container running on four different infrastructures, or four different platforms, I should say. When we talk about portability, this is what it looks like in practice. Now, in reality this is a little bit of a trite example, because most people aren't pushing a single container with a single command; more than likely it would be on some kind of orchestration platform. But the same thing holds true across the different orchestration platforms too.
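The four commands on the slide look roughly like this. This is a sketch with placeholder names (`my-project`, `hello-app`); the flags are abbreviated and may differ by product version.

```shell
IMAGE=gcr.io/my-project/hello-app   # one image, built once

# Compute Engine: run the image on a container-optimized VM (most control)
gcloud compute instances create-with-container hello-vm --container-image=$IMAGE

# Kubernetes Engine: hand the image to an existing cluster
kubectl create deployment hello-app --image=$IMAGE

# App Engine flexible environment: deploy it as a managed app
gcloud app deploy --image-url=$IMAGE

# Cloud Run: serverless -- just the image, no infrastructure to manage
gcloud run deploy hello-app --image=$IMAGE
```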
Know you can't just at random pick you know one day I'm gonna run my app in App Engine or cloud run we good yeah you could I wouldn't recommend that, you. Can once you have the container, built, but. Here's, you know some, guidance on how you can, pick, the right runtime. And the, way to look at it is it's sort of a continuum, on, one end you, have pure. Infrastructure. That gives you a lot of flexibility, of how you want to manage that infrastructure, for. Example VMs, you can pick, the image that you want to deploy on your VM you can you can configure it the way you want to you can you know put libraries, that you want to on it and then, you can run your container, on it and. That's one option and, what. You're getting is a lot of control what, you're giving up is. You. Or, what you're I guess signing up for I should say is now, you have to maintain that man since you created a custom, VM you, have to make sure that any updates, that are needed are applied to that and as, you move in the stack to the to the right you.
Start. Giving up that control you're letting the, the platform or your cloud host, provider, take. On some of those responsibilities. Of operational. You. Know aspects, of your, infrastructure, and. You're. Focusing, more and more on just, your workload but, your workload has to conform, to the to, sort of the api's, or standards, that that platform, expects, it to have so, that's that's, the trade-off involved. In. Moving. From left. To the right of this and eventually, the, sort of the new sort of Nirvana is serverless where, you're not worried about your, infrastructure. At all and you're letting your cloud. Provider. Provide. The infrastructure for you based, on your workload, and all your all, your providing, is again your image which is exactly, the, bits that you need the software that you need and nothing extra and you're, letting your, hosting, provider provide. The underlying infrastructure, for you absolutely, oK. We've teased it enough we've, done all the background on the level saying let's, jump into how we might actually do. This right because. It. Seems, like a daunting task, you. Face with a giant monolith like, I said it could be decades old it's a daunting daunting, task, to take on and say okay I want, to actually change this where, do I start, so, we've kind of well. We didn't come up with this approach it's been around for a while but. We found we, wanted, to share with you some things today that we found to be effective when we've done this with our own customers, and as. A suggestion, for ways that you folks could do this thing so. First step you. Need a list you. Need to know what, do you have what, does it look like where, is it running what does it do what. Technologies, is it does it involve one or it's dependencies, and more. To the point what, group, is. It, in so. Classifying. It on the basis of a number of things so we we came up with these three groups right yeah so. 
We start off with stateless applications. As you might imagine, stateless applications tend to be the easiest group to deal with, so we tend to deal with them first: things like web front ends, existing services (maybe they're micro, maybe not), and existing APIs that just take data in, transform it, and send data out. Anything that has no state; anything that takes a request and gives a response but doesn't rely on underlying state. We group all of those together, and they tend to be our first movers; web front ends in particular seem to be a very popular first choice. Part of the reason is that you want this first mover to be non-trivial. You don't necessarily want it to be the number-one revenue generator for the business, because the first time you do this there's a bit of a learning curve; but you do want it to be something meaningful, with some real-world use, because if you start with a hello-world application that nobody uses, it doesn't really prove anything out. Often web front ends, while they're in the critical path for a lot of user journeys, tend not to be (hopefully) too complicated, and being stateless they lend themselves very well to this first group.

The second group is temporary state. What do we mean by temporary state? It's an application that starts building up state when it starts up, but doesn't persist that state between startups. A typical example is a caching application: Redis or Memcached or anything like that. When it starts up it doesn't have any state, but as items are put in the cache it builds up; and if you had to restart it, it's not a catastrophic event. You can restart it, but you'll suffer some loss in response time, or there might be some errors as it restarts. Similarly, applications that take on different roles once they start up: for example, you could have a master and a slave, where on startup they're all the same but then somehow coordinate to pick one master. Redis has that, and there are other examples as well. We call it temporary state because, again, that state is not persistent across restarts; it's something the application acquires as it's running. These can be great second movers, because once you have the knowledge of what it takes to run applications as containers in a more dynamic environment, you're ready to take on the challenge of building that state dynamically: how to elect a master when the existing master dies without someone manually intervening, or how to rebuild the cache from other replicas and redistribute it as one of the replicas dies. That's why this is the second group to look at.

Then, thirdly and finally: stateful applications, meaning databases, file servers, and raw storage. It's not impossible, in fact it's very doable, to move stateful applications into a containerized or microservices architecture, but it's more complicated than the prior groups. Many databases, especially more legacy ones, are not necessarily set up to just add more nodes to get more capacity. They have a concept of master and slave, but in a very different way; they're often not dynamically deciding, voting, and promoting new masters. So these should probably be the last thing you consider, largely because they benefit the least from migrating to this new paradigm, and they carry the most complexity to actually change. How many of you have tried scaling a MySQL database, for example? You can't just add a replica and expect it to work out of the box; it requires some manual design and decision-making. You can't just say "I need another MySQL instance," add it, and go from there. That's why these are more difficult to just containerize, and the benefit isn't as big, because you won't be scaling them up and down dynamically to begin with.

So: first step, list and group. Second step: containerize parts of the application.
In most cases you probably don't want to go... well, it's possible to go with a big-bang approach, to say, "hey, we're going to take the whole company out of our regular release schedule for three or six weeks, take the whole application, wrap it in containers, and release it." Lovely in theory; very, very difficult to do in practice. In practice, for most real-world scenarios, you'll want to do this one step at a time. So once you have your list, your groupings, and your priorities, it becomes about finding parts of that application to move to containers first.

Some examples. The most basic example of all of this is: take the monolith and put it into a container. It's the fastest way, and technically you now have a containerized workload. Check the box: yes, I can do containers. It's quick to deploy, and it can actually be automated, which we'll come to in a second. This allows you to prove out the concept and get comfortable with the technology, and, depending on the nature of the monolith, it may allow some scaling that wasn't possible before. You don't really get the full benefit. Having said that, it can be a very worthwhile exercise, both from a learning perspective and from an "I need to move right now" perspective. The reality is that many times, in the projects we've seen, there is some kind of compelling event: Black Friday, Cyber Monday, a holiday, a sports event, or even something as mundane as a data-center lease expiry ("I have to be out of my data center by April 29th"), whatever it might be. This is part of the lift-and-shift kind of migration.
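Done by hand, that lift-and-shift step is essentially build, push, run. Here's a sketch with placeholder project and image names:

```shell
# Wrap the monolith in an image (assumes a Dockerfile at the repo root)...
docker build -t gcr.io/my-project/monolith:v1 .

# ...push it to a container registry...
docker push gcr.io/my-project/monolith:v1

# ...and run it, unchanged, as a containerized workload on a cluster
kubectl create deployment monolith --image=gcr.io/my-project/monolith:v1
kubectl expose deployment monolith --type=LoadBalancer --port=80
```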
Of kind, of migration, where you're lifting the application, just container izing it and shifting, it to your, -, as a container to the cloud and this, can be done in. Many times there are there you. Know ways to do this automatically, in fact if you saw. The you. Know the keynote, this morning you, saw an example of an app being migrated, from. From. A VM into, kubernetes, engine, running, as containers so there's, anthos migrate, which helps you do this automatically, so you can do it manually as well but there are instances where, you can just use the, automated, service like anthos to, to, do it for you as well so. Again this can be a good start, to your journey. So. The next the next approach would be okay so let's, let's, think about how we would, migrate.
some parts of a three-tier application. In many cases, the way you would do this is to put it behind a load balancer. You may already have one; here, put it behind the Google Cloud load balancer. Then we can look at the app front end and the app back end. Remember, we said these are the pieces that don't necessarily store state: the back end might be one of our semi-stateful applications, and the front end would typically have no state. Then what we can do is wrap those up in containers, and now we can scale those tiers individually. So if we hit that "success disaster" scenario, I can rapidly scale up my front end to handle those requests without necessarily having to do the same scaling to my back end. Now, of course, you may end up with contention at the database in that scenario, but at the very least it allows you to respond with a "we're working on it, we'll get back to you" instead of just returning a 404 or a 503. One thing to clarify: that red box does not show one container boundary. The white boxes are really the containers, so you want to run the front end in a separate container and the back end separately. This is a nice transition from a monolith as well; it could be the first step in breaking up a monolith. Typically the web front-end part of a monolith is easy to break out, so you can start with that: separate it out, modernize it using some new technology, and leave the rest of the monolith running as the back end for that front end. So this could be a good starting point on your journey. We still don't get all the benefits, though. So what can we go to next to get a little more of the benefits we were talking about? Next, we can start to think about a transitional
microservices architecture. We're still mostly in monolith mode here; we're just dipping our toes in the water. We say, okay, one small part of that monolith we're going to break out and build into a separate service. In some cases this might actually be completely new functionality. We've seen both work: we've had customers that built a new feature as a new microservice and left everything else in the monolith, and we've also seen folks who said, "actually, this one particular part of my monolith, I can carve that out and recreate that same functionality, that same feature, in a separate service." Mind you, that likely means you have dead code: there's probably code left in that legacy back end that no longer operates, and that introduces some problems. You may have code duplication, so maybe some technical debt to remove later. It may be possible to remove it there and then; that's really very dependent on the particular application you're working with. But the basic principle goes: take a piece out, move it over there, and now that piece can scale very easily without really affecting the existing monolith. There's very little appetite for changing the monolith in a lot of the customers we work with, especially in more regulated industries or more traditional enterprise spaces, where they don't really want to touch it because it has been running that way for ten years and they're very nervous about changes breaking it. When we do it like this, we don't have to touch that monolith; we can leave it well alone, so there's very little risk to the business. This is a great way of reducing risk as you're migrating, because taking small steps, making small changes, reduces your risk. You don't have to worry about making a big change and breaking something you didn't even know about when you start peeling things apart. The other thing is, when you take small steps, it's easier to estimate what you need to do, and you can apply your learnings to the next steps you take. The first time you're building a microservice you might be learning a lot, and you can apply those learnings as you build more and more microservices. If you start with four or five at once, you'll be repeating the same mistakes across four or five work streams, mistakes you could avoid by making one change at a time.
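The carve-one-piece-out idea can be sketched in a few lines. Everything here is hypothetical: the monolith, the feature names, and the new service, which is shown in-process purely for illustration (in reality it would be a separately deployed container):

```python
# Sketch: carve one feature (recommendations) out of a hypothetical
# monolith while deliberately leaving the rest of the monolith untouched.

class Monolith:
    """The legacy application; we do not modify it."""
    def checkout(self, cart):
        return f"order placed for {len(cart)} items"
    def recommend(self, user):
        return ["legacy-default"]          # old, hard-coded logic

class RecommendationService:
    """The newly extracted service, independently deployable and scalable."""
    def recommend(self, user):
        return [f"picked-for-{user}"]      # new logic, own release cycle

class Facade:
    """Front door: one feature goes to the new service, the rest to the monolith."""
    def __init__(self, monolith, rec_service):
        self._mono = monolith
        self._recs = rec_service
    def checkout(self, cart):
        return self._mono.checkout(cart)   # unchanged path
    def recommend(self, user):
        return self._recs.recommend(user)  # newly extracted path

app = Facade(Monolith(), RecommendationService())
```

The monolith's old `recommend` method becomes the dead code mentioned above: it still exists, but nothing routes to it any more.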
So the next step is: let's add an API gateway. Now the API gateway becomes the arbiter of where all traffic goes. Any time a request comes in, whether directly from the load balancer in some cases or from the application front end, the API gateway decides where it goes. This is a very minimal change as well: nothing is required of the front end except pointing it at a different IP address or service name. Now the API gateway gets to decide: which back end do I need to send this request to? Is this something I need to send over to the legacy back end, the monolith, or is this something I can send to one of these fancy new microservices? This is particularly powerful because, as you add service B, C, D, E, and so on, the only piece that has to change to support those new services is the API gateway. It becomes a single point of configuration that routes your traffic to the monolith or to a microservice. So it gives you that flexibility of where to send traffic, and it hides the detail of which microservices you actually have running behind the scenes from your clients, the consumers of those services. They don't have to know that you've now migrated service B or C over and you're no longer calling the monolith. It can also provide some added benefits. For example, if you want to add TLS (or SSL, as it's also called) to secure the communication, you can typically add that very easily; API gateways support TLS. The gateway can also provide rate limiting, and if it detects failures it can provide circuit breaking and other features.
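The routing decision described above can be sketched as follows. This is a minimal illustration, not any particular gateway product; the path prefixes and backend names are made up:

```python
# Sketch of the routing decision an API gateway makes: paths that have
# been migrated go to a microservice; everything else falls through to
# the legacy monolith.

ROUTES = {
    "/api/recommendations": "recommendation-service",
    "/api/search":          "search-service",
}
MONOLITH = "legacy-monolith"

def route(path):
    """Return the backend that should handle this request path."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend
    return MONOLITH          # default: the monolith still owns the path

# Migrating another service later is a one-line config change:
# ROUTES["/api/billing"] = "billing-service"
```

Note that the clients never see this table; to them the API surface is unchanged, which is exactly the hiding-the-details property described above.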
Depending on the API gateway of your choice, it can add a lot of features to all your services at very little cost; all of these features are built into the gateway, so you get all those benefits for free. And finally, there are true microservices, the end goal we've been showing all along. You'll see that the API gateway is still a key component. Eventually we could get to a place where we don't need it, where we have something like a service mesh to control all the communications, but we still consider this a true microservices architecture even with the API gateway involved. At this point we've basically taken the previous step and started spinning up those extra services, Service B through Service N, and now we've got all that good stuff, everything we talked about in "why microservices": what containerization and cloud-native design get our business. We can scale every service individually, and each one has its own life cycle. We've seen customers go from releasing once a quarter to releasing sixty or a hundred times a day, because now they can release small things very, very quickly, and that pace of innovation can improve so much just by moving to this type of model. The downside, of course: it's harder to manage. Or different to manage. If you're a company that's used to dealing with a monolith, it requires a change of thinking, because now I've got multiple services with multiple teams responsible for maintaining them. I probably need some kind of centralized CI/CD system, some kind of configuration management platform; these are all overheads that go into orchestrating all these microservices. But when you get to this beautiful state, because
you have so much agility and ability to innovate in the application development teams, the company or the business is able to spend so much more of its resources on the thing that actually makes money. Because running the perfect Kubernetes cluster doesn't
make money. Well, maybe for Google, but not for most customers. Whereas being able to innovate, be agile, and respond to the market: that's where the true business value is. Now, does anybody recognize that pattern? Know what it's called? I wish I'd made this up, but I didn't. This is the strangler pattern. The name sounds a bit macabre, doesn't it? It actually comes from a tree analogy, and the principle goes like this: you have a big, old, well-established tree. We're in California, so let's call it a redwood. That big redwood tree represents the monolith, and then you have a bunch of vines growing up around that tree. As those vines become more and more mature, stronger and stronger, there comes a point at which they are providing more structural integrity than the tree around which they are growing. If that tree dies and rots away, the structure remains. That's basically what we're trying to do here: we are trying to build microservices around the monolith such that we've made the monolith obsolete, and we can shut it down, remove it, throw it away, and never shed a tear for it ever again. That's the basic premise of the strangler pattern, and it gives you flexibility: a gradual, partial path forward, so you don't have to do everything in one go. And as we mentioned, your organizational processes will need to change as well as you move from one type of architecture to the other. CI/CD is one thing we mentioned; just increasing developer velocity will have its own challenges when you have multiple releases going out. Imagine doing tens of releases instead of one release a quarter, or whatever you're used to. This lets you gradually change your posture from a traditional organization to a more agile, microservices-based organization, with less risk.
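One way to picture the strangler pattern in code is a routing table that grows as each capability is cut over, until nothing is left routing to the monolith. This is an illustrative sketch only; the capability names are invented:

```python
# Strangler pattern as a growing cutover set: each capability is moved
# from the monolith to a new service one at a time, and the monolith can
# be retired once the set covers everything.

ALL_CAPABILITIES = {"search", "checkout", "billing", "recommendations"}
migrated = set()            # grows as each new "vine" matures

def handler(capability):
    """Where a request for this capability is served today."""
    return "microservice" if capability in migrated else "monolith"

def monolith_retirable():
    """True once nothing routes to the monolith any more."""
    return migrated == ALL_CAPABILITIES

migrated.add("search")              # cutover step 1
migrated.add("recommendations")     # cutover step 2
```

The tree dies when `monolith_retirable()` finally returns true; until then, both the old and new systems run side by side, which is what keeps each individual step low-risk.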
All right. Finally: profit. At the end of the day, that's what business is all about, right? Three easy steps: categorize, migrate small parts and build up to the final thing, and profit. All right, some useful resources. We are very blessed here to have a wonderful set of labs. Please go find the hands-on labs session; you've got all sorts of fancy machinery there to try out these particular labs. We highly recommend the GKE best-practices Qwiklabs quest. Our team put a lot of work into building those out, and they guide you through some of the best practices we've talked about here. There are labs that start from just building your first container and deploying it, all the way to hardening those containers, adding security to your cluster, and even deploying a service mesh and so on. You can pick the expertise level you have, find a quest or lab related to that level, and go from there. We also have a bunch of open-source code samples and templates; the URL is there, please check them out. There are nearly 20 repos now, I think, which show essentially how you would do this in production. They are templates, so you can grab them, copy them, iterate on them, and deploy them into your environment to get started very quickly. These are open source; that link will take you to a bunch of GitHub repos that you're free to use, under the Apache license.