Kubernetes for the hybrid enterprise | OD331
Hi, I'm Brendan Burns, Corporate Vice President for Containers and Cloud Native Computing at Microsoft Azure, and I'm going to talk to you today about Kubernetes and cloud native for the hybrid enterprise. Here's the basic agenda for the talk: we're going to talk about Kubernetes and how it applies to the hybrid enterprise, and how we're seeing Kubernetes and the hybrid enterprise roll out across the Azure continuum. Then we're going to talk about hybrid application design and hybrid application architecture, and finally we're going to conclude with hybrid DevSecOps. In the middle we're also going to have a little bit of a demo of some of the stuff we have coming down the pike.

All right, let's start out talking about Kubernetes and the hybrid enterprise. The truth is that enterprise cloud has really become a hybrid environment. I used to think, well, everybody is making their way toward the cloud and that's the eventual trajectory, but the truth is that whether you have ships or stores or factories, your applications are going to span from Azure public cloud to private cloud solutions all the way down to bare metal and edge devices. The great news is that Kubernetes is there for you no matter where you are, and Azure can make running applications on that Kubernetes infrastructure easy to achieve.

I want to talk about Kubernetes first as a defining trend in the computing industry over the last five or six years. Over the last four years, CNCF surveys have shown an increase of over 300% in container usage in production, and production workloads now make up the majority of container applications being deployed. So we've moved far beyond people experimenting and doing dev/test, all the way through to people running mission-critical workloads on top of containers and Kubernetes for many years. There has been tremendous momentum across the cloud and across the computing industry, with Kubernetes at the core of what it means to define modern cloud-native applications.

Within Azure, Kubernetes has seen similar energy and similar momentum; in fact, the Azure Kubernetes Service is the single fastest-growing service in the history of Azure. Now, of course, we benefit a little bit from being one of the newer services while cloud is being adopted at rapid scale, so of course Azure is growing tremendously, Kubernetes is growing even faster on top of Azure, and we benefit from all of that. But I think it's also a testament to how people see Kubernetes as a technology that can expand and enhance the way they incorporate their application development and infrastructure management into one seamless API.

The other thing that's been really interesting to see over the last couple of years is the idea that the container is the new VM. What I mean by that is not that we're going to start treating containers like VMs, running scripts against them and turning them into snowflakes, but rather that every feature we build into Azure, whether it's the latest graphics cards, accelerated networking, fast access to disks, or premium storage, is not considered launched unless it's available within the Azure Kubernetes Service. AKS has really become the place where every single feature we build into Azure is also integrated into our container solutions, and that's the expectation not only of the Azure platform but frankly of our customers as well: they expect the entirety of Azure to be available within their container infrastructure.

When we take a look at Kubernetes at the edge, it's a similar story. People are deploying this technology out to edge locations, whether it's their factory, their store, or their ship, as the substrate they can use for unifying application development between the cloud and those edge devices. And it really doesn't matter what kind of edge you're looking to build: Azure has a solution available to you that can help you span your applications all the way from the largest hyperscale down to individual devices. To give you an example of how this works, I'm going to walk you through three typical customer journeys as we think about deploying Kubernetes out to the edge.

The first is the idea that you want complete consistency between cloud and edge. For many customers, the simplicity of knowing that exactly the same APIs, exactly the same tools, exactly the same toolkit will work both in public cloud and in private cloud leads them to adopt Azure Stack. The amazing thing about Azure Stack is that 99% of the code for the Azure Kubernetes Service is shared between Azure Stack and Azure public cloud. That means you get perfect consistency in how you manage, deploy, and run your applications on top of Kubernetes no matter where it happens to be, whether in your own data center on Azure Stack or in the hyperscale public cloud in Azure itself. That's an amazing amount of consistency and assurance: the training you do to build teams that know how to deploy to the public cloud can be applied directly to teams deploying into your private cloud as well.

Another step along that hybrid journey is people who want a little more flexibility in their infrastructure: people who are used to running Windows Server, who are used to using Windows Admin Center, who have been managing applications themselves on their own hardware for a long time and want that familiar experience. The great news is that with Azure Stack HCI you can deploy Kubernetes there as well. You can use familiar interfaces like Windows Admin Center to define, deploy, and manage a Kubernetes cluster running on Linux, running on Hyper-V, in your own HCI infrastructure, on a broad set of hardware supplied by our partners. That gives you both the best of the familiar experience and the best of the latest in cloud-native, open-source, and Linux technologies for your developers to build on top of that HCI footprint. For many people who are looking at maintaining an existing data center or using their own infrastructure, this is a great solution for consistency between Azure and the Kubernetes you're running in your own data center. Of course, it's not Azure Stack Hub, so some of the API consistency isn't there, but the Kubernetes substrate you deploy is exactly the same, and the applications you define are yours to do with however you want.

If we continue down that trajectory, from complete consistency with Azure using a big device like Azure Stack all the way down to perfect flexibility, that's where we get to Azure Arc. Here we find customers who have their own Kubernetes they've already deployed: bare-metal infrastructure, infrastructure on another private cloud, sometimes even infrastructure on top of other public clouds. But they're looking for a consistent management experience: a single place to deploy applications, a single place to view all of their clusters, a single place to know that every cluster is patched and at the latest version, a single place to apply policy. That single pane of glass that enables them to manage resources no matter where they're located is what people are achieving with Azure Arc. And the great thing is that as you push out to the edge, whether it's a facility in a remote location or a ship like this one cutting its way through the Arctic, you can define your Kubernetes however you want: some of these really small-scale Kubernetes installations that will run on a single device or a couple of devices, all the way through to large-scale Kubernetes provided by a private cloud solution like VMware or others. Arc gives you a single pane of glass in Azure to manage it all: a single set of unified credentials, a single set of access-control policies, and all of the stuff you really want centralized to ensure that you're secure, compliant, and successful in managing a wide variety of devices and a wide variety of Kubernetes. Again, that Kubernetes substrate provides you with an easy-to-use application environment that you can deploy to in a consistent fashion, whether in the public cloud or all the way out at the edge device.

So hopefully that gives you a perspective on how we think about Kubernetes and the hybrid enterprise, all the way from Azure Stack down through Azure Arc. It really is a continuum: you can enter at any of those places, or combine them together, including public cloud as well as Arc. You don't have to choose just one; you can mix and match all of them to suit the environments you need.

All right, I want to bridge from how we can deploy hybrid Kubernetes to how you might actually think about defining a hybrid application. We'll take a look at a hybrid application architecture, but we're going to start with the tools we make available to you for managing these hybrid environments. The amazing thing about all of these tools is that they're focused on open source and available to you anywhere, so it doesn't matter where you're running your Kubernetes: GitHub, VS Code, and Helm are available to you in every single environment. Obviously GitHub is the predominant place where people do software development; it's a familiar interface for storing your source code, doing pull requests and change requests, and filing issues. In fact, we manage the issue queue for the Azure Kubernetes Service on GitHub itself, and of course all of our open-source projects, like Helm and VS Code, are also hosted on GitHub. VS Code is the most popular editing environment for building and deploying your applications; it's been amazing to see the communities that spring up, and it's been really great to see the interest in things like the Kubernetes extension for VS Code that we've built, which really makes it easier for people to adopt this technology. And finally, when you're talking about packaging and deploying your application to Kubernetes, the majority of applications deployed use Helm. Helm is a great format for wrapping up your application into a single package that you can use to deploy to any of these hybrid environments: with Helm you can easily and consistently deploy to Azure, Azure Stack, and Azure Stack HCI, all the way out to an Azure Arc-enabled cluster. So all of these tools together really are a unique value proposition that Microsoft has for delivering applications in the hybrid enterprise.

Now, when we talk about applications, we want to talk about their architecture. We'll start with a traditional application, without even the hybrid part: just a picture of what it might look like to deploy your application. You've got API management at the edge. Most applications being deployed these days are really APIs; the UI is either a native mobile app or a single-page rich web app, and they're all talking to cloud-based APIs. So API management at the edge is your ticket for rate limiting, for security, for managing access to the APIs your services are exposing. In defining the services themselves, you're defining them in Kubernetes as microservices; you can see all of my little gear services there inside my Kubernetes cluster. And then of course there's some amount of storage that you're going to be running, and we would definitely suggest that while it is possible to run storage inside of Kubernetes, you're far better off using a cloud-managed storage solution. I always tell my teams that the last thing I want them doing is thinking about managing storage. There are teams dedicated to that within Azure, teams who are the best in the world at managing storage, so why, when my value proposition is developing applications, would I try to build a solution competitive with people who focus on that as their core job 24x7? So use a managed SQL, a NoSQL store like Cosmos DB, or whatever it happens to be; here I've illustrated SQL as the back end, but it really could be anything. That's the rough shape of our application architecture: API management at the gateway, lots of microservices within the Kubernetes cluster, and a storage backend for you to actually store your data.

Now, what's amazing about this architecture is that when I talk about going hybrid, when I move from being in the cloud to maybe being in a factory, the actual application architecture can stay identical, and that's an amazing value: no retraining your teams, no building different configurations. You can still use API Management (in this case the self-hosted version that you deploy yourself into your hybrid environment). Obviously Kubernetes, through all those solutions we talked about earlier, goes anywhere, so you can define your microservices in all of those different places. And with Azure Arc-enabled data services, or SQL Server on Azure Stack, you can have that data solution as well. No matter what version of hybrid you're defining, no matter what environment you're building out in your hybrid enterprise to deploy Kubernetes, you can deploy Azure data services on top of it and have that same managed data service experience and consistency, so that the code you run up in the public cloud talking to the storage backends can be the exact same code you run in the private cloud or in your own bare-metal clusters, talking to, for example, SQL Server running on top of that private Kubernetes cluster. The great value here isn't just in defining a hybrid application architecture, but a consistent application experience, a consistent set of tools, and a consistent set of code that can span from public cloud all the way down to edge installations.

Going even further, let's talk about continuous deployment into these applications. It's one thing to define the application architecture, but it's really another to make sure you can do that weekly deployment, or even daily or hourly deployment, out into your application. To achieve this, GitHub is the place to be: your source code is all hosted in GitHub in the first place, and now there are great tools like GitHub Actions available to you, so that when you push code into that GitHub repository, it can trigger a GitHub Action that will run your tests, run your integration, build and release an application, and even safely deploy it across multiple stages out to all of these different environments. And because GitHub is available anywhere, GitHub Actions can target your Kubernetes no matter where the environment is, from public cloud through to the edge; with capabilities like GitHub Enterprise, GitHub can even go behind the firewall and do your deployment in a world that's not even connected to the public cloud. All right, I want to take a look at what's coming in the future, though.
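As an illustration of the GitHub Actions flow just described, a workflow along these lines could build a container image on every push and roll it out to a cluster. This is a minimal sketch rather than anything shown in the talk: the registry, image, cluster, resource group, and secret names are all placeholders, and you would substitute your own manifests and credentials.

```yaml
# Hypothetical CD workflow; all names and secrets below are placeholders.
name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      # Log in to the container registry (credentials stored as repo secrets)
      - uses: azure/docker-login@v1
        with:
          login-server: myregistry.azurecr.io
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}

      # Build and push an image tagged with the commit SHA
      - run: |
          docker build -t myregistry.azurecr.io/bookstore-api:${{ github.sha }} .
          docker push myregistry.azurecr.io/bookstore-api:${{ github.sha }}

      # Point kubectl at the target cluster (AKS here; an Arc-connected or
      # on-premises cluster could be targeted with its own context instead)
      - uses: azure/aks-set-context@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
          cluster-name: my-aks-cluster
          resource-group: my-resource-group

      # Roll the new image out using the manifests in the repo
      - uses: azure/k8s-deploy@v1
        with:
          manifests: manifests/deployment.yaml
          images: myregistry.azurecr.io/bookstore-api:${{ github.sha }}
```

The same shape extends to the multi-stage story mentioned above: one job per environment, gated by environment protection rules, each pointing its deploy step at a different cluster context.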
So if that whole application architecture is what you can do today, I think for many people the future is a service mesh. We're thinking a lot about service mesh these days, and we're seeing people begin to adopt it: if containers are now totally mainstream, service mesh is the thing people are starting to think about and what it can do for them. For most people, the main value proposition of a service mesh is, first, automatic encryption, so you can use a service mesh to get mTLS everywhere, and second, canary experiments, so you can say: in my environment, I want to run 80% of my traffic through to the public cloud, but I want 20% of my traffic to go to an on-premises hybrid deployment, for example. That's gateway-level traffic splitting, but even within your cluster we're starting to see people do east-west routing: 20% of my traffic goes to my on-premises cluster, but then 80% of the traffic that comes into that on-premises cluster actually goes to my microservices in the public cloud, and maybe 20% goes to the back end in my hybrid location. All of these numbers are chosen arbitrarily, of course, but they give you an idea of the flexibility a service mesh provides to customize the flow of traffic through your service to meet the current needs of the application, whether because you're having a reliability problem in one particular region or because you're running an experiment and only want a little bit of traffic to go to a particular region. There's a lot of flexibility that comes from having a service mesh dynamically route traffic through the entirety of your application and across on-premises and hybrid environments.

I think the other piece that's really cool to think about is Kubernetes life cycle. Just as there's more to running an application than deployment, management, and weekly rollouts, there's more to running a Kubernetes cluster than just having it sit there: there's automatic scale-up and scale-down in response to traffic load, there are version upgrades, there's even creating whole new clusters, all kinds of things you may want to do to manage your Kubernetes. What's really fantastic is that there's been a lot of work out in the open-source community around this idea of Cluster API. Cluster API is an API within a Kubernetes cluster that actually knows how to manage Kubernetes clusters. That means that via Azure Arc and integration with the Cluster API, you can manage your on-premises clusters from the Azure portal: you'll be able to scale them up, scale them down, and do all sorts of activities that would typically be thought of as cluster administration, all from Azure Arc. This is work that is coming in the near future, being developed in coordination with the open-source Cluster API community, but we're really excited about how we can simplify and centralize life cycle management, so that the Kubernetes clusters running across your hybrid environments can be managed with a single pane of glass and a single consistent API.

Of course, all of this future stuff is no good if it doesn't have some degree of reality to it, so to give you a better sense of what a service mesh can do, I want to jump into a little bit of a demo of Open Service Mesh. Open Service Mesh is our open-source implementation of the Service Mesh Interface; it is, in a sense, the Azure service mesh. It's been developed out on GitHub, but it's available to people anywhere. We're really excited about its capabilities, and I'm going to show them to you right now.

In this video we're going to see a sneak peek of some capabilities coming to Open Service Mesh and the Azure Kubernetes Service. What you see here is a diagram of what we're going to do for multi-cluster service mesh. We have two clusters, alpha and beta; they're both connected into Azure Arc, and they both have Open Service Mesh installed on top. We have a bookstore application running in both of them: a bookstore API, a book buyer who's buying things from the bookstore API, and the Open Service Mesh gateway for communication between the two. Open Service Mesh takes care of certificates for authentication and encryption, both for communication across the clusters and within the services running inside each cluster. And because canarying is available, if one of the services goes down you can do traffic failover, or, if the service is up, you can use the extended capacity if you're under load.

Let's take a look at the demo. We've got both of our clusters here in our kubectl contexts, and I'm going to switch the current context to my alpha cluster to show you Open Service Mesh within that cluster. If we look at the namespaces, you'll see the Azure Arc namespace, which is what's connecting the cluster into Azure Arc; the Open Service Mesh namespace, which is what's actually implementing the cluster-wide service mesh; and the Open Service Mesh multi-cluster namespace, which is what's enabling communication between our two different Kubernetes clusters. Next we'll switch to the beta cluster, and, just validating that I'm in the right context, if you take a look at these namespaces you'll see the same services running: the service connecting us to Azure Arc, the service implementing Open Service Mesh, and the service implementing the multi-cluster connection between the two clusters.

Next, let's take a look at the state of our Kubernetes clusters, just to give you a sense of where things are at. Here are my alpha and beta clusters running in AKS; we're using AKS for this example, but these clusters could easily be on-premises or in other environments. Here we're showing the connection of these clusters into Arc: you can see that both clusters are present and registered into Arc, and again, these could be Arc clusters that came in from some other location as well. Finally, let's take a look at the logs of the book buyer within one of the clusters. We're in the alpha cluster here, and you can see that the service was briefly unavailable, but as we did failover into the service running in bookstore beta, we're actually managing to communicate across the multi-cluster service mesh to the bookstore service running in the other cluster. So what this shows you is that even in a world where a service fails, we can successfully fail over to another location.

All right, now that we've seen service mesh and how it works, I want to talk to you about the development process, and in particular about this idea of secure DevOps. We've had a lot of people talking for many, many years about DevOps and how you can deploy your application, but what's been really interesting in the last year or two is that security has become a core component of what it means to do DevOps. In particular, the notion that you can take this entire pipeline, secure it, and put testing, validation, and access control in all the right places means that you really can build a secure pipeline for application deployment. When we think about secure DevOps, the core Microsoft technologies involved are similar to the ones we talked about in the application architecture, including GitHub and Visual Studio, but we're also adding in tools like Azure Policy and Active Directory that provide some unique capabilities. Here's a full set of DevSecOps capabilities that are out there; I'm not going to drain this slide, but it's here for you to take a look at. All of these are available within Azure to enable you to build a secure DevSecOps environment.

One thing to keep in mind, though, is that DevSecOps isn't always about really fancy capabilities; sometimes it's something as simple as using the Kubernetes extension to easily enable your developers to do port forwarding. One of the most common security problems we see is people putting things out on the public internet that had no business being on the public internet in the first place. The reason they do this isn't that they want to expose their company to a security problem; it's that they just want to see the thing they've deployed to Kubernetes, and the easiest way to do that is to give it a public IP. Well, if you use the Kubernetes extension, the easiest way is actually just to right-click on the container and do a port forward. Making these capabilities discoverable and easy to use prevents developers from making mistakes that ultimately end up in headlines like this one.

The other piece of this is using access control to limit the possible things a user can do. Access control can be used to say: we actually don't want to give individual developers the ability to deploy things out to their Kubernetes cluster; instead, let's give those capabilities to the GitOps GitHub Action that's actually doing the build of an image or the push of that image out to Kubernetes. So developers have access to a GitHub repo; automation has the ability, through RBAC, to build and push a container image; developers have access to a configuration repo (it might be the same repo as their application); and through a release pipeline, that container image is deployed out to the Kubernetes cluster in the cloud. Doing it this way means you don't have to worry as much about your developers doing the wrong thing, because the only thing that has access is the deployment pipeline, and it has testing and validation in the right places to ensure that you only deploy secure configurations.

Now, when we think about deploying secure configurations, you might think that RBAC just takes care of everything, but that's really not true, because RBAC can't answer the questions a security professional might ask, like: where is that image from? Did you get a security review? Or even: what's the email address of the people running these services? RBAC can prevent access, but it can't do anything to help shape the content of the application itself. To achieve that, we use something called policy. Policy is a great example of how Azure interacts with open-source communities in response to customer need. For customers who were asking us those questions, we developed the Gatekeeper project. It wasn't actually called Gatekeeper at the time; it was called the Azure Policy agent. But as we saw the interest in the community around these ideas, we took that code base, donated it to the Open Policy Agent project within the Cloud Native Computing Foundation, and it turned into the Gatekeeper project. Gatekeeper is now the de facto way of doing policy for Kubernetes anywhere. It's a community effort across a bunch of different cloud providers and people from other parts of the Kubernetes ecosystem, but what's really great is that because of our foundational role in creating the project, not only are we experts in how it's built and maintainers of the project, but we've also integrated Gatekeeper directly into our Kubernetes story as part of the supported Azure Kubernetes Service.

One of the most interesting things beyond policy is how we do access control, and the ability of Active Directory to carefully manage who has access to your cluster. One of the things we think about a lot is standing access: who are the people who always have privileged access to a cluster? Every one of those people is a risk, because if their account is compromised for some reason and they have standing access to the cluster, the person who compromised that account immediately has access to the cluster as well. One of the easiest ways to prevent this is to ensure that no one has standing access to the cluster, and you can achieve this by combining Privileged Identity Management (PIM) from Active Directory with an Active Directory-enabled Kubernetes cluster. The way this works is that you have a privileged access group in Azure AD, but it has no members. That access group has access to your cluster, but because it has no members, it doesn't actually represent any ability to get into the cluster. Now let's say there's some sort of event and an operator needs to get into the cluster to debug something. That operator can request to elevate just in time: they file a ticket, or talk to a person, to approve the request. Once it's approved, they're added to the privileged access group, and now they have access to the Kubernetes cluster. What's important is that the membership is time-bound (maybe it's eight hours long) and it's audited, so there's a complete log of who had access to the cluster and during which time. That means, first of all, that if an account is compromised there's no immediate standing access to the cluster; the approval process still has to happen, which dramatically reduces the probability of an exploit. But it also means you have a perfect audit log for understanding who had access and when, so if you're facing a security incident, you can backtrack and very easily see who could possibly have made changes to the cluster during the relevant time period. Again, Privileged Identity Management is a capability of Active Directory that we take advantage of in managing Azure Kubernetes clusters, and this capability is unique across the industry: this sort of just-in-time elevation simply isn't something you can do with any other Kubernetes provider. It's a great way to give your CSO or CIO the security and peace of mind they need to enable you to rapidly deploy your applications using modern cloud-native technologies.

The last piece of this is thinking about how we scale DevOps. We really want you to be able to take these secure DevOps processes and apply them not just one cluster at a time, but to centralize them, so that through GitOps, and Azure GitOps in particular, merging a PR into a single Git repo can push those configurations across your entire landscape of Kubernetes clusters. This is the way you can scale across all those hybrid environments. So when you think about building out Kubernetes for the hybrid enterprise, whether on Azure Stack, Azure Stack HCI, or Azure Arc with any Kubernetes anywhere, you can take tools like Azure GitOps, Azure Policy, and Active Directory, these unified single-point tools, and scale your application management across all of those different environments without touching each one individually.

Finally, I wanted to talk about how you do this digital transformation. As we think about deploying these applications across all of these different environments, one of the most important things people are thinking about is how they can modernize their existing applications. The way we do this is through the Azure Migration Program, or AMP. There are some great links here about how you can do Azure migration. There's a bunch of proven methodologies, so if you're new to Kubernetes or want to see the best practices other people have established, whether about identity management, secure DevOps, or application architectures, it's all available to you. There's a ton of work around offers and incentives for doing this migration, and a bunch of capabilities and free tools around both learning and deployment. It's really a great way to get jump-started with Kubernetes, or, even if you're experienced with Kubernetes, to ensure you're aware of all the great technologies we've been building over the last few years and can take advantage of them in your own application development. Thank you so much for listening, and I hope you enjoy Ignite.
2021-03-08 19:57