Radius: a new open-source application platform for the cloud | BRK402


[MUSIC] Brendan Burns: Hi there. Thank you so much for coming. It's 9:00 AM on the third day. So we're excited that you managed to make it in here. We're also really excited to have a chance to talk about Radius.

I don't know if some of you have maybe seen the announcements that already went out around Radius, but we're pretty excited about what it can do for you. Mark will be up later to give you a demonstration and introduction to what Radius can do for you as you're building Cloud-native applications. I really wanted to set the stage for why we did this, why the office of the CTO in Microsoft Azure did this. I wanted to start out, I think, with casting your mind back to your first encounter with Cloud.

It might have been a few years ago, it might have been almost a decade ago. But I would say that the first version of Cloud that we built basically just took everything that was already in your data center and it put virtual in front of it. There are virtual machines, and there are virtual disks, and there are virtual networks.

There's a lot of value in having things given to you through an API instead of being given to you from an order that you put out to CDW or to Dell or whomever it is. But ultimately, if you're an application developer, it didn't really change the way you thought about or the way that you built applications. I think when we started thinking about what became Cloud-native, what was the first and most important transformation that we were thinking about, was this idea that infrastructure, this list of stuff, is not your application. It's as if someone wandered into your house and said, hey, I know what your house is.

It's three beds, two desks, a couple of bookcases. That's not how you think about a place where you live, it's how you think about a thing that you're building. That intuition that the infrastructure is not the application and that these are two separate things was what motivated the development of Cloud-native APIs. When you look into the Kubernetes API, what you'll see I think is concepts that resonate with a developer, an idea of a deployment or an idea of a service. Things like configuration and secrets. Then similarly, I think that the rise of Docker and the rise of the Container image was because instead of it being a machine image, instead of it being an operating system, it was really focused around what is the code that's been written and how does that code operate.

It really was tightly focused on packaging up just enough for your application to run correctly, your code, the ports that it exposed, and it was much smaller and easier for you to use as a developer as well. The image built faster, it shipped faster, and the inner loop, as we call it, was much quicker for you to execute. This all took place nearly 10 years ago. In fact, the original idea for some of these came to us a few blocks from here, honestly, and we spent a long time building up these Cloud-native ideas.

I guess I'd say after a decade of working on this, you'd think that we were done. You'd think that we came along, we "applicationified" everything, the developer's life is easy, and we can all go home. But the truth is that building these Cloud-native applications has proven to continue to be very difficult.

I think we've done a lot to improve the APIs. We've moved from thinking about machines to thinking about containers. We've moved from thinking about networks and ports to thinking about services. But there's all of this other stuff that has to do with how we build applications at scale, especially how we build lots of applications within an enterprise, that is just too hard. Part of it is we force everybody to learn everything. Every single system that I've ever seen someone build is a bespoke system.

There's almost no reuse of libraries for distributed systems or for other pieces like that. There's no ability for someone who's really good at setting up a great database to allow a group of developers to share that database across an entire company. Because of that, troubleshooting is hard because everything is a little bit different. Making sure that we're following security best practices is hard because again, different versions, it's all strewn in different places, there's no leverage to do that. Then similarly, it's very hard for us as a technology industry to collaborate across the industry, across the various Clouds, across the various platforms that are out there.

These applications are still really, really difficult. Now, some of this has gotten a little bit easier with things like Kubernetes and with things like Open source. These are critical aspects of what I think has become known as Cloud-native. The fact that things are done out in the open, the fact that we can in some sense, run across all of these platforms.

Although again, the details become complicated and hard to reason about. But even with the Cloud-native ecosystem and the open source ecosystem that's been created, what we see is that the applications that we build are really more than just that one set of APIs. There are all of these different pieces, whether it's a Dockerfile or a Helm chart, whether it's expressed in Bicep or Terraform; it combines a bunch of services provided by all of these different Cloud providers together. But the question that I have for you, and the question that I think we were struggling with when we went and built the Radius project, was: where is the application in all of this? Because I think what we've done is we've gone and built a bunch of application-oriented APIs, concepts that are familiar to developers, but we never actually built the application itself.

What we're left with, then, is a bunch of people doing a DIY platform for all of their individual applications. Every single application that gets built is an application that is an island unto itself. I think this realization of the complexity, and the fact that honestly, with the number of applications that we need to build, there are not enough developers who know how to build those applications if they're all building them themselves, has led in the last year, year and a half or so, to the rise of this idea of platform engineering. But what we're seeing as people have been going into platform engineering is that each company, each enterprise is building their own platform.

I will go and talk to a retailer, I will go and talk to an airplane company, and they'll each be building their own platform for building and deploying applications. Some of this makes sense. Every single business is a little bit different. Every single business has particular things that it needs and that it needs to allow its developers to do. But a lot of what they are doing is, honestly, undifferentiated heavy lifting. As a result, I think the other piece that we're trying to help with in Radius is the fact that we want to be able to have a community of solutions.

We want all of us to be able to come together to deliver best practices, solutions that can be reused across the industry. I think that leads to the final piece that we were thinking about as we were developing Radius, and that's this notion that whatever we do, it has to be open source, it has to be multiplatform. You're here at a Microsoft conference, but honestly, in order for us to deliver success, Radius has to be something that you use even if you never do something on the Microsoft Cloud platform, and our aspiration is that it is truly something that the industry can take on and use to describe their applications going forward.

This open source innovation is something that has taken root in the office of the CTO inside of Azure, and so to tell you a little bit more about the foundation of Radius and the details of how Radius works, I'm going to have Mark come on up and he'll take you through Radius. Thanks. Mark Russinovich: Thanks, Brendan. Brendan Burns: Awesome. (applause) Mark Russinovich: Good morning everybody.

I'm Mark Russinovich, the CTO for Microsoft Azure. A lot of people know that, but you might not know that the office of the CTO has an incubations team that started a few years ago. It started kind of accidentally, or organically, because I had a technical assistant who was creative in thinking about the challenges that people were having developing Cloud-native applications.

One of the challenges that he identified was that there was no scale-to-zero solution for autoscaling on Kubernetes. That meant that you always had to have a pod lying around listening to events so that it could figure out if it needed to scale up or not, and it could never go away because it always had to be there listening for these events. He came up with the idea of KEDA, and so we started to incubate KEDA in the office of the CTO. This was the first project and it looked really promising; it had a lot of interest from people that we talked to, and so we decided to continue to incubate it, make it a product, and then eventually we graduated it into the developer division, where there's now a full-blown engineering team that supports KEDA. We submitted it to the CNCF shortly after we got validation from customers that they found it useful at solving this problem of scale to zero, and at being a very flexible autoscaler, and I'm proud to say that it became a CNCF graduated project in August of this year. But we also continued to look at the problems that enterprise developers were having with developing Cloud-native applications, and like Brendan said, one of the things we wanted to do is make it very easy for developers to focus on their business problems and not have to worry about infrastructure, and that means creating a clear separation between an app and infrastructure.

The next project we developed to support that kind of distinction, to let developers be productive, was Dapr. How many of you have heard of Dapr? Just out of curiosity; that's fantastic to see everybody with their hands raised. I believe strongly that Dapr is an obvious solution to the problem of developers dealing with different platforms with diverse sets of capabilities, no consistency across them, and having to worry so much about implementation details and a lot of the mundane aspects of developing a resilient, scalable, secure, diagnosable Cloud-native application. It just takes care of so much of that for you, and it gives you a whole ton of benefits as a side effect, like portability, at the same time. I'm proud to say that it went into the CNCF as an incubating project when we submitted it, skipping sandbox, and it's just been submitted for graduation.

We expect it to graduate imminently given the huge number of community contributions that it's gotten and the number of enterprises that have adopted it. There's another Cloud-native incubation I want to make you aware of because it also came out of the incubations team recently, and it's called Project Copacetic. How many of you deal with the challenge of vulnerabilities in your container images, where you've got hundreds of them? This is something we wrestle with at Microsoft. Our container images just have hundreds of vulnerabilities. We're continuously having to rebuild the entire stack of image dependencies that you have with a container, and so what we decided to do to simplify this was approach it the same way that we do software patching for traditional software outside of containers: being able to slip in a patch that's very targeted at a specific vulnerability. We do that with containers so that you don't have to do rebuilds of everything on top of that vulnerable image.

Those containers just see the patch on top of that vulnerable image. That has been submitted to CNCF and is now a Sandbox project. But what I'm here to talk to you about is the next step. I see this as the complement to Dapr: where Dapr is the programming model for Cloud-native, Radius is the application model for Cloud-native. Like Brendan said, the whole goal is to make this completely open, completely neutral, because we know that every customer I talk to that is an enterprise of any size is concerned about deploying things on-prem, deploying things to not just Azure but other Clouds. In fact, many enterprises, when I ask them what their Cloud-native strategy is, their answer is CNCF, and that's another way of saying it needs to be neutral, it needs to be governed neutrally, and it needs to support multiple Clouds as well as on-prem. We know that if we want to deliver something that meets enterprise needs, it needs to meet those characteristics, and so we designed Radius with that in mind. To address the challenges that Brendan brought up, we decided to make sure that there was a clean separation between what an application developer does and what the infrastructure does.

At the same time, we incent developers to use Radius, because the way you incent them, just like with Dapr, is by having the platform, Radius, do a lot of things that the developer would otherwise have to do themselves; they get those for free if they're using Radius. Now some of the benefits of Radius, and we're going to try to prove this out through the demonstrations and by talking about how it works, include the fact that it supports platform engineering, team collaboration, and separation between a developer and the ops team, so the developers don't have to learn all of the infrastructure intricacies. They don't have to learn, you know, how do you deploy a Mongo database on AWS, how do you go deploy a container of a Redis cache on Kubernetes. Somebody else can take care of that for them. They just say, "I want one of those" and they get it. The benefit to the platform team is that when the developer specifies that, they can make sure the developer gets it, but gets it in the way that the enterprise wants. They get it with the appropriate security controls, they get it with the appropriate cost management, they get it with the appropriate compliance.

They do that through something called infrastructure recipes. This is the key ingredient connecting an app and the infrastructure: the recipe. We'll talk a little bit more about that. Now if you leverage this capability of the developer specifying what they want in that way, what you end up with is an application graph. Brendan showed you the sprawling list of resources, and that's what you see when you take a look at a portal: you just see a list of resources. Or if you go take a look at what's on your Kubernetes cluster, you see a list of stuff. You don't really see how they're connected and how they relate, and that makes it very hard to troubleshoot or understand what are the core parts of the app, what are the supporting parts of the app, what's the infrastructure versus the app. Radius gives you this view, an app graph view, we call it, that shows you what is the app, what's the front end, what's the back end? How do they talk to each other? Where's the data store? Then finally, a core principle of course, like I've already mentioned, is being Cloud-neutral: support any Cloud, support on-prem infrastructure, and also make it easy to incrementally adopt, the same principle as we have with Dapr.

You can incrementally adopt Dapr. You don't have to "dapr-ize" your whole thing, you can start with just pieces of it. For Radius, we want to make sure that you can leverage your existing Cloud-native investments, and you'll see that as we show demonstrations. This shows you the big picture here. Radius is not in your data plane, it doesn't care how you write your app.

It is the way that you deploy your app, and so it works with any of these kinds of frameworks on the top. It is specifically designed, though, to work with Dapr, because it understands how to deploy Dapr resources, and we'll show you that in a little bit. Its goal is to support multiple Clouds as well as on-prem; by on-prem we mean Kubernetes on-prem, and by multiple Clouds, of course, we mean the big three hyperscalers and possibly Ali Cloud as well, which we're working with. Now, let me just give you a little high-level view of what this looks like. Here's an application. You've got an Internet gateway, you've got a front end container, you've got a back end container. The front end talks to the back end; the front end caches the results of the data queries in a Redis cache, and the data store is actually a SQL database.

This is what a developer sees; this is what they want. Now the way that they get this mapped to infrastructure is through an environment that's been set up by the ops team, or by themselves if they're doing full-stack DevOps, with a recipe that binds and connects these high-level descriptions of the resources to the underlying platform. In the case of deploying to a Kubernetes cluster, you set up the environment on the Kubernetes cluster, you point the Radius deployment of that app at it, and the recipes there say, hey, you're going to get a SQL container, you're going to get a Redis container. But without changing the app at all, you can swap that out and point the app at a different environment, one for a Cloud like Azure. In that case, the recipe for the SQL database is going to point at Azure SQL, and the recipe for the Redis cache is going to point at Azure Redis. Then you can swap in another environment, deploy the app to that, and this will bind it to AWS resources, and that's the way that portability is achieved.
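As a rough sketch of how an environment binds resource types to recipes, an environment definition in Bicep might look something like the following. The resource type name, API version, registry path, and the `size` parameter here are illustrative assumptions based on the Radius documentation, not taken from this talk:

```bicep
import radius as radius

// Hypothetical production environment targeting a Kubernetes namespace.
resource prodAzure 'Applications.Core/environments@2023-10-01-preview' = {
  name: 'prod-azure'
  properties: {
    compute: {
      kind: 'kubernetes'
      namespace: 'prod'
    }
    // Map a portable resource type to the recipe that provisions it
    // in this environment. Deploying the same app against a different
    // environment swaps the recipe without changing the app.
    recipes: {
      'Applications.Datastores/redisCaches': {
        default: {
          templateKind: 'bicep'
          templatePath: 'myregistry.azurecr.io/recipes/azure-redis:1.0'
          parameters: {
            size: 'large' // operator-chosen capacity for this environment
          }
        }
      }
    }
  }
}
```

The same application definition, pointed at an environment whose recipes provision containers instead of managed services, would run unchanged on a local cluster.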

Portability and platform engineering you get at the same time with this clean separation. Now, drilling into what Radius actually is underneath: it consists of a core set of primitives or resources, including the definition of what is an app, what is a container, what is a gateway, the core pieces of the compute infrastructure. Then it has what we call standard resources. These are ones that lots of enterprise apps use, like a SQL database or a key-value store or Mongo. What Radius does is give you an abstraction for each one, not one specific example like a managed one on a particular Cloud, and because the developer can specify, I want a Mongo, they get mapped to a particular Mongo database implementation, whether it's in a container or whether it's a managed service.

Then, on any Cloud or on-prem: I've already talked about that mapping to those different environments. Now finally, just a little more detail on the platform engineering aspect. The way that works, just to be clear, is that the developers define their app. They do this in a way where they can define standard resources, and they can define Dapr resources as part of their app.

Or they can define Cloud-specific resources, like, I want an AWS Kinesis. That's another aspect of Radius: it doesn't force you into an abstraction. You can actually use underlying resources in their native descriptions: Kubernetes resources in their native descriptions, AWS resources, soon GCP resources, and of course Azure resources.

Or you can use the standard or Dapr ones, and what you get with that is that portability. Then the operator, like I said, sets up a Radius environment and then registers recipes with that environment that say: somebody wants to deploy a Mongo, here's how it gets deployed. And by the way, I can define different properties that the developer can reference, like, I want a large Mongo or I want a small Mongo, and that way the application developers can specify that this application needs certain capabilities, needs certain capacity. Or the ops people can define that at the time they register. They can say this Mongo recipe is going to create a large one.

Because we're in an environment where, you know, the app that's going to deploy to it has got a lot of users, we need a big one. I've already really talked about recipes. That is the glue between the app and the infrastructure. It is what allows us to separate the two and what enables platform engineering. Now what I'm going to do is go ahead and turn it over to Aaron Crawfis, who's going to represent the Azure operator.

He's going to be setting up an environment and recipes that we're going to deploy some apps to later. So, Aaron. (applause) Aaron Crawfis: Thank you, Mark. (applause) Hi everyone, my name's Aaron. Today I'm going to be walking through the operator experience of Radius. When we were designing what Radius should look and feel like, and how it should really work for that operator, we talked to a lot of different customers and enterprises who were working in this space, over 70 in fact, and we kept hearing time and time again that there was this trade-off that they were needing to make. As Mark and Brendan were saying, they had to think about: do we give our developers a full self-service experience over their infrastructure, or do we lock everything down to make sure that we're secure, compliant, and those best practices are followed? What this meant was that developers often would get full owner or contributor access to those subscriptions.

Or on the other side, the operators would have to lock everything down and only deploy through manual ticketing systems, Slack channels, or even those custom bespoke platforms that Mark was talking about earlier. With Radius, we wanted to offer the best of both worlds, and that's where environments and recipes come into play. Let's walk through that. Here you can see I'm in a VS Code window. You're going to be looking at a Bicep template. Bicep is a great infrastructure-as-code language that allows you to describe different Cloud resources. Here you can see that this is an existing Bicep template that I have, that I've used to model an Azure Cache for Redis.

This is Azure's Redis offering, and we're going to be turning it into a recipe; we wanted to make sure that was as easy as possible. To make a recipe from a template, it's just two things: an input parameter and the output. I'm going to go ahead and paste in my input parameter here, called context.

Now context automatically passes in a lot of information about that underlying platform, the cluster, the namespaces, the environment, things which now your developers don't have to know about and pass in. It's just handled for you automatically. So as one example, we can actually use the context parameter to name our resource.

So that way, anytime this recipe is deployed into our environment, we get a unique name, so that each application can have its own Redis cache. Now that we've set this up, we need to set up our output parameter, and what this will do is actually wire everything up for us, so that when this gets dropped into an application, our developers can connect to it from their containers. I'm going to be pasting in an output called result, and because this is Redis, the values that we're going to be passing back are host, port, username, and password, the common Redis parameters. And that's it, our recipe is complete and we can now drop it into any environment. But I just want to talk a little bit about the importance of some of the things in this recipe that you're going to see here.

A little bit further up here, you can see that in our Redis cache we have a couple of different parameters, two that I want to call out. The non-SSL port, the insecure port, we've disabled, and that's something that the developers cannot override; that's how we enforce the security best practice. Same thing with the minimum TLS version, setting it to be 1.2. This is how we can ensure that our resources are secure, compliant, and meet those organizational best practices, so that we can rest assured that you as the IT operator are in control of that template, and you can do other things like diagnostic logging or VNet injection. Anything you can do inside of this template can be part of that recipe. But one of the things that we heard from customers is that they wanted to be able to customize these templates in their environments, so that if you're running in dev/test, you might want a low-cost, low-SKU offering. But when you're in production, you might want to scale that up to a full premium offering with the full throughput.
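A minimal sketch of what such a recipe could look like, assuming the `context` input and `result` output contract described here; the Azure API version, SKU values, and naming scheme are illustrative assumptions:

```bicep
// Sketch of a Bicep recipe for Azure Cache for Redis.
// 'context' is passed in by Radius with details about the target
// environment, application, and resource being provisioned.
param context object

resource cache 'Microsoft.Cache/redis@2022-06-01' = {
  // Derive a unique per-deployment name from the context,
  // so each application gets its own cache.
  name: 'cache-${uniqueString(context.resource.id)}'
  location: resourceGroup().location
  properties: {
    sku: {
      name: 'Premium'
      family: 'P'
      capacity: 1
    }
    enableNonSslPort: false  // locked down; developers cannot override
    minimumTlsVersion: '1.2' // enforce the organizational TLS floor
  }
}

// The 'result' output wires connection details back to the application,
// so containers with a connection to this resource can reach it.
output result object = {
  values: {
    host: cache.properties.hostName
    port: cache.properties.sslPort
  }
  secrets: {
    password: cache.listKeys().primaryKey
  }
}
```

Because the security-sensitive properties are hard-coded in the template rather than parameterized, developers consuming the recipe get the cache but never get a path around the compliance settings.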

With recipes, parameters work just like you're used to already, and so now when I go to publish this recipe and then register it, we can customize it. Here in the terminal I'm going to be running the rad bicep publish command, and this is actually a really nifty piece of functionality that the Bicep team came up with, where we're publishing it to our existing Azure Container Registry. That way you don't have to manage any other infrastructure or registries; you just use your existing container registry. Now that it's been published into our ACR, I can register it in my environment. Using rad recipe register, I'm targeting my Production Azure environment and I'm setting that size parameter, which I showed earlier, to large, because this is production, and that's it. Developers can now leverage Redis inside of their applications.

But I've actually registered a couple of other recipes here, so that developers can choose from the supported technologies that my organization offers. Using rad recipe list and typing in my Production Azure environment, you can see a bunch of different resources here. These are those standard resources that Mark was mentioning. I have Redis, Mongo, SQL, RabbitMQ, plus the Dapr building blocks. Developers can choose from all of these, mix and match, and it meets developers where they are with their existing apps and any application they create in the future.

This is all pick and choose, and deploy as needed. But one last thing that operators told us is that they're not just a Bicep shop; they have other infrastructure-as-code languages that they want to leverage in their environments, and that's why we've created Terraform recipes. If I bring up this Terraform template here, I'm just using an off-the-shelf Terraform module from the public gallery, and I'm going to be wiring it up with that same exact input context variable and the same exact output result, so that same recipe contract is consistent across Bicep and Terraform. Now with this Terraform recipe, I can go ahead and register it into my environment, and if I do rad recipe list one more time, this time switching over to AWS, I get the same set of resources, but this time using Terraform instead of Bicep and using AWS resources instead of Azure. What we've done here is that we've enabled all of our developers to deploy any of their applications on any platform while remaining secure, compliant, and meeting all of those organizational best practices, so we get the best of both worlds.

Self-service with that compliance. To walk through what the developer experience looks like, and how to define the application and leverage these recipes, I'm going to be inviting up my colleague, Ryan Nowak.

He'll be talking through the developer side of Radius. (applause) Ryan Nowak: Thanks a lot, Aaron. Hi everyone. My name is Ryan Nowak. I'm a developer on the Azure Open Source Incubations team and I'm the creator of Radius.

As part of building Radius, Aaron mentioned that we talked to a lot of customers, about 70 or so, and in particular about the challenges that their platform teams and their developers face building and managing applications. I'm going to talk to you a little bit about the developer side of that. The conversations that we had really highlighted the complexity that application developers face when they need to use Kubernetes and they need to use Cloud resources. It's doubly a pain for Cloud developers to get access to those resources and then to troubleshoot them if something's going wrong.

If I got a database created for me and I'm having trouble connecting to it from my application, I might have to have a very long e-mail or chat exchange with someone like Aaron and it's going to take a lot of time. Likewise, if people are giving me access to create those resources myself, I probably don't know how to do that correctly. Like Brendan said, we built Cloud-native programming and then we expected that everybody is going to learn everything about everything. This is why we created recipes so that I can use an on demand provisioning system and still have it follow the organization's policy and cost controls. We also heard that not everybody can be an expert at using Kubernetes. We built a Cloud-native abstraction layer into Radius that works at the developer's level of altitude and has familiar concepts that we expect developers to already use when they design their applications.

Now, you can use Radius by working directly with Kubernetes and its APIs. Or you can use our abstraction layer and tools. You don't have to make trade-offs when you make that choice. I'm going to show you how I can use Radius to deploy an existing application.

I'm going to go to a local dev cluster in a codespace that I'm working with here. Then I'm going to deploy to the Cloud using the two environments that Aaron set up for me earlier. Along the way, the thing that's going to make this great is that I don't have to change the application when I move across these different environments. Once I get it working in local dev, I'm going to pretty much keep it the same when we go to the Cloud.

This is an example of how developers and platform engineers can work together. As a developer, I just get to think about the architecture of my app and the things that I understand. I know that the infrastructure pieces are going to be taken care of for me. Now I'm working inside of a codespace. I've got a local Kubernetes cluster running here.

I'm going to use "rad init" to set up an environment for development. "rad init" is part of the Radius CLI. I've got an application in the current directory, so I'm going to say yes. You can see here in the output that Radius included recipes for local dev.

Radius includes recipes for all of our standard resources. You can use dependencies like databases, message queues, or any of Dapr's major building blocks in development scenarios without having to write recipes yourself. This is part of the open source project. Let me tell you a little bit about the app. Here, I'm going to open a Dockerfile real quick.

I like starting with a Dockerfile to explain Radius because we work really well with your existing application code and containers. There's no SDK or library that you need to put inside your application to start using Radius. We're not asking you to change how you write code, how you architect things, or how you build and publish those containers.

However you're doing that today should work just great with Radius. I should mention this is a to-do application; I think it's traditional, to-do applications. We're going to be adding a Redis cache to it, which we're going to be using as a database.

I'm going to close this Dockerfile. What we're looking at here is "app.bicep". This file was scaffolded by Radius when I ran "rad init". Now, we're using the Bicep infrastructure-as-code language in Radius, which is being developed as an open source project here at Microsoft. I like describing Cloud-native applications with Bicep because infrastructure as code just does the right things for application management. It's good for deploying things, upgrading things, rolling stuff back, deleting them; it has all the behaviors we need when we're managing these things.

We also like Bicep a lot because it's very expressive and it's got a good tooling experience. Again, this is just one of a few ways to use Radius, and we'll see some other ones later on. Everything that we built in Radius that we're using from Bicep in this example is also an API. It's possible to build your own tools. How this works is that we extended the set of types in Bicep to build Cloud-native primitives that should feel familiar to developers.

Many of the customers that we spoke with had tried to build their own abstraction layers to shield their developers from having to learn Kubernetes and become experts on it. These projects are hard to get right, and often they end up being really expensive for those teams to maintain as their needs become more complicated. You can think of us as building Radius in the open with the goal of creating a universal Cloud-native abstraction that everybody can benefit from. Right now you can use this with Kubernetes, but if it's a success, we're going to take it to other places as well. Now we're going to start customizing this file.

This is the starting point that Radius gave me. First, we're going to add a Redis cache; that's the database we're going to be using for the application. For many of us, getting a database might mean filing a ticket or pinging somebody on Slack. Or worse, somebody's going to e-mail us a password. Instead, I'm going to use a recipe, and Radius is going to create the cache.

When you look at this, you'll notice that I didn't have to say a lot to use the recipe. I didn't have to say what Cloud service to create, how to configure it, or what firewall rules I need. I just say that I want it as part of my application, and Radius does the rest. When I deploy this, Radius will take the recipe in my local dev environment and use it to host Redis inside my Kubernetes cluster. When I go to the Cloud, it's going to use the environments and recipes that Aaron showed you earlier and create the Cloud resources in the right way. At this point I've defined my container and my Redis cache, but I haven't said anything about how they're connected.
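A recipe-backed Redis resource can be sketched like this. The resource type follows Radius's portable resource types, but the API version and surrounding names (the `environment` parameter and `app` resource) are assumptions for illustration:

```bicep
// Sketch of a recipe-backed Redis cache. The environment's registered
// recipe decides what actually gets provisioned: a container locally,
// a managed service in the Cloud.
resource db 'Applications.Datastores/redisCaches@2023-10-01-preview' = {
  name: 'db'
  properties: {
    environment: environment
    application: app.id
    // No recipe name given here, so the environment's default recipe
    // for this resource type is used.
  }
}
```

Notice there's nothing Cloud-specific in the declaration itself; that's what lets the same file move between local dev, Azure, and AWS.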

I can paste in a little snippet here. By adding this, I've declared a connection between my container and the Redis cache. This is something Radius is going to use to inject settings into the application that enable my code to talk to Redis. We'll get environment variables for all the things we need to plug into the libraries we're using to talk to Redis. We want this to feel like magic to developers and to support a bunch of really complex things like managed identity and workload identity.
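The connection snippet itself is small. Inside the container resource's properties, it might look roughly like this (the property names follow Radius's documented connections pattern; the `db` resource name is an assumption):

```bicep
// Added inside the container resource's properties block.
connections: {
  redis: {
    // Pointing at the Redis resource's id is what builds the edge in
    // the application graph and drives the injected settings.
    source: db.id
  }
}
```

At deploy time, a connection named `redis` surfaces to the container as environment variables along the lines of `CONNECTION_REDIS_HOST` and `CONNECTION_REDIS_PORT`; the exact set of variables depends on the resource type on the other end of the connection.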

You should be able to say what you need as a developer, and Radius is going to provide it. Now, since Radius knows about the connection, it's also going to catalog the infrastructure and communication between these components. This builds the application graph that's going to help the whole organization understand the architecture and relationships of the application, and we'll see an example of that later. With that done, I've described the application and we're ready to try it out in my local dev environment. Again, I'm running inside a codespace here where I've already got a Kubernetes cluster stood up, and we already saw me installing Radius on it, so I can use the "rad run" command from the Radius CLI. Radius is going to take that Bicep file and send it into the cluster, where we've got a component that's going to execute that file.

It's going to deploy the recipe, pull the container, and start spinning that up on top of Kubernetes. Then, through some codespaces magic, we're going to get a new browser tab opening here with our application running. I think this is pretty cool because the web app is actually inside of a container, inside of a Kubernetes cluster, inside of a codespace, inside of a VM, inside of the Cloud. There are a lot of layers of virtualization making this happen. This page is just the app that we use to talk about Radius. You can see that we've understood the connection, and you can see that we've understood the settings that came in associated with that connection.

Radius will give you the appropriate data for the things that you need based on the technologies that you're using. Continuing along, let's test out the to-do functionality. I can open the to-do list and very quickly try this out, and you can see that it got saved.

I can say that this is working. Just to prove that it works in another way, we're going to go back to the console here, and you can see some logging output from Redis saying that the message actually made it through. I think we can really say that this worked. Now I've got this up and going in my local dev environment. Let's talk a little bit about the application graph.

Again, I can browse the application graph using the Radius CLI. There's a lot of text on the screen and it's complicated, so I'll talk you through it. Up at the top, you can see the demo container that we defined in Bicep, and you can see that we understood the connection to the database. Down at the bottom, you can see that we understood the database and the incoming connection.

Then this resources section you can think of as: what did Radius do on your behalf? When we deployed this container abstraction to the cluster, these are all the things that Radius created in Kubernetes to actually host your code. Down at the bottom you're seeing some more Kubernetes primitives, because that's how the recipes for local dev work. We want you to use the compute you already have and not have to get out your wallet for development.

The local dev recipes just run in a container in your Kubernetes cluster. Next, we're going to use the environments that Aaron set up for me in AWS and Azure. Again, since I'm relying on recipes to create the infrastructure that the application needs to run, I don't have to make any changes to anything I've done. I'm already ready to go to AWS and Azure.

Since Aaron has done a good job with all the compliance and security types of things, I know that I'm not going to get in trouble for this, because I'm using just the things that my platform team set up for me, and I'm only using them in a supported way. On the left we've got a terminal where we're deploying to Azure; on the right we've got a terminal where we're deploying to AWS. We've sped this up a little bit.

It's not really this fast to spin up Redis in either Cloud, as you may notice. But it looks like we were able to deploy successfully to those two Clouds using those two recipes. To help understand that a little further, we're going to look at the application graph output for the application deployed in both Clouds.

Again, we've got Azure on the left and AWS on the right. Now, the container should be the same, because we didn't change anything and Kubernetes is kind of the same everywhere. But down at the bottom we had different recipes.

It's no surprise that we get different output when we look at the application graph for Redis. We had the hosted Redis service in each of these Clouds, the ones that Aaron wrote the recipes for. On the left we've got Azure Cache for Redis; on the right we've got AWS MemoryDB, which is one of a couple of ways to run Redis on AWS. This is cool if you've ever been in the position, either as an ops person or as a dev, where you've had to reason about the relationships between apps and infrastructure. A lot of times we do this with something like tags.

If this database server goes down, what applications are going to stop working? Or if my application's performance is slow, where do I find the right thing in the Cloud? It can be hard to figure that out, and this is some of the stuff we want to make better with the application graph. Right now we just have this text-mode version of it, but imagine if we could put this into other tools and experiences you're already using and augment those with this data. To sum up what I've shown you so far: we onboarded an existing application using Bicep and Radius as a Cloud-native abstraction. We got up and going in our local dev environment, just on my workstation or in my codespace. Then, when I go to the Cloud, I can use environments and recipes that were created for me, whether by my team, in open source, or by operations experts elsewhere in my organization.

I didn't have to change my application code or learn everything about everything to move from testing to the Cloud. On another topic, I'm going to show you the same application, but as a Helm chart instead. Down here at the bottom, and it should be scrolling in in just a second, we've got a Recipe CRD, and I can use this from the Kubernetes API to do anything that recipes can do. Any Cloud that recipes can support, any recipe that my ops team can think up, I can use from the Kubernetes API. Up at the top, we've defined some annotations on my deployment to enable Radius for the deployment and declare my connection.

When I converted this Helm chart to use Radius, it was really that easy: I added the recipe, I added the annotations, and I could just deploy it like any other Helm chart. This matters because, when we talked to customers about how they manage applications, we heard repeatedly that it's really expensive, difficult, and risky to migrate existing applications.
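The Kubernetes-native version can be sketched roughly like this. The API group, kind, and annotation keys follow what Radius documents for its Kubernetes integration, but treat the exact versions, names, and image as assumptions:

```yaml
# Hypothetical sketch of a Recipe resource plus Deployment annotations;
# exact apiVersion and annotation keys may differ by Radius release.
apiVersion: radapp.io/v1alpha3
kind: Recipe
metadata:
  name: db
  namespace: demo
spec:
  # The portable resource type this recipe provisions.
  type: Applications.Datastores/redisCaches
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  namespace: demo
  annotations:
    radapp.io/enabled: "true"          # opt this Deployment into Radius
    radapp.io/connection-redis: "db"   # declare a connection named "redis"
spec:
  replicas: 1
  selector:
    matchLabels: { app: demo }
  template:
    metadata:
      labels: { app: demo }
    spec:
      containers:
        - name: demo
          image: ghcr.io/radius-project/samples/demo:latest
```

Because these are ordinary Kubernetes manifests, they can live inside an existing Helm chart and deploy through whatever tooling you already use.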

We also found plenty of people that have already adopted, and want to keep using, Kubernetes-native tools like Argo, Flux, and Helm. Wouldn't it be great if you could just add Radius to the thing you're doing already, instead of adopting what feels like a new platform? That's why we added the ability to use Radius directly from Kubernetes, without our abstraction layer. This works the same way and provides the same features as everything you saw before; you're just using Kubernetes-native types and tools to do it.

Granted, everything I showed you so far is a pretty simple application, just a basic to-do app that we've all seen a million times. But we also wanted to make sure that Radius could work for more complex architectures. To prove that out, we took the .NET team's eShop microservices sample and we "Radified" it. As you can see from the diagram, this is a pretty complicated microservices application. There are a number of different storage technologies and architecture paradigms here.

For someone like Aaron, he just needs to set up the environment with recipes for all the things the platform team supports. As a developer, I'm choosing from the technologies that the platform team wants me to use in production, and the recipes are automatically set up for me to create that infrastructure. I just deploy, and the recipes in the environment do the hard work.

The Cloud resources will be created on demand when I need them. Now, the deployment scripts for the eShop sample included thousands of lines of shell scripts and Helm charts. It's really hard to see the application or think about the architecture in all that noise. One of the things I really like about Bicep is that it's great for managing complexity.

When we moved this into Bicep, we were able to break all of that up into modules, keep the code really organized and clean, and build our own abstractions between those modules. Here's a demo of me deploying the application; again, we're working locally, on Azure, and on AWS, across the board. Look at all that stuff flying by, because this is a pretty complicated application.
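That modular structure uses Bicep's standard `module` keyword. A hedged sketch, with file and module names invented for illustration rather than taken from the actual eShop port:

```bicep
// Hypothetical top-level file composing the app from per-service modules.
// Each module declares its own containers and recipe-backed dependencies.
module ordering 'services/ordering.bicep' = {
  name: 'ordering'
  params: {
    environment: environment
    application: app.id
  }
}

module basket 'services/basket.bicep' = {
  name: 'basket'
  params: {
    environment: environment
    application: app.id
  }
}
```

Splitting services into modules like this is what lets a large microservices app stay readable as one architecture instead of a pile of scripts.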

With the environments and recipes that Aaron set up earlier, we can easily get SQL, Redis, RabbitMQ, and Service Bus set up without me having to understand, or even have permission to access, any of that infrastructure. When I moved this into Radius, we could replace all of that infrastructure-management code with recipes, and so we were able to delete about half the code. As a last step, we're opening this in three browser tabs so you can see that it really worked. We took this application and got it working in three totally different scenarios: on-prem, on Azure, and on AWS. From a developer's point of view, I just run "rad deploy", I stick that in my CI/CD pipeline, and Radius handles all of the deployment and configuration. Radius is new, we're just getting started, and we can't wait to see what you're going to build.

Thanks for listening. I'm going to bring up Mark and Brendan to wrap us up. (applause) Brendan Burns: Good job. We'll go back to the slides, I think. Mark Russinovich: You need to press Click.

Brendan Burns: I have pressed "Click." Cool. That's why I have him here. No, we're super excited to see what you can do with building Cloud-native applications.

I think, as we identified, there's been a long history of bringing our APIs, and the way we think about applications, toward more application-oriented primitives in the Cloud. I think with Radius we're maybe closing that loop, finally providing an application representation that you can use, one that separates the concerns of the infrastructure provider from the concerns of the developer. Of course, as that awesome demo we just saw from Ryan showed, it targets any Cloud, anywhere. We're also going to be submitting it to the CNCF, and I'm going to let Mark tell us a little bit more about what we're doing there. Mark Russinovich: Yeah, thanks, Brendan.

Our aspiration here is for Radius to do for applications what Kubernetes did for Cloud-native infrastructure. The combination of the two, I think, is really the magical combination, where, like I said at the beginning, the goal is empowering enterprise devs to just focus, get their jobs done, and understand just the application, while platform engineers worry about compliance, security, and infrastructure. We want everybody to participate. This is not a baked, done project. This is something that we're actively learning on. We have several enterprise partners that we've been working closely with.

They include Comcast, Millennium BCP, and BlackRock. They were part of our announcements a couple of weeks ago, but we're looking for more. If you want to contribute, you can reach out directly and work with us. If you've got people on your team that want to make contributions to Radius, we've got Discord channels and a GitHub repo where you can open PRs and issues. We expect that this is going to evolve to meet the needs of enterprises, and we're really looking for contributions from everybody. To learn more, and to get access to those demos that you just saw, head over to radapp.io.

You'll find tutorials there, you'll find the documentation, and you'll also find the link to the GitHub repo, or, there it is, right there, where you can go and start playing with it yourself and making contributions. We hope to hear from you, and we hope to really revolutionize Cloud-native computing for the enterprise with this.

Again, when I take a step back and look at this, I just don't see any reason not to use it, because every requirement or concern we've heard from enterprises, we believe Radius is addressing or will address. With that, I hope you found this useful and interesting, and I hope to see you participating with Radius. Thanks very much for coming. Brendan Burns: Thanks so much. (applause)

2023-11-27 19:10

