Modernizing your applications with containers and serverless | BRK226H

Hello Build, welcome to the session on modernizing your applications with containers and serverless, where we are hoping to have you leave here looking at app modernization in a whole new light. Are you ready? Yes. All right. So here's the team that's going to be presenting to you today, in order of our appearance. First, there's me: I'm Kamala Dasika, Director of PMM for Cloud Native and Intelligent Applications at Microsoft. My co-presenters today are George Palmer, Devanshi Joshi, and Anthony Chu. We also have

an exciting guest joining us: our customer, Barracuda. Our customer and I are going to have a discussion about their overall experience, detailing the steps they took from being deployed as a monolithic application on premises, what their design considerations were as they modernized to a cloud-native architecture, and the best practices they employed to keep their developers productive and meeting the market.

Let's get started. How many people in this room spend more than 8 hours a day on a device? Raise your hands. It's pretty much everybody, right? I mean, we work in tech. How many spend more than 12 hours a day? That's quite a few of you. All right.

So it's safe to say that's not too surprising given that we are in tech. But this is a pattern that I've observed outside of tech as well. And what this means is that, pandemic or not, our world is still very much digital today, and the app is still very much the currency in this economy. So when the app is the storefront, and the experience that follows the customer around through the entire process until you receive your goods or your transaction is complete, organizations have to focus on delivering high-quality applications that are fast, reliable, and accessible. So customer experience matters even more today in

this digital economy, because the switching costs for customers have gone down dramatically. We only have a few seconds to make an impression on a customer and differentiate ourselves through our customer experience, and that's actually a critical factor today in winning and retaining customers, no matter what your product is. Now, the worst part of it is that

people only notice the tech when it doesn't work. We all know this. So while all of this attention on apps and development is great, the downside is that failures can be very consequential.

And so innovating fast while balancing all of the cost constraints, the unpredictable end-user demand, and bringing along your legacy applications, while also paying attention to security: all of that is a pretty tall order. Thankfully, through the efforts of many developers in open source, including those from Microsoft, Kubernetes, a cloud-native platform, has emerged as the technology foundation that modern-day applications can rely on. So what

do we mean by cloud native? The definition actually started with born-in-the-cloud companies like Netflix, but it has since expanded to include all of those that aspire to the capabilities of those born-in-the-cloud companies. I've shown here some of the defining features of this development paradigm and contrasted them with traditional application development. Some of those include application architecture patterns like microservices: composite applications that you pull together using APIs. The deployment pattern is also different and features short sprints rather than waiting for all of the features to come together, so that different teams can focus on getting their output to market quickly, adopting DevOps to release features more frequently and without a lot of incidents afterward.

So the operational patterns are therefore also adapting to this kind of development methodology, and they basically favor heavy use of automation, cloud, and managed services: a pay-as-you-go model, infrastructure abstraction, and applying a serverless model to get your application to market quickly. So joining me now is someone who's no stranger to the app modernization process. I'm very excited that our customer TC Guvitayo is able to join us to discuss Barracuda Networks' own modernization journey. Now TC

is Director of Software Engineering for application security and cloud. He has been designing and developing software for over 20 years, twelve of those at Barracuda. Welcome to Build, TC. Thanks Kamala, I'm happy to be here.

All right. To kick us off, why don't you tell us a little bit about Barracuda? Sure. Barracuda Networks, founded 20 years ago, is trusted by over 200,000 customers worldwide

to safeguard their employees, data, networks, and applications. We do this by building industry-leading security solutions that are highly effective, easy to buy, easy to deploy, and easy to use. One of those solutions that I would like to talk about today is the Barracuda Web Application Firewall service, part of Barracuda's Application Protection platform. Barracuda's Web Application Firewall, or WAF, protects your applications by monitoring and filtering malicious web traffic. The WAF is typically

deployed between clients and servers, where it proxies web requests and inspects inbound and outbound traffic for threats. Barracuda's WAF is offered in several hardware and virtual models, and as originally most of our customers' applications were on premises, the WAF was designed to support that. As Barracuda's customers migrated their applications to Azure, they

needed their protection to migrate with them. So we released the Barracuda Web Application Firewall to the Azure Marketplace. And though deploying a WAF into your own subscription is effective, many of our customers told us that they would prefer a fully managed solution. After all, one of the primary benefits of cloud adoption is reduced operational burden. So we listened to their needs, and the Barracuda Web Application Firewall as a Service was born. Our journey started with an initial service built on Barracuda's powerful and proven hardware WAF appliances and hosted in Barracuda's existing data centers. Here you can see the

original architecture. In the center is Barracuda's data center, with WAF appliances, load balancing and volumetric traffic filtering to support DDoS protection, and a management layer to control everything. On the right are the applications owned by Barracuda's customers, which can be deployed on premises or in the cloud. On the left

are the end users of those applications. Customers configure application endpoints and update their DNS, and clients then connect to the WAF service, where their filtered requests are proxied to the backend service. This design was then replicated in Barracuda's data centers to form a highly available multi-region architecture. So TC, what prompted your move to Azure if you were already successfully deployed in your own data centers? Yeah, that's a great question. While the release of the

service was successful, it was almost too successful for its own good. We quickly realized that we needed more of basically everything to continue to grow and meet our customers' demands. We needed more capacity in the data centers we had, and we needed more data centers to improve our global reach.

Moreover, we learned first-hand that creating and managing your own data centers is challenging and operationally expensive. There were two main challenges: remote monitoring and scalability. We used IPMI and serial console servers for remote management, but we often found ourselves having to call the onsite personnel to help us check cables and do network troubleshooting. If you've ever had to talk a relative through their home networking issues over the phone, it was a lot like that. And scaling is even more challenging, as it requires extensive capacity planning. Shipping times, especially internationally, racking and stacking the equipment, the inevitable hardware failures, and trying to balance between over-provisioning and under-provisioning were some of the serious issues we encountered, and that's before we tried to expand to new locations. I think

Tim Jefferson, Barracuda's Senior Vice President of Data, Networking and Applications, said it best: friends don't let friends build data centers. We started exploring other options, and as you may know, Azure conveniently has 60-plus regions available globally. So we worked with our partners at Microsoft and came up with a plan, and from then on we expanded into Azure. So for the first part of your plan, you deployed into VMs, like a classic lift and shift? Yeah, that's right. As you can see in this diagram,

the architecture was modified to use Barracuda WAF VMs and Azure native networking. We also extended the management layer to support these new cloud resources. We saw immediate benefits, especially with management and monitoring capabilities, as you would expect. And of course, capacity planning is much easier with the elasticity of VM scale sets. We're

even able to provision entire new regions on demand with infrastructure as code. But there were still challenges. The core issue was that the unit of scale was still a VM, so the blast radius for software updates or VM maintenance was larger than we would like. Also, this lack of

granularity affected our approach to multi-tenancy, which often resulted in uneven distribution of traffic and load. So all of these actually sound exactly like the type of challenges that containers and Kubernetes could address. Yeah, that's exactly right. Since we knew the limitations of

VMs, this was sort of a stepping stone. We had worked in parallel to create a WAF container to run on Kubernetes. The containerization of software as complex as the Barracuda WAF was a significant effort; I believe it took twice as long as we originally anticipated, but it was well worth it. That's great to hear. So did you manage Kubernetes yourself,

or did you use the managed service? Yeah, we chose AKS, because even though I'm an advocate for DevOps, if I have a choice, I'd prefer to spend more time on dev than ops. With this new architecture, the pod was now the unit of scale. We were able to improve our multi-tenant design and fix a lot of the resource utilization issues that we had. Kubernetes supports advanced deployment strategies like blue-green and canary, and we used these to build confidence and release much more frequently.
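(As an aside for readers: a minimal sketch of the blue-green idea TC mentions. The names here are hypothetical, not Barracuda's actual configuration; the point is that two Deployments labeled blue and green run side by side, and flipping the Service selector cuts traffic over.)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: waf-frontend        # hypothetical service name
spec:
  selector:
    app: waf
    slot: blue              # change to "green" to shift traffic to the new version
  ports:
    - port: 443
      targetPort: 8443
```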

And the ease of versioning with containers also brought some additional benefits. For example, even though we had canary deployments, a customer's unique configuration and network traffic might still cause problems that we didn't see in the canary. So we were able to pin the version for that specific tenant while we resolved the issue, and allow their traffic to continue. Also, another benefit was that the WAF

images that we used for the service are the same ones that we could provide to our customers to run in their own environments. With this architecture, we have created a solid foundation on which to build, one that has continued to scale with our growth. The end result of our modernization journey with Barracuda WAF as a Service is an elastic, global-scale, cloud-native service that allows Barracuda to focus on protecting our customers from emerging threats. That's actually awesome. So you mentioned that you like to spend more time on dev than ops.

Are there other practices within Barracuda that you're using to keep your devs productive? Yes, while AKS was the right solution for the WAF service, we use the serverless model quite a bit at Barracuda. A good example of this was a prototype for a new service that was built entirely with Azure Functions. Once the service demonstrated its value and gained customer acceptance, we realized that we could invest additional development into it. One of the things that we noted was that cost optimization was necessary, so we migrated this service to AKS using KEDA with minimal changes to the code. This reduced our compute costs by 80% using reserved instances, which we were better able to anticipate based on the known expected load. And in my experience, Azure Functions are great for a number of use cases.
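(For readers who haven't used KEDA: the kind of setup TC describes centers on a ScaledObject that scales a workload on external metrics, including down to zero, which is where much of the cost saving comes from. This is an illustrative sketch with hypothetical names and a hypothetical Service Bus queue, not Barracuda's configuration.)

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-processor-scaler
spec:
  scaleTargetRef:
    name: queue-processor          # the Deployment that does the work
  minReplicaCount: 0               # scale to zero when the queue is empty
  maxReplicaCount: 20
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: work-items
        messageCount: "5"          # target queue depth per replica
      authenticationRef:
        name: servicebus-trigger-auth   # a TriggerAuthentication defined separately
```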

They can be used for rapid prototyping, especially for APIs. Durable Functions can be used for most task-based workloads, and event-driven execution with scale to zero means we don't pay for what we don't use. Anyone who has had to schedule automated VM shutdowns for evenings and weekends knows how valuable that is. Our teams utilize the ability to execute functions locally within Visual Studio Code, using the extensions, to quickly iterate and do development. They get a real developer experience: they can set breakpoints, and they can even connect to resources that are deployed in the cloud to consume those events.
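(To make that concrete, here is a minimal HTTP-triggered function using the Azure Functions Python v2 programming model. It's a generic sketch, not Barracuda's service; locally you would run it with the Functions Core Tools or the VS Code extension TC mentions, and debug it with breakpoints.)

```python
import azure.functions as func

app = func.FunctionApp()

@app.route(route="hello", auth_level=func.AuthLevel.ANONYMOUS)
def hello(req: func.HttpRequest) -> func.HttpResponse:
    # A trivial HTTP API endpoint, handy for rapid prototyping.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!")
```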

But if you do connect to live cloud events that way, I recommend making sure the deployed function is disabled first, and re-enabling it once you're done, so the two aren't competing for the same events. We have found that new members joining those teams are able to quickly onboard and get coding right away. This increased productivity allows us to get to market faster, all with minimal operational overhead. That is awesome and great to hear. So the need for securing applications, I would imagine, has only grown given the kind of environment we have today and the threats that we hear about in the news. What else is coming up on the horizon for your team? Yeah. So threat intelligence is a really important focus at

Barracuda. We want to derive insights from the data that we collect and use it to improve the security posture of our customers. We're also utilizing Azure AI services. For example, there's an app running in AKS that uses Azure Cognitive Search to identify sensitive data in corporate data shares. AI and the data-driven insights it provides

represent an exciting opportunity for us, and we're also exploring the use of other AI services, like GitHub Copilot, for our development and operations workflows. Anything that reduces cognitive load and automates routine tasks is a big win for developers. That was awesome. Thank you, TC, for sharing your experiences. Really appreciate you coming down here and telling your story to the audience right here. Thanks a lot.

Thank you. So some of you want to build a containerized application in the cloud, I hear, right? That's really just the stepping stone for starting your cloud-native application development and operation journey. An enterprise app really is just code, right? And speaking of code, you can use a lot of the tools and frameworks that Azure is making available for you. From there onwards, let's reimagine the app development process with AI. So here's

your code editor. When you're building your application with GitHub Copilot, as many of you saw in Thomas Dohmke's presentation during the keynote, you can actually have Copilot write your boilerplate code for you. It can comment your code for you, and in general

provide a more readable version of what you intend to write as part of your application. You also saw that if, in the context of modernization, you inherited a piece of code, it might be able to explain that code to you: something that maybe your predecessor wrote, or an open source piece of code that you're reusing. And all of these tools are available to you as you build your new application or as you modernize

your existing applications, in your language of choice, since it supports several different languages. As you build your application, lots of data services as well as AI services from Azure AI are available for you to augment your applications. As TC was just mentioning, they used the Cognitive Search service from Azure as part of their application.

OpenAI is available as a service through Azure. And from an operational perspective, when you are ready to deploy your applications, there are so many possibilities, and to go through them we have George Palmer joining us; he's going to take us through a demo of what some of these possibilities are. Thank you. All right. So we've seen that we definitely want to ensure that we have intelligent apps.

We also want to make sure that we have an intelligent service that can empower us to build those intelligent apps and to be productive with AKS. We want to make sure that you are able to harness all the power of Kubernetes: the portability, the efficiency that it delivers, the scale that it has. But we also want to make sure that we bootstrap and create the necessary tools for you to be productive, successful, and efficient at the end of the day.

So we're really, really happy to announce AKS Copilot, an AI-powered assistant for Kubernetes that will allow you to streamline all your operations. You're going to be able to ask: how can I scale my cluster, or what do I need to do in order to better monitor my application? It will allow you to reduce your skilling efforts: you don't have to decide up front whether to go super deep on Kubernetes and have your teams learn it to the deepest level. You can choose and match what you want to learn, when you want to learn it. With the tools we've built for code to cloud, shown in the previous session, for example, you're able to deploy to Kubernetes without any knowledge of YAML or Kubernetes concepts. But even if you do want that knowledge,

you can go and learn it, and still use AKS Copilot to help you build a manifest for KEDA, for example. As we just saw, KEDA can be part of your application, and you can leverage Copilot to create the scaler for you. Maybe you know Kubernetes very well, but you don't know KEDA, and you can use this to empower your developers so they can focus on code. I think we just heard: focus more on dev than on ops. This is really what AKS Copilot is trying

to do for you as well. And as is typically our custom, we're providing this for free for all clusters through the Azure portal. So let's take a look at what this actually looks like. So here's a regular AKS cluster. It's called

Build AI. We're going to go ahead and jump into our workloads, and we're going to go into our typical create-application-with-YAML experience. But we're actually going to prompt our AKS Copilot and ask it: hey, can you create a deployment for the image AKS Hello World for me, and create a service for it? It will generate an answer for you.

This answer actually includes YAML, so it automatically populates the editor on the left with that manifest. You could go ahead and edit it. It's actually amazingly correct: it has three replicas for our image, it exposes port 80 because it's a web app, and it created a service that maps port 80 to port 80 on the target port. This is pretty cool. It defaults the service to a load balancer; you can go and change it to a cluster IP or whatever you want, and you can just deploy it and see it actually working.
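(For readers following along, a manifest matching what the demo describes would look roughly like this. The image path is the public AKS hello-world sample on MCR; treat the exact tag as an assumption.)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: aks-helloworld
  template:
    metadata:
      labels:
        app: aks-helloworld
    spec:
      containers:
        - name: aks-helloworld
          image: mcr.microsoft.com/azuredocs/aks-helloworld:v1   # assumed tag
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld
spec:
  type: LoadBalancer      # the demo's default; switch to ClusterIP for internal-only access
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: aks-helloworld
```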

So we can take a look at it and see it's deployed, everything is running fine. It came from MCR, it's a good image, and it's already running. And now we can confirm that this is all working, that there's no magic going on here: we can access the service right here through the IP, and everything is working perfectly fine. But there's a little bit more, right? OK, this was nice. It helped us get started. It

helped us deploy an application. But I promised help with operations as well. So let's assume that we actually want to take a look at: OK, is my cluster fine? Is everything OK with my cluster? You can just go ahead and pop open Copilot on the overview blade right away and ask it: hey, can you help me get a query for the minimum, the average, and the maximum CPU usage for my nodes? And it will go ahead and create a query right there for you that you can very easily copy. Then we go into the Log Analytics experience that we have right in the AKS portal, and we can bypass the recommendations because we already know what we're going to do. We

just paste it, run it, and you'll immediately see that query get the result for the minimum, maximum, and average for all nodes. You might not know any KQL; you might have just turned on monitoring and needed some help creating this query. This is not a super complex query, but Copilot could help you with much more advanced queries too.
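(A query along the lines of what the demo describes, using the Container Insights Perf table; this is a sketch, and the exact query Copilot generates may differ.)

```kusto
Perf
| where ObjectName == "K8SNode" and CounterName == "cpuUsageNanoCores"
| summarize min(CounterValue), avg(CounterValue), max(CounterValue) by Computer
```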

You can now go ahead and say: maybe there's something a little fishy with my nodes; I actually want to go and see what's consuming the CPU.

So you could go ahead and say: hey, OK, I want to change my target here. Can you tell me the minimum, maximum, and average for my pods? And once again, Copilot will give it some thought and generate that query for us right there, and we can go ahead and paste it. You could ask for anything.

Say: can you give me the P90 of all my containers? Could you show me how I can scale my cluster? Could you tell me how to generate a manifest for KEDA? Anything that you want. And we can just go ahead and paste the query right there on the screen and see the CPU usage for all our pods in this case. We can pop into any of them; in this case, for example, our metrics server and several other pods that we have there. Beyond what you see right here, we had a full session on this yesterday that I definitely would encourage you to check out on demand, where we show you how to do that from the development process itself, how to do all these operations. We also have several

open source projects you can try out, like kubectl-ai, which brings this experience as a kubectl plugin for your CLI, as well as Kube Copilot, which will actually help you within the cluster and with kubectl to diagnose problems, spot issues, or execute commands, like: hey, tell me which one of my nodes is better suited for receiving my next deployment, and it will actually tell you that exactly. I hope you're as excited as us to take a look at this and actually play with it. We expect to ship it in the

upcoming months for all of you to have access to, and we expect your feedback, and we expect to continue to deliver the Copilot experience for you. Thank you so much. Thank you, George. That was awesome. So as you can see, leveraging AI in app modernization opens up all kinds of possibilities for your applications today: all of the various cool things that you can do with AI, like sentiment analysis, text comprehension, risk modeling. Every type of application deployed today really can get a bit of an assist, just like those who are creating and deploying the applications are getting from AI-assisted copilots. And a couple of other things here we talked about today in terms of additional assistance that developers can get: in terms of their modernization experience and just getting their prototypes to market quickly, serverless is certainly emerging as a great technology. And for that I have my colleague Devanshi

Joshi, who is going to come forward and take us through some scenarios for serverless. Thank you, Kamala. OK, hi everyone. I'm Devanshi Joshi, Product Marketing Manager for Serverless in Azure. So now, as you modernize your application, you're infusing AI and working with real-time events and data sources. Applications are event-driven, reacting to triggers in near real time with some unpredictable and bursty traffic, which is a perfect fit for serverless architectures. For example, millions of events from

different devices stream into an event hub, get processed by serverless compute, and are stored in a relational database for further analytics, or maybe for an analytics dashboard view. Or you have scheduled processing tasks coming through, including batch processing. You have backend workloads for web or mobile backends, even IoT backends, or AI/ML real-time bot messaging that you're processing. So overall, an app architecture often needs to run executables in parallel to your core business logic; this way you're maximizing your compute cycles by running these asynchronous tasks while your core app is focused on the main function. For example, you have a car insurance claim application: the user uploads the car damage image, and

the main function of the app is to process the claim, but there can be several tasks associated with this user action, like compressing that image and storing it in an archive for future reference. So while that image is uploaded, the main function still remains to process the entire claim and verify and validate the user request. For such scenarios, I'm happy to announce Azure Container Apps jobs, and I now invite Anthony Chu to take us through it. Hey, thanks Devanshi. Yeah, so yesterday we were happy to

announce the public preview of Azure Container Apps jobs. So Container Apps is a serverless containers environment for running applications and microservices, and you're able to run things like web servers and APIs and background processes, services that run for a while. And jobs allow us to run containers

that do a little bit of work, or maybe a lot of work, for minutes or perhaps hours, and then exit. And you're able to run them on demand; this could be through an ARM call or through the CLI. A good example of this could be a one-time migration: you want to migrate some data from one database to another, so you create a job, kick it off, it runs, finishes, and exits, and you only pay for the seconds that it runs. You can also run them on a

schedule. That's pretty self-explanatory: you provide a cron expression, and then the job runs anytime you want, and multiple executions of a job can run simultaneously, so you can actually get pretty good scale out of them as well. And lastly, we have event-

driven triggers for jobs as well. So you can imagine jobs being kicked off by queue messages: you want a job to run for each message that's in the queue. There's a way to run that, and it's all powered by KEDA.
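(Before the demo, a sketch of what creating one of these jobs looks like with the Azure CLI, using the scheduled variant as an example; the resource names here are hypothetical, and the image is the public Container Apps jobs quickstart sample.)

```bash
# Hypothetical example: a Container Apps job that runs every night at 2am
az containerapp job create \
  --name nightly-cleanup \
  --resource-group my-rg \
  --environment my-environment \
  --trigger-type Schedule \
  --cron-expression "0 2 * * *" \
  --replica-timeout 1800 \
  --image mcr.microsoft.com/k8se/quickstart-jobs:latest
```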

So I'm going to show you a demo of how this works now. This is a very basic Python script that I wrote. All it does is take a video URL that's passed in, or it can actually reach out to Service Bus and get a message that has a video URL in it. And given that video URL, it's going to download the video.

Then it's going to run OpenAI's Whisper model and perform a transcription on it, so you get speech-to-text, and then output that text. We're just printing it out to the console log here, but you can imagine outputting it to Blob storage or something like that. And to create this job, I can use the Azure CLI with this command. There's a lot of stuff going on here, but I'll quickly point out a couple of things. I am creating a job of trigger type manual, so that I can kick it off myself anytime I want; then I provide the image name that I had before, and I'm passing in the video URL. So the job is going to run, pick up the video URL, download the video, and then transcribe it.
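(A minimal reconstruction of such a script, for illustration only; this is not Anthony's actual code, and the model size, URL handling, and console-only output are assumptions.)

```python
import sys
import tempfile
import urllib.request

import whisper  # the open-source openai-whisper package


def transcribe(video_url: str) -> str:
    # Download the video to a temp file, then run Whisper speech-to-text on it.
    with tempfile.NamedTemporaryFile(suffix=".mp4") as f:
        urllib.request.urlretrieve(video_url, f.name)
        model = whisper.load_model("base")  # small model for a quick demo
        result = model.transcribe(f.name)
    return result["text"]


if __name__ == "__main__":
    # In the event-driven variant, the URL would come from a Service Bus message.
    print(transcribe(sys.argv[1]))
```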

I've already run this command to create the job, and now I'm going to run this command here to start an execution of it. Just like that, the job execution has started. So I'm going to flip over to the Azure portal and take a look at what a job looks like there.
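(The create and start commands in question look roughly like this; the registry path, names, and environment variable are hypothetical.)

```bash
# Create a manually triggered job from the transcription image
az containerapp job create \
  --name video-transcriber \
  --resource-group my-rg \
  --environment my-environment \
  --trigger-type Manual \
  --replica-timeout 3600 \
  --image myregistry.azurecr.io/video-transcriber:1.0 \
  --env-vars "VIDEO_URL=https://example.com/talk.mp4"

# Kick off one execution; you pay only while it runs
az containerapp job start --name video-transcriber --resource-group my-rg
```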

I can click into the execution history, and you can see that a job is running here that's going to run for a few minutes. So I'm going to click into the logs for a previous execution, and if I scroll over and expand this a little bit, you can see that it actually transcribed one of the videos about Container Apps. And if you remember, the script that I wrote can also pull a message off of Service Bus and basically do the same thing, but get the video URL from a Service Bus message. So let's see how we can use that as an event-driven job. Very much like how

we ran the command to create a manual job before, creating an event-driven job has a few more parameters, but they're mostly related to scaling. There are a couple of things that happen when you run an event-driven job. One is

that every polling interval, which in this case I'm setting to 20 seconds, it's going to execute the KEDA scaler, in this example looking at my Service Bus queue, and it's going to see how many messages there are. In this case, I've configured it to kick off an execution of my job for every message. And then for each execution, my script will start, reach out to Service Bus, pull exactly one message, get the video URL from it, and process it. So let's see what happens when we do that.
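(Roughly, the event-driven version of the create command adds scale-rule parameters like these; the secret, queue name, and image are hypothetical.)

```bash
az containerapp job create \
  --name video-transcriber-events \
  --resource-group my-rg \
  --environment my-environment \
  --trigger-type Event \
  --replica-timeout 3600 \
  --polling-interval 20 \
  --min-executions 0 \
  --max-executions 20 \
  --secrets "sb-connection=<service-bus-connection-string>" \
  --scale-rule-name queue \
  --scale-rule-type azure-servicebus \
  --scale-rule-metadata "queueName=videos" "messageCount=1" \
  --scale-rule-auth "connection=sb-connection" \
  --image myregistry.azurecr.io/video-transcriber:1.0
```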

I'm going to go ahead and grab the URL, and I'll head over to Service Bus. There we go; I'll just paste this in here. Normally I would perform this on different videos, but I'm just going to do this on the same one 20 times. So I've sent 20 messages to the Service Bus queue, and now I'm going to head over to my job right here, and we can take a look at the execution history here as well. You can see that the 20 jobs have started, and like the previous one, it's going to take a little while to run, but you can see that I can run 20 of them in parallel pretty easily. And lastly, I'm

going to take you through an example of how a container app can interact with a Container Apps job. So here is an ASP.NET Core application that I have running as a container app. It's a pretty simple application. The idea is that

for this fictitious company, every time it onboards a customer, it needs to run a bunch of processes behind the scenes to, say, read some data or load some data into a new database, provision infrastructure, and things like that. And it can use a job to do that. So in this case we're using a manual job, and with only a few lines of code like you see here, we're getting a managed identity and then calling the ARM API. I'm able to kick

off a job execution whenever somebody calls this API. So this job can run for minutes or maybe even an hour, and it can do all the things it needs to actually onboard that customer. And in order for me to pass in the customer name, I can actually modify the job definition for this specific execution, so that I can pass in the customer name here. That's what I'm doing here in this POST.
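(The app calls ARM directly with a managed identity, but the same per-execution override can be sketched with the CLI; the job name, image, and environment variable here are hypothetical.)

```bash
# Start one execution, overriding the container so this run gets the customer name
az containerapp job start \
  --name onboard-customer \
  --resource-group my-rg \
  --image myregistry.azurecr.io/onboarding-job:1.0 \
  --env-vars "CUSTOMER_NAME=Fabrikam"
```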

And this is what the application looks like. I'm just going to quickly refresh it, and we can go and onboard a customer. So let's add another fictitious customer, Fabrikam, and then maybe we can do it for Contoso as well. And you can see behind the scenes, as the jobs kick off, they're actually running. The advantage of running these jobs in the same environment as your container apps is that they can talk back to the container app and tell it how they're doing. So you can see, as the jobs

are progressing, it's actually updating in real time on this UI. And for this app here, there are also some scheduled jobs running in the background as well, so we can list them off here: the application is just calling the ARM API to list all the scheduled jobs that are running every few minutes. So that's how you can combine Azure

Container Apps and the new Container Apps jobs feature to build pretty complex microservice applications. All right, thank you, Anthony. That was super cool. So OK, here's a visual of what just happened in the demo: a user request hit the container app, and it kicked off a bunch of jobs, event-driven, scheduled, and on demand. OK.

So as you write your application in real-world scenarios, let's move on to the next task for you to tackle: for example, workflow orchestrations. Durable Functions, an extension to Azure Functions, helps you simplify complex stateful or coordination requirements; behind the scenes it manages the state, checkpoints, and restarts for you, allowing you to focus on your business logic. These six patterns are the usual ones you would employ as you design your orchestration. The first is function chaining: a sequence of functions executing in a specific order. Or

it can be coordinating the state of long-running operations with external clients through async HTTP APIs. Then there is fan-out/fan-in, a structure where at the end of parallel execution you aggregate the results. Next we have monitor: you may want to watch a long-running process and take action based on it, for example a periodic table cleanup, maybe. There could also be human interaction involved, especially in approval processes; you could have an escalation path in parallel for when the human is not able to respond in time for immediate action. And lastly there is aggregation: events could be coming in from different event sources, so a function combines all of them together in a batch, and the end user interacts with the cleaned-up event set.
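(To ground one of these patterns, here is a minimal fan-out/fan-in orchestrator using the Durable Functions Python programming model; the activity names are hypothetical, illustrative placeholders.)

```python
import azure.durable_functions as df


def orchestrator_function(context: df.DurableOrchestrationContext):
    # Get the list of work items from an activity function.
    work_items = yield context.call_activity("GetWorkItems", None)

    # Fan out: start one activity per item, all running in parallel.
    tasks = [context.call_activity("ProcessItem", item) for item in work_items]

    # Fan in: wait for every parallel task, then aggregate the results.
    results = yield context.task_all(tasks)
    return sum(results)


main = df.Orchestrator.create(orchestrator_function)
```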

The serverless landscape in Azure starts with a core of Azure Functions and Azure Container Apps. As you consider developing applications, note the end-to-end application ecosystem where serverless enhances the overall journey for your solution: you can easily integrate it with incoming events as well as leverage the broader API economy, internal or external to your solution. You enrich your application with data and process it, either updating the data or getting insights out of it, leading into analytics to fuel inferences, maybe. And lastly, we have that tinge of AI and ML, and IoT applications, to power your applications: you could be running or processing those models for intelligence that drives your solution further. Lastly, this entire Azure serverless landscape can be deployed alongside other app types like AKS or App Service,

together for your entire application architecture. And overall, you can extend it to external APIs outside the Azure platform, including Kafka, Elasticsearch, Power Apps, or even through Azure Arc. So process your events and data just the way you like it and where you like it. For the application lifecycle management journey, a secure supply chain is critical. As you develop and test your apps, choose the code platform that suits you best and the dev tools of your choice, like VS Code; deploy using GitHub Actions to your target environment; and once deployed, monitor it. For consistent performance, use App Insights and Log Analytics. Manage the end-to-end solution architecture components through the Azure portal and ARM templates, and provide governance overall with Azure Policy. All of you here today are

from some industry, some vertical, developing applications tailored to that industry and vertical, for a variety of use cases: whether it's the retail industry, where you're involved with inventory management, or the automotive industry doing predictive maintenance, or banking doing document processing, there is a difference that containers and serverless can make to your modernization journey today. So it is your time to modernize and bring your apps to market faster using containers and serverless. With built-in security and a diverse application ecosystem across data, AI, and observability, services like Azure Kubernetes Service, Azure Container Apps, and Azure Functions abstract away common developer challenges, like managing the infrastructure, to deliver a more seamless and productive experience. It's all about finding the right balance between control and productivity for your app. So to help you get

started on your modernization journey, we have Azure landing zone accelerators: leverage these ready-made baseline architecture designs to deploy with best practices already baked in, one available for Azure Kubernetes Service and one for Azure Container Apps. So, are you ready to modernize your applications? I know you are. There are a whole bunch of other sessions and labs to help you dive deeper, but join us for the Q&A right after this, onsite in Room 444 and also digitally; we'll take all your questions there. Thank you. It's been a pleasure, and we can't wait to

see what you bring with your modernization journey. Thank you.
