AWS re:Invent 2024 - Delivering low-latency applications at the edge (HYB307)


Welcome, everyone. I'm Eric Durand, a product leader for our hybrid cloud services here at AWS, and I spend most of my time working on AWS Outposts. I'm very excited to be here today to talk about delivering low-latency applications at the edge. I've been working on AWS hybrid services since they began, and I've been here to watch the journey since we launched services like Outposts and Local Zones back in 2019. I've seen all the features and innovation we've delivered over those years, as well as our presence expanding globally. I'm joined by Prav Shakra, a principal product manager at AWS who focuses on Local Zones.

For our agenda today, we're going to spend some time talking about low-latency use cases, then do a deep dive first on Outposts and later on Local Zones, and talk about how customers are using these two hybrid edge services to meet the low-latency needs of their customers. At the end we won't be able to do Q&A, but we can meet you outside if you have any questions; we're happy to take them there.

We've seen wide adoption of our AWS hybrid offerings across industries; customers from multiple sectors use AWS hybrid services for various use cases. On the right side of the slide, you'll see customers that are migrating and modernizing their enterprise workloads. This can include back-office applications moving to software as a service. In these migrations, hybrid services can be useful because customers are dealing with the sheer size of the migration itself, interdependencies with on-premises infrastructure and data, and other constraints that can really slow the migration, and hybrid services help ease that along. On the left side of the slide are the low-latency workloads, and this is the topic we're going to focus on today. Examples include real-time medical imaging, content creation, and media streaming, and because it's low latency, the key requirement here is fast response times for your end users. In the middle, I've grouped two together and think of them as data locality: cases where customers use hybrid services because data needs to stay in a particular location and can't go to the region, whether due to regulatory needs, security, or simply a need for local data processing.

The way we think about our hybrid services is that customers have told us they want a consistent experience, the same experience at the edge as when they're working in the AWS region. That includes providing a consistent set of APIs and consistent services. You also want to use the same automation tools whether you're building your applications in the cloud or for the edge, and the same security controls, so you can have a uniform security policy no matter where your applications are deployed or where your data is stored. Finally, you want a consistent operational experience, whether at the edge or in the region. What hybrid edge services at AWS allow customers to do is accelerate their innovation, because they can focus on delivering the application wherever it needs to be. They also let you manage a global deployment with a consistent set of tools and a consistent set of skills, and they let you adopt a cloud security posture anywhere your application is deployed.

So let's focus in on low-latency use cases. As I mentioned, we see hybrid cloud services across a number of industries, and we see low-latency use cases across a number of industries too; it's one of the main drivers
of adoption for hybrid cloud services. The needs here are about delivering real-time access for end users: applications need minimal delays to ensure a responsive experience for your end customer, and some systems even require single-digit millisecond latency, or sub-millisecond latency in some cases. As you can see on the screen, examples include multiplayer gaming, where you measure latency in tens of milliseconds, and financial exchanges and core trading systems, which need to run at sub-millisecond latencies.

In real-time multiplayer gaming, you see companies like Riot, Epic, and Supercell deploy game servers all over the world to bring the application closer to the gamer. As a gamer, lag negatively impacts your experience; it doesn't make the game feel real time. The ideal latency in this scenario is between 20 and 40 milliseconds, and really the only way to accomplish that, due to the speed of light, is to bring the infrastructure closer to the gamer. So these gaming companies deploy latency-sensitive game servers in multiple locations around the world to ensure a real-time, interactive gaming session for their customers.

In the media and entertainment industry, companies like Netflix run expensive artist workstations in the specific locations where their talent is, like Los Angeles, so they can perform video editing and live production. The key latency requirement here is a jitter-free experience for managing and manipulating content on those remote edge workstations. The latency threshold for this type of use case is sub-5 milliseconds from their offices, or from the animation hubs, to the artist workstations. Again, that's very difficult to do from the region, and that's where the need for hybrid cloud services comes in.

In the financial services industry, companies like NASDAQ run financial exchanges in key metros like New York City. This is an ultra-low-latency use case: they need sub-millisecond latencies for real-time data ingestion and distribution, for handling that market data and those trading systems.

In healthcare and life sciences, there's a need for low-latency compute as well. Healthcare customers rely on management software to support things like radiology practices, health systems, and clinical research. In the imaging use case, radiologists need rapid, low-latency access to high-quality images in order to make a diagnosis for a patient, and low latency ensures radiologists can focus on improving patient outcomes without worrying about system performance.

The final low-latency use case is the challenge of enterprise migration. This one might not jump out at you as a low-latency use case, but what we've seen with our hybrid cloud services is that many customers have interdependent systems in their existing on-premises data centers: mainframes, legacy data warehouses, large data sets on existing storage. You can't necessarily provision your new applications fully in the region; you need to bring the region to those data sources and have a hybrid environment. That lets you connect to those legacy data sets locally and, over time, perform a migration to move the application when you're able to.

I've mentioned the hybrid services, so let's talk a little about what those are. We have an entire cloud continuum of services, ranging from the region, which I hope everyone is familiar with, out to distributed locations. As we work out from the region, the first stop is Local Zones.
Local Zones are deployed in major metropolitan areas and large industrial centers, and we're growing that footprint all the time; Prav is going to talk about that. We also have Wavelength Zones, which bring AWS compute and services into 5G core network locations to improve processing for mobile applications. Then we move to on premises, the world I spend most of my time in, and that's where we have AWS Outposts. AWS Outposts allows you to extend the region, to extend AWS infrastructure either to your colocation facility or to your own premises. In this space we also have Dedicated Local Zones: Local Zones that are fully managed environments deployed for a specific customer at a customer-specified location. Then we have solutions that allow you to run both on AWS-managed offerings and off AWS-managed hardware, on third-party commodity hardware: Elastic Container Service Anywhere (ECS Anywhere) and Elastic Kubernetes Service Anywhere (EKS Anywhere), which let you integrate your container environments on any hardware with AWS. Finally, out at the far edge, we have solutions like AWS Snowball, which extends cloud computing capabilities to remote, rugged locations with limited, or perhaps no, connectivity.

Let's talk about extending AWS services on premises, specifically with AWS Outposts. As I mentioned, Outposts extends AWS infrastructure, including EC2 and a select set of services, directly into your on-premises environment. That doesn't have to be a data center or a colocation facility; it can be an IT closet, a back office, or a factory floor, and we'll talk about the different form factors that enable that.

These are the form factors, what we refer to as the Outposts family. There are two hardware architectures for Outposts: a full 42U rack, the industry-standard size you'd find in a data center, and two server form factors, a 1U server and a 2U server. The 42U rack is the same hardware, the same rack, that we use in our regions, in our own data centers, and the rack itself is managed entirely by AWS. The way to think about this rack is that it's the smallest piece of region hardware you can get on premises. The customer responsibility when taking one of these racks is to provide the power, the space, and the network connectivity, and we do the rest. The 1U and 2U server form factors give you smaller, more flexible deployment options, which is what I was alluding to earlier: these servers don't necessarily require a rack. You can mount them in a back office, in an IT closet, on a factory floor, really any location where a 42U rack can't fit but you still want to extend EC2 and run with a consistent set of APIs. You might need just one or two of these servers, or three or four, or you might manage whole fleets of them. For the servers, we do recommend that if you're running a business-critical application, you always run two for high availability.

Before I move on, I wanted to take a minute to thank all of our existing Outposts customers and Outposts partners. They range across all the industries we've been talking about: customers in financial services, in gaming, in manufacturing, in communications and the telco space, as well as in
the public sector. These customers are leveraging Outposts to run EC2 on premises for the use cases we've been talking about, including low latency, and it enables them to have a truly consistent and secure experience no matter what environment they're deploying in. That creates efficiencies for both developers and operators: I can develop once and deploy where I need, and I know how to operate it regardless of where it is, whether in the region or at the edge. It also allows these customers to accelerate their innovation, because they can focus on developing their application and less on managing hardware. And finally, we have a number of partners, some of whom are included on the slide, that help customers modernize their on-premises infrastructure, modernize their applications, and provide some of the best support in the industry.

As I mentioned, we launched Outposts back in 2019, and since then we've been very busy expanding the availability of the service around the world. Today Outposts, both racks and servers, is available in over 75 countries and territories. We did this in a few phases. When we first started the service, the list was not nearly as long: we shipped and deployed Outposts in countries and territories where we were already doing business and where AWS was a known quantity. What we've been doing the last several years is what I consider our second phase of expansion, and that's gone rather quickly, but it has entailed setting up business entities, setting up shipping depots, and creating new logistics capabilities to be able to deploy in all these new locations and meet certain regulatory requirements. Now I'd say we're closer to our third phase, which is more of the long tail of countries. We will continue to do that, and what drives our prioritization of that expansion is feedback from customers. Customers tell us all the time, "this is where I need Outposts"; we take that data in, we talk about it all the time, I work with my team on it, and we constantly work to expand. So if you see a country where Outposts isn't available when you need it, let your AWS account manager know, or tell me outside afterward. We want that data so we know where to go next.

AWS Outposts brings EC2 on premises; that's the way I like to think about it. It also brings other services, but at its simplest it lets you leverage EC2 on premises. That gives you flexibility in deployment and a consistent experience across the cloud and on premises, using the same APIs, the same tools, and the same services, at least a select set of them, that you're using in the region. On consistency in development: this allows developers to use that same set of tools and APIs, and there's no need for separate training to manage what you have on premises just because it's a different environment; it's the same skill set you're using in the cloud. It also gets you out of the business of the undifferentiated heavy lifting of infrastructure management. This is a fully managed offering: we manage the hardware and are responsible for ensuring it stays up and running. With our hybrid services we employ the same shared responsibility model as in the region, with some additions: with Outposts, the customer is responsible for providing physical space, power, and network connectivity, and then we manage all the hardware and
infrastructure. We manage every Outpost like it's an extension of our regional fleet: we send personnel on site, we replace hardware when needed, we do all the break-fix. And last but not least, you have a single pane of management. Outposts are completely configured and managed from the AWS console, so the same console you use to deploy services in the region is the same console you use to manage each and every Outpost, whether that's a rack or a server. That also lets you use the same tools to manage this infrastructure: things like CloudTrail and CloudWatch, and any other alarming services you would set up in the region, you set up the same way for your Outpost. As I mentioned, this simplifies IT management complexity for customers, amplifies developer productivity, allows for faster development cycles, and lets your on-premises applications and infrastructure move at the pace of the cloud. It takes a cloud-first mentality on premises and lets you focus your resources on differentiating your business through application development, not infrastructure management.

Let's go a little deeper on the rack. As in the picture I showed earlier, this is a standard 42U rack. It fits into most data centers, although it is quite large and quite heavy, so when we deploy these we send an advance team on site to ensure you have the correct power, the correct space, and the correct floor-loading capacity. A fully loaded rack weighs about 1,700 pounds. These are made to order: when customers order them, we build them in our manufacturing facility and ship them in a secure crate with a tamper-evident enclosure, so we can ensure it has been securely shipped. Once it arrives, we deliver it on site and install it at your location. There are many considerations when we're doing that; that's why we come out and look in advance, because we've found all sorts of surprises in people's environments and data centers when bringing these racks in.

The rack has been designed with redundancy in mind, so there's a minimum of two of every component: redundant power, redundant networking switches, redundant network connectivity, at least two of everything. The only thing there isn't necessarily two of, although I would highly recommend it, is the compute servers within the rack, because those are configured by the customer. We work with you to determine the instance types you need, the compute workloads you're running, and the storage capacity you need, and then we populate the rack with the right number and size of servers. We then constantly monitor the rack, as I said, as an extension of our regional fleet, to ensure its uptime and reliability.

On the rack's specs: the top-of-rack switches support 1, 10, 40, and 100 Gb network fiber for uplink, and there are dual network switches, as I mentioned. In terms of power, depending on how you populate it, the rack can use quite a bit: it comes with two 5 to 15 kVA power feeds, and we recommend those be connected to distinct, separate power sources for maximum reliability and uptime, either two distinct power feeds in your colocation facility or data center, or one to a feed and one to a backup generator. Those are things we'll work through with you when we're specking it out to deploy. We have dual network connections coming out of the Outpost for uplink into your environment, and we also
recommend those uplinks go to distinct networks. Most colocation facilities offer a myriad of options; the one thing you want to avoid is both paths coming back to a single piece of fiber up the road that gets cut by a backhoe, leaving the redundancy all for naught. And as I mentioned, the compute is customizable: you choose the amount of compute you need based on your requirements, and we recommend you plan for extra capacity. You never know if you'll have a bursty workload or expand your use of the service, and because this is on premises, we do have to do a bit of capacity management. We'll work with you to balance your compute needs against the space and power requirements and match that to your business requirements.

The rack architecture is really an extension of the AWS region. We extend your infrastructure on premises: Outpost resources appear as a private subnet within your VPC, and the Outpost is connected to a home region. Every Outpost anchors back to a specific home region, and you manage it from that region just like your other resources there; it's managed by the regional control plane. That's what allows seamless integration across multiple Availability Zones, or even, as Prav is going to talk about, with other Local Zones anchored to that same region.

What Outposts introduces, different from the region, is two new logical networking constructs. One is the local gateway, which, as the name indicates, is the logical connection that provides connectivity into your local infrastructure. That could be other infrastructure in your data center, third-party storage, or other systems the application needs to access; it can also be the internet connection out of the colocation facility. The other logical construct is the service link, which you can think of as something like a site-to-site VPN. That's where all the managed traffic goes from the Outpost back to the AWS region: when we first provision the Outpost, we provision it over that service link; when you later add new AMIs or new services to the Outpost, some of that service traffic traverses the service link; and a small amount of it is used for management and monitoring of the Outpost itself.

I've mentioned EC2 a lot; let's talk about the other services that are also available on Outposts. We offer the same tools and services that are in the region, as I've mentioned, which allows that seamless development and operational model across cloud and on premises. As you can see, we have about a dozen services available locally on the Outpost, and of course, since the Outpost is connected to the region, your applications can also leverage services in the region where that makes sense. The things to think about there: if I'm architecting an application for data residency, I want to ensure that the services I'm using, the storage services where I'm storing my data, for example, are local to the Outpost. I don't want to store data with regulatory implications anywhere but the Outpost; I don't want to bring it back to the region. The other consideration, if we're architecting for low latency as we're discussing today, is which services are in the direct path of my end customer, so I can ensure the fastest response time; those need to be local to the Outpost. But perhaps I want to use a Lambda function on the back end for some data transformation that doesn't directly impact my user; that can still run back in the region.
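To make those constructs concrete: an Outpost subnet is created like any other subnet, just with the Outpost's ARN attached, and traffic to on-premises systems is steered through a route to the local gateway. Here's a sketch with the AWS CLI; every ID and ARN below is a placeholder:

```shell
# Create a subnet that lives on the Outpost (IDs/ARNs are placeholders).
aws ec2 create-subnet \
  --vpc-id vpc-0123456789abcdef0 \
  --cidr-block 10.0.3.0/24 \
  --outpost-arn arn:aws:outposts:us-west-2:111122223333:outpost/op-0123456789abcdef0

# Route traffic for the on-premises network through the local gateway.
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 192.168.0.0/16 \
  --local-gateway-id lgw-0123456789abcdef0
```

Everything else (security groups, route tables, instance launches into that subnet) works the same as it does for a subnet in the region.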
We have a number of core compute services: I've obviously mentioned EC2; we have EBS; we have a local version of S3 on the Outpost; we have a Route 53 local resolver; I'm not going to mention them all, but we have container services, of course, ECS and EKS, and then databases with RDS, and we also have EMR, among the other services you see, and again you can extend back to the region. One other consideration not on this slide: in cases where we don't have a service local to the Outpost, there are a number of AWS Partner Network (APN) solutions, ISVs that have built API-consistent software you can run on the Outpost. That still lets you architect your application to be completely consistent with a deployment in the region; it just leverages that third-party component when you're running on premises, if that's your need, and you'll still be able to pull it back, with that same consistent API experience, into the region when that makes sense.

So let's look at the servers. The servers are quite a bit smaller and more flexible. This is a newer offering, we came out with it a few years back, and it's a space we're very excited about and seeing a lot of traction in. The 2U server is based on Intel processors, and the 1U server is built on Graviton processors; when we launched the 1U server, it was actually the first time you could get Graviton outside of an AWS data center. The servers use significantly less power than the racks, 1 to 2 kVA, and they use standard AC power of the kind you'd typically find in virtually any rack or environment, so you can just plug them in and use them. They have a 10 Gb uplink that also down-selects to 1 Gb, so you can run at 10 or 1 Gb. The other construct that's different here, which I'll talk about in a minute, is that a server doesn't have a local gateway; the rack has one because it has multiple servers and multiple switches. Here we just have what we call the LNI, the local network interface, a layer 2 connection to your local network, and it's where all the traffic traverses. You'd be plugging these into your own switch, so you'd be managing the network in this case, and traffic between redundant servers traverses your local network at layer 2. The server also leverages the service link, the same logical connection back to the region.

Outpost servers use the same logical constructs as Outpost racks; it's consistent across both. EC2 instances are hosted in a private subnet within the home region, just like with the racks, and you have flexibility with your VPC: it can span multiple Availability Zones and can include Outpost servers and Outpost racks for high availability. That's what I mentioned before about a minimum of two Outpost servers when you're deploying them, if high availability is a concern, which it is for most workloads. Each server has a unique Outpost ID, which enables you to configure your application for fault tolerance, failing over between the unique Outpost servers, almost like they're acting as separate instances. As noted at the bottom of the slide, we have some customers deploying whole fleets of these servers in manufacturing use cases, and they use CloudFormation templates to rubber-stamp their standard configurations for the servers, which allows them to expedite their deployments.
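As a sketch of that rubber-stamp approach, here is a minimal CloudFormation template that launches one instance into a subnet created on an Outpost; the parameter names and instance type are illustrative assumptions, and the instance type would have to be one the target Outpost actually carries:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Launch one EC2 instance on an Outpost subnet (illustrative sketch).

Parameters:
  OutpostSubnetId:
    Type: AWS::EC2::Subnet::Id   # a subnet created with the Outpost's ARN
  AmiId:
    Type: AWS::EC2::Image::Id

Resources:
  EdgeInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref AmiId
      InstanceType: c6id.4xlarge  # placeholder; must match the Outpost's capacity
      SubnetId: !Ref OutpostSubnetId
```

Deploying the same template per site, varying only the parameters, is what lets a fleet of servers be stamped out consistently.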
The local network interface, as I mentioned, is layer 2 connectivity provided directly into your switch, direct communication between your on-premises resources and the Outpost server. The service link traverses this physical link, but it's a site-to-site tunnel back to the region, just like on the rack, carrying the metadata and the provisioning software when you first deploy the server, and then it's the connection by which you get back to the region to use additional services. We see customers using these servers in factories, in smart retail store components, in manufacturing as I mentioned, and we even have customers who use them for experimentation in their labs; a lot of AWS employees use them for experimentation in their own labs as well.

I want to talk about a few new announcements, one made right at re:Invent on Sunday, and then I'll talk about the next one; I'm super excited about these and I'll talk about their implications. We have a new feature, and it's the first of more to come, where we are simplifying the use of third-party block storage with AWS Outposts. For customers with an on-premises data center, you've likely already made an investment in on-premises storage. Our launch partners for this are NetApp and Pure Storage, and what we've done is make it possible to attach block data volumes configured on a NetApp array or NetApp filer, or on a Pure Storage array, to your EC2 instances hosted on an Outpost rack or an Outpost server. This lets you take advantage of the storage you've already invested in, gives you additional capacity if you already have it there (or if it makes sense for you to deploy more), and lets you take advantage of, frankly, the things these storage vendors do very well. They've been managing on-premises storage for a long time: thin provisioning, deduplication, snapshotting, all sorts of advanced data management capabilities. With these partnerships, and as we continue to expand them, we'll continue deeper integration with our storage partners to give you more flexible deployment options on premises.

The other piece, and this is very critical for low-latency applications as well: as I mentioned, Outposts is a connected service. We have that service link back to the region, and that's how you manage the Outpost, so connectivity to the region is required if you want to make any mutable changes to your instances or configuration changes to your Outpost. A couple of weeks back we announced our first iteration of static stability for EC2 instances on Outposts. What this means is that you can have a loss of network connectivity, or have the Outpost rack or server power-cycled, and your instances that were already running will continue to run; in the event of a loss of power, an instance that was running will come back up automatically and continue operating. So if you have a business-critical application and you lose network or lose power, you now have higher resiliency than you did before this announcement. The guidance we're giving today, based on the testing we've done, is that we support disconnection periods of up to 7 days. We're going to continue working on that and doing more testing; we've certainly seen customers go longer, but that's what we're stating based on what we've validated with our engineering. The other thing to think about, with this and the previous announcement, is how you architect these low-latency applications.
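One practical consequence for application code: during a disconnect, operations that need the service link (mutating EC2 calls, regional APIs) will fail transiently, so it's worth wrapping them in retry with backoff rather than treating the first failure as fatal. Here's a minimal sketch; `with_backoff` is a hypothetical helper written for this talk, not an AWS API:

```python
import time

def with_backoff(call, max_attempts=5, base_delay=0.1, sleep=time.sleep):
    """Retry a control-plane call with exponential backoff.

    Hypothetical helper for code running on an Outpost: mutating calls
    need the service link to the region, so transient disconnects are
    retried rather than surfaced immediately as hard failures.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller decide
            sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...
```

For a multi-day outage you'd layer queueing or reconciliation on top, but the idea is the same: locally running instances keep serving while control-plane work waits for the link to come back.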
In particular, with the third-party storage integration you can now run boot volumes off a third-party storage array, which was always designed to operate disconnected; that's a pre-cloud heritage. Combined with static stability for EC2 instances, you can architect your application to have persistent compute and persistent storage through any network outage. I'm going to wrap up, before I hand over to Pav, with a few customer use cases. Mercado Libre is one of Latin America's leading e-commerce platforms. They needed to optimize their distribution center operations while maintaining strict low-latency requirements for the operational technology (OT) systems on premises in those centers. They did this by deploying AWS Outposts racks to enhance the existing infrastructure in their distribution centers, then integrating their automation systems to streamline warehouse operations and improve overall efficiency. Since the Outposts were there, they then migrated their physical access controls and their Wi-Fi access point controllers, all of which are now hosted on the Outposts in those distribution centers. They've also been testing robotic control services on Outposts servers, so they can extend their Outposts fleet to more locations and build further efficiency into their overall operations. This enabled them to achieve their goal of single-digit-millisecond latency and to improve real-time responsiveness for critical workloads in their distribution and service centers. It has also allowed them to have a more
consistent management experience across all of these locations. The next one I'll talk about is Vector Limited. Vector is New Zealand's largest distributor of electricity and gas, serving about 30% of the country's population in areas that represent about a third of the nation's GDP. They manage their electricity distribution network with GE's Advanced Distribution Management System (GE ADMS), a business-critical piece of national infrastructure that manages the power grid in New Zealand. They moved this onto AWS Outposts, architected with dual redundant AWS Direct Connect links for network redundancy. That enabled a zero-touch edge deployment of GE ADMS and improved their outage management with near-real-time geolocation of electricity issues, so they can triage where outages are occurring and address them during maintenance windows far more effectively. It also let them use the AWS Region for the workloads that aren't national infrastructure and aren't latency sensitive, so the same operations team that manages their IT now manages their OT, building more efficiency into their overall operations. The last one I'll touch on is an announcement also made earlier this week at re:Invent, and it shows the pace at which customers are adopting Outposts to run business-critical systems. This is PAS-X MES, a manufacturing execution system from Körber Pharma whose end customers are pharmaceutical manufacturers. Körber has validated the solution on AWS Outposts for two different configurations that they're
supporting with pharmaceutical customers. The first is a multi-rack Outpost configuration, which achieves a level of high availability and resiliency while addressing the data locality concerns and low-latency requirements this application has. The second architecture deploys PAS-X MES on redundant Outposts, allowing a multi-site deployment for an even higher level of redundancy and resiliency. In the multi-rack Outpost deployment, redundancy is handled at the Kubernetes control plane: PAS-X runs local Kubernetes control plane instances for orchestration and management across the deployment, and uses Amazon RDS on Outposts for seamless data handling and availability. Failover is accomplished through the Kubernetes cluster, and the clusters remain operational even during service link outages because they have no dependency on the Region, which ensures maximum uptime for this critical MES system. The next architecture is the multiple-Outposts, active-active deployment. This takes full advantage of synchronous RDS replication for the databases, ensuring data consistency and high availability between the two sites, that is, the two Outposts deployments. The active-active deployment also allows continuous operations: if there is a failure or outage at one site, they fail over to the other site in active-active fashion, ensuring uptime for this critical system. The databases are always consistent and up to date thanks to the synchronous replication being done with RDS on the Outposts.
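The value of synchronous replication in that active-active design can be shown with a small sketch. This is plain Python, not RDS, and the site names are invented: a write commits to every healthy site before returning, so whichever site survives already holds every committed write.

```python
# Plain-Python sketch of the active-active pattern described above
# (not RDS code; site names are invented).

class Site:
    def __init__(self, name):
        self.name, self.up, self.data = name, True, {}

class ActiveActivePair:
    def __init__(self, a, b):
        self.sites = [a, b]

    def write(self, key, value):
        # Synchronous replication: every healthy site must apply the
        # write before we acknowledge it to the caller.
        live = [s for s in self.sites if s.up]
        if not live:
            raise RuntimeError("no site available")
        for s in live:
            s.data[key] = value

    def read(self, key):
        # Any surviving site can serve the read with identical data.
        for s in self.sites:
            if s.up:
                return s.data[key]
        raise RuntimeError("no site available")

a, b = Site("outpost-a"), Site("outpost-b")
db = ActiveActivePair(a, b)
db.write("batch-42", "released")
a.up = False                     # lose an entire site
print(db.read("batch-42"))       # -> released (survivor is consistent)
```

With asynchronous replication the surviving site could be missing the most recent writes; synchronous commit is what makes the failover lossless, at the cost of every write paying the inter-site round trip.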
That replication manages data consistency, and failover is managed between the Outposts, so in this case they can lose an entire site: the Outpost, the power, or the connectivity at that site. Together, these two architectures significantly enhance their ability to meet their reliability needs while still addressing low latency for the application. Thank you for your time; I'm going to hand it over to Pav to talk about Local Zones. Thanks, Eric. Hi everyone, I'm Pav Shakra, a principal product manager at AWS. I've been working on Local Zones for the last five years; in fact, I led the launch of our first Local Zones in LA, and since then I've been focused on which services we bring to Local Zones and how we improve the experience. In this section I'll cover what Local Zones are, where they're available, what services are available in them, and then some examples of customers using Local Zones to achieve low latency, either to their end users or to their on-premises installations. Before I dive deep, let's understand what Local Zones are. Logically, you can think of a Local Zone as very similar to an Availability Zone; it just sits in a different physical geography. AWS has deployed infrastructure in these metropolitan areas and connected each Local Zone to its Region over our backbone, a redundant, secure set of connections, so you can use services in the Local Zone and, over the backbone, reach the Region and use the many services there as well. Like the Region, Local Zones offer elastic, on-demand, pay-as-you-go pricing, and
similar to Regions, you also get the same operational and security posture you expect from Availability Zones. So it's no surprise that customers across many industries, including telecom, healthcare, manufacturing, and gaming, are using Local Zones to achieve low latency to their end users or to their on-premises installations. Before we get to the next set of slides, let's discuss why we even need Local Zones. Take this visual: imagine I'm an end user, a gamer in the Lagos metro area, who needs to join a multiplayer gaming session. My only option is the nearest Region, which is Cape Town, and with me in Lagos and the Region in Cape Town, the latency depends on traffic traveling from Lagos to Cape Town: tens of milliseconds, which is not ideal for a good gameplay experience. We heard this feedback from many customers, so we launched Local Zones across multiple metro areas, including Lagos. As a gamer in Lagos, I now have a Local Zone in the same metropolitan area and can reach the gaming server within single-digit-millisecond latency, giving the predictable, low latency customers expect. That's the distributed gaming use case; we'll also get into use cases like hybrid, where customers use Local Zones to reduce latency to their on-premises installations or corporate assets. The next question is where these Local Zones are available. We launched our first Local Zones in LA in 2019, and since then we've been adding Local Zones across the globe, including over 16 Local Zones in the US.
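The latency figures in the Lagos example can be sanity-checked with back-of-the-envelope math: light in fiber propagates at roughly 200,000 km/s, so round-trip time is about 1 ms per 100 km of path. The sketch below uses great-circle distance, so it is a lower bound (real fiber routes are longer); the city coordinates and the fiber-speed constant are my assumptions, not figures from the talk.

```python
import math

def rtt_ms(lat1, lon1, lat2, lon2, fiber_km_per_s=200_000.0):
    """Lower-bound round-trip time between two points over fiber.

    Uses great-circle (haversine) distance; real fiber paths are longer,
    so actual latency is higher than this estimate.
    """
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    h = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    dist_km = 2 * r * math.asin(math.sqrt(h))
    return 2 * dist_km / fiber_km_per_s * 1000.0

# Lagos (6.52 N, 3.38 E) to Cape Town (33.93 S, 18.42 E): tens of
# milliseconds even as a lower bound, consistent with the talk. Within a
# metro (tens of km) the same math drops below a millisecond, which is
# why an in-metro Local Zone gives single-digit-millisecond latency.
print(round(rtt_ms(6.52, 3.38, -33.93, 18.42)))   # -> 48
```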
The idea is to reduce latency and make sure customers across the continental US can access services within single-digit-millisecond latency. And we didn't stop at the US; we've launched over 15 Local Zones internationally, in places like Copenhagen, Lagos, Muscat, Auckland, and Perth, and we've announced the locations you see in white here, more than 10 locations where we plan to launch Local Zones over the next couple of years, including Bogotá, Hanoi, and Athens. The intention is for customers to get access to AWS infrastructure anywhere they want across the globe. Now that we've covered where Local Zones are available, let's look at what services are available in them. When we launch a Local Zone, we focus on bringing the services where latency matters, so we brought EC2, EBS for block storage, ECS and EKS for container needs, and networking services like Route 53, Shield Standard, Direct Connect, and VPC. We've also added the Application Migration Service to help you migrate workloads to Local Zones easily. As we added Local Zones in more locations, we realized the use cases were unique to each location, so certain services have been added in select locations depending on the use cases we saw. The ones you see in white here, such as FSx, NAT Gateway, RDS, Amazon GameLift, EMR, and ElastiCache, are available in select locations; now that we know how to bring these services to Local Zones, we want to bring them to more locations based on customer feedback. And as I mentioned, Local Zones connect back to the parent Region over our backbone, which means that beyond the local set of services you see at the top, you can also access all the
services back in the parent Region. Services like CloudTrail, CloudFormation, EC2 Auto Scaling, CloudWatch, and S3 are all available in the parent Region, and you can access them over the backbone from the Local Zone. In the last couple of years we've also heard from customers that they want more capability across this set of services, so over the last year or so we've added additional Local Zones in places like Atlanta, Chicago, Dallas, Houston, and Miami, with access to the latest-generation instances, including sixth- and seventh-generation families. We've added more EBS volume types, including gp3, io1, and sc1, plus features like IPv6 and Spot Instances that customers have been asking for. These capabilities are available in select locations, and the idea is to bring them to existing metros as well as the newer locations where we plan to launch Local Zones. We just discussed that a Local Zone is a logical extension of the Region, and that's reflected in the pricing as well. There's no cost for enabling a Local Zone; once enabled, the prices for EC2 instances and services running in the Local Zone are specific to it, and you can always find them on our service pricing pages. When it comes to data transfer, we treat Local Zones much like Availability Zones and the Region, so accessing S3 in the parent Region from an EC2 instance running in the Local Zone is free of charge. And beyond On-Demand, you can use Savings Plans, as well as Spot Instances in select locations. Before we jump into examples of how customers use Local Zones, let's discuss how it all works in practice. From the experience perspective, you should really think of Local Zones as similar to Availability Zones, except
that you first need to opt in. You enable a Local Zone for your account; once you do, it shows up alongside the Availability Zones, and you can extend your VPC by creating a subnet in the Local Zone and start launching resources into it. This architecture is a good example of how it works in practice: a VPC in US West (Oregon) is extended into the Seattle Local Zone by creating a subnet there, and when you do that, the gateways and route tables are taken care of automatically. Because Local Zones have their own internet egress and ingress, and their own Direct Connect, you can reach them from your on-premises installations or from your end users with extremely low latency. Direct Connect is an interesting case. When we launched Local Zones, we realized a lot of customers were looking at them as an extension of their own on-premises estate: they connect from their on-premises installation, whether their own data centers, colocation facilities, or offices, to a Direct Connect location, which in turn connects to the Local Zone, our infrastructure, through the virtual gateway. With that in place, we've seen customers like Mindbody achieve latency as low as 1 to 2 milliseconds, which enables a hybrid architecture: some parts of your application keep running on premises while you migrate other applications to the Local Zone. Now that we've discussed how Local Zones are being used, another piece I want to touch on is how to think about higher availability alongside low-latency access. All
in all, there are multiple options available here. You can partition your application between the Local Zone and one of the Availability Zones in the Region, just as you would partition between two Availability Zones in the Region. We've also seen customers partition workloads between nearby Local Zones, for example Houston and Dallas, and there are metros where two Local Zones are available in the same metro, which gives you a higher-availability option there. Finally, we have many customers, like FanDuel, who partition workloads between the Local Zone and Outposts to achieve higher availability while ensuring low-latency access. With all these options, we do recommend working with your partner or an AWS specialist to ensure the architecture meets your requirements. At the beginning of the presentation Eric discussed various low-latency use cases, including multiplayer gaming. Now I'll give a couple of examples of customers using Local Zones for low-latency access to their end users or their on-premises installations, and the business impact Local Zones have created. The first is Epic. Customers like Epic need to deploy game servers in multiple locations to be closer to their end users. When we launched our Local Zones in Dallas, Epic started using them to achieve low-latency access for Fortnite players in the US and in places like Mexico. By doing that, Epic was able to ensure low latency and an ideal gameplay experience, and, by not maintaining their own data centers, they can scale capacity up and down as needed for their gamers and their multiplayer gaming sessions.
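One way to picture the partitioning options described above is as latency-based routing with health checks: serve from the lowest-latency healthy site, and fail over to the paired site (another Local Zone, an Availability Zone in the Region, or an Outpost) when it is unhealthy. A minimal sketch follows; the site names and latency numbers are invented.

```python
# Minimal failover sketch (illustrative; site names and latencies are
# invented) of the partitioning options: prefer the lowest-latency
# healthy site, fall back to the paired site when it is impaired.

def pick_site(sites):
    """sites: list of (name, latency_ms, healthy) tuples. Returns the
    name of the healthy site with the lowest latency."""
    healthy = [s for s in sites if s[2]]
    if not healthy:
        raise RuntimeError("no healthy site")
    return min(healthy, key=lambda s: s[1])[0]

sites = [
    ("houston-local-zone", 2, True),   # primary: single-digit ms
    ("dallas-local-zone", 8, True),    # nearby Local Zone
    ("us-east-1-az", 35, True),        # Region fallback
]
print(pick_site(sites))                # -> houston-local-zone
sites[0] = ("houston-local-zone", 2, False)   # primary impaired
print(pick_site(sites))                # -> dallas-local-zone
```

In practice this policy would live in a health-checked DNS or load-balancing layer rather than application code; the sketch only illustrates the ordering of preferences.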
The second example is Mindbody, which goes back to the architecture I was referring to earlier. Customers like Mindbody have told us it can be daunting to move a portfolio of interdependent applications to the cloud. What they do is use Local Zones as a way to extend their on-premises installation: they use Direct Connect to reach the Local Zone from their own premises, which in turn enables a hybrid architecture. Mindbody kept some applications on premises while moving other parts of their applications to the Local Zone, all with very low latency within the same metro. As a result, they didn't need to refactor many applications, yet were still able to move parts of them to the cloud. Both Mindbody and Epic are great examples of customers using Local Zones for low-latency access, either to end users or to on-premises installations. Now that we've covered both Outposts and Local Zones, let's talk about how to choose between the two options. It really comes down to two factors: the location where you want to run your compute, and the latency profile your application needs. First, if there is a Region close to you or your end users that meets your latency requirements, we of course recommend the Region, which gives you access to far more services and scale. Then there are locations where the Region doesn't meet your latency requirements but a Local Zone closer to you or your end users does; in that case, Local Zones are a great option. And finally, there are scenarios where there is no Region or Local Zone available
that can meet your latency requirements, for example manufacturing, where you really need ultra-low latency; in those scenarios customers rely on Outposts, and they're a great solution there. This is exactly what we've seen with customers like Riot Games and FanDuel, who leverage a combination of Outposts and Local Zones to bring their applications wherever they need them. With that, we've come to the end of the session; thanks again for joining. Since this is a silent session we won't be able to cover Q&A here, but Eric and I will both be available outside to take questions, and we're more than happy to connect offline as well. And finally, don't forget to complete the session survey in your mobile app. Thank you.
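The closing guidance condenses to a simple decision order, which can be sketched as a small helper (the function and parameter names are my own):

```python
def choose_deployment(region_meets_latency, local_zone_available, lz_meets_latency):
    """Condenses the talk's guidance: prefer the Region for breadth of
    services and scale, then a Local Zone if one is close enough, and
    fall back to Outposts when neither meets the latency requirement."""
    if region_meets_latency:
        return "Region"
    if local_zone_available and lz_meets_latency:
        return "Local Zone"
    return "Outposts"

print(choose_deployment(False, True, True))    # -> Local Zone
print(choose_deployment(False, False, False))  # -> Outposts (e.g., a factory floor)
```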

2024-12-08 23:25
