AWS re:Invent 2022 - Innovate with AI/ML to transform your business (AIM217-L)


Please welcome Vice President, Machine Learning and AI Services, Bratin Saha. [Music]

Good afternoon, everyone. Welcome, and thank you for being here. I'm Bratin Saha, VP of AI and machine learning services at AWS. When I earned my PhD in computer science, machine learning was a largely academic pursuit, but over the last five years machine learning has transitioned to become a rapidly growing mainstream endeavor. I feel incredibly fortunate to have been part of AWS during this time, and to have been leading machine learning at AWS during this time, because in large part this transformation has been driven by AWS. This also gave me the opportunity to help build one of the fastest growing services in AWS history, with more customers doing machine learning on AWS than anywhere else.

As a reminder, the AI and machine learning services are part of AWS's overall portfolio of data services that Swami talked about in his keynote this morning. Machine learning helps computers learn from data, identify patterns, and make predictions, and so where AI and ML services come in within the entire AWS portfolio of data services is when you're trying to extract insights from your data and then act on those insights.

Before I get into the details of our AI and machine learning, let me share an interesting anecdote with you. From time to time my friends and colleagues send me books and articles and events of interest in machine learning. Five years back, I would get books like this, a really deep book on machine learning written for experts, data scientists, and researchers. These days, however, I get book links like this: yes, babies are now part of the machine learning community. That tells me machine learning is indeed getting democratized, and the data seems to back this up. According to a McKinsey survey on the adoption of AI, almost 60 percent of companies now say that they use AI in at least one function in their organizations, and that shows how machine learning has transitioned from being a niche activity to becoming integral to how companies do business. In large part, AWS drove this transformation by building the broadest and deepest set of machine learning services, and as a result, today over a hundred thousand customers do machine learning on AWS.

Now, customers approach machine learning in one of three ways, and therefore at AWS we have built three layers of machine learning services so we can meet customers where they are. At the bottom layer are the machine learning infrastructure services. This is where we provide the machine learning hardware and the machine learning software that customers can use to build their own machine learning infrastructure, and this is meant for customers with highly custom needs who want to build their own machine learning infrastructure. At the middle layer is Amazon SageMaker. This is where AWS builds the machine learning infrastructure so that customers can focus just on the differentiated work of building machine learning models, and because customers just focus on the differentiated work, that is where most ML builders are. At the top layer are our AI services. This is where AWS embeds machine learning into different use cases, such as personalization, forecasting, anomaly detection, speech transcription, and others, and because AWS embeds machine learning into these services and customers can simply call them, they are able to embed machine learning into their applications without requiring any ML expertise.
Now, customers across every domain and across every geo, more than a hundred thousand of them as I said, are using these services to innovate at a very rapid clip. Since machine learning is now so important for innovation, I want to spend the rest of this time talking about the key trends that drive machine learning innovation, the key enablers that let customers scale out their machine learning innovation, so that as you're thinking about your own machine learning strategy you can consider how you can leverage these key trends. This should also give you an idea of where machine learning is headed.

To innovate with machine learning, it's really important to be able to leverage these six key trends. First is the exponential increase in the sophistication of machine learning models, and being able to use the latest models. Next is harnessing the variety of data available to train machine learning models. Then comes machine learning industrialization, or the standardization of machine learning infrastructure and tools. Then there are ML-powered use cases, or automating use cases by embedding machine learning into them. Then there is responsible AI, or making sure that we are using machine learning in an appropriate way. And finally there is ML democratization, in other words making sure that more users have access to machine learning tools and skills.

Let's now dive deeper into the first trend, the exponential increase in the sophistication of machine learning models and how you can use these latest models. One way we measure the sophistication of machine learning models is by counting the number of parameters within these models; you can think of parameters as variables or values that are embedded inside machine learning models. In 2019, the state-of-the-art machine learning models had about 300 million parameters. Now the state-of-the-art models have more than 500 billion parameters. In other words, in just three years the sophistication of machine learning models has increased by 1,600 times. These models are also called foundation models, and because they're so massive, you can train them once on a lot of data and then reuse them for a variety of tasks, and as a result they reduce the cost and effort of doing machine learning by an order of magnitude.

In fact, within Amazon we also use these foundation models for a variety of tasks, and one of the tasks we use them for is software development. We are now making this available to our customers through Amazon CodeWhisperer, and so I'm very happy to announce that Amazon CodeWhisperer is now open for all developers. Amazon CodeWhisperer is a machine-learning-powered coding assistant, and it generates code just like a developer writes code. It's based on a giant foundation model that has been trained on billions of lines of code, and it comes integrated with IDEs, the tools that software developers use for writing their programs. As a developer is writing a program in their IDE, CodeWhisperer understands the intent of the developer, and by understanding what the developer is trying to do, CodeWhisperer is able to generate code just like a developer writes code.

Let me give you a demo of how CodeWhisperer works. You have this IDE, and the programmer is writing some code. CodeWhisperer looks at this code and understands it is Python. CodeWhisperer also understands that the developer wants to use AWS APIs, just by looking at the libraries being used. All that the developer has to do is write a comment. The developer says, hey, generate me a function that loads data from S3 and uses encryption. That's all the developer has to do. CodeWhisperer looks at the context, looks at the comment, understands what the developer wants to do, and then CodeWhisperer automatically generates the code.
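To make that concrete, here is a rough sketch of the kind of helper functions such a comment might yield. This is an illustrative example written for this transcript using boto3, not the literal output of CodeWhisperer, and the bucket and key names are placeholders.

```python
import boto3

# Illustrative only: roughly the shape of code a comment like
# "load data from S3 and use encryption" could produce.
def load_data_from_s3(bucket: str, key: str) -> bytes:
    """Read an object from S3; SSE-KMS encrypted objects are decrypted transparently."""
    s3 = boto3.client("s3")
    response = s3.get_object(Bucket=bucket, Key=key)
    return response["Body"].read()

def save_data_to_s3(data: bytes, bucket: str, key: str) -> None:
    """Write an object to S3, requesting server-side encryption explicitly."""
    s3 = boto3.client("s3")
    s3.put_object(Bucket=bucket, Key=key, Body=data, ServerSideEncryption="aws:kms")
```

In the IDE, suggestions like these appear inline as the developer types, based on the surrounding code and comments.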
This is amazing, this is transformational, and I would encourage all of you, and all of the software developers in your organizations, to go try out CodeWhisperer, because it's going to change the software development paradigm.

Now, many other customers are also using foundation models. To hear more about this, let's look at this video from LG AI Research. [Music] Isn't it amazing that a machine learning model is generating fashion designs that were displayed at New York Fashion Week? Eighteen months back this was unthinkable. Customers are now asking us to let them use foundation models on AWS, and they want us to make these foundation models available to them on SageMaker, because they don't want to have to build these foundation models themselves. So I'm very happy to announce that foundation models from Stability AI are now available on SageMaker. [Applause] These models are some of the most popular foundation models available today, and they are going to be transformational; they are going to be able to act as assistants to your creative work. And so I'm very happy to welcome Emad Mostaque, the CEO and founder of Stability AI. Welcome, Emad.

Hi everyone. Thank you, Amazon and AWS, for having me here today. Stability AI, the company, was set up 13 months ago; it's been quite a period. Our mission is to build the foundation to activate humanity's potential through AI. What does that mean? These foundation models are just so amazingly flexible, trained on almost the entirety of human knowledge. We've seen gigantic models of 540 billion parameters, we've seen flexible models that can do fashion, as we've seen, and text, and others. But we thought: what if we made these models available to everyone? What if we built AI for the people, by the people? To do that, we develop communities. We have OpenBioML doing protein folding, Carper doing code and other models, Harmonai doing audio, and EleutherAI doing language, and these communities have tens of thousands of developers that work with our core team and our partners to build some of the most advanced foundation models in the world, which we then give away to everyone. We give it away to stimulate this sector and to see what we can create around it.

The most famous model we've released is Stable Diffusion, which was led by the CompVis lab at the University of Munich, with our team, Runway ML, EleutherAI, LAION, and many others contributing. It's an interesting model: in just two gigabytes of file size it can generate any image in any style. We took a hundred thousand gigabytes of images and labels and compressed it down to that, and it's been an absolute revolution. You just type in "floral wolf", "a color splash lady", or "a cat light", and that's what you get, all in a matter of seconds. It's taken the world by storm. This chart is the time to get to 40,000 GitHub stars: Ethereum and Bitcoin just got there, on the left-hand side, and that's Stable Diffusion getting there in 90 days, one of the most popular pieces of software ever, let alone AI. You can see Kafka and Cockroach and other projects there as well. The developer community is hundreds of thousands strong, building hundreds of different applications.
It runs on your MacBook M1 without internet; it runs on your iPhone now. This is a step change. Last week we were proud to release Stable Diffusion 2.0, developed entirely at Stability, which is another step forward: a cleaner dataset, better quality, less bias, and faster. These are some of the example images that were created from it. We worked very hard to listen to community feedback, so we made it safer, we have attribution mechanisms coming in, and we built this all on AWS. We're happy now to take this forward, and I'll give you some examples of the types of things you can do.

It's hit photorealism, or at least it's approaching that. These people do not exist; this content does not exist; these were created in two seconds on G5 instances. These interiors do not exist, but they do now, just from a few words of description. This is a revolution, and you can take this general model and create anything. Or, Suraj Patil at Hugging Face took 10 images and created a Mad Max world and a Mad Max model in just an hour. You can take your own content and bring it to these models, or in fact bring the models to your data. This is one of the revolutions we've seen, because typically you've had to do massive training runs where these models learn about the world, and then you can extend that knowledge. So hopefully we don't end up like that, although the cars are kind of cool.

But it's not enough just to have models that can do anything. What if the images aren't quite right? We released depth-to-image, which builds a 3D depth map so you can transform one image into another just with words. You can use it, for example, to transform a CEO into a robot, or something else. And if that image itself isn't correct, we can do inpainting; we can make him cool. The ability to transform and adjust these pictures is amazing, and it will be through natural language and new interfaces. Beyond that, you have things like our four-times upscaler, soon to be eight times; it's a bit like "enhance, enhance, enhance" on a procedural TV show. This technology is a revolution, I mean, look at the whiskers there, it's fantastic, and it's getting faster and faster and better and better. When we released Stable Diffusion in August, gosh, the 23rd of 2022, it took 5.6 seconds to generate an image. Now it takes 0.9 seconds, thanks to the work of our partners at NVIDIA.
Today I'm proud to announce Distilled Stable Diffusion. A paper will be released today, and the models will be available very soon on SageMaker. We've managed to get a 10 times improvement in speed, so it's not 0.9 seconds anymore. It usually takes 50 steps of iteration to get to an image; for those images you just saw, it now takes five, and in fact, in the last 24 hours since I submitted this, it now takes two. What does that mean? It means you're heading towards real-time generation of images in high resolution. That is completely disruptive for every creative industry, or any image generation industry, and it's something everyone has to get used to now, because what we've done in the last year is enable humans to communicate visually. Talking is easiest, then writing; visual communication is awful, especially slides. We'll be able to make this PowerPoint presentation just by talking within the next couple of years, and that's amazing.

That's why we're delighted to work with SageMaker. AWS and Stability work together to build one of the largest open source public cloud clusters in the world; we have, gosh, over 5,000 A100s. Working with SageMaker we have unprecedented quality of output and unprecedented resilience, and this is across our model suite. For example, GPT-NeoX from our EleutherAI community is the most popular language model foundation in the world; it's been downloaded 20 million times. Working with SageMaker, we took it onto 500 to 1,000 A100s. To give an example, the fastest supercomputer in the UK has 640 of them; we went from 103 teraflops to 163 teraflops of performance within a week, a 60 percent performance increase. Scaling our infrastructure is incredibly hard, and making these models available is incredibly hard. We think that with SageMaker and the broader Amazon suite we can bring this technology to everyone, to create not only one model for someone but models all around the world, and make this accessible. We have audio, video, 3D, code, and other models coming, and these will be available as tools to use in CodeWhisperer and others, for you to create amazing new things, to activate the potential of your businesses, your communities, and humanity. We're super excited to see what you're going to create. Thank you, everyone.

Thank you, Emad. I'm really excited by what customers will be able to do with Stable Diffusion on AWS. You can imagine, as these models start developing photorealistic images and start doing it in real time, all kinds of content generation will get disrupted.
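For a sense of what running one of these models looks like in practice, here is a minimal, hedged sketch using the open source diffusers library with a public Stable Diffusion 2 checkpoint. It is not the SageMaker integration itself (on SageMaker you would typically deploy the model from JumpStart to an endpoint), and the model ID, prompt, and hardware choice are just illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion 2 checkpoint (illustrative model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # e.g. an A10G GPU on an EC2 G5 instance

# Text-to-image: fewer inference steps trades quality for speed,
# which is the knob that distillation pushes much further.
image = pipe(
    "a floral wolf, color splash, studio lighting",
    num_inference_steps=50,
).images[0]
image.save("floral_wolf.png")
```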
Now, I talked about foundation models: they have billions of parameters, they need terabytes of data to be trained, and that means they need lots of compute, and lots of compute at very low cost. That is why AWS is also innovating on machine learning hardware. AWS Trainium is a purpose-built machine learning processor that has been designed from the ground up for machine learning tasks. In fact, compared to GPUs, it has twice the number of accelerators, 60 percent more memory, and twice the network bandwidth. What this means is that Trainium can provide you more compute power than any other processor in the cloud, and not just that, Trainium provides you the lowest cost of any processor in the cloud. Because it has such a compelling value proposition, we have been collaborating with a lot of customers on developing Trainium, and to hear more about this collaboration, let's listen to Aparna Ramani, who is the VP of AI and Data Infrastructure at Meta.

Hello, I'm Aparna Ramani, VP of AI, Data, and Developer Infrastructure Engineering at Meta, and a PyTorch Foundation board member. It is my pleasure to talk about Meta's AI relationship with AWS. Our collaboration has been expanding since 2018, when Meta AI researchers started using AWS for state-of-the-art AI research. PyTorch is seeing great adoption among large enterprises and startups and is a leading machine learning framework today. For years now, Meta's PyTorch engineers have been collaborating with AWS on key PyTorch projects, such as co-leading and maintaining TorchServe and making open source contributions to TorchElastic. More recently, we have been working together on PyTorch enhancements for AWS's purpose-built ML chips, Inferentia and Trainium. We are excited to see AWS launch Trainium-based EC2 instances; our engineers saw near-linear scaling across the training cluster for large language models. Meta has also collaborated extensively with AWS to provide native PyTorch support for these new Trainium-powered instances. AWS contributed a new XLA backend to torch.distributed that makes it really easy to migrate your models to Trainium instances. This also enables developers to seamlessly integrate PyTorch with their applications and leverage the speed of distributed training libraries and models. We look forward to continuing our collaboration through the PyTorch Foundation and beyond.

I'm truly thankful to the Meta team, because I think this collaboration between AWS and Meta is going to make it much easier to use Trainium and PyTorch and to do machine learning on AWS.
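As a rough illustration of what that PyTorch path looks like, here is a hedged sketch of a single training step targeting an XLA device, which is the programming model the Neuron SDK for Trainium builds on (on a Trn1 instance you would install torch-neuronx, which registers the XLA device). The model, data, and hyperparameters are placeholders, not anything from the talk.

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm  # provided by torch-xla / torch-neuronx

device = xm.xla_device()  # resolves to a Trainium (or other XLA) device

# Placeholder model and data, purely for illustration.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(32, 128).to(device)
labels = torch.randint(0, 10, (32,)).to(device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
# Steps the optimizer and executes the accumulated XLA graph on the device.
xm.optimizer_step(optimizer, barrier=True)
```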
Let me now get to the next key trend that drives machine learning innovation, and that is harnessing the variety of data available to train machine learning models, harnessing multiple modalities of data. Data fuels machine learning, and so at AWS we have been building a variety of data processing capabilities so that customers can prepare a variety of data, multiple modalities of data, as I mentioned. You have SageMaker Ground Truth, which can be used for processing images, audio, video, text, and other forms of unstructured data. You have SageMaker Data Wrangler, which can be used for processing structured data. And then you have SageMaker notebooks, which can be used for Spark-based data processing. All of these allow customers to train machine learning models to extract insights from data, insights that let machine learning systems answer the who and the what. For example, if I take a trained machine learning system, show it this image, and ask what this image is about, it will be able to answer that this is an image of a football game, and if I ask who is in this image, it will be able to identify all the players. But if I ask when this game was played, or where it was played, unfortunately machine learning models do not do a good job of answering the when and the where. Ironically, most of the data generated in the world today actually comes tagged with geospatial coordinates that let you answer the when and the where; it's just too hard to process this data, because it needs special visualization tools and special data processing primitives. But it's important to answer the when and the where, and that is why we are augmenting our machine learning capabilities to train with geospatial data. At this morning's keynote we announced the public preview of SageMaker's geospatial machine learning capabilities, which will now allow customers to train models with geospatial data and answer the when and the where.

Now, the automotive industry uses geospatial data in a variety of ways. For example, BMW uses geospatial data for many different use cases. To talk more about this, I am pleased to welcome Marco Görgmaier, the general manager of AI and data transformation at BMW. [Music]

Thank you, Bratin. Good afternoon, everyone, it's great being here with you. My name is Marco Görgmaier, and I'm heading our data transformation and artificial intelligence unit at the BMW Group. The vision and the mission of our team is to drive and scale business value creation through the usage of AI across our value chain. Now, looking at our products: at the BMW Group, we believe that individual mobility is more than just moving the body from A to B; we believe it's also about touching the heart and stimulating the mind. What you see here is the BMW i Vision Circular, a compact all-electric vehicle that shows how a sustainable and luxury approach could look in the future, and we believe this future is electric, digital, and circular. Today I have an exciting use case for you where we touch on all three of those areas.

Before I jump right into the use case, I want to give you a short overview of where we stand with our data and AI transformation. We've built up a data, analytics, and AI ecosystem at the BMW Group. We had more than 40,000 of our employees engage here; they created thousands of data assets in the company that can be reused and brought siloed data together, and based on this they were able to deliver more than 800 use cases with more than 1 billion US dollars of value since 2019. So we're taking this transformation very seriously, and one main area we focus on is sustainability.

Today I want to walk you through one specific area there, namely mobility. Around 60 percent of the world's population lives in cities and urban areas, and that's also where 70 percent of greenhouse gas emissions are generated, so clearly we can make the biggest contribution here, and that's why we, the BMW Group, are getting involved. Our vision, and the idea, is to assist city planners in solving problems in those urban areas. Let me give you three examples of how we already do this today. We are able to train machine learning models to predict how new traffic regulations, for example eDrive Zones, can reduce traffic and gas emissions locally. We can also help identify where there is insufficient charging infrastructure, since obviously that prevents people from switching to an electric vehicle. And the last example: based on machine learning models, we can predict how changes in pricing policies, for example for parking or for using certain streets, can impact drivers' commuting routes, and therefore estimate the impact on traffic and emissions. All of these problems are characterized by geospatial information, so to solve them we had to extensively use geo services within machine learning, such as map matching, efficient geohashing, or digital maps, and we opted to test the new geospatial capabilities Bratin just mentioned. Let's see how, and with what results. Specifically, for our fleet customers, so large company fleets, it's difficult to foresee how their share of electric vehicles will look in the future. So we set ourselves the goal of training machine learning models to learn correlations between engine type and driving profiles. The rationale behind this was that if such a correlation exists, then the model could learn to predict the affinity of certain drivers for an electric vehicle.
Of course, when we did this we fully anonymized the data and worked only at a fleet level, so we could never draw any conclusions about individual drivers. Now let's see how the solution works. We started from anonymized raw GPS data of where vehicles are driven and parked, and then we converted those GPS traces into routes using map matching. If a route were a sentence, then the landmarks along the route would be words, so we used a natural language processing model to predict which routes are likely to be taken by EV drivers. In parallel, we built a second model to cluster vehicle parking locations, to predict where EVs are likely to be parked, for example near charging infrastructure. Then we merged the two models to triangulate the predictions, and at the end of the training, the hybrid model was capable of predicting how likely it was for specific fleets to convert to EVs, with an accuracy of more than 80 percent.
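To make the route-as-sentence idea concrete, here is a toy, hedged sketch of that framing: routes become space-separated sequences of landmark tokens, and a simple bag-of-words classifier predicts EV affinity. The tokens, labels, and model choice are invented for illustration; BMW's actual pipeline uses map matching, SageMaker's geospatial capabilities, and a proper NLP model.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each "sentence" is a map-matched route, each "word" a landmark ID (made up here).
routes = [
    "charger_12 mall_3 highway_a9 office_park_7",
    "highway_a9 gas_station_4 suburb_22",
    "charger_12 city_center_1 charger_30",
    "gas_station_4 rural_road_5 gas_station_9",
]
is_ev_route = [1, 0, 1, 0]  # illustrative fleet-level labels

# A bag-of-landmarks plus logistic regression stands in for the NLP model.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(routes, is_ev_route)

new_route = "charger_30 office_park_7 highway_a9"
print(model.predict_proba([new_route])[0][1])  # predicted EV affinity for the route
```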
Let me show you three things that really helped us be so quick in building this solution. One advantage of the SageMaker geospatial capabilities is the standardization of common APIs to access, transform, and enrich geospatial data. For example, for reverse geocoding, SageMaker provides a single managed interface to APIs through the integration with Amazon Location Service, which in turn sources high-quality geospatial data from Esri and HERE. Second, with SageMaker you have pre-built algorithms to split the raw dataset along geospatial boundaries, so the data can be used for training and inference. And in the end, of course, you need to visualize, and there are great pre-built visualization tools really tailored to geospatial data. To sum it up: we went from idea to solution in just eight weeks, with a high prediction accuracy of 80 percent in that short time. It was really great using these services, and they helped, but we also had a great collaboration with the Amazon Machine Learning Solutions Lab team and our internal BMW team. Bratin, thank you very much for the great collaboration, and let's move on. [Applause]

Thank you, Marco. Truly inspiring work at BMW, and I'm really impressed by how BMW has successfully applied machine learning to automotive, because it's hard, and I have some personal experience with it. In a previous life I worked on self-driving cars, and machine learning then was hard; it was hard to apply machine learning to automotive. My management would ask me from time to time, when will these cars work, and I would tell them, look, we've got to have patience; these cars have to be at least 18 years old before they can drive by themselves. In hindsight, what I realized is that we lacked an industrial-scale machine learning system, a machine learning infrastructure that would have allowed us to quickly iterate on developing machine learning models, and that would have made machine learning development robust, scalable, and reliable.

That gets me to the next key trend that drives machine learning innovation, and that is ML industrialization. Let me first define what machine learning industrialization is and why it's important. ML industrialization is the standardization of machine learning tools and machine learning infrastructure, and it's important because it helps customers automate and make development reliable and scalable. Five years back, you would have customers deploying maybe half a dozen models; now you have customers deploying thousands of models, and they train models with billions or hundreds of billions of parameters, and the infrastructure often makes trillions of predictions a month. When you're talking of billions and trillions, you need an industrial-scale machine learning infrastructure, and on AWS you can use SageMaker for standardizing and industrializing your machine learning development; tens of thousands of customers are doing that now. In fact, AstraZeneca moved to SageMaker and was able to reduce the lead time to start machine learning projects from three months to just one day. Think of it: three months to just one day. Even within Amazon we are using SageMaker for industrializing machine learning development; for example, the most complex Alexa speech models are now being trained on SageMaker. To hear more about this, let's start with Alexa. Hey Alexa, I'm curious, how are you able to answer all the questions that people ask you so intelligently?

Hi Bratin, thanks for the compliment. There's actually a whole team of applied scientists and engineers who train ML models that power my intelligence.

Thank you, Alexa. I'm now pleased to welcome Anand Victor, VP of Alexa ML Development, who can talk about the Alexa machine learning infrastructure and how they use SageMaker to industrialize their machine learning development. [Music]

This is awesome. Before I get started, I was wondering how many of you are Alexa users in the room; make some noise. And if you're a Hey Google or Siri user, maybe you should be. I'm kidding, I'm kidding, don't get worried. In my role at Amazon, I'm on fire for ML builders everywhere, and I'm really excited to be here to speak about how SageMaker has helped the Alexa ML builders innovate faster. Our mission for Alexa is to become an indispensable assistant, a trusted advisor, and a fun and caring companion. Today Alexa supports 17 languages, with 130,000-plus skills and 900,000 developers building on Alexa, and of course these are active on more than 100 million Alexa-powered devices. To deliver this awesome experience, behind the scenes Alexa is powered by thousands of ML models that power the billions of customer interactions that happen worldwide, and my team is specifically responsible for the tooling that enables these thousands of ML builders to build effectively on Alexa. Of course, we need to do this at massive scale, millions of GPU hours, but more importantly we need to do this securely, while maintaining customer privacy.

When we started on this journey with SageMaker, we launched one of our simpler Alexa models to prove that SageMaker does help our scientists innovate faster. It worked; the scientists for that particular model were so happy. But the broader business teams and the security teams were still not convinced; most of the feedback was, oh no, this is not going to work, very unique one-off use cases. And we realized that to really go with SageMaker, it was go big or go home. So we picked one of the biggest, most complex, most critical models for Alexa at the time: the Alexa speech recognition model. A little bit of mea culpa: they were right, there were gaps we had to fix. So we worked closely with the SageMaker and other AWS teams to design a secure foundation. This secure foundation included an air-gapped network, fine-grained permission controls, and a secure browser that enabled our ML builders to interact with the data inside SageMaker. This has now become a standard pattern if you're going to industrialize ML with critical data.
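For teams outside Amazon, a first step toward that kind of secure foundation might be a SageMaker Studio domain that only communicates through a VPC you control. This is a minimal, hedged sketch using boto3; the subnet, security group, and role identifiers are placeholders, and the actual Alexa setup described above involves much more (air-gapped networking, fine-grained permissions, a secure browser).

```python
import boto3

sm = boto3.client("sagemaker")

# Create a Studio domain whose apps have no direct internet access:
# all traffic flows through your own VPC (placeholder IDs below).
sm.create_domain(
    DomainName="secure-ml-domain",
    AuthMode="IAM",
    AppNetworkAccessType="VpcOnly",  # route traffic through the VPC only
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0"],
    DefaultUserSettings={
        "ExecutionRole": "arn:aws:iam::111122223333:role/SageMakerExecutionRole",
        "SecurityGroups": ["sg-0123456789abcdef0"],
    },
)
```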
With this secure foundation in place, we use the same tools you do to ingest and store training data in S3, and we use the same SageMaker toolset to develop, train, and host ML models. Of course, our ML builders are so happy, because they get to focus on building and executing experiments instead of wasting time building and managing infrastructure, literally saving them multiple hours every week. And of course the business teams are happy: not only did we actually raise the security bar for Amazon by moving our most critical model onto SageMaker, the pay-for-what-you-use model has helped us increase our resource utilization, and this lets us train more models and more iterations with the same resources to improve the Alexa customer experience. But of course, it's still day one for us; all these happy ML builders still have a truckload of experiments and features they want from SageMaker for the next wave of Alexa functionality.

Before I leave, I want to leave you with some words of wisdom. Some of you are in my role, where you own the ML infrastructure for your teams. You're going to go back and tell them, hey, get the ML models onto SageMaker. And what can you tell them? You're going to tell them, Bratin told you, hey, I saw babies doing it on SageMaker? Yeah, you're going to get beat up; don't do that. You've got to say, hey, Alexa is running on SageMaker, the most critical model is running on SageMaker, so we can do this. But more importantly, my learning has been that as a leader who is leading and owning the infrastructure for ML builders, we need to be on fire for ML builders, and they need to hear this from us, not just think it. So before I go, I want to practice this with you. I often say I'm on fire for ML builders, so I'm going to ask, are you on fire for ML builders in the room, and I want you to say, I'm on fire for ML builders. You got that? You're going to shout it: I'm on fire for ML builders. You have that? Yes? Okay, who's on fire for ML builders? Oh come on, guys, louder. I'm on fire for ML builders. I love it. Thank you, guys. [Applause] [Music]

Thank you, Anand. I really look forward to all of the innovations that Alexa comes up with. Now, one of the capabilities of SageMaker that makes it easy for customers to standardize their machine learning development is SageMaker Studio notebooks. These notebooks are based on the open source Jupyter notebooks that revolutionized data science by making it easy to prepare data and experiment with machine learning models, and as these notebooks have become more popular for development, we saw an opportunity to make them easier to use on SageMaker. So I'm pleased to announce that we just launched the next generation of SageMaker Studio notebooks, [Applause] which makes it easy for customers to visually prepare their data, to collaborate in real time, and to quickly move from experimentation to production. Let me dive a little deeper into the details. Machine learning development today is a highly collaborative activity, but developers use one tool for developing their models and a different tool for communicating with each other: they use notebooks for developing their models, but they communicate with each other over email or Slack or other ad hoc ways, and that makes their collaboration a little disjoint. With this new generation of notebooks, SageMaker now allows you to both develop and collaborate within the notebook itself.
What that means is that multiple users can simultaneously co-edit and read these notebooks and files. Not just that, these notebooks are also integrated with source code repositories like Bitbucket and AWS CodeCommit, and that makes it much easier to manage the multiple versions of these notebooks that get created as users collaborate with each other. Now, when you want to go from experimentation to production today, a data scientist has to take all of the code they have written in the notebook, paste it into a script, convert it into a container, spin up the infrastructure, run their code, and then tear down the infrastructure. Instead, with this new generation of notebooks, all you do is click a single button, and SageMaker does all of the work of taking your code, converting it into a container, spinning up the infrastructure, running your container, and then tearing down the infrastructure. So what used to take weeks before takes only a few hours now.
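For context, the scripted path being automated here looks roughly like the following sketch with the SageMaker Python SDK: you maintain a separate training script, pick a container via a framework estimator, and launch the job yourself. The script name, role ARN, and instance settings are placeholders, not anything prescribed by the talk.

```python
from sagemaker.pytorch import PyTorch

# Traditional path: code copied out of the notebook into train.py,
# then run as a managed training job on ephemeral infrastructure.
estimator = PyTorch(
    entry_point="train.py",  # script extracted from the notebook
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.g5.xlarge",
    framework_version="1.12",
    py_version="py38",
)

# SageMaker provisions the instance, runs the container, then tears it down.
estimator.fit({"training": "s3://my-bucket/training-data/"})
```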
Now, SageMaker industrializes your machine learning and makes it much easier and much faster for you to do machine learning deployments, but we didn't just stop there. We also embedded machine learning into many commonly used use cases, and that gets me to the next key trend that drives machine learning innovation: ML-powered use cases. Customers asked us to help them automate a lot of common use cases, like document processing, industrial manufacturing, personalization, forecasting, anomaly detection, language translation, and others, and so we built a lot of AI services to help customers automate these use cases through machine learning. Let me give you a few examples of how customers are innovating with these AI services.

Amazon Transcribe lets you embed AI into your contact center solutions, both on-premises and in the cloud, and it supports both post-call analytics and real-time call analytics. For example, State Auto Insurance, which provides insurance in many different segments, used Amazon Transcribe's call analytics to glean insights from millions of calls to their customer service representatives, and by using these insights, State Auto was able to increase the efficiency of their call handling by 83 percent. Weeks used Amazon Transcribe's post-call analytics to increase visibility into customer sentiment from just 12 percent of calls to 100 percent of calls. Now, the experience customers have when they call into your contact centers can have a profound influence on how they view your company, so it's really important that they get all the help they need when they call in. Today, contact center supervisors listen in on a fraction of the calls to make sure that customers are getting the help they need; obviously this is not scalable, and so there are many calls where customers remain frustrated. Our customers have been asking us for a solution that enables live call assistance, and so I'm very happy to announce Amazon Transcribe's new real-time call analytics capabilities. [Applause] This new real-time call analytics capability uses machine learning: it uses speech recognition models to understand customer sentiment, for example, and to detect raised voices, prolonged periods of silence, repeated requests to talk to a manager, or even the use of phrases like "I'm going to cancel the subscription." When Transcribe finds these customer issues, it sends a notification to the contact center supervisor in real time, who can then join the call and help both the customer and the agent.
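For a rough sense of the API surface, here is a hedged sketch of kicking off a post-call analytics job with boto3; the real-time capability announced here runs over Transcribe's streaming interface instead, and the bucket, role, and job names below are placeholders.

```python
import boto3

transcribe = boto3.client("transcribe")

# Post-call path: analyze a recorded stereo call (agent on one channel,
# customer on the other) for sentiment, interruptions, and issues.
transcribe.start_call_analytics_job(
    CallAnalyticsJobName="support-call-2022-12-01",
    Media={"MediaFileUri": "s3://my-bucket/calls/support-call-2022-12-01.wav"},
    DataAccessRoleArn="arn:aws:iam::111122223333:role/TranscribeCallAnalyticsRole",
    ChannelDefinitions=[
        {"ChannelId": 0, "ParticipantRole": "AGENT"},
        {"ChannelId": 1, "ParticipantRole": "CUSTOMER"},
    ],
)

# Poll for completion, then fetch the transcript and analytics output.
status = transcribe.get_call_analytics_job(CallAnalyticsJobName="support-call-2022-12-01")
print(status["CallAnalyticsJob"]["CallAnalyticsJobStatus"])
```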
Another domain that is getting transformed by AI is document processing. Amazon Textract lets you embed AI into document processing and automate it by extracting things like names, addresses, and other key bits of information from documents. In fact, PennyMac used to spend hours every day processing documents; by using Textract, they are now able to process 3,000-page PDFs in just five minutes. Imagine: 3,000-page PDFs in just five minutes. Elevance Health also automated their document processing for insurance claims, and they have been able to automate 90 percent of their document processing. Now, customers tell us that they want to be able to automate document processing in specialized tasks like mortgage processing. It turns out that a mortgage loan package can have 500 pages and can take 45 days to close, and almost half of that time, almost 20 days, is spent just getting information out of these documents and sending it to various departments. And so I'm very happy to announce Amazon Textract's new Analyze Lending capability. [Applause] We built this capability by taking Amazon Textract and training it on a lot of mortgage-specific documents, like mortgage loan forms, W-2s, pay stubs, and others. Here is how it works: Analyze Lending takes a machine learning model and first understands what kind of document it is looking at. Is it a pay stub, a W-2, a mortgage loan form, or something else? It then uses a second set of machine learning models to extract all of the information, and not just that, it can even flag pages that need review by a human underwriter. For example, if a page is missing a signature, Analyze Lending will flag that page for the human underwriter, and that makes it a lot easier to automate document processing.
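As a hedged sketch of what calling into this kind of pipeline looks like, here is the asynchronous Textract pattern with boto3, assuming the lending analysis operations exposed for this capability; the bucket and document names are placeholders, and polling, pagination, and error handling are omitted.

```python
import boto3

textract = boto3.client("textract")

# Kick off asynchronous analysis of a mortgage document package stored in S3.
job = textract.start_lending_analysis(
    DocumentLocation={"S3Object": {"Bucket": "my-bucket", "Name": "loan-package.pdf"}}
)

# Poll for completion; the full results classify each page (pay stub, W-2,
# mortgage form, ...) and return the extracted fields per document type.
response = textract.get_lending_analysis(JobId=job["JobId"])
print(response["JobStatus"])
```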
Another domain that is getting transformed by AI is industrial monitoring. In fact, by being able to predict when a piece of equipment is due for maintenance, we can significantly reduce equipment downtime, and to enable this for our customers we launched Amazon Monitron in 2020. Amazon Monitron uses machine learning to predict when equipment may need maintenance, and it's a complete end-to-end solution: it comes with its own wireless sensors, its own gateway, and its own app, and best of all, it needs no machine learning expertise to be used. Here is how it works. You first decide what equipment you want to monitor, and once you have decided that, you take the Amazon Monitron sensors, which just work out of the box and measure your equipment's vibration and temperature. You attach them to your equipment and pair them with the gateway, and that's it. The Monitron sensors then take your equipment's temperature and vibration readings and stream that data to the cloud, where machine learning models analyze your equipment's data, and if they find any anomalies, they send an alert to the app. To hear more about Monitron in action, please welcome A.K. Karan, the Senior Director of Digital Transformation at Baxter. [Music]

Thank you. Hello and good afternoon. I'm A.K. Karan, the Senior Director of Digital Transformation for Baxter Healthcare, and it's my pleasure and great honor to be here today. Since 1931, the Baxter name has stood for excellence and innovation. We are a global manufacturer of healthcare and life-saving products; we have a pretty broad portfolio, and we are driven by a higher purpose, with a mission to save and sustain lives. If you have been to a doctor's office, which I think most of us have, or been in an emergency room, or been in surgery, you have been touched by one of the many products that we make. Our impact is felt by the 350 million patients whose lives we touch in a year, their families, and their friends. As a company, we have over 70 manufacturing sites located globally, and we run 24/7, 365.

And like any other manufacturer, our supply chain is very complex and very dynamic. So what does that mean for us? If we have to keep our operations running trouble-free, non-stop, then equipment reliability is going to be key. Every minute of production counts for us, and every instance of downtime that we can avoid is critical and highly crucial. It could be an HVAC system that is providing conditioned air to a clean-room assembly process, or a pump that is supplying water to a steam generator, or a motor that is driving a high-speed conveyor line; when one of these systems fails, we have a catastrophe on our hands. So as we started exploring tools, we were looking for predictive tools, tools that could give us insights before the systems go down or fail, as opposed to condition-based monitoring tools or time-based systems, to take this to the next generation. And what we found was that Amazon Monitron has some unique capabilities. First and foremost, it's a plug-and-play system; as Bratin showed in his previous slide, as a consumer we just stick the sensor onto the device, and at the flip of a button it streams the data to the cloud. The system has built-in capabilities to do all the analytics and give us alerts to let us know when things might go wrong. Second, we were looking for a system that was agnostic, meaning we have systems that are five years old or 50 years old; as long as the system is in good shape, we wanted to be able to deploy across the board, and Monitron made that a breeze. The third thing was ease of use, in terms of deploying the sensors, scaling it up, and the usability of the software and the mobile app, and it gave us what we wanted. But the biggest game changer, the biggest driver, was the embedded machine learning and AI capabilities. The system can develop a custom signature profile and a temperature profile for every single asset, so it can scale very fast to thousands of our assets, and this indeed was truly a game changer for us.

In one of our early use cases, this was one of the HVAC systems providing conditioned air to a group of machines, and we got an alert from Monitron. Our technicians found that the gearbox on the system was in pretty bad shape, so they planned a downtime event, took it down, replaced the hardware, and put it back into good health. If we had not reacted to this alert, it would have created a very serious supply chain issue for us, so thanks to Monitron for helping us identify this issue and react in a timely manner. Our journey with Monitron has been very exciting. We started with a group of sensors, around three to four hundred, that we wanted to deploy to get a feel for it and see how it operates in real life. We saw some good success, and from there we launched it across the entire site, and this is for one of our lighthouse plants we have in the U.S.
And now, based on all the results we have, we are scaling this across all our global sites in a very prudent and timely manner. What have we seen so far? We have seen around 500 hours of downtime eliminated, and that translates to about 7 million units of production. But the bigger impact is that we have been able to supply life-saving products to our patients on time, and this cannot be any more gratifying. On the operations front, this is my team; they took Monitron and deployed it across the entire site. There's a myth that says machine learning and AI are going to eliminate jobs; in our case, that is not the case. It has augmented our workforce, it is driving higher productivity levels, and our engineering team has never been more excited. They used to go on rounds, check every single piece of hardware or device, and log the data; they don't do that anymore, because Monitron, with these capabilities, gives us actionable alerts and has helped us be more efficient. Monitron has really helped us democratize machine learning and AI on our shop floor, but the biggest benefit we've seen is that the system has really put a smile on our engineering team's faces. And ladies and gentlemen, this is only the start of our digital transformation journey. Thank you, Bratin. [Music] [Applause]

It's an amazing example of how a company is transforming an entire domain with AI. Now, all of this great innovation that I've been talking about would not be possible unless we knew how to use machine learning in a responsible way, and that gets me to the next key trend that drives machine learning innovation: responsible AI. According to IDC, global spend on AI-related technologies will exceed 200 billion dollars by 2025, and in fact more than 50 percent of executives say that AI will transform their organizations in the next three years. With that growth in AI and machine learning comes the realization that we must use it responsibly. Now, what does it mean to use AI in a responsible way? At AWS we think of it along six key dimensions. First is fairness: the machine learning system must operate equally for all users, regardless of race, religion, gender, and other factors. Then there is explainability: we must be able to understand how the machine learning system operates. Then there is robustness: there must be a mechanism to ensure that the machine learning system is working reliably. Then there is privacy and security, which is always job number one at AWS. Then there is governance, which means there must be mechanisms to make sure responsible AI practices are being used. And finally there is transparency, which increases customer trust and makes it possible for customers to make informed decisions about how to use your systems.

Talking about transparency, I'm really pleased to announce a new transparency tool for our AI services called AI Service Cards. We are announcing these cards now for Amazon Rekognition, Amazon Textract, and Amazon Transcribe, and they will serve as a one-stop shop for all of our customers' responsible AI questions. They represent our comprehensive development process, which spans all of the dimensions of responsible AI that I talked about previously, and they go into the model, the systems, the features, and the performance. Now, it's important to build our services in a responsible way, but at AWS we are also taking a people-centric approach and educating developers on responsible AI, and that is why I'm pleased to announce a new course on fairness and bias mitigation as part of AWS Machine Learning University.
This free public course has more than nine hours of tutorials, and once you've taken it, you will understand why bias happens in practice and how you can mitigate it with scientific methods.

Talking about education gets me to the last key trend that drives machine learning innovation, and that is ML democratization, or making machine learning tools and skills accessible to more people. Customers tell us that they often have a hard time hiring all the data science talent they need, and to address this we launched Amazon SageMaker Canvas at last year's re:Invent. Canvas is a completely no-code tool for doing machine learning. What this means is that Canvas prepares your data, builds your models, trains your models, and then deploys a fully explainable model, all without the user having to write even a single line of code. So data analysts, marketing professionals, sales professionals, finance professionals, anybody who uses data and would benefit from machine learning but may not have the coding skills or the machine learning skills, can now do machine learning. Analysts at Samsung are using this for forecasting, at 3M they're using it for operations improvements, and at Siemens they're using it for supply chain research. Now, it's important for us to make our services easier to use, and we are going to continue to do that, but AWS is also investing in training the next set of machine learning developers. Amazon has committed that by 2025 we will help more than 29 million people improve their tech skills through free cloud computing skills training. Then there is AWS DeepRacer, which has now educated more than 320,000 developers in more than 160 countries. We also have the training and certification programs that are part of AWS Machine Learning University and available for free to the public. And lastly, earlier this year we launched the AWS AI and Machine Learning Scholarship program in partnership with Intel and Udacity, and to date we have been able to train more than 20,000 underserved and underrepresented college and high school students on foundational machine learning concepts and prepare them for careers in machine learning.

So, to summarize: machine learning is no longer the future; machine learning is the present, and it needs to be harnessed now. If you want to harness machine learning, you want to be able to leverage these six key trends. First, leverage the exponential increase in the sophistication of machine learning models and use the latest models. Harness the variety of data available, the multiple modalities of data, to train your machine learning models. Industrialize machine learning in your companies. Use ML-powered use cases for automation. Make responsible AI an integral part of everything you do. And then democratize machine learning in your companies, so that more employees have access to machine learning tools and skills. Thank you for coming, enjoy the rest of re:Invent, and please fill out the session survey. Thank you.
