AWS re:Invent 2023 - Innovation talk: Emerging tech | HYB207-INT
Hey, everybody. It's always exciting to be here at re:Invent. It's an amazing time that we're seeing out there, and as the machine learning models and everything continue to grow, it's really exciting. So we're going to talk a little bit about emerging tech, a little bit about our roadmaps, why we deliver what we deliver, and some of our plans. You know, back in 1978, I'm going to age myself a little bit, I worked on early neural networks for navigation of autonomous underwater vehicles.
And, you know, we were building on the same foundation, the same linear algebra based models and things like that that we're working on today, with all the interconnects and so on. But, you know, there just wasn't enough data and enough compute at that time to really make it work. So the submarines got lost all the time, and that was very frustrating for my boss. And so with things like the cloud, now we have enough compute and enough storage to actually do it and make these things real.
My boss used to call it artificial stupidity, because the thing would get lost all the time, and that was always one of the challenges. And now with the boom of generative AI, you can start doing a lot more in this area. The other thing that we're starting to see with the amount of compute we're getting is what I will call full fidelity digital twins, which allow you to create complete software defined environments that operate in the cloud the way they do in the real world. And that's pretty exciting. It really accelerates high performance computing and machine learning and allows you to apply them to every part of your business, end to end.
So, you know, back in the '90s, when I was working on a lot of the early internet components, we were really excited. I mean, when you'd see a URL in an ad, you'd be excited, because there was a URL, right? And that was exciting, and we thought things were moving fast. Well, things are moving much faster now.
The growth is accelerating five times faster than it did in the early internet age. So that was me when I was CIO at the Pentagon, not CIO, one of the CIOs for the Pentagon. And that was Amazon Web Services, what it looked like back then. And this is what it looks like now, and what I look like now. Unfortunately, I don't look as good as Amazon Web Services does.
I haven't done as well over the ages. But it's really pretty amazing to see how things have changed. So you can imagine how quickly that has happened.
It's going five times faster now. I'm really amazed. And so why is emerging technology so important and why do we focus on it so much? We kind of make the future in our business here in a lot of ways.
If you're not staying up to date with emerging technology, you're going to get outperformed by your competition. I talked to a lot of leaders that say they don't want to get Ubered. I mean, while I was at the Pentagon, we worked on the GPS satellites. And I remember when we were deploying those GPS satellites, we're sitting around talking about, wow, there's going to be a screen in your car, and it's going to navigate for you, but we had no concept of something like Lyft and Uber when you combine a smart car, a smartphone, the cloud, and a GPS satellite system and you disrupt all transportation systems.
And that's the kind of disruption we're starting to see. And I think with what's coming with gen AI, where it's going to be embedded in everything you do, that disruption is just going to continue. So you really need to stay on top of technology all the time. So we'll talk a little bit about our roadmaps.
You know, things like Nitro and Graviton and Trainium. They don't just happen overnight. Those are planned years in advance. We knew that machine learning would be coming. That's why we focused on Trainium.
We knew there would be a need for Arm-based processing in the cloud. That's why we focused on Graviton. You saw the launch at Adam's keynote of the high performance S3. We started planning that in 2015, because we knew that was going to be needed as well. And Mai-Lan, I'm sure, will talk about that when she gets up here.
So the other thing we talk about a lot at Amazon is flywheels. And this is the way we look at the flywheel. Now, we would have liked to deliver all of this at once, fully formed, but for two reasons we couldn't.
One is that all the technology didn't catch up in time to do it, and two, it's tremendously complex to do this. And you see this flywheel. Every time it turns, things get better and things improve. So let's talk about the flywheel. So connect and collect is happening everywhere, whether you're, you know, streaming in web clicks from your website or connecting on IoT or connecting a phone.
Whether it's databases or ETL, connect and collect is happening everywhere. We make it possible for you to store and manage real and synthetic data at exabyte scale and potentially even zettabyte scale. Everything's becoming software defined. Software is eating hardware, and if you're not software defining everything, you're falling behind. And you should think about all the ways you can software define your business and your processes.
And once it's software defined, you can build it as a digital twin, and then you can run it in the cloud, in a test environment. You can run it in the real world as well. And you can even run the same binary in the cloud that you run in the real world. So you can run on an Arm processor in the cloud and an Arm processor in a car, for example, or a control system or something like that. And once you have it in the cloud, you can test and simulate it. You can do things you just can't do in the real world.
You can drive millions of miles a day in an autonomous vehicle, for example, in the cloud. You can't do that in the real world. And with all the advances in AI, you can optimize it. You can have it look at all the possible combinations that could ever occur, combinations you could never conceive of as a human in the normal environment, and leverage things like high performance computing. And then we make it really easy for you to push it out and operate it, right? And once you push it out and operate it, with that improvement piece, the flywheel just starts over again, and it gets better and better. So let's talk about these in detail.
So first of all, if you're going to do this, and especially the connect component, you need a really, really strong foundation, a really strong foundation of security. Security is job one at AWS, and it has to be very scalable, and it's got to be sustainable. We can't run all of this and use all this power if it's not sustainable.
It's got to reduce cost. It's got to accelerate innovation. It's got to increase your performance, and it's got to be a better customer experience, ever improving.
So, at AWS, we focus on what I'll call military grade security. That's why you see us win so many of our government contracts and things like that, because we have the strongest foundation of layered encryption, level six certification, firewalls, all that kind of stuff, and over 300 cloud products that are focused on security. And we have end to end scalability. As Adam mentioned and reemphasized, having multiple availability zones that are physically separated by many kilometers, not all on floors of the same building.
And our other goal is to provide from the edge to the cloud a common set of APIs, deployment and management infrastructure for you. And we're very sustainable. Just by moving to the cloud, you're 3.6X more energy efficient,
and you reduce your carbon footprint on average 88%. And we're not just doing it in the cloud; we're doing it for our delivery vehicles. We're doing it everywhere. We're on the path to be 100% renewable energy by 2025 and net zero by 2040. So we're very committed here. We're an 8X bigger buyer of renewable energy than any other company.
So let's talk about connect and collect. So there's a vast amount of data, trillions of transactions coming into AWS every day. Web clicks are being tracked, sales information, logistics, supply chain, you know, purchases from your partners, purchases from your customers, all this data coming in, data coming in wirelessly through Wi-Fi and 5G, satellites and more. So let's do a quick survey.
How many unique device connections does AWS manage today for IoT devices? Not the total number, but how many unique device connections? So we'll give it a little time to poll here. [music playing] Wow. The answer is 1.2 billion unique device connections, but about 270 million of those connect every day, because not every device connects all the time every day. It's pretty amazing when you think about it, kind of mind boggling. So here are some of the things you might not see in some of our launches that are actually pretty important in this connect environment. At re:Invent, we launched this EDI transaction system to make it easier to connect and pull in EDI transactions from your field offices, partners, logistics and sales organizations, and things like that.
And EDI is still a very common method of bringing in data, and we just made it much easier for you and the enterprises to do that. We've got LoRaWAN and Sidewalk, where you can just turn it on and have it immediately connect for low bandwidth, low power connectivity; we already cover 93% of the United States. We've got private 5G, where you can run it on a Snowball or a Snowcone or an Outpost at your site. And we have our Wavelength Zones as well, and of course, Wi-Fi.
But what about connecting everywhere? You may have seen in Adam's keynote the launch of Kuiper. Kuiper allows you to create a VPC from the edge device all the way through to your EC2 instance, as the constellation builds out and we get all those satellites up. It's a low Earth orbit system with less than 50 milliseconds of latency on connectivity and a gig of bandwidth. Pretty amazing. It allows you to connect all those remote sites and roam between all the different types of wireless connectivity. So it really gives you the ability to be connected all the time, everywhere.
And, of course, we recently put a Snowcone on the International Space Station that's doing edge processing in space and collecting data. So you're going to get a chance to try it yourself. Right now on the Apple App Store, you can download this app if you take a picture of it here.
It turns your phone into an IoT sensor, and you can shake the phone and watch the data stream back into the cloud. It doesn't save any data in the cloud, but you can bring it up on a desktop next to it. If you have your laptop with you, you can send yourself the URL from the app, bring it up, shake your phone, and watch it change in the cloud. Now, the cool thing about this is you can see the log data in JSON streaming off your phone in real time, so you can see what we're actually sending. So you can confirm we're not sending any PII data or anything like that.
And you can go over to GitHub and download the code, cut and paste it, and turn any app you want into IoT. So go ahead and take a chance, take a picture of it, download it, and give it a shot. So you can be connecting and collecting while I'm continuing to talk. So you're going to have a lot of real data coming in.
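For anyone curious what streaming code like that boils down to, here's a minimal sketch in Python of the kind of telemetry payload such an app might send. The field names, device ID, topic, and publish call below are illustrative assumptions, not the demo app's actual schema:

```python
import json
import time

def make_sensor_payload(device_id, ax, ay, az):
    """Build a small JSON telemetry message like the demo app streams.
    The field names here are illustrative, not the app's real schema."""
    return json.dumps({
        "deviceId": device_id,
        "timestamp": int(time.time() * 1000),  # milliseconds since epoch
        "accel": {"x": ax, "y": ay, "z": az},  # accelerometer reading
    })

# Publishing would typically go through AWS IoT Core over MQTT, e.g. with
# the AWS IoT Device SDK (connection setup omitted; hypothetical topic):
#   mqtt_connection.publish(topic="demo/sensor", payload=payload, qos=1)
payload = make_sensor_payload("phone-123", 0.01, -0.02, 9.81)
print(payload)
```

Note that the payload carries only motion readings and a device ID, which is the point the talk makes about being able to inspect exactly what is sent.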
And, as I mentioned, we've got lots of ways to get it in, whether it's Kinesis or Snowballs or transfers and ETLs, storing it in things like S3 or FSx, whether it's coming from cars or IoT or any of those places. A lot of real data comes in, and you're going to need to save that real data, and we make it really easy to save it cheaply. But you're also going to need to generate synthetic data.
You need to train machines on how to deal with, say, a fire in a city or a disruption of your logistics supply chain. You don't want to do that in the real world, record it to see what happens, and then train on it. You need to generate it synthetically. And so we're working on ways, and now I'm talking future, it's not all here today, but these are all things in process. We're working on ways to generate synthetic images for vision systems. So, for example, on the left side there, I guess it's your right, you see synthetic packages that we generate.
You would think that at Amazon, with us shipping millions of packages every day, we'd have enough images of packages. And you'll see that with some of our fulfillment center digital twins in a minute. But it isn't enough. We need billions of images to train these machine learning models. In the center there, we generate defects in motors so that machine learning models can learn to detect those defects.
And there's Perdue, where we generate synthetic chicken nuggets. I never thought I'd do that for a living as a programmer, but it allows the computer to learn to handle those. You need millions or billions of pictures. So what you're seeing here, that is not real.
Everything there is synthetic. That is generated with the Unreal Engine and a physics engine. It's all synthetic.
And we feed that in, along with the real images, to our optical systems to train our models at our fulfillment centers. Everything here is synthetic. This is Aurora, one of our great customers. They do 15 million miles a day of driving in synthetic environments.
You could just physically never do that. You see, it's synthetic. So you need a combination of rich synthetic data and capabilities and rich real world data. And you can see here a great visualization of how the images of those packages, from the real data and the synthetic data, build out the matrix of parameters of one of these models, and that's what you want.
You want a dense parameter set, to have an accurate large language model. So you're going to want to keep and manage your real and synthetic data. And then once you have that real and synthetic data, you can define all your processes and systems as software and feed it into that software in the cloud, whether you're running for real in the cloud or you're running tests in the cloud and synthetic environments in the cloud, whether you're doing, you know, smart cities or factories or buildings or enterprises. And then you can get to a point now where you can start doing full fidelity digital twins in the cloud.
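The idea of feeding a managed blend of real and synthetic data into training can be sketched as below. The 50/50 ratio, pool sizes, and data shapes are assumptions for illustration; in practice the right mix depends on the model and domain:

```python
import random

def build_training_set(real, synthetic, synth_fraction=0.5, size=1000, seed=42):
    """Draw a mixed training set from real and synthetic data pools.
    The 50/50 split is an illustrative assumption, not a recommendation."""
    rng = random.Random(seed)
    n_synth = int(size * synth_fraction)
    n_real = size - n_synth
    # sample with replacement from each pool, then shuffle together
    batch = ([rng.choice(real) for _ in range(n_real)] +
             [rng.choice(synthetic) for _ in range(n_synth)])
    rng.shuffle(batch)
    return batch

# stand-ins for real camera images and Unreal-Engine-style renders
real_images = [("real", i) for i in range(100)]
synth_images = [("synthetic", i) for i in range(100)]
train = build_training_set(real_images, synth_images)
print(len(train), sum(1 for src, _ in train if src == "synthetic"))
```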
And to enable those digital twins, we have TwinMaker. And this is thousands of our customers, along with our fulfillment centers, in TwinMaker. These are real images from TwinMaker. You can create a digital twin of a body, a human, a part of a body, an organ, a car, a ship, an airplane, or full factories.
It's pretty amazing to see all that. And you might notice we've got KVS, which is also in your mobile app if you downloaded it; it makes it very easy to connect all the vision systems right into TwinMaker, so you can see what's happening in real time on the factory floor. And you may have seen Jensen on stage with Adam talking about the L40S, and a lot of that was about machine learning.
But another very important point is Omniverse. So Omniverse is NVIDIA's full fidelity digital twin. They're doing 100% ray tracing, no rasterizing, because you have to be able to create a true twin of the real world. And this is now getting integrated into TwinMaker as well. So for the first time, this brings the OT environment and the simulation environment together in real time, for you to be able to train your environment, simulate your environments, and optimize your factories.
But, you know, there's still one piece missing. And Rainer's going to come up a little bit later and talk about what's missing: the whole MES environment and all the factory automation code that you also need to simulate. And, of course, you continue to see great examples of software defined vehicles in the automotive industry. Everything's moving to software defined vehicles. So let's take a look at how high performance computing can optimize the design of vehicles.
So for the last six years in a row, AWS has won best cloud platform for supercomputing at the Supercomputing Conference. And we continue to work with great customers like Formula 1 and Toyota on advanced design, using things like ParallelCluster and Batch to design their next generations of vehicles and win races. And they have to do all this in the cloud; you can't do all this in the physical world anymore. Another question: as you start to see the merging of machine learning and high performance computing, do you think you could do something as complicated as computing computational fluid dynamics and a drag coefficient for a car using just machine learning? What do you guys think? Let's take the survey. I'm curious what you guys think.
[music playing] Okay. Have we got the answer yet? Yes. You know, it's interesting. My team presented this to me in one of our PRFAQs, and I told them I didn't believe them, but you're actually right.
We're getting very close to being able to do that. Let me show you an example of ML optimization where we do just that. So what I did is I said to Stable Diffusion, one of the foundation models, make me an image of a luxury vehicle, and it went out and generated that image. Then we took that image and ran it through a neural radiance field, which works from lots of different views of it. Now, the important thing to point out here: the fidelity of 3D generation from these models is not there yet.
It's evolving. There must be a new paper every week. This is back to the emerging technology part of this, but this is what we did. And you'll see how granular it is. It's fascinating to me; we did this a few weeks ago, and today I saw a new version that would have been even better, but I won't go into that. That's how fast this is evolving.
We generated a mesh, and then we ran a simulation on it. And you can see this is the mesh that was generated from that image. And this is the simulation. And we're finding right now that this computational fluid dynamics, which can be done in minutes rather than in hours or days with millions of cores, is about 98 to 99% accurate. That's pretty amazing.
Now, I wouldn't fly in an airplane that's 98% accurate. So you definitely still want to do the HPC runs and validate it after you've finished your designs, right? But it's really amazing. And then, since these are all API enabled, we did what's very common now: we put it in a loop, and we had it feed back into the design of the car to improve the drag coefficient. Of course, there's no accounting for taste in the machine learning models.
We may need to work on that. But it's really fascinating when you see what you can do here. So after you've done all these things, now you can push these digital twins to production and operate them in the edge and in the cloud.
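The generate-simulate-feedback loop Bill describes can be sketched as a toy optimization. The surrogate function and the random-search strategy below are stand-ins invented for illustration; a real pipeline would call image-generation, meshing, and CFD services at those points:

```python
import random

def surrogate_drag(params):
    """Stand-in for an ML surrogate that predicts a drag coefficient
    from design parameters in seconds instead of an HPC run.
    This toy function simply has its minimum drag at (0.3, 0.5)."""
    a, b = params
    return 0.25 + (a - 0.3) ** 2 + (b - 0.5) ** 2

def optimize_design(steps=200, seed=0):
    """Random-search feedback loop: propose a small design tweak, score it
    with the surrogate, and keep it only if drag went down."""
    rng = random.Random(seed)
    best = (0.9, 0.9)                 # an arbitrary starting design
    best_drag = surrogate_drag(best)
    for _ in range(steps):
        candidate = tuple(p + rng.uniform(-0.05, 0.05) for p in best)
        drag = surrogate_drag(candidate)
        if drag < best_drag:          # feed the improvement back in
            best, best_drag = candidate, drag
    return best, best_drag

design, drag = optimize_design()
print("optimized drag estimate:", round(drag, 3))
```

As the talk notes, you would still validate the final design with a full HPC run; the surrogate only makes the inner loop cheap.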
And you get this flywheel where it's just continuously improving. And you can see how everything we've been delivering over the last 15 years is building up to these kinds of things. So let's see how this applies to a city or an enterprise.
Enterprises have been software defined for years. Or a factory floor, or building automation, or closer to home, a house. What does a software defined house look like? Or a software defined car? Or how about a software defined molecule with a quantum computer? We'll talk about that as well.
Sometimes it's not how big you can build something that's impressive, but how small. So in cities, more and more things are getting connected and integrated, and you see it kind of across the board, this entire cycle happening, and new cities being built all the time like Neom and other things like that that are completely focused on fully integrated digital twins. KONE's a great example of this. KONE runs on all our IoT systems.
They're the leading provider of people movers: escalators and elevators. And what you're about to see is their TwinMaker implementation of the Helsinki metro station. They've done it for all the metro stations they operate, and it gives you real time IoT data coming in using KVS and other things like that. You see the trains coming and going, you see the escalators and elevators moving.
This is real time. But the cool thing is they can simulate all this, too. They can ask: what happens if the train breaks down, what happens if a lot of trains come at once, what happens if we're having a concert and there are a lot of people, how do I do predictive maintenance, when do I need to do maintenance ahead of time? All these different things. That's the power of digital twins.
Well, software defined enterprises have been around for a long time. You guys have been familiar with ERP systems and things like that. They've tremendously improved production and efficiency in enterprises, whether it's logistics or CRM, ERP systems, HR financial, companies like Salesforce and SAP and Workday are amazing at doing this. And you all interact with them all the time.
And anything software defined, where they can change things on the fly, has tripled the productivity of enterprises. And organizations like banks: if you talk to people from the banking industry, they're all software defined these days. They don't just have big vaults in places. They're all software defined. Nasdaq, with their matching engine on AWS, has been able to leverage the cloud to also be software defined, and you've seen the same kinds of things for years in the financial industry, where they've been using HPC and machine learning to accelerate their financial analysis as well.
And Nasdaq sees a 10% improvement in round trip latency in this matching engine by running on AWS and on the cloud. We also see companies like Woodside taking this to the next level, doing end to end simulations and end to end operations with AWS Supply Chain, with digital twins that are physically distributed globally. So we see enterprises now starting to do what used to be impossible. They take every combination of every supplier, every combination of every logistics problem, every combination of the digital twin of their manufacturing, every combination of their logistics out to their customers, and every combination they've ever seen of customer demand, plus synthetic ones: what happens if there's a hurricane, what happens if there's a storm, all these things. They run those billions and billions of combinations in the cloud, and they have a machine learning model learning from it.
So when any of that happens in the real world, unlike a human, who can't consume all that information, the machine learning model can immediately recommend the optimal solution for their enterprise. And I tell you, companies that do that are going to just blow away companies that don't. So let's talk about manufacturing and factory automation.
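The kind of scenario sweep described here, running many demand and disruption combinations and surfacing the costliest cases for a model to learn from, can be sketched like this. The cost model, distributions, and numbers are all invented for illustration:

```python
import random

def simulate_scenario(demand, disruption, capacity=100):
    """Toy supply-chain cost model: unmet demand and idle capacity both
    cost money, and a disruption knocks out part of capacity."""
    effective = capacity * (1 - disruption)
    unmet = max(0, demand - effective)
    idle = max(0, effective - demand)
    return unmet * 5 + idle * 1  # unmet demand is penalized more heavily

def sweep(n=10_000, seed=1):
    """Run many demand/disruption combinations and return the worst cases,
    the kind of output a downstream model could learn from."""
    rng = random.Random(seed)
    results = []
    for _ in range(n):
        demand = rng.uniform(50, 150)
        disruption = rng.choice([0.0, 0.0, 0.0, 0.2, 0.5])  # mostly normal days
        results.append((simulate_scenario(demand, disruption), demand, disruption))
    results.sort(reverse=True)
    return results[:3]  # the three costliest scenarios

worst = sweep()
for cost, demand, disruption in worst:
    print(round(cost, 1), round(demand, 1), disruption)
```

In a real system each scenario would be a full digital-twin simulation rather than a one-line cost function, but the sweep-and-learn structure is the same.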
So I'm going to invite Rainer on stage, the CEO of Factory Automation at Siemens, to tell you a little about what we're doing with Siemens. So a warm welcome for Rainer. [music playing] Thank you, Bill. Great to be at re:Invent. Great to be in Las Vegas after 20 years. And I must say, the city has reinvented itself. I'm working for a company with a history of more than 175 years, and there was a lot of reinvention in that company too, to stay relevant. And I'll give you some examples.
Siemens started way back, and built, for example, the first electric streetcar in 1881. Nowadays, Siemens delivers software for NASA to fly to Mars. And, by the way, Bill told me AWS is also involved in that one. Siemens was the company that built the first X-ray system. And nowadays Siemens is the first company building a real digital twin of a human heart, which makes a big difference for medical care. And Siemens, now coming to factory automation, was the first company to introduce a transistor based controller for operating on the shop floor in factories.
And nowadays, every third machine globally, every third line globally, is controlled by a Siemens controller. And it's state of the art. We build our own ASICs, our own chips, for high performance and low energy consumption. And why are those boxes relevant? Because they touch every one of you.
What did you have for breakfast? Maybe a juice, or think of the plate you had in the morning, or the glass you drank your juice from. You have a nice shirt; it was produced somewhere in a factory. Or how did you come to Vegas? By car or by plane? They've all been manufactured in a factory, and that factory is a whole world of its own, called operational technology. I entered this world of operational technology in '96. I was sitting in Ann Arbor, helping Siemens automate the US postal system, and I was studying computer engineering at the time, writing code in C++.
Java was just on the market. I came into that world and said, oh my god, how are they programming? It's called ladder logic. It looks like an electrical diagram.
I said, how can you program in that? But I learned, well, it makes sense, because at that time it was very much bit operations. And I'll tell you what that means. You have an input, you press a button, and something happens. It's a digital operation. You press a button and something happens. Try writing code in Python for a binary operation like that.
It's quite hard. In the PLC, it's one line of code. So it makes sense.
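To make that comparison concrete: a classic seal-in rung, which ladder logic expresses as a single rung of contacts and a coil, can be written as a boolean expression in a general purpose language. This is a hypothetical illustration of the pattern, not actual Siemens PLC code:

```python
def motor_rung(start, stop, running):
    """One ladder-logic rung as a boolean expression: the motor runs when
    Start is pressed or it is already running, unless Stop is pressed.
    This seal-in (latch) pattern is a staple of PLC programming."""
    return (start or running) and not stop

# The PLC scan cycle re-evaluates the rung over and over:
running = False
running = motor_rung(start=True, stop=False, running=running)   # press Start
running = motor_rung(start=False, stop=False, running=running)  # release: latched on
print(running)  # True: the motor keeps running
running = motor_rung(start=False, stop=True, running=running)   # press Stop
print(running)  # False: the latch is broken
```

The whole behavior is one line of logic, which is Rainer's point about why ladder logic fit the bit-operation world so well.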
I learned that. And it creates impact. When you press a button, suddenly a big motor is moving something. I kind of had goosebumps when I saw that.
The question is, is that good enough today? Automation brought us a long way. And a lot of the prosperity we have in western nations comes from productivity out of factories. Every country that doesn't have natural resources basically gained prosperity through industrialization. And automation plays a major role in being more cost efficient, being more energy efficient, having higher quality, faster time to market, and flexibility.
But that's not good enough anymore. We need factories that contribute to sustainable operations. We need to think circular. We need to think not only about how to produce something, but how to repair it, maybe in an automated way, how to recycle it, how to get the material back.
We have a big topic in re-shoring. In the past, there was one location in the world where you produced something. Mass production is perfect for automation, because you always do the same thing without change. But now, with re-shoring, you want to produce very close to the point of consumption.
Because you want to be resilient. That means you have small lot sizes. Small lot sizes are normally not so good to automate, because it doesn't make sense to put all that effort in. But you need to do it in an automated way, because you can't find people anymore who want to work in a factory. So if I want to move a factory close to the consumer, maybe here to Las Vegas to produce food or whatever, I need to find the people, and they aren't there.
So you need to automate differently. It's not good enough anymore to only do that. You need to get much more data. You need the data not only for controlling the machine or switching the light on.
You need that data, furthermore, for secondary use. For example, I need to know the lifetime of the battery. I need to know the charging situation.
I need to order that battery when this battery is empty. I need to know what is the status of the LED. I need to get much more data than only binary data. But now the problem comes into place.
This OT world is quite closed. The door is locked. We don't get this data out. You are perfect at executing on this data, at creating value out of this data.
The problem is, this data is quite closed, or it's quite manual work, maybe with some protocols like MQTT, to somehow get it out of the factory. So the question is, how do we get this data? And it's massive data, as Bill has shown. All the sensors are there.
And it's not only new factories. The major topic is brownfield: a lot of existing factories with quite old field buses. They're called Profinet, EtherCAT, EtherNet/IP, Profibus, a lot of old things. You don't even need to know them, but we need to get this data out. And therefore we need to unlock this. Imagine you get all this data and you can use it, and we make it more software defined.
And Siemens is doing that. Those controllers will also be available as containers. You can run that control, which in the past was very much bound to hardware, in the future wherever you want. We need to unlock this door. We need a continuous flow of data from the shop floor, over the edge, into the cloud and back. And that's why I'm very excited that Siemens, the number one in factory automation globally, and AWS, as a major player, are joining forces to unlock this data source.
And together, we can make the data flow very seamlessly, and not only up into the cloud but, as Bill said, also pushed back into production. Having that continuous data flow is very important. And it's not only a data flow on this vertical; it's a data flow that goes beyond, because in the future you want to have transferability over the complete supply chain.
So from manufacturer one, factory one, to the next one. You want to know, for the product that was produced, what CO2 footprint that product has from that factory, what material has been used. And you want to take that CO2 footprint and that material to the component in the next factory and add the next value-add step, and so on. And you want to have it for the lifecycle of the entire product. So when the product is defective, you want to repair it, and you need this data.
So we need to think not only vertically; we need to think over the lifecycle and over the entire supply chain. We need to make this round trip, as Bill has shown. We need to access the data. As I said, it's not so easy to get all this data out of the existing shop floor.
You need to contextualize data. It doesn't help you if you have the number 345 and you don't know what it is. Oh, it's the rotation speed of a motor.
That motor is built into a pump, and that pump is running in the factory doing this and this. Then you have context. And then you can use this data, and we want to provide this data contextualized, to use it in the cloud. Do all the things Bill just showed: analyzing it, creating transparency, creating digital twins, training models with this data. And then, very importantly, creating impact, and you only create impact if you can push it back into production.
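Contextualization in the sense Rainer describes, turning a bare number into a meaningful record, can be sketched like this. The tag map, field names, and asset hierarchy are invented for illustration; in practice such context could live in something like an AWS IoT SiteWise asset model:

```python
def contextualize(tag_value, tag_map):
    """Attach meaning to a bare shop-floor reading using a tag map.
    All names and the hierarchy here are hypothetical examples."""
    tag_id, value = tag_value
    meta = tag_map[tag_id]
    return {
        "value": value,
        "unit": meta["unit"],
        "measurement": meta["measurement"],
        "asset_path": meta["asset_path"],  # where in the plant this lives
    }

# A raw reading means nothing by itself...
raw = ("tag-0042", 345)
# ...until the tag map says what it is and where it sits.
tag_map = {
    "tag-0042": {
        "measurement": "rotation_speed",
        "unit": "rpm",
        "asset_path": ["plant-1", "line-3", "pump-7", "motor"],
    }
}
record = contextualize(raw, tag_map)
print(record["measurement"], record["value"], record["unit"])
```

Once the value is contextualized like this, it can be pushed to the cloud and, just as importantly, routed back down to the line.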
So how do we want to do this? I'm very happy to announce that we are joining forces. We're combining Siemens Industrial Edge, which is the leading system on the shop floor for connecting to all the different brownfields, but also greenfields, connecting to all the different field buses, from Siemens and from all others, and putting a container, AWS IoT SiteWise Edge, on Industrial Edge. That container can already be downloaded today from the Siemens Industrial Edge Marketplace. You deploy it to the shop floor, to Industrial Edge, and that opens up the universe of OT. You put a pipeline there, and now you can get contextualized data into the cloud and use it however you want. And you can also push it back.
It's not only one direction. And now you're very familiar with how to use the data in the cloud. But what's also interesting: I want to enable the people on the shop floor to use this data as well. The problem is, those people can't write code; they're not full stack developers.
They know the process. They have the domain knowledge. But how can we enable them to use this data? And therefore I'm also happy to announce that we use Mendix, a low-code environment currently available on AWS in the cloud, where you can do low-code programming.
Now you can take Mendix and also put the app directly on the shop floor. That will enable people on the shop floor to write their own apps with the domain knowledge they have, using the data from the cloud and from the edge. And that will enable data usage from the OT world in the IT world. Let me give you some real examples. One customer, and you know that company, is Volkswagen, and they have brands like Audi and Skoda and others.
Most of their factories run on Siemens equipment. The cars are produced with Siemens controllers. But until now, they produced cars without using the data that is generated all the time while producing them for any secondary use. So what they did: they collected this data, and they didn't write anything new. They simply took the data out of the PLC, out of the controller program, analyzed it, and identified significant bottlenecks in the manufacturing process.
Then they changed it and pushed it back into the line. And now they can produce a significant number of additional cars on the same line. That's real productivity. That's real energy efficiency: using this data not only for building the cars, but for analyzing and improving the process. The second example is a company in Spain.
It's called bonArea, and I was talking to Mark, who is responsible at that company, and he said, well, I really have a problem. They do food production, and he wants to expand into e-business, but he can't find people. They are in a rural area in Catalonia, and basically everybody who lives there is already working for bonArea. They cannot expand, and nobody wants to move there. So here's the problem.
How can we expand our operations if we don't have the people to do the work? So what we did was stretch the boundaries of what you can automate today. There was a lot of manual work, like in fulfillment centers, where you grasp things and put them into a box; today that's manual work. You could do it by simulating the object, as in what Bill has shown, maybe with synthetic data.
What we do here is different. We don't simulate the object. We trained the controller in the skill of grasping, so that the robot can grasp any part, even one it has never seen before. It was never trained on that part, because it's not the part that was trained.
The skill of grasping was trained. Now imagine that. That means in the future you can maybe automate more than only programmed logic.
You can use AI to expand automation. You can dream of maybe an autonomous factory, where a machine handles a situation which has never been programmed before, which has never been considered before. And you also see that with AI, if you want to do that, you need to train in the cloud. You need to deploy those algorithms down to the shop floor.
This cannot be done like in the past, where you never changed a running system: you install it once, it runs, and you don't switch it off. In the future, it will constantly change, with new functionality constantly deployed. And that can only be done in cooperation with a major cloud provider like AWS, making this closed loop a reality, pushing automation beyond what we can imagine today.
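The closed loop described here can be sketched in a few lines. Everything below is illustrative pseudo-infrastructure, not a real AWS or Siemens API: `collect` stands in for edge data capture, `retrain` for a cloud training job, and `deploy` for pushing the result back down to Industrial Edge.

```python
# Minimal sketch of the closed loop: collect at the edge, retrain in the
# cloud, deploy the new version back to the shop floor. All names are
# illustrative stand-ins, not a real AWS or Siemens API.

def collect(edge_buffer: list, reading: float) -> None:
    edge_buffer.append(reading)

def retrain(model_version: int, data: list) -> int:
    # Stand-in for a cloud training job; returns the next model version.
    return model_version + 1 if data else model_version

def deploy(model_version: int) -> str:
    # Stand-in for pushing the model back down to Industrial Edge.
    return f"model-v{model_version}"

version, buffer = 1, []
for cycle in range(3):                 # three turns of the flywheel
    collect(buffer, 345.0 + cycle)     # new shop-floor data each cycle
    version = retrain(version, buffer)
    deployed = deploy(version)

print(deployed)  # model-v4
```

The design point is simply that deployment is inside the loop, not a one-time event: every turn of the flywheel produces a new version in production.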
And with that, we opened the door, which was closed, together, Siemens and AWS. Thank you very much. [applause] [music playing] Thanks so much. Very exciting to see what two innovative companies can do together there.
So let's go from the factory to a little bit closer to home. So by the way, I saw that over 1,000 people have already downloaded that app and are playing with it. So let's keep doing that and see if we can overload the Wi-Fi here, give our networking people something fun to worry about today. But, you know, software defined homes are coming.
There have been fits and starts with the smart home for a while, but now we're getting to the point where this can start to get really mature and nicely integrated. You see over 302 million smart home devices out there. And this integration of creating a software defined home is going to allow you to do things you never thought of before: automatically closing drapes and windows, changing your air conditioning, pre-setting the temperature at your house before you get home from work, all sorts of interesting things, along with security and thermostat controls.
But we want to continue to make it easier to do that, and we want to make it easier for you to apply ML and predictive capabilities to this. This is really going to affect how energy is consumed, and it's going to be another path, one of the many things we have to do, for sustainability. So we've teamed up with a really innovative company called Telus, which is integrating a smart home hub with quick connect for devices and full automation, where the devices connect automatically. You've got voice control, control over lock systems, over your curtains, your AC systems, all in one nicely integrated package with a nice integrated phone app as well. Look for this soon. Telus has done some amazing work here, and we're looking to roll this out globally with them.
Very excited about this as a SaaS product for the smart or software defined home. Pretty soon, you'll be able to make digital twins of your house. Software defined vehicles, that's the way everything's going these days, and the companies are just driving very quickly to be fully software defined.
You may have seen the announcement in Adam's keynote with Stefan and BMW. We've been working really closely with them on complete end-to-end software defined vehicle development, testing, and ADAS in the cloud, along with simulation. The cool thing is, again, the software running in the cloud, the same software that runs in the car, doesn't know it's in the cloud. It thinks it's in the car. You're feeding it the information, kind of like the Matrix: people didn't know that they were in a virtual environment.
And you can feed it all sorts of things. You can feed it, like I said, millions of miles of testing, different failure rates, all those things, and see how it reacts in the cloud. So you end up with this very similar cycle, where you have real and lots of synthetic data going into training. You have the full software defined system, the digital twin, the simulation, the ML optimization. And then with things like FleetWise, we make it really easy to push it out to the car and run it in the car.
Then you get even more data, and it just gets better and better. It's self-reinforcing. So, of course, at re:Invent we announced more FleetWise capability. We're adding full vision system capability to FleetWise, so you can now aggregate radar, lidar, any of that type of data, along with camera data, in your car and pull it through FleetWise, all seamlessly integrated with the car. And, of course, FleetWise builds a nice data lake for you in the cloud, from which you can then take a look at any customer's car in TwinMaker.
And, of course, we're working with BMW to accelerate that as well, and with our partner Qualcomm, to deliver in the cloud the actual AI 100 accelerators that they use in the car. And, of course, there's BMW with their Neue Klasse next generation vehicle in 2025. If you haven't seen this, it's astounding. They're taking it to the next step: it is software defined paint on the car.
You can dynamically change the color of your car, which will stop all those arguments between spouses as to what color the car is. It's going to make registering cars very confusing. I guess you put any for color. I'm not sure what you do there, anyway.
But it's very exciting to see this coming, and it gives you an idea of how things are being taken to the next level by the innovation that BMW is putting out there. And Continental is doing this too. They build their entire cockpit on Graviton in the cloud, and they have a developer workbench that shows the full dashboard of the car. You can even do the voice commands and everything, and test it out right on your desktop.
It's running in the cloud. Push a button, pushes it out to the car. And then you can test it in your fleet vehicles, and then you can push it to real customer vehicles. Really amazing work that they're doing there as well. So let's get a little bit smaller. We've gone through very big.
Let's start talking about molecular simulation and what we can do there. Materials science has come a long way, and large language models also hold some hope of solving some of our big materials science problems. But materials science and the advent of quantum computers will probably have the greatest effect on our lives longer term.
And if you saw Peter's keynote, you see what our team is doing in quantum computing and some of the things we've been working on there. But the target really is to do molecular simulation. So in the history of computing, we literally started computing with sticks and stones.
We used sticks and stones to count and keep track of things. Then we used clay tablets and paper tablets to keep track of things, and writing was developed. And then we had a huge innovation with the abacus, and then we used gears and things like that and clockworks to start doing computing.
So this is emerging technology through the ages, if you like. And then, as electronics became better understood, we started using vacuum tubes for switching and operations. Then came the first transistors, and then with the space program, the first integrated circuits, where we built RS flip-flops using lithography right on the chip. And now all we're doing is going a little bit smaller and doing the same thing with atomic particles.
So quantum computers operate with photons or electrons or neutral atoms or ions. Those are the primary technologies that are used today. So let's talk a little bit about quantum computers and see what you guys know about it. And you may know more than I do in some cases, because it's a pretty amazing subject.
So quantum computers use a thing called qubits or quantum bits that can exist in both states, a one and a zero at the same time. Is that true, or is that science fiction? What do you guys think? [music playing] All right. Let's see the answer. Yeah.
So you guys listened to Peter's keynote. They can exist in two states at once. Let's see how our app is doing. Well the app hasn't crashed yet, so that's good. So let's talk about what that actually means.
So in the digital computers that we're all familiar with, and by the way, we call those classical computers now, just so you know, your digits exist as ones and zeros, and you operate on them as ones and zeros. There are two magic things in a quantum computer. The first is a thing called superposition. While you're operating on these qubits in superposition, they can represent any point on the sphere. They're everything at once.
And so that allows you to represent much larger bodies of information. The second magic thing, if you like, in the physics is entanglement. Entanglement is how we actually program the computers. We are creating the equivalent of a chemical bond by entangling two qubits together. And if you go into Braket today, you can build your own circuits on real quantum computers yourselves, and watch how the qubits interact with each other.
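Both effects can be seen in a few lines of linear algebra. The sketch below, using plain NumPy rather than the Braket SDK, builds the standard two-qubit Bell-state circuit: a Hadamard gate puts the first qubit into superposition, and a CNOT gate entangles it with the second.

```python
import numpy as np

# Two-qubit state vectors in the basis ordering |00>, |01>, |10>, |11>.
ket00 = np.array([1, 0, 0, 0], dtype=complex)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard: creates superposition
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)  # entangles the two qubits

# Hadamard on qubit 0, then CNOT: the standard Bell-state circuit.
state = CNOT @ np.kron(H, I) @ ket00

# Measurement probabilities: half |00>, half |11>, never |01> or |10>.
probs = np.abs(state) ** 2
print(np.round(probs, 3))
```

The output probabilities land entirely on |00> and |11>: measuring one qubit tells you the other, which is exactly the correlation that entanglement (and the "chemical bond" analogy above) refers to.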
And that's what makes it possible to do the amazing things that quantum computers will be able to do. But there is a problem, and we have the same problem on existing computers, but not to the same level. So on my iPhone and all your phones and computers, there are alpha particles flipping bits in the memory. But we have a thing called ECC, or error correction code built in.
On our large storage systems like S3 and others, we have error correction code built in as well. So software is virtualizing and fixing hardware; there we go again, software defined. On a quantum computer, there are many, many things in its environment that can affect a qubit and cause it to flip its bit or its phase: temperature, pressure, magnetic fields, the ability to fab it with the proper process. So if the error rate is too high, what happens is, as you add more qubits, which you need to solve these big problems, the noise overcomes the signal, and then you can't do anything with it. So error correction is where all of the emphasis is in quantum computing right now, and quantum computers are really going to affect the physical sciences first.
And the reason is that quantum computers work like molecules, so simulating molecules is where it's going to happen first: material and physical sciences. And then, as more error-corrected logical qubits become available, the focus will move to optimization problems and eventually even cryptography.
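The ECC analogy can be made concrete with the simplest error-correcting code there is: a 3-bit repetition code, where one "logical" bit is stored as three "physical" bits and a majority vote corrects any single flip. This is a classical toy, offered only as an analogy; real quantum codes are subtler, since qubits can't be copied or read out directly, but the redundancy principle is the same.

```python
import random

# The ECC idea in miniature: one logical bit stored as three physical bits.
def encode(bit: int) -> list:
    return [bit, bit, bit]

def noisy_channel(bits: list, flip_index: int) -> list:
    out = bits[:]
    out[flip_index] ^= 1                 # one bit-flip error
    return out

def decode(bits: list) -> int:
    return 1 if sum(bits) >= 2 else 0    # majority vote

logical = 1
received = noisy_channel(encode(logical), flip_index=random.randrange(3))
print(decode(received) == logical)  # True: any single flip is corrected
```

This is also why noise matters so much at scale: the scheme only works while errors stay rare enough that at most one of the three copies flips, which is the quantum version of the "noise overcomes the signal" problem described above.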
But error correction is really where the focus is, and that's where our team down at Caltech is building the machines of the future that will do logical qubits and error correction. So let me give you an example of this. You may not realize how often ammonia is used, but it's used in petrochemicals, it's used in fertilizer, it's used everywhere. It's a trillion dollar industry.
But we know that ammonia can be produced in a much lower energy state for a lot less money than it is today. And ammonia also has the potential to replace carbon fuels. So we need to figure this out. We know it, because bacteria can do it.
But if we ran that simulation on all of AWS and all the cloud computers and all the iPhones and all the laptops on earth, it would take longer than the history of the universe to complete. Theoretically, a quantum computer with about 1,000 error-corrected qubits could do it in a few minutes. That's a trillion dollar problem that you could solve with a quantum computer.
And that's why there's so much focus on it. So today we have Braket; it's the equivalent of EC2, it software-defines the quantum computers. We have quantum hardware out there, and we have quantum networking.
And we just announced Braket Direct, which allows you to pre-allocate space on the quantum computer and work directly with the different quantum providers. So what does the quantum computer look like? You see these a lot. What we call the "chandelier" is mostly the cooling and communication system, and the quantum computer actually sits down at the bottom. Now, remember, we're measuring vibration and spin. Heat is vibration, so we have to make heat go away.
That's a problem. You can kind of see, out of focus in the background, one of the machines running; that silver component eliminates the electromagnetic field of the earth. And then there's a thermos bottle. When this is running, it's the coldest place we know of in the universe.
It is as close as we can get to absolute zero, which is astounding to me. It takes us 48 hours, from the time we fab a new chip and put it in the machine, to get it down to that temperature. But one of the things that Peter announced, he took the thunder away from my team, is that we've come up with, I think, the first better-than-breakeven qubit. We're not going to tout it too much until we've done the white paper and had it peer reviewed and all those good things. But basically, it significantly reduces the bit flip error rate and reduces phase flips as well.
And it's what's necessary for a logical qubit. We call it a cat qubit, the cat as in Schrödinger's cat. It's based on a piezoelectric oscillator, and it's a bunch of physical qubits working together to error correct each other. Very similar to ECC in some ways, very similar to what we do on S3, for example.
We use lots of physical components to error correct each other, and this is going to allow us to scale a quantum computer. This is a huge step forward, but we still need to get a factor of 10 better to scale to enough qubits to solve the real problems. So it's probably going to take another five to ten years of working on this.
And this is where we're talking about emerging technology, but it's pretty exciting. In addition, if you have quantum computers, you need quantum networking. So in our Boston facility, we're fabbing diamond voids, and I think this is a really cool picture. Over here on the left side, you can see the actual physical size of the quantum repeater. And under an electron microscope, you can see the photons coming off of the fiber there and getting trapped in the diamond voids.
And that allows us to repeat the quantum state. And you need this because, if you're going to send quantum information over fiber, the photons have to be regenerated about every 100 kilometers, and a classical repeater would lose your quantum state. And this is going to be very exciting once we can use it to entangle quantum computers with each other, to do quantum key distribution, and other things like that. So, again, this is emerging technology under development right now.
So if you take anything away from this, you should be thinking about how you software-define everything you do. AWS is very much a software defined compute, storage, and network system, and that's been a lot of our success. As Werner said today, hardware fails all the time, so software takes care of that under the covers. But it'll transform your business, it'll transform everything you build, if you get into this flywheel of continuous improvement. You want to connect and collect everything. One of the things Rainer focused on is that you've got to keep all that data, and it's really cheap to keep it on things like S3 Glacier.
The more data you keep, remember that image of the matrix of the machine learning model being built, the denser that matrix can become, and the more accurate your models are going to be. You want to have real and synthetic data, and you're going to see more and more machine learning work on creating synthetic data for you. You want to simulate and use HPC every place you can to optimize your business, along with machine learning. You just want to create this flywheel that's always running. And our job is to make it easier and easier for you to turn that flywheel. We'll continue to build out this picture over the next years, until you can do it as easily as you downloaded that app on your phone, I hope.
Anyway, with that said, thank you very much. Go out and innovate, play with the apps. I look forward to talking to you guys at next re:Invent. [applause]