GTC 2022 Keynote with NVIDIA CEO Jensen Huang
Now approaching your destination. Hi! Welcome to NVIDIA. Welcome to GTC! I hope all of you are well. We have really big announcements and cool things to show you today.
But first, let me share the new I AM AI. It's a work of love by NVIDIA's creative team and beautifully tells the stories of the impactful work you do. I am a visionary. Expanding our understanding of the smallest particles, and the infinite possibilities of the universe. I am a guardian. Protecting us on all of our journeys, and ensuring our most precious passengers make it home safely.
I am a healer. Searching for hidden threats in every cell, and delivering precise care with every breath. I am a helper. Taking on complex tasks in the most challenging environments, and giving our crops room to grow. I am a creator. Transforming the very fabric of our everyday lives, and using the creative DNA of the masters to inspire a new generation of art.
I am a learner. Taking just minutes to discover how to crawl, walk, and stand on my own. I am a storyteller.
Giving emotion to words and breaking down the language barrier. I am even the composer of the music. I am AI. Brought to life by NVIDIA, deep learning, and brilliant minds everywhere. Doctors can now sequence a human genome in a couple of hours, and predict the 3D structure of a protein from its amino acid sequence.
Researchers can use computers to generate new drug candidates and, inside a computer, test the new drug against a target disease. AI is learning biology and chemistry, just as AI has learned images, sounds, and language.
Once in the realm of computers, fields like drug discovery will undergo the same revolution we are witnessing in other areas impacted by AI. None of these capabilities were remotely possible a decade ago. Accelerated computing, at data center scale and combined with machine learning, has sped up computing by a Million-X. Accelerated computing has enabled revolutionary AI models like the Transformer and made self-supervised learning possible. AI has fundamentally changed what software can make and how you make software. Companies are processing and refining their data, making AI software, and becoming intelligence manufacturers.
Their data centers are becoming AI factories. The first wave of AI learned perception and inference, like recognizing images, understanding speech, recommending a video, or an item to buy. The next wave of AI is robotics – AI planning actions.
Digital robots, avatars, and physical robots will perceive, plan and act. And just as AI frameworks like TensorFlow and PyTorch have become integral to AI software, Omniverse will be essential to making robotics software. Omniverse will enable the next wave of AI. We will talk about the next Million-X and other dynamics shaping our industry this GTC.
Over the past decade, NVIDIA accelerated computing delivered a Million-X speed-up in AI and started the modern AI revolution. Now, AI will revolutionize all industries. The CUDA libraries, the NVIDIA SDKs, are at the heart of accelerated computing. With each new SDK, new science, new applications, and new industries can tap into the power of NVIDIA computing. These SDKs tackle the immense complexity at the intersection of computing, algorithms, and science. The compound effect of NVIDIA's full-stack approach resulted in a Million-X speed-up.
Today, NVIDIA accelerates millions of developers and tens of thousands of companies and startups. GTC is for all of you. It is always inspiring to see leading computer scientists, AI researchers, roboticists, and autonomous vehicle designers present their work at GTC. We can see AI and accelerated computing's expanding reach and impact from the new attendees and talks. This year, we see Best Buy, Home Depot, Walmart, Kroger, and Lowe's working with AI.
LinkedIn, Snap, Salesforce, DoorDash, Pinterest, ServiceNow, American Express, and Visa will talk about using AI at scale. And you can look forward to seeing talks from healthcare companies GSK, AstraZeneca, Merck, Bristol Myers Squibb, Mayo Clinic, McKesson, and Eli Lilly. GTC 2022 is going to be terrific. The GPU revolutionized AI. Now, AI on GPUs is revolutionizing industries and science.
One of the most impactful to humanity is climate science. Scientists predict that a supercomputer a billion times larger than today's is needed to effectively simulate regional climate change. Yet, it is vital to predict now the impact of our industrial decisions and the effectiveness of mitigation and adaptation strategies. NVIDIA is going to tackle this grand challenge with our Earth-2, the world’s first AI digital twin supercomputer, and invent new AI and computing technologies to give us a Billion-X boost before it’s too late.
There is early evidence we can succeed. Researchers at NVIDIA, Caltech, Berkeley Lab, Purdue, Michigan, and Rice have developed a weather forecasting AI model called FourCastNet. FourCastNet is a physics-informed deep learning model that can predict weather events such as hurricanes, atmospheric rivers, and extreme rain. FourCastNet learned to predict weather from 40 years of simulation-enhanced ground truth data from ECMWF, the European Centre for Medium-Range Weather Forecasts. For the first time, a deep learning model has achieved better accuracy and skill on precipitation forecasting than state-of-the-art numerical models, and it makes predictions 4 to 5 orders of magnitude faster - what takes a classical numerical simulation a year now takes minutes. Atmospheric rivers are enormous rivers of water vapor in the sky – each carrying more water than the Amazon.
They provide a key source of precipitation for the western U.S., but these large, powerful storms can also cause catastrophic flooding and massive snowfalls. NVIDIA has created a physics-ML model that emulates the dynamics of global weather patterns and predicts extreme weather events, like atmospheric rivers, with unprecedented speed and accuracy.
Powered by the Fourier Neural Operator, this GPU-accelerated, AI-enabled digital twin, called FourCastNet, is trained on 10 TB of Earth system data. Using this data, together with NVIDIA Modulus and Omniverse, we are able to forecast the precise path of catastrophic atmospheric rivers a full week in advance. FourCastNet takes only a fraction of a second on a single NVIDIA GPU. With such enormous speed, we can generate thousands of simulations to explore all possible outcomes, allowing us to quantify the risk of catastrophic flooding with greater confidence than was ever possible before.
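To make the Fourier Neural Operator concrete, here is a minimal sketch of its core building block – a spectral convolution layer – in plain PyTorch. This is an illustration of the technique, not FourCastNet's actual code, and the layer sizes are made up.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """One Fourier layer: FFT, learned complex weights on the lowest modes, inverse FFT."""
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes  # number of low-frequency modes to keep
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):                       # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)                # transform to frequency space
        out_ft = torch.zeros_like(x_ft)
        out_ft[..., :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[..., :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.size(-1))  # back to grid space

# Toy usage: 8 samples, 16 channels, on a 256-point grid (all made-up sizes).
layer = SpectralConv1d(channels=16, modes=12)
y = layer(torch.randn(8, 16, 256))
```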
NVIDIA is pioneering accelerated computing, an approach that demands full-stack expertise. We built NVIDIA like a computing stack, a neural network – in four layers – hardware, system software, platform software, and applications. Each layer is open to computer makers, service providers, and developers to integrate into their offerings however works best for them. I will announce new products at each of these layers today. Let's get started.
The progress of AI is stunning. Transformers opened up self-supervised learning and removed the bottleneck of human-labeled data. With Transformers, we can use enormous training sets and learn more robust and more complete representations. Because of Transformers, model and data sizes grew, and model skills and accuracy took off. Google BERT for language understanding, NVIDIA MegaMolBART for drug discovery, and DeepMind AlphaFold are all breakthroughs traced to Transformers.
Transformers made self-supervised learning possible, and AI jumped to warp speed. Natural language understanding models can learn without supervision from vast amounts of text, then be refined with a small amount of human-labeled data to develop strong skills in translation, Q&A, summarization, writing, and so much more. Multi-modal learning with language supervision has added another dimension to computer vision. Reinforcement learning models, like NVIDIA NVCell, are doing chip layout – AI is building chips. Physics-ML models, like FourCastNet and OrbNet, are learning physics and quantum physics.
The conditions are prime for significant breakthroughs in science. Generative models are transforming creative design, helping build virtual worlds, and, soon, revolutionizing communications. Neural graphics networks like NeRF, which learn 3D representations from 2D images, will elevate photography and help us create digital twins of our world.
AI is racing in every direction – new architectures, new learning strategies, larger and more robust models, new science, new applications, new industries – all at the same time. Here’s an amazing example. This AI-powered character is animated using a physics-based reinforcement learning model. Let’s take a look. We are using reinforcement learning to develop more life-like and responsive physically simulated characters.
Our character learns to perform life-like motions by imitating human motion data, such as walking, running, and sword swings. Our character is put through an intense training regimen of 10 years in simulation. Thanks to NVIDIA's massively parallel GPU simulator, this takes just 3 days of real-world time. The character then learns to perform a large variety of motor skills. Once the character has been trained, it can use the skills it has learned to perform more complex tasks.
Here, the character is trained to run to a target object and knock it over. We can also steer the character to walk in different directions like you would with a game character. Our model allows the character to automatically synthesize life-like responsive behaviors to new situations. We can also control the character using natural language commands.
For example, we can tell the character to do a shield bash or swing its sword. We hope this technology will eventually make animating simulated characters as easy and seamless as talking to a real actor. NVIDIA AI is the engine behind these innovations, and we are all-hands-on-deck to advance the platform – solving new problems, getting it everywhere, and making AI more accessible. NVIDIA AI is a suite of libraries that span the entire AI workflow - from data processing and ETL feature engineering to graph, classical ML, and deep learning model training to large-scale inference.
NVIDIA DALI, RAPIDS, cuDNN, Triton, and Magnum IO are among the most popular libraries. We use the libraries to create specialized AI frameworks that include state-of-the-art pre-trained models and data pipelines that make it easy to scale out. Let me touch on a few of our updates at GTC.
Hundreds of billions of web interactions a day – like search, shopping, and social – generate trillions of machine learning model inferences. NVIDIA Triton is an open-source hyperscale model inference server – the grand central station of AI deployment. Triton deploys models on every generation of NVIDIA GPUs, on x86 and Arm CPUs, and has interfaces to support accelerators such as AWS Inferentia. Triton supports any model – CNNs, RNNs, Transformers, GNNs, decision trees – and any framework: TensorFlow, PyTorch, Python, ONNX, XGBoost. Triton supports any query type – real-time, offline, batched, or streaming audio and video. Triton supports all ML platforms – AWS, Azure, Google, Alibaba, VMWare, Domino Data Lab, OctoML, and more.
And Triton runs in any location – cloud, on-prem, edge, or embedded. Amazon Shopping is doing real-time spell checking with Triton. And Microsoft is using Triton for its Translator service.
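For a flavor of what talking to Triton looks like from the client side, here is a minimal sketch using the tritonclient Python package. The model name and tensor names are hypothetical placeholders for whatever is loaded in your server's model repository.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a running Triton server's HTTP endpoint (default port 8000).
client = httpclient.InferenceServerClient(url="localhost:8000")

# "my_model", "INPUT0", and "OUTPUT0" are hypothetical names.
inp = httpclient.InferInput("INPUT0", [1, 16], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))
out = httpclient.InferRequestedOutput("OUTPUT0")

result = client.infer(model_name="my_model", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT0"))
```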
Triton has been downloaded over a million times by 25,000 customers. NVIDIA Riva is a state-of-the-art speech AI built end-to-end on deep learning. Riva is tunable.
Riva is pre-trained with world-class recognition rates, and then customers can refine with custom data to learn industry, country, or company-specific jargon. Riva speech AI is ideal for conversational AI services. Snap, RingCentral, Kore.ai, and many others are using Riva. Today we are announcing the general availability of Riva.
Release 2.0 has speech recognition in 7 languages, neural text to speech with male and female voices, and custom tuning with our TAO transfer learning tool. Riva runs on any cloud and anywhere with NVIDIA GPUs, basically everywhere. Maxine is an SDK featuring state-of-the-art AI algorithms for reinventing communications. Video conferencing encodes, transmits, then decodes images and sound.
Computer vision will replace image encoding, and computer graphics will replace image decoding. Speech recognition will replace audio encoding, and speech synthesis will replace audio decoding. Nearly sixty years after AT&T demonstrated the Picturephone at the 1964 World's Fair in New York, AI will reinvent video conferencing. Remote work is here to stay. We need virtual live interactions more than ever. Maxine is an AI model toolkit used by developers to reinvent communications and collaborations. Maxine has 30 models today.
The GTC release adds new models for acoustic echo cancellation and audio super-resolution. Let’s take a look at what Maxine can do. NVIDIA Maxine reinvents real-time video communication with the magic of AI. Thanks to Maxine, we can now hear and see each other better, and feel more connected and included — even when language becomes a barrier. To stay engaged with my audience, Maxine helps me keep eye contact with everyone on the call, whether it’s one person or one hundred - and even if I’m reading a script.
(In Spanish) How can you overcome the language barrier with Maxine? While I don't speak Spanish, with Maxine's help, now I can! (In Spanish) Now I can speak your language in my own voice. Not bad, is it? (In Spanish) Magnificent. (In French) That's great. But can Maxine translate into more than one language? Oh yes, absolutely.
(In French) Maxine also allows me to speak French. (In French) And many more languages. (In German) We'd tell you more about Maxine's magic, but you'll have to wait until the next GTC. (In German) Stay tuned so you don't miss anything! (In German) Wonderful! I'll be there. Recommenders are personalization engines. The internet has trillions of items and is constantly changing – news, social videos, new products. How do we even know what is out there? Recommenders learn the features of items and your explicit and implicit preferences, and recommend things likely to interest you – a personalized internet.
Advanced recommendation engines drive the world's consumer internet services. In the future, they will also drive financial services, healthcare services, vacation planners, and much more. NVIDIA Merlin is an AI framework for recommender systems. Merlin consists of the end-to-end components of a recommender pipeline, from feature transforms to retrieval and ranking models. With NVIDIA Merlin, companies can quickly build, deploy, and scale state-of-the-art deep learning recommender systems. Snap uses Merlin to improve ad and content recommendations while reducing cost by 50% and cutting serving latency by 2x.
Tencent WeChat uses Merlin to achieve 4x lower latency and 10x higher throughput for short-video recommendations. Tencent's cost is halved moving from CPU to GPU. At GTC, we are announcing the 1.0 release and general availability of Merlin.
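As a sketch of the feature-transform stage of a Merlin pipeline, here is a small NVTabular example; the column names and file paths are made up for illustration.

```python
import nvtabular as nvt
from nvtabular import ops

# Hypothetical interaction log with categorical and continuous columns.
cats = ["user_id", "item_id", "category"] >> ops.Categorify()
conts = ["price", "item_age_days"] >> ops.Normalize()
workflow = nvt.Workflow(cats + conts)

train = nvt.Dataset("interactions.parquet")   # processed on the GPU
workflow.fit(train)                           # learn vocabularies and statistics
workflow.transform(train).to_parquet("processed/")
```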
Transformers revolutionized natural language processing. Training large language models is not for the faint of heart – it is a grand computer science challenge. OpenAI's GPT-3 has 175 billion parameters. NVIDIA Megatron has 530 billion. And Google's new Switch Transformer has 1.6 trillion parameters. NeMo Megatron is a specialized AI framework for training large language models – up to trillions of parameters.
To get the best possible performance on the target infrastructure, NeMo Megatron does automatic data, tensor, and pipeline parallelism, orchestration and scheduling, and automatic precision adaptation. NeMo Megatron now supports any NVIDIA system and automatically does hyper-parameter tuning for your target infrastructure.
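To illustrate the tensor-parallel idea in miniature: split a linear layer's weight matrix column-wise, compute the shards independently (on different GPUs in a real system), and concatenate the partial outputs. This toy, single-process sketch is mine, not NeMo Megatron code.

```python
import torch

hidden, out_dim = 1024, 4096
x = torch.randn(8, hidden)
W = torch.randn(hidden, out_dim)

# Column-wise weight split: each shard would live on its own GPU.
W0, W1 = W.chunk(2, dim=1)
y = torch.cat([x @ W0, x @ W1], dim=1)  # gather the partial outputs

assert torch.allclose(y, x @ W, atol=1e-4)  # same result as the unsplit layer
```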
NeMo Megatron is also cloud-native and supports Azure, with AWS coming soon. AI, the creation and production of intelligence, is a giant undertaking that touches every aspect of computing and every industry. NVIDIA AI libraries and SDKs accelerate software, platforms, and services throughout the AI ecosystem. Even with excellent tools and libraries, developers and NVIDIA must dedicate significant engineering to ensure performance, scalability, reliability, and security.
So, we have created the NVIDIA AI Accelerated program to work with developers in the AI ecosystem to engineer solutions together that customers can deploy with confidence. NVIDIA AI democratizes AI so that every industry and company can apply AI to reinvent themselves. One of the most impactful is the revolution in digital biology. AI accelerates DNA sequencing, protein structure prediction, novel drug synthesis, and virtual drug testing.
Funding for AI drug discovery startups surpassed $40 billion over the past couple of years. Insilico Medicine just advanced its first AI-discovered drug into human clinical trials. The novel target and drug were discovered in less than 18 months, years faster than previously possible. The conditions are prime for the digital biology revolution.
I can't imagine a greater purpose for NVIDIA AI. AI applications like speech, conversation, customer service, and recommenders are driving fundamental changes in data center design. AI data centers process mountains of continuous data to train and refine AI models. Raw data comes in, is refined, and intelligence goes out. Companies are manufacturing intelligence and operating giant AI factories.
The factory operation is 24/7 and intense – minor improvements in quality drive significant increases in customer engagement and company profits. New organizations called MLOps are showing up in companies around the world. Their fundamental mission is to efficiently and reliably transform data into predictive models – into intelligence. The data they process is growing exponentially – the more predictive the model, the more customers engage with the services, and the more data is collected. The computing infrastructure of MLOps is fundamental, and its engine is the Ampere-architecture A100.
Today we are announcing the next generation. The engine of the world's AI computing infrastructure makes a giant leap. Introducing NVIDIA H100! The H100 is a massive 80-billion-transistor chip built on the TSMC 4N process. We designed the H100 for scale-up and scale-out infrastructures, so bandwidth, memory, networking, and NVLINK chip-to-chip data rates are vital. H100 is the first PCIe Gen 5 GPU and the first HBM3 GPU.
A single H100 sustains 40 terabits per second of IO bandwidth. To put it in perspective, 20 H100s can sustain the equivalent of the entire world's internet traffic. The Hopper architecture is a giant leap over Ampere. Let me highlight 5 groundbreaking inventions.
First, the H100 has incredible performance, with a new Tensor processing format – FP8. H100 delivers 4 petaFLOPS of FP8, 2 petaFLOPS of FP16, 1 petaFLOPS of TF32, and 60 teraFLOPS of FP64 and FP32. Designed for air and liquid cooling, H100 is also the first GPU to scale in performance up to 700W. Over the past six years, through Pascal, Volta, Ampere, and now Hopper, we developed technologies to train with FP32, then FP16, and now FP8.
For AI processing, Hopper H100's 4 petaFLOPS of FP8 is an amazing six times the FP16 performance of the Ampere A100 – our largest generational leap ever.
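As a back-of-envelope check of that six-times claim, assuming the published peak throughput figures with sparsity on both generations:

```latex
\frac{4{,}000~\text{TFLOPS (H100 FP8)}}{624~\text{TFLOPS (A100 FP16)}} \approx 6.4\times
```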
The Transformer is unquestionably the most important deep learning model ever invented. Hopper introduces a Transformer Engine, which combines a new Tensor Core with software that uses the FP8 and FP16 numerical formats and dynamically processes the layers of a Transformer network. Transformer model training can be reduced from weeks to days. For cloud computing, multi-tenant infrastructure translates directly to revenues and cost of service. A service can partition H100 into up to 7 instances – Ampere can also do this. However, Hopper adds complete per-instance isolation and per-instance IO virtualization to support multi-tenancy in the cloud. H100 can host seven cloud tenants, while A100 can only host one. Each instance is equivalent in performance to two full T4 GPUs, our most popular cloud inference GPU. Each Hopper multi-instance supports Confidential Computing with a Trusted Execution Environment.
Sensitive data is often encrypted at rest and in transit over the network, but it is unprotected during use. The data can be an AI model that is the result of millions of dollars of investment, trained on years of domain knowledge or company-proprietary data, and is valuable or secret. Hopper Confidential Computing, a combination of processor architecture and software, addresses this gap by protecting both data and application during use. Confidential computing today is CPU-only. Hopper introduces the first GPU Confidential Computing.
Hopper Confidential Computing protects the confidentiality and integrity of the owner's AI models and algorithms. Software developers and services can now distribute and deploy their proprietary and valuable AI models on shared or remote infrastructure, protecting their intellectual property and scaling their business models. And there's more. Hopper introduces a new set of instructions called DPX, designed to accelerate dynamic programming algorithms.
Many real-world algorithms grow with combinatorial or exponential complexity. Examples include the famous traveling salesperson optimization problem, Floyd-Warshall for shortest-route optimization used in mapping, Smith-Waterman pattern matching for gene sequencing and protein folding, and many graph optimization algorithms. Dynamic programming breaks complex problems down into simpler subproblems that are solved recursively, reducing complexity and time to polynomial scale. Hopper DPX instructions will speed up these algorithms by up to 40 times.
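As a reference point for what DPX accelerates, here is the classic Floyd-Warshall dynamic-programming recurrence in plain Python with NumPy; DPX provides hardware support for the min-plus inner updates of algorithms like this one.

```python
import numpy as np

def floyd_warshall(dist: np.ndarray) -> np.ndarray:
    """All-pairs shortest paths; dist[i, j] is the edge weight, np.inf if absent."""
    d = dist.copy()
    for k in range(d.shape[0]):
        # DP step: allow node k as an intermediate stop on every i -> j path.
        d = np.minimum(d, d[:, k:k + 1] + d[k:k + 1, :])
    return d

INF = np.inf
graph = np.array([[0.0, 3.0, INF, 7.0],
                  [8.0, 0.0, 2.0, INF],
                  [5.0, INF, 0.0, 1.0],
                  [2.0, INF, INF, 0.0]])
print(floyd_warshall(graph))
```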
H100 is the newest engine of AI infrastructures. H100s are packaged with HBM3 memories using TSMC CoWoS 2.5D packaging and integrated with voltage regulation into a superchip module called SXM. Let me now show you how we built up a state-of-the-art AI computing infrastructure. 8 H100 SXM modules are connected by 4 NVLINK Switch chips on the HGX system board.
The four super-high-speed NVSwitch chips each have 3.6 TFLOPS of SHARP in-network computing, first invented in Mellanox Quantum InfiniBand switches. For all-to-all reductions, used extensively in deep learning and scientific computing, SHARP effectively boosts bandwidth by 3 times. The CPU subsystem consists of dual Gen 5 CPUs and two networking modules, each with four 400 Gbps CX7 InfiniBand or 400 Gbps Ethernet networking chips. CX7 has 8 billion transistors and is the world's most advanced networking chip.
A total of 64 billion transistors deliver 3.2 terabits per second of networking. Introducing the DGX H100 – our new AI computing system. DGX has been spectacularly successful and is the AI infrastructure for 8 of the top 10 and 44 of the Fortune 100. Connected by NVLINK, DGX makes the eight H100s into one giant GPU: 640 billion transistors, 32 petaFLOPS of AI performance, 640 GB of HBM3, and 24 terabytes per second of memory bandwidth. DGX H100 is a spectacular leap. And there's more! We have a brand-new way to scale up DGX.
We can connect up to 32 DGXs with NVLINK. Today, we are announcing the NVIDIA NVLINK Switch system. For AI factories, DGX is the smallest unit of computing. With NVLINK Switch system, we can scale up into one giant 32-node, 256-GPU DGX POD, with a whopping 20.5 terabytes of HBM3 memory, and 768 terabytes per second of memory bandwidth. 768 terabytes per second! In comparison, the entire internet is 100 terabytes per second. Each DGX connects to the NVLINK Switch with a Quad-Port Optical transceiver.
Each port has eight channels of 100G PAM4 signaling, carrying 100 GB per second. 32 NVLINK transceivers connect to a one-rack-unit NVLINK Switch system. The H100 DGX POD is essentially one mind-blowing GPU: 1 exaFLOPS of AI computing, 20 TB of HBM3, and 192 teraFLOPS of SHARP in-network computing. The bisection bandwidth moving data between the GPUs is an amazing 70 TB per second. Multiple H100 DGX PODs connect to our new Quantum-2 400 Gbps InfiniBand switch, with SHARP in-network computing, performance isolation, and congestion control, to scale to DGX SuperPODs with thousands of H100 GPUs. The Quantum-2 switch is a 57-billion-transistor chip that can connect 64 ports at 400 Gbps each. DGX SuperPODs are modern AI factories.
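Sanity-checking the POD numbers above: 256 H100s, each with 80 GB of HBM3 at roughly 3 TB/s, give

```latex
256 \times 80~\text{GB} \approx 20.5~\text{TB}, \qquad 256 \times 3~\text{TB/s} \approx 768~\text{TB/s}
```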
We are building Eos, the first Hopper AI factory, at NVIDIA, and she's going to be a beauty: 18 DGX PODs, 576 DGX systems, and 4,608 H100 GPUs. At traditional scientific computing, Eos is 275 petaFLOPS – 1.4x faster than the fastest science computer in the US, the A100-powered Summit. At AI, Eos is 18.4 exaFLOPS – 4 times the AI processing of the world's largest supercomputer, Fugaku in Japan. We expect Eos to be the fastest AI computer in the world.
Eos will be the blueprint for the most advanced AI infrastructure for our OEM and cloud partners. Partners can take the H100 DGX SuperPOD as a whole, or take the technology components at any of the four layers of our platform. We are standing up Eos now, and it will be online in a few months.
Let's take a look at Hopper's performance. The performance boost over Ampere is incredible. Training Transformer models, the compound benefits of Hopper's raw horsepower, Hopper Transformer engine with FP8 Tensor Core NVLINK with SHARP in-network computing, NVLINK Switch scale-up to 256 GPUs, and the Quantum-2 InfiniBand, and all of our software results in a 9X speed-up! Weeks turn to days. For inferencing large language models, H100 throughput is up to 30 times higher over A100. H100 is the most significant leap we've ever delivered.
NVIDIA H100, the new engine of the world's AI infrastructure. Hopper is going to be a game-changer for mainstream systems as well. As you've seen with Hopper HGX and DGX, networking and interconnects are critical to computing – moving data to keep the lightning-fast GPUs fed is a most serious concern. So, how do we bring Hopper's superfast compute to mainstream servers? Moving data in traditional servers overloads the CPU and system memory and is bottlenecked by PCI Express. The solution is to attach the network directly to the GPU. This is the H100 CNX, combining the most advanced GPU and the most advanced networking processor, CX7, into a single module.
Data from the network is DMA'd directly to the H100 at 50 gigabytes per second, avoiding the bottlenecks at the CPU, at system memory, and from multiple passes across PCI Express. H100 CNX avoids bandwidth bottlenecks while freeing the CPU and system memory to process other parts of the application. An incredible amount of technology in a tiny little package designed for mainstream servers.
Hopper H100 powers systems at every scale – from the PCI Express accelerator for mainstream servers to DGX, DGX POD, and DGX SuperPOD. These systems run NVIDIA HPC, NVIDIA AI, and the rich ecosystem of CUDA libraries. Let me update you on Grace – our first data center CPU. I am pleased to report that Grace is progressing fantastically and is on track to ship next year. We designed Grace to process giant amounts of data.
Grace will be the ideal CPU for AI factories. And this is Grace-Hopper. A single superchip module with direct chip-to-chip connection between the CPU and GPU. One of the critical enabling technologies of Grace-Hopper is the memory coherent chip-to-chip NVLINK interconnect – a 900 gigabytes per second link! But I only told you half the story. The full Grace is truly amazing. The Grace CPU can also be a superchip made up of two CPU chips connected, coherently, over NVLINK chip-to-chip.
The Grace superchip has 144 CPU cores and an insane 1 terabyte per second of memory bandwidth – two to three times that of the top Gen 5 CPUs, which have yet to even ship. We estimate the Grace superchip to have a SPECint 2017 rate of 740. Nothing close to that ships today. And the amazing thing is that the entire module, including a terabyte of memory, is only 500 watts. We expect the Grace superchip to be the highest-performance CPU, with twice the energy efficiency of the best CPU at that time. Grace will be amazing at AI, data analytics, scientific computing, and hyperscale computing.
And Grace will be welcomed by all of NVIDIA's software platforms – NVIDIA RTX, HPC, NVIDIA AI, and Omniverse. The enabler for Grace-Hopper and the Grace superchip is the ultra-energy-efficient, low-latency, high-speed memory-coherent NVLINK chip-to-chip link. With NVLINK scaling from die-to-die, chip-to-chip, and system-to-system, we can configure Grace and Hopper to address a large diversity of workloads. We can create systems with a two-Grace CPU superchip; a one-Grace, one-Hopper superchip; a one-Grace, two-Hopper superchip; and systems with two Graces and two, four, or eight Hoppers. The composability of Grace and Hopper's NVLINK, and the Gen 5 PCI Express switch inside CX7, give us a vast number of ways to address customers' diverse computing needs.
Future NVIDIA chips – CPUs, GPUs, DPUs, NICs, and SOCs – will integrate NVLINK just like Grace and Hopper. Our SERDES technology is world-class. From years of designing high-speed memory interfaces, NVLINKs, and networking switches, NVIDIA has world-class expertise in high-speed SERDES. NVIDIA is making NVLINK and SERDES available to customers and partners who want to implement custom chips that connect to NVIDIA's platforms. These high-speed links have opened a new world to build semi-custom chips and systems with NVIDIA computing.
NVIDIA has accelerated computing a Million-X over the past decade by GPU-accelerating algorithms, optimizing across the full-stack, and scaling across the entire data center. The computer science and engineering is captured in NVIDIA SDKs. NVIDIA SDKs with CUDA libraries are the heart and soul of accelerated computing. NVIDIA SDKs connect us to new challenges in science and new opportunities in industry. RAPIDS is a suite of SDKs for data scientists using popular Python APIs for DataFrames, SQL, arrays, machine learning, and graph analytics. RAPIDS is one of NVIDIA’s most popular SDKs, second only to cuDNN for deep learning.
RAPIDS has been downloaded 2 million times and has grown 3x year-over-year. It is used by over 5,000 GitHub projects and over 2,000 Kaggle notebooks, and is integrated into 35 commercial software packages. NVIDIA RAPIDS for Spark is a plug-in for accelerating Apache Spark.
Spark is the leading data processing engine, used by 80% of the Fortune 500. Users of Spark can transparently accelerate Spark DataFrame and SQL operations. Operations that take hours now take minutes.
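For a taste of the RAPIDS Python API, here is a hypothetical cuDF snippet; cuDF mirrors the pandas API, so the whole pipeline runs on the GPU. The file and column names are made up.

```python
import cudf

df = cudf.read_csv("trips.csv")                 # hypothetical trip data
df["tip_pct"] = df["tip"] / df["fare"] * 100.0
by_hour = df.groupby("pickup_hour")["tip_pct"].mean()
print(by_hour.sort_index().to_pandas())         # bring the small result to the CPU
```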
NVIDIA cuOpt, previously called ReOpt, is an SDK for multi-agent, multi-constraint route planning optimization used for delivery services or autonomous mobile robots inside warehouses. With NVIDIA cuOpt, businesses can, for the first time, do real-time planning of thousands of packages to thousands of locations in seconds with world-record accuracy. Over 175 companies are testing NVIDIA cuOpt. Graphs are one of the most used data structures to represent real-world data, like maps, social networks, the web, proteins and molecules, and financial transactions. The NVIDIA DGL container lets you train large graph neural networks across multiple GPUs and nodes.
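As a minimal illustration of the DGL API – a toy, CPU-sized graph rather than the multi-GPU workloads the container targets:

```python
import dgl
import dgl.nn as dglnn
import torch

# A toy 4-node graph built from (source, destination) edge lists.
src, dst = torch.tensor([0, 1, 2]), torch.tensor([1, 2, 3])
g = dgl.add_self_loop(dgl.graph((src, dst), num_nodes=4))

feat = torch.randn(4, 8)        # 8 features per node
conv = dglnn.GraphConv(8, 16)   # one graph-convolution layer
h = conv(g, feat)               # message passing; h has shape (4, 16)
```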
NVIDIA Morpheus is a deep learning framework for cybersecurity. Morpheus helps cybersecurity developers build and scale solutions that use deep learning to identify, capture, and act on threats that were previously impossible to catch. Every company needs to move to a Zero Trust architecture. NVIDIA, for sure, can use Morpheus. cuQuantum is an SDK for accelerating quantum circuit simulators, so researchers can develop the quantum computing algorithms of the future that are impossible to explore on quantum computers today.
cuQuantum accelerates the top QC simulators: Google's Cirq, IBM's Qiskit, Xanadu's PennyLane, Quantinuum's TKET, and Oak Ridge National Laboratory's ExaTN. cuQuantum on DGX is the ideal development system for quantum computing. Aerial is an SDK for CUDA-accelerated, software-defined 5G radio. With Aerial, any data center – cloud, on-prem, or edge – can be a 5G radio network and provision AI services over 5G to places not served by Wi-Fi.
The 6G standard will emerge around 2026. The megatrends shaping 6G are clear - hundreds of billions of machines and robots will be the overwhelming users of the network. 6G is taking shape around a few foundational technologies. Like networking, 6G will be highly software-defined. The network will be AI-driven. Digital twins performing ray tracing and AI will help optimize the network.
NVIDIA can make contributions in these areas. We are excited to announce a new framework, Sionna, an AI framework for 6G communications research. Modulus is an AI framework for developing physics-ML models. These deep neural network models can learn physics and make predictions that obey the laws of physics, many orders of magnitude faster than numerical methods. We are using Modulus to build the Earth-2 digital twin.
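To show the physics-ML idea in its simplest form, here is a toy physics-informed training loop in plain PyTorch that penalizes a network for violating the ODE u' = -u with u(0) = 1. Modulus packages this pattern at much larger scale behind its own APIs; this sketch is only an illustration.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    x = (5.0 * torch.rand(256, 1)).requires_grad_(True)      # collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    residual = (du + u).pow(2).mean()                        # enforce u' = -u
    boundary = (net(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # enforce u(0) = 1
    loss = residual + boundary
    opt.zero_grad(); loss.backward(); opt.step()
```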
MONAI is an open-source AI framework for medical imaging. The NVIDIA MONAI container includes AI-assisted labeling for 2D and 3D models, transfer learning, and AutoML training, and it's easy to deploy through DICOM. MONAI is used by the world's top 30 academic medical centers and has over 250,000 downloads. FLARE is NVIDIA's open-source SDK for federated learning, letting researchers collaborate in a privacy-preserving way – sharing models but not data. Millions of developers and tens of thousands of companies use NVIDIA SDKs to accelerate their workloads. We updated 60 SDKs with more features and acceleration at this GTC.
The same NVIDIA systems you own just got faster, and scientists doing operations research, quantum algorithm research, 6G research, or graph analytics can tap into NVIDIA acceleration for the first time. And for companies doing computer-aided design or engineering, the software tools you depend on from Ansys, Altair, Siemens, Synopsys, Cadence, and more just got a massive speed-up. From first-hand experience, it has transformed our engineering. So, go to NGC, the NVIDIA GPU Cloud, and download our SDKs and frameworks that are full-stack optimized and data-center-scale accelerated. The Apollo 13 crew was 136,000 miles from Earth when a faulty electrical wire caused one of the two oxygen tanks to explode.
And the now-famous words were radioed back to NASA – “Houston, we’ve had a problem.” To “work the problem,” NASA engineers tested oxygen-preserving and power-cycling procedures on a replica of the Odyssey spacecraft. Apollo 13 would have ended in disaster if not for the fully functional replica on Earth. This was an important moment. NASA realized the power of the replica, but not everything can have a physical twin.
So NASA coined the term “digital twin,” a living virtual representation of something physical. Extended to vast scales, a digital twin is a virtual world that’s connected to the physical world. And in the context of the internet, it is the next evolution. And that’s what NVIDIA Omniverse is about – digital twins, virtual worlds, and the next evolution of the internet. Over 20 years of NVIDIA graphics, physics, simulation, AI, and computing technologies made Omniverse possible.
Simulating the world is the ultimate grand challenge. Omniverse is a simulation engine of virtual worlds. Omniverse worlds are physically accurate, obeying the laws of physics.
Omniverse operates at vast scales. And Omniverse is sharable, connecting designers, viewers, AIs, and robots. But what are the applications of Omniverse? I will highlight several immediate use-cases today.
Remote collaboration of designers using different tools. Sim2Real gyms where AIs and robots learn. And industrial digital twins. But first, let me show you the technologies that make Omniverse possible.
Omniverse technology will transform the way you create! Omniverse is scalable from RTX PCs to large systems. RTX PCs connected to someone hosting the Omniverse Nucleus are sufficient for creative collaboration. Industrial digital twins, however, need a new type of purpose-built computer. Digital twin simulations involve multiple autonomous systems interacting in the same space-time. Data centers process data in the lowest possible time, not precise time.
So for digital twins, the Omniverse software and computer need to be scalable, low latency, and support precise time. We need to create a synchronous data center. Just as we have DGX for AI, we now have OVX for Omniverse. The first-generation NVIDIA OVX Omniverse computer consists of eight NVIDIA A40 RTX GPUs, 3 CX6 200 Gbps NICs, and dual Intel Ice Lake CPUs.
And the NVIDIA Spectrum-3 200 gigabits per second switch fabric connects 32 OVX servers to form the OVX SuperPOD. Most importantly, the network and computers are synchronized using the Precision Time Protocol, and RDMA minimizes packet transfer latency. OVX servers are now available from the world's top computer makers.
And for customers wanting to try Omniverse on OVX, NVIDIA LaunchPads are located around the world. Generation one OVX are running at NVIDIA and early customers. We are building our second-generation OVX, starting with the backbone. Today, we are introducing the Spectrum-4 switch.
At 51.2 terabits per second, the 100-billion-transistor Spectrum-4 is the most advanced switch ever built. Spectrum-4 introduces fair bandwidth distribution across all ports, adaptive routing, and congestion control for the highest overall data center throughput. With CX7 and BlueField-3 adapters and the DOCA data center infrastructure software, this is the world's first 400 gigabits per second end-to-end networking platform. And Spectrum-4 can achieve timing precision of a few nanoseconds, versus the many milliseconds of jitter in a typical data center – a 5 to 6 orders of magnitude improvement. Hyperscalers will enjoy increased throughput, quality of service, and security, while reducing power and cost.
Spectrum-4 enables a new class of computers for Omniverse digital twins in cloud and edge data centers. NVIDIA Spectrum-4, the world’s most advanced ethernet networking platform and the backbone of our Omniverse computer, samples in late Q4. Omniverse is a network-of-networks connecting virtual worlds. The value of the network amplifies when diverse ecosystems connect through Omniverse into a unified workflow.
Since last year's GTC, we've increased connections from 8 to 82. We have connectors with Chaos V-Ray, Autodesk Arnold, and Blender; Adobe's Substance 3D Painter, Epic's Unreal Engine 5, and Maxon's Cinema 4D. Many developers want to OEM and connect Omniverse directly into their software suites.
Bentley Systems is the leading infrastructure design, construction, and management platform. They integrated Omniverse into their LumenRT platform to do interactive, engineering-grade, millimeter-accurate, 4D visualization of massive infrastructure digital twins. Bentley’s LumenRT with Omniverse is available now.
We are launching a major release at GTC: Omniverse Kit, for building Omniverse extensions, apps, and microservices. We are releasing 26 manufacturing format converters to connect industrial workflows to Omniverse.
DeepSearch is an AI search engine for massive, untagged 3D asset libraries, using either natural language or images. Omniverse Replicator, a framework for generating physically accurate synthetic data to train and test AI models. And OmniGraph, a multi-GPU, multi-node compute engine to run data-center-scale Omniverse applications. And, a very big project, Omniverse Avatar, a framework for building avatars, or digital humans. Let me welcome back Toy Jensen, an application we built with Omniverse Avatar.
Remember, Toy Jensen is not recorded. He is completely real-time. Making eye contact, making conversation, and fully animated. Hey TJ! I have a few questions for you. Sure! Ask me anything.
What is synthetic biology? Synthetic Biology is about designing biological systems at multiple levels from individual molecules up to whole cells and even multicellular assemblies like tissues and organs to perform specific functions. It is also known as SynBio which stands for 'synthetic life'. Can you give me some examples? Some current uses of synthetic biology involve creating biofuels; engineering bacteria with useful properties not found in naturally occurring organisms; developing biosensors capable of detecting environmental toxins; producing biopharmaceuticals; and manufacturing chemicals via fermentation rather than petrochemicals. What are its medical applications? Synthetic biology has many medical applications. For example, synthetic biology is being used to develop new vaccines,
new antibiotics, and new treatments for cancer. TJ, that was great! Can you tell us how you were made? Avatar is a framework built on the Omniverse platform. It lets you quickly build and deploy avatars like me for a wide range of uses. It starts with my voice, which is fully synthesized by Riva's text-to-speech model RAD-TTS and is also used to drive my facial animation through Omniverse Audio2Face.
Next, Omniverse animation’s animation graph is used to define and govern my movement, from hand and arm movements to subtle head and body motion. NVIDIA’s open-source Material Definition Language, MDL, adds the touches that make my cool jacket look like synthetic leather and not just plastic, while the RTX renderer brings me to life in high-fidelity—in real-time. Finally, I can listen and talk to you thanks to the latest in conversational AI technologies from Riva and our Megatron 530B NLP model, one of the largest language models ever trained. Megatron helps me answer all those tough questions Jensen throws at me. What’s also exciting is that I can be run from the cloud, the data center, or any other disaggregated system, all thanks to Tokkio. Tokkio is an application built with Omniverse Avatar and it brings customer service AI to retail stores, quick-service restaurants, and even the web.
It comes to life using NVIDIA AI models and technology like computer vision, Riva speech AI, and NVIDIA NeMo. And because it runs on our Unified Computing Framework, or UCF, Tokkio can scale out from the cloud and go wherever customers need helpful avatars like me, with senses that are fully acute and responsive, and above all, natural. I hope you enjoyed a quick overview of how I was made. Back to you, Jensen!
Today’s AI centers around perception and pattern recognition, like recognizing an image, understanding speech, suggesting a video to watch, or recommending an item to buy. The next wave of AI is robotics, where AI will also plan and act. NVIDIA is building several robotics platforms – DRIVE for autonomous vehicles, Isaac for maneuvering and manipulation systems, Metropolis for autonomous infrastructures, and Holoscan for robotic medical instruments.
And just as NASA recognized, we will need digital twins in order to operate fleets of robots that are far away. The workflow of a robotic system is complex. I've simplified it here to four pillars: collecting and generating ground truth data, creating the AI model, simulating with a digital twin, and operating the robot. Omniverse is central throughout. DRIVE is our autonomous vehicle system – it's essentially an AI chauffeur.
As with all of our platforms, NVIDIA DRIVE is full-stack, end-to-end, and open for developers to use in-whole or in-parts. For ground truth data, we use our DeepMap HD mapping, human-labeled data, and Omniverse Replicator. To train the AI models, we use NVIDIA AI and DGX. DRIVE Sim in Omniverse, running on OVX, is the digital twin. And DRIVE AV is the autonomous driving application running on our Orin computer in the car. Let’s enjoy a ride with the latest build of NVIDIA Drive.
We will take you through a highway and urban route in San Jose. You can see what the car sees from the confidence-view rendering. We will navigate complex situations such as crowded intersections, and your AI chauffeur will be a friendly driving companion. Welcome, Daniel. I see a text from Hubert asking,
“Can you pick me up from the San Jose Civic?” Should I take you there? Yes, please. Okay, taking you to San Jose Civic. Start DRIVE Pilot. OK, starting DRIVE Pilot. Can you tell Hubert we’re on our way? Sure, I’ll send him a text.
I see Hubert. Can you please take me to the Rivermark Hotel? Okay, taking you to the Rivermark Hotel. Thanks for picking me up! Definitely. Start DRIVE Pilot. OK, starting DRIVE Pilot. What building is that there? That building is the San Jose Center for the Performing Arts. What shows are playing there? Cats is playing tonight. Can you get me two tickets for Saturday night? Yes, I can.
You have arrived at your destination. Please park the vehicle. OK, finding a parking spot. Hyperion 8 is the hardware architecture of our self-driving car and it’s what we build our entire DRIVE platform on. It consists of sensors, networks, two chauffeur AV computers, one concierge AI computer, a mission recorder, and safety and cybersecurity systems.
And it’s open. Hyperion 8 can achieve full self-driving with a 360-degree camera, radar, lidar, and ultrasonic sensor suite. Hyperion 8 will ship in Mercedes Benz cars starting in 2024, followed by Jaguar Land Rover in 2025.
Today, we are announcing Hyperion 9 for cars shipping starting in 2026. Hyperion 9 will have 14 cameras, 9 radars, 3 lidars, and 20 ultrasonics. Overall, Hyperion 9 will process twice the amount of sensor data compared to Hyperion 8, further enhancing safety and extending the operating domains of full self-driving. NVIDIA DRIVE Map is a multi-modal map engine and includes camera, radar, and lidar.
You can localize to each layer of the map independently, which provides diversity and redundancy for the highest level of safety. DRIVE Map has two map engines – ground-truth survey mapping and crowdsourced fleet mapping. By the end of 2024, we expect to map and create a digital twin of all major highways in North America, Western Europe, and Asia – about 500,000 kilometers. The map will be expanded and updated by millions of passenger cars.
We are building an earth-scale digital twin of our AV fleet to explore new algorithms and designs, and test software before deploying to the fleet. We are developing two methods to simulate scenarios – each reconstructs the world in different ways. One method starts from NVIDIA Drive Map, a multi-modal map engine that creates a highly accurate 3D representation of the world. The map is loaded into Omniverse.
Buildings, vegetation, and other roadside objects are generated. From previous drives, the dynamic objects, cars, and pedestrians, are inferred, localized, and placed into the digital twin. Each dynamic object can be animated or assigned an AI behavior model. Domain randomization can be applied to generate diverse and plausible challenging scenarios. A second approach uses Neural Graphics AI and Omniverse to transform a pre-recorded drive video into a reenactable and modifiable drive. We start by reconstructing the scene in 3D.
Dynamic objects are recognized and removed, and the background is restored. After scene reconstruction, we can change the behavior of existing vehicles or add fully controllable vehicles that respond realistically to traffic. The regenerated drive, with 3D geometry and physically based materials, allows us to properly re-illuminate the scene, apply physics, and simulate sensors, like lidar. The pre-recorded scene is now reenactable and can be used for closed-loop simulation and testing. DRIVE Map and DRIVE Sim, with AI breakthroughs from NVIDIA Research, showcase the power of the Omniverse digital twin to advance the development of autonomous vehicles.
NVIDIA DRIVE Map, DRIVE Sim, Hyperion 8 with Orin, and DRIVE AV stacks are available independently or together as a whole. Electric vehicles have forced a complete redesign of car architectures. Future cars will be highly programmable, evolving from many embedded controllers to highly centralized computers. The AI and AV functionalities will be delivered in software and enhanced for the life of the car. NVIDIA Orin has been enormously successful with companies building this future.
Orin is the ideal centralized AV and AI computer and is the engine of new-generation EVs, robotaxis, shuttles, and trucks. Orin started shipping this month. Today we are thrilled to announce that BYD, the second-largest EV maker globally, will adopt the DRIVE Orin computer for cars starting production in the first half of 2023.
NVIDIA’s self-driving car computer, software, and robotics AI is essentially the same computing pipeline as next-generation medical systems. Let me show you what Holoscan can do for an incredible instrument called a lightsheet microscope. Invented by Nobel laureate Eric Betzig, lightsheet microscopes use high-resolution fluorescence to create a movie of cells moving and dividing, giving researchers the ability to study biology in motion. The problem is that lightsheet microscopes produce 3TB of data per hour – the equivalent of 30 4K movies.
It takes up to a day to process the 3TB of data. With NVIDIA Clara Holoscan, we can process the data in real time. Now, with Clara Holoscan and NVIDIA IndeX, we can visualize the entire large volume of living cells in real time as the data is being recorded directly from the microscope. Watching these living cancer cells move about, we can see normal healthy biology and malignant processes at the same time. The fluorescent marker rendered in blue marks nuclei, which we see splitting to form two cells from one. A hallmark of cancer is cell division occurring more frequently and with less error-checking than in normal, healthy cells.
Using Berkeley's lattice light-sheet microscope, the ultra-high resolution allows scientists to see what is hidden from normal light optics – not visible using traditional microscopes. As we zoom in, watch the cancer cells display what is thought to be a rare event even for cancer cell lines – one cell splitting into three. This phenomenon has only been reported anecdotally in a couple of scientific publications. Scientists do not yet know what we will see – but this technique, enabled by real-time processing and visualization, now allows the scientific community to discover new, unseen events like this.
Let’s see what the future has in store. Clara Holoscan is an open, scalable robotics platform. Clara Holoscan is designed to the IEC-62304 medical-grade specification and for the highest device safety and security level. The amount of computation in Holoscan is insane. The core computer is Orin and CX7, with an optional GPU. Holoscan development platforms are available for early access customers today, general availability in May, and medical-grade readiness in Q1 2023.
Future medical devices will be AI instruments, assisting diagnostics or surgery. Just as NVIDIA DRIVE is a platform for robotic vehicles, Clara Holoscan is a platform for robotic medical instruments. We are delighted to see the enthusiasm around Holoscan and to partner with leading medical device makers and robotic surgery companies. The demand for robotics and automation is increasing exponentially.
Some robots move, and other robots watch things that move. NVIDIA is working with thousands of customers and developers, building robots for manufacturing, retail, healthcare, agriculture, construction, airports, and entire cities. NVIDIA’s robotics platforms consist of Metropolis and Isaac – Isaac is a platform for things that move. Metropolis is a stationary robot tracking moving things.
Metropolis and Isaac platforms, like DRIVE, consist of 4 pillars – ground truth generation, AI model training, the Omniverse digital twin, and the robot with its robotic software and computer. Metropolis has been a phenomenal success – it has been downloaded 300,000 times, has over 1,000 ecosystem partners, and operates in over a million facilities, including USPS and Walmart, cities including Tel Aviv and London, Heathrow Airport, Veolia recycling plants, and Gillette Stadium. And now, customers can use Omniverse to create digital twins of their facilities to drive better safety and efficiency. Let's take a look at how PepsiCo is using Metropolis and Omniverse. PepsiCo's products are enjoyed 1 billion times a day around the world. Getting this many products to their 200 regional markets requires over 600 distribution centers.
Improving the efficiency and environmental sustainability of their supply chain is a key goal for PepsiCo. To achieve this, they are building digital twins to simulate their packaging and distribution centers using NVIDIA Omniverse and Metropolis. This allows them to test variations in layout and optimize workflows to accelerate throughput before making any physical investments.
As new products and processes are introduced, Omniverse Replicator and NVIDIA TAO can be used to create photorealistic synthetic data to retrain the real-time AI models. These updated models and optimizations are then transferred to the physical world. From here, NVIDIA Metropolis applications monitor and adjust conveyor belt speed in real time using AI-enabled computer vision, helping prevent congestion and downtime across miles of conveyor belts. What's more, with NVIDIA Fleet Command, all of these applications can be securely deployed and managed across hundreds of distribution centers from one central control plane. By leveraging NVIDIA Omniverse, Metropolis, and Fleet Command, PepsiCo is streamlining supply chain operations, reducing energy usage, and advancing its mission toward sustainability.
One of the fastest-growing segments of robotics is AMRs – autonomous mobile robots – essentially driverless cars for indoors. The speed is lower, but the environment is highly unstructured. There are tens of millions of factories, stores, and restaurants, and hundreds of millions of square feet of warehouses and fulfillment centers. Today, we have a major release of Isaac – Isaac for AMRs.
Let me highlight some of the key elements of the release. Isaac for AMRs, like the DRIVE platform, has four major pillars, each individually available, and completely open. New NVIDIA DeepMap for Ground Truth generation, NVIDIA AI for training models, a reference AMR robot powered by Orin, new gems in the Isaac robot stack, and the new Isaac Sim on Omniverse. First, Isaac Nova, like DRIVE Hyperion, is a reference AMR robot system on which the entire Isaac stack is built.
Nova has 2 cameras, 2 lidars, 8 ultrasonics, and 4 fisheye cameras for teleoperation. We're announcing that Jetson Orin developer kits are available today. Nova AMR will be available in Q2. Nova AMRs can be outfitted with NVIDIA's new DeepMap lidar mapping system, so you can scan and reconstruct your environment for route planning and digital twin simulations. The Isaac robot SDK includes perception, localization, mapping, planning, and navigation modules. And today we're announcing major updates to build AMRs.
Isaac includes gems like object and person detection, 3D pose estimation, lidar and visual SLAM localization and mapping, 3D environment reconstruction, free-space perception, dolly docking using reinforcement learning, a navigation stack, integration with NVIDIA cuOpt for real-time planning, robotic arm motion planning and kinematics, and more. There is even an SDK for teleoperation. Finally, Omniverse is used to build Isaac Replicator for synthetic data generation, Isaac Gym to train robots, and Isaac Sim for digital twins. The Isaac development flow integrates Omniverse throughout. Isaac Gym highlights the importance of Omniverse's physics simulation accuracy. In Isaac Gym, a new robot learns a new skill by performing it thousands to millions of times using deep reinforcement learning.
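To give a feel for why massively parallel GPU simulation compresses years of experience into days, here is a toy, batched physics step in PyTorch: thousands of pendulum-like environments advance in a single tensor operation. This is my illustration of the pattern, not Isaac Gym code.

```python
import torch

num_envs, dt, g = 4096, 0.01, 9.8
state = torch.zeros(num_envs, 2)   # columns: angle, angular velocity
action = torch.randn(num_envs)     # torque from a (here, random) policy

# One simulation step for all 4096 environments at once:
angle, vel = state[:, 0], state[:, 1]
vel = vel + dt * (action - g * torch.sin(angle))
angle = angle + dt * vel
state = torch.stack([angle, vel], dim=1)
```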
The trained AI brain is then downloaded into the physical robot. And since Omniverse is physically accurate, the robot, after getting its bearings, should adopt the skills of its digital twin. Let’s take a look. Successful development, training, and testing of complex robots for real-world applications demand high-fidelity simulation and accurate physics. Built on NVIDIA's Omniverse platform, Isaac Sim combines immersive, physically accurate, photorealistic environments with complex virtual robots.
Let's look at three very different AI-based robots being developed by our partners using Isaac Sim. Fraunhofer IML, a technology leader in logistics, uses NVIDIA Isaac Sim for the virtual development of Obelix – a highly dynamic indoor/outdoor autonomous mobile robot, or AMR. After importing over 5,400 parts from CAD and rigging them with Omniverse PhysX, the virtual robot moves just as deftly in simulation as it does in the real world. This not only accelerates virtual development, but also enables scaling to larger scenarios. Next, Festo, well known for industrial automation, uses Isaac Sim to develop intelligent skills for collaborative robots, or cobots, which require acute awareness of their environment, human partners, and tasks. Festo uses Cortex, an Isaac Sim tool that dramatically simplifies programming cobot skills.
For perception, the AI models used in this task were trained using only synthetic data generated by Isaac Replicator. Finally, there is ANYmal, a robot dog developed by a leading robotics research group from ETH Zurich and Swiss-Mile. Using end-to-end GPU-accelerated reinforcement learning, ANYmal, whose feet were replaced with wheels, learned to 'walk' over urban terrain within minutes rather than weeks using NVIDIA's Isaac Gym training tool. The locomotion policy was verified in Isaac Sim and deployed on a real ANYmal. This is a compelling demonstration of simulator training for real-world deployment.
From training perception and policies to hardware-in-the-loop, Isaac Sim is the tool to build AI-based robots that are born in simulation to work and play in the real world. Modern fulfillment centers are evolving into technical marvels – facilities operated by humans and robots working together. The warehouse is also a robot, orchestrating the flow of materials and the route plans of the AMRs inside. Let's look at how Amazon uses an Omniverse digital twin to design and optimize its incredible fulfillment center operations.
Every day, hundreds of Amazon's facilities handle tens of millions of packages, with more than two-thirds of these customer orders handled by robots. To support this highly complex operation, we deployed hundreds of thousands of mobile drive robots and associated storage pods, which allow us to store far more inventory in our buildings than traditional shelving, and which help us move inventory in a safer, more efficient way. Key to the scaling has been our ability to simulate