NVIDIA Keynote at COMPUTEX 2022
Once upon a time, it was all fun and games. NVIDIA started off by making chips for video game machines. Graphics became serious business when people started using NVIDIA for blockbuster movies, medical imaging devices, and the world’s most powerful supercomputers. And then one day, researchers discovered that our technology was perfect for AI. Today, NVIDIA is the engine of AI. Engineering the most advanced chips and systems and the software that makes them sing.
So robots can lend us a hand… Cars can drive themselves… And even the Earth can have a digital twin. We live to tackle the world’s biggest challenges. And don’t worry. We still love our fun and games. Welcome to Computex! Our close partnership with the innovators in Taiwan is foundational to our work. We have a lot to cover today, across the breadth of our company – gaming, robotics, data center, and, of course, AI.
Let's start there. AI is transforming large markets, and every day we work closely with our partners to help bring new AIs to life. We collaborate on the systems, the physical and software infrastructure, the AI frameworks and AI applications – a platform that is continuously growing. Data centers themselves are transforming into AI factories as companies manufacture intelligence to process every engagement, every product, every recommendation to deliver great customer experiences.
This transformation requires us to reimagine the data center at every level, from hardware to software, from chips to infrastructure to systems themselves. This data center transformation will drive massive business opportunities for our partners in Taiwan. AI factories for training and inference, combined with the continuously growing need for traditional high-performance computing, comprise a $150Bn market for our ecosystem partners.
In addition, AI is enabling the new market of digital twins, where we can reproduce, virtually, the complex products the world wants to build, and extend all the way to a digital twin of the world itself to study climate change. Digital twins open up a new $150Bn market. A third workload is also emerging: cloud-based gaming. Streaming games from the cloud to all devices expands the data center market by another $100Bn. Combined, these workloads represent a half-trillion-dollar market opportunity that our partners can tap into by leveraging our open platform.
There are 4 key elements of this data center transformation that require us to reimagine these modern AI factories. First, we need 3 processors working in harmony to handle different aspects of these massive new workloads: CPUs, GPUs, and DPUs. CPUs manage the overall system, while GPUs are the workhorses that perform the computations, and DPUs handle network traffic securely and perform the in-network computing to optimize performance. Of course, as training scales up, faster interconnects are necessary to allow the compute to scale. These massive workloads consume exaFLOPs of compute, so infrastructure that excels at energy efficiency ensures sustainability in AI factories.
NVIDIA GPUs are so efficient at compute that if all the world's current AI, HPC and data analytics workloads were running on GPU servers, we estimate we'd save over 12 trillion watt hours of electricity per year! That’s the equivalent of taking 2M cars off the road annually! Getting AI running in production requires both tools to manage development workflows and tools to run the models for inference. But it also requires tools to manage and deploy AI models among fleets of servers distributed across central and edge data centers. This requires robust software to run AI factory operations 24/7. Powering these modern AI factories requires end-to-end innovation at every level.
These include Hopper GPUs, Grace CPUs, and BlueField DPUs as building blocks networked together by Quantum and Spectrum switches. All of these combine to deliver the infrastructure of the data center of the future that handles these massive workloads. Finally, getting all of this to run seamlessly requires NVIDIA AI Enterprise software, which delivers robust 24/7 AI deployment. At every scale, NVIDIA has worked on this entire canvas of innovation so that our partners can take advantage of our open platform to deliver state-of-the-art servers and racks for the modern AI data centers. Earlier this year, NVIDIA announced the H100 GPU, the most advanced chip ever built.
It's an order of magnitude leap in performance over A100. Built with a custom TSMC 4 nanometer process, it features 6 groundbreaking inventions. A faster, more powerful Tensor Core—6X faster than its predecessor, Ampere.
It's built to accelerate Transformer networks, the most important deep learning model today. The 2nd-generation multi-instance GPU partitions the GPU into smaller compute units that allow CSPs to divide each H100 into 7 separate instances. This greatly boosts the number of GPU clients available to data center users. Confidential computing allows customers to keep data secure while being processed, maintaining privacy and integrity from end to end on shared computing resources.
Our 4th-generation NVLink allows GPUs to communicate faster than ever before, at 900 GB/s bandwidth between server nodes, and scaling up to 256 GPUs to solve these massive workloads in AI factories of the future. The new DPX instructions speed up dynamic-programming and recursive optimization problems, like gene sequencing, protein folding, and route optimization—up to 40X faster. For the largest-scale AI factories that need to solve workloads like conversational AI agents and planet-scale digital twins, NVLink now scales across servers. The new NVLink Switches allow up to 256 GPUs or 32 HGX servers to communicate at NVLink speeds. This switch and interconnect network forms the NVLink Switch System. NVLink Switch System allows up to 20.5TB of HBM3 memory
at an incredible 768 TB/s of memory bandwidth. This delivers an exaFLOP of compute in a single pod— truly we have reimagined the data center. The performance boost over Ampere is incredible. Training the latest Transformer-based models, the combined benefits of Hopper's raw horsepower, the new Transformer engine with FP8 Tensor Core, NVLink with SHARP in-network computing, and NVLink Switch, as well as the latest Quantum-2 InfiniBand, results in a 9X speed-up! Weeks turn into days. For inferencing, the benefit is even greater.
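The DPX instructions target dynamic-programming recurrences in which each cell is a min or max over a few neighboring cells plus a constant. A minimal CPU-side sketch of one such recurrence, the Levenshtein edit distance used in sequence-alignment-style workloads (plain Python for illustration, not the GPU implementation):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming recurrence: each cell depends on three
    neighbors via min/add steps -- the access pattern DPX accelerates."""
    prev = list(range(len(b) + 1))            # distances for the empty prefix of `a`
    for i, ca in enumerate(a, start=1):
        curr = [i]                             # cost of deleting i chars of `a`
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,       # deletion
                            curr[j - 1] + 1,   # insertion
                            prev[j - 1] + cost))  # match / substitution
        prev = curr
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # 3
```

On the GPU, thousands of such cells are evaluated in parallel per wavefront; the min-plus inner step is what the dedicated instructions speed up.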
H100 throughput is up to 30X higher over A100. It's the most significant leap we've ever delivered. NVIDIA H100, the new engine of the world's AI infrastructure. Our Taiwan partners have helped accelerate the world's servers. Now, they're designing the world's next generation data centers with NVIDIA accelerated computing.
Let's take a closer look at the chips in our open platform that deliver this incredible acceleration. Here's Brian Kelleher to tell us more. Thanks Ian.
NVIDIA has been built on a consistent flow of innovations and technology, delivered through a new GPU architecture family every two years. NVIDIA focuses on inventions that solve new challenges and open new markets. Our work in accelerated computing, AI and machine learning, edge computing, and Omniverse are examples of how NVIDIA and our partners have created unserved new markets. We’ve established a reputation for unmatched roadmap execution, designing the world’s most complex silicon with an expectation that the first silicon out of fab will go directly to production.
That dependability helps align our company, and just as importantly, helps align our partners through trust in our roadmap. Our new data center roadmap includes three chips: CPU, GPU, and DPU. We’ll extend our execution excellence and give each chip architecture a two-year rhythm.
One year will focus on x86 platforms. One year will focus on Arm platforms. Every year you will see exciting new products from us. The NVIDIA architecture and platforms will support x86 and Arm – whatever customers and markets prefer. 3 Chips; Yearly Leaps; One Architecture. Let’s dive into the latest announcements and discuss the impacts to our partners’ servers… Grace is our first data center CPU.
Grace is built for AI workloads that have emerged only in the past few years. Grace is built to power AI factories – a new breed of data centers. Grace is on track to ship next year and will be available in two form factors. Grace-Hopper, shown on the left, is a single superchip module with a direct chip-to-chip connection between the Grace CPU and the Hopper GPU. The CPU and GPU communicate over NVLink-C2C – a low-power memory-coherent interconnect at 900GB/s! Grace will transfer data to Hopper 15X faster than any other CPU can and will increase the working data size of Hopper to up to 2 TB. Grace-Hopper is built to accelerate the largest AI, HPC, cloud and hyperscale workloads.
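That 15X figure is roughly consistent with simple bandwidth arithmetic; a quick sketch, assuming ~64 GB/s for a PCIe Gen5 x16 link as the comparison point (the PCIe figure is an assumption for illustration, not from the keynote):

```python
# Rough transfer-time comparison: NVLink-C2C vs. an assumed PCIe Gen5 x16 link.
NVLINK_C2C_GBPS = 900     # keynote figure for Grace-Hopper chip-to-chip link
PCIE5_X16_GBPS = 64       # assumption: approximate PCIe Gen5 x16 throughput

working_set_gb = 2048     # Hopper's working set extended to 2 TB via Grace

t_nvlink = working_set_gb / NVLINK_C2C_GBPS   # seconds to stream 2 TB
t_pcie = working_set_gb / PCIE5_X16_GBPS
print(f"NVLink-C2C: {t_nvlink:.1f}s, PCIe Gen5 x16: {t_pcie:.1f}s, "
      f"ratio ~{t_pcie / t_nvlink:.0f}x")
```

Under these assumptions the ratio works out to roughly 14x, in line with the "15X faster than any other CPU" claim.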
We’ll also offer Grace in a superchip made of two Grace CPU chips connected coherently over NVLink-C2C. The Grace superchip has 144 CPU cores, an incredible 1TB/s of memory bandwidth, and twice the energy efficiency of existing servers— the entire module including 1TB of memory consumes only 500W. Grace is based on Arm, the world’s most popular CPU architecture, and rapidly growing in hyperscale cloud and edge computing. Grace will be amazing at AI, data analytics, scientific computing, and hyperscale computing. And, of course, the full suite of NVIDIA software platforms will run on Grace.
The enabler for Grace-Hopper and Grace Superchip is the ultra-energy-efficient, high-speed, memory coherent, NVLink chip-to-chip interconnect. Future NVIDIA chips – CPUs, GPUs, DPUs, NICs, and SOCs – will integrate NVLink just like Grace and Hopper. Our SERDES technology is world-class, with expertise established over decades of designing high-speed memory interfaces, NVLinks, and networking switches.
NVIDIA is making NVLink and our SERDES open to customers and partners who want to implement custom chips that connect to NVIDIA's platforms. In addition to NVLink-C2C, NVIDIA will also support the developing UCIe standard announced earlier this year. With NVLink that scales from die-to-die, chip-to-chip, and system-to-system, we can configure Grace and Hopper to address a large diversity of workloads.
We can create systems that range from Grace CPU-only to accelerated with up to eight Hopper GPUs. The composability of Grace and Hopper’s NVLink gives us a vast number of ways to open new markets and address customers’ diverse computing needs. Here’s Ying Yin to tell us more about these cutting edge systems.
Thank you, Brian. Advanced technology needs world-class partners. Collaborating with the world’s best system makers, we have created a broad array of data center solutions. Together, we offer hundreds of configurations of x86 and Arm systems to power the world’s need for HPC and AI, and we are preparing new systems for Hopper and BlueField. These systems are open for all partners to expand their markets by leveraging our ecosystem. Today, we’re introducing Grace-based reference designs for the massive new workloads of reimagined data centers: CGX for cloud gaming, OVX for digital twins and Omniverse, HGX for HPC and supercomputing, and, last but not least, a new HGX architecture for AI. It features our most powerful AI training and inference platform using the Grace-Hopper CPU-GPU superchip module.
All of these servers are optimized for NVIDIA accelerated computing software stacks, and can be qualified as part of our NVIDIA-Certified Systems lineup. Let’s take a closer look at two of the HGX systems. Today, we are announcing HGX Grace and HGX Grace Hopper systems.
NVIDIA will provide the Grace Hopper and Grace CPU Superchip modules as well as their corresponding PCB reference designs. Both are specifically designed for OEM 2U high-density server chassis. Our partners can modify the reference design to quickly spin up the motherboard, leveraging their existing system architectures.
Since OEMs already widely use the 2U high-density chassis, they can easily repurpose it to build Grace-based servers. We’re pleased to announce these OEM hardware titans who will be part of the first wave of providers. The Grace systems will start shipping in the first half of 2023. Let’s now talk about how we connect these servers to build racks and clusters with NVIDIA networking.
Here’s Michael Kagan to tell us more. Thank you Ying Yin. The wave of AI sweeping the world is accelerating the demand for computing and creating new services that further demand even more data processing power. Data centers are becoming AI factories. This is the new unit of computing based on software defined infrastructure, delivering software defined services.
It is a single harmonic computing engine, delivering millions of services to billions of users. NVIDIA networking solutions are based on three key components. BlueField DPU, the data processing unit that connects compute nodes to the data center network, the InfiniBand Quantum switch, and the Ethernet Spectrum switch.
The NVIDIA BlueField DPU is the computing platform running the data center operating system. BlueField offloads and accelerates networking and storage services, presenting virtual infrastructure at native bare-metal performance. BlueField is an essential part of cross-tenant performance and security isolation at data center scale. Its operation is optimized automatically through built-in AI accelerators. A few months back, we introduced BlueField-3, the 400-gigabit DPU.
BlueField-3 integrates high-performance compute cores, exposing fully programmable data-path acceleration for network, security, and storage. BlueField is a zero-trust infrastructure computing platform with hardware-based platform attestation and transparent “always on” data encryption. DOCA is the infrastructure framework for the cloud-native data center. It simplifies development of networking, storage, security, and infrastructure management services. DOCA is designed to host certified third-party infrastructure services.
The NVIDIA InfiniBand Quantum network platform is designed for AI and HPC workloads. It is the foundation of cloud-native supercomputers that deliver bare metal performance with the convenience of cloud usage. The NVIDIA Ethernet Spectrum networking platform is the fastest and most efficient Ethernet platform for the enterprise cloud data centers. NVIDIA maintains a strict cadence introducing a new generation networking platform every other year. This year, we introduced the 400 gigabit end-to-end networking platform, the fastest end-to-end network solution in the world. The BlueField DPU along with the Quantum and Spectrum networking switches comprise the infrastructure platform for the AI factory of the future.
Thanks Michael. Solving challenges with AI requires a full-stack solution and NVIDIA provides the software that brings these hardware innovations to life. NVIDIA AI Enterprise is a suite of software to power the end-to-end workflows of AI and data science. From data preparation and analytics with RAPIDS, to model training with TensorFlow, to real-time inferencing with Triton. You can think of this software as the operating system of AI. This software is fully supported by NVIDIA to run on leading enterprise platforms from cloud to data center to edge to help enterprises and organizations start AI projects and keep them on track.
On top of this core software, NVIDIA has created frameworks to help solve specific challenges. For example, Riva is a framework for speech AI, Merlin for recommender systems, and Metropolis for vision AI. NVIDIA AI Enterprise is available through our partners around the world.
When it comes to reimagining the data center, NVIDIA has the complete, open platform of hardware and software to build the AI factories of the future. NVIDIA is Taiwan's partner to scale this technology portfolio into products, to move from servers to racks to data centers that manufacture intelligence. We've talked about AI factories. Now let's talk about the next wave of AI: robotics. Here's Deepu Talla to tell you more. We are entering the age of robotics— autonomous machines that are keenly aware of their environment and that can make smart decisions about their actions. This drive towards automation makes robotics a major new application for AI.
Across industries such as manufacturing, retail, agriculture, logistics & warehouses, delivery, and healthcare, we see a clear demand for automation. Robots of all forms and sizes, with wheels, with arms, with legs, with wings, on the ground, in the air, under water, and even robots that are stationary watching other things that move and providing outside-in perception are increasingly being deployed. NVIDIA Isaac is our robotics platform. It has four pillars.
The first pillar is about creating the AI, a very time-consuming and difficult process that we are making fast and easy. The second is simulating the operation of the robot in the virtual world before it is tried in the real world. It is far safer, cheaper, and faster for robots to be born in the virtual world before existing in the physical world.
The third pillar is building the physical robots. We will show the tools that help bring real robots to market. Last, the fourth pillar is about managing the fleet of deployed robots over their lifetimes, typically many years if not more than a decade. Now, let’s double-click into how we are simplifying the creation of AI for robotics. Training AI models requires data—lots of data.
Capturing real-world data and human labeling is necessary but not sufficient. With synthetic data generation, or SDG, corner cases can be added and model development bootstrapped. Image attributes such as lighting, textures, and colors can be randomized to ensure diversity in the dataset.
And the SDG tool delivers the datasets with perfectly labeled data. Augmenting the real training dataset with synthetic data is increasingly being used to improve accuracy and also reduce time to create or update an AI model. Using the NVIDIA Omniverse platform, we have created Isaac Replicator for SDG in robotics applications. Starting with a good AI model dictates how fast the model can be adapted to a particular use case and target device. NVIDIA pre-trained models vastly speed-up the model creation time. Several of our customers have reported up to 10X improvement.
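Isaac Replicator itself is built on Omniverse; purely as a hypothetical sketch of the domain-randomization idea (the names and attributes below are illustrative, not the Replicator API), randomized scenes with perfect labels might be generated like this:

```python
import random

# Hypothetical sketch of domain randomization for synthetic data generation:
# each sample varies lighting, texture, and color, and carries its label for free.
LIGHTING = ["overhead", "side", "dim", "bright"]
TEXTURES = ["matte", "glossy", "brushed-metal"]

def make_sample(object_class: str, rng: random.Random) -> dict:
    """One synthetic training sample: a randomized scene plus a perfect label."""
    return {
        "label": object_class,                     # known by construction, no human labeling
        "lighting": rng.choice(LIGHTING),
        "texture": rng.choice(TEXTURES),
        "color_rgb": [rng.random() for _ in range(3)],
    }

rng = random.Random(0)  # seeded so the dataset is reproducible
dataset = [make_sample("pallet", rng) for _ in range(1000)]
print(len(dataset), dataset[0]["label"])  # 1000 pallet
```

The point of the randomization is dataset diversity: the model sees the same object class under many lighting, texture, and color conditions, which is how corner cases get covered.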
Imagine hiring an engineer for a critical job. You can hire a high school student and train them for a few years starting from the basics, or you can select an experienced engineer and train them in the job’s specific expertise in a few weeks. NVIDIA pre-trained models are essentially experienced engineers and are available on NGC for download. The NVIDIA TAO toolkit allows you to fill in the gaps. TAO stands for Train, Adapt, and Optimize. You take a great engineer and then delta-train them for your specific environment.
You can take any model, whether it’s an NVIDIA pre-trained model or your own model, and use this toolkit to create AI models optimized for both accuracy and performance. Every month, we are seeing several thousands of downloads of our pre-trained models and TAO toolkit. Synthetic data generation plus pre-trained models and TAO greatly simplifies AI model creation. Make sure you try these out if you are building AI for robotics.
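TAO is driven by its own toolkit and spec files; the delta-training idea behind it, keeping the pretrained backbone frozen and adapting only a small task-specific head, can be sketched framework-free (toy model and numbers, illustration only):

```python
# Framework-free sketch of "delta-training": the pretrained backbone is frozen,
# and only a small task-specific head is fitted to the new data.
def backbone(x: float) -> float:
    return 2.0 * x + 1.0          # stands in for a frozen pretrained feature extractor

def train_head(data, lr=0.01, steps=2000):
    w, b = 0.0, 0.0               # the only parameters we adapt
    for _ in range(steps):
        for x, y in data:
            f = backbone(x)                    # frozen features, never updated
            err = (w * f + b) - y              # head prediction error
            w -= lr * err * f                  # gradient step on the head only
            b -= lr * err
    return w, b

# Toy adaptation target: y = 3 * backbone(x) - 2
data = [(x, 3.0 * backbone(x) - 2.0) for x in [-1.0, 0.0, 0.5, 1.0, 2.0]]
w, b = train_head(data)
print(round(w, 2), round(b, 2))
```

Because only the head is trained, far less data and compute are needed than training the whole model from scratch, which is the source of the speed-up described above.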
Now let's talk about the second pillar. Robotics is not easy, particularly building and testing a physical robot. Imagine you want to build a robot arm and it weighs 500 pounds. And it’s made of solid metal, capable of manipulation and gripping.
And it needs to work with a human in a manufacturing plant. I certainly do not want to be that human crash test dummy. Until it has been tested a million, billion times, I'm not going to be comfortable operating beside it. I don’t mind being the human in the virtual world working with the robot in a simulation. We can simulate thousands of robots in parallel.
Simulation makes it safe, cheap, and fast. NVIDIA Omniverse brings together high-fidelity graphics and accurate physics to become a platform to simulate the real-world— an environment for creating digital twins. Using NVIDIA Omniverse, we have built Isaac Sim for robotics. With Isaac Sim, various robot 3D models can be imported. All the robot sensors can also be imported.
Isaac Sim can be interfaced with the ROS ecosystem. Several companies are using Isaac Sim for simulating both navigation and manipulation. A key focus of Isaac Sim is closing the Sim2Real gap, so that the simulation closely matches what happens in the real world. Today, we’re announcing the Isaac Sim 2022.1 release, which introduces new features that make it the simulator for the age of AI robotics. Features include a new tool called Cortex, which makes it easy to program cobots, like the large robot arm I mentioned earlier.
We have also added Isaac Gym, which allows reinforcement learning to be leveraged to train robot control policies in minutes, as opposed to days. Along with the SDG capabilities of Replicator mentioned earlier, these new tools bring the power of NVIDIA AI to robotics simulation. Now let's take a look at a demo of Isaac Sim in action.
Successful development, training, and testing of complex robots for real-world applications demand high-fidelity simulation and accurate physics. Built on NVIDIA's Omniverse platform, Isaac Sim combines immersive, physically accurate, photorealistic environments with complex virtual robots. Let’s look at three very different AI-based robots being developed by our partners using Isaac Sim. Fraunhofer IML, a technology leader in logistics, uses NVIDIA Isaac Sim for the virtual development of Obelix—a highly dynamic indoor/outdoor Autonomous Mobile Robot, or AMR.
After importing over 5400 parts from CAD and rigging with Omniverse PhysX, the virtual robot moves just as deftly in simulation as it does in the real world. This not only accelerates virtual development but also enables scaling to larger scenarios. Next, Festo, well known for industrial automation, uses Isaac Sim to develop intelligent skills for collaborative robots, or cobots, requiring acute awareness of their environment, human partners and tasks. Festo uses Cortex, an Isaac Sim tool that dramatically simplifies programming cobot skills.
For perception, AI models used in this task were trained using only synthetic data generated by Isaac Replicator. Finally, there is ANYmal, a robot dog developed by a leading robotics research group from ETH Zurich and Swiss-Mile. Using end-to-end GPU accelerated Reinforcement Learning, ANYmal, whose feet were replaced with wheels, learned to 'walk' over urban terrain within minutes rather than weeks using NVIDIA's Isaac Gym training tool. The locomotion policy was verified in Isaac Sim and deployed on a real ANYmal.
This is a compelling demonstration of simulator training for real-world deployment. From training perception and policies to hardware-in-the-loop, Isaac Sim is the tool to build AI-based robots that are born in simulation to work and play in the real world. So we talked about the first two pillars of the NVIDIA robotics platform. Now let’s switch gears and talk about building real-world physical robots and deploying them. NVIDIA Jetson has become the de facto AI platform for edge and robotics applications. Jetson has over 1M developers, and over 6000 companies are using Jetson in production.
With over 150 partners, ranging from system builders to application software companies, the breadth of edge AI and robotics products being deployed continues to grow. Sharing the same architecture as NVIDIA data center platforms allows us to run the latest and greatest AI cloud-native software on physical robots. We call this JetPack SDK. DeepStream SDK is being downloaded more than 10K times every month and accelerates vision AI applications. Riva, our conversational AI SDK, is now available on Jetson.
We’ve been working with Open Robotics to accelerate ROS on GPUs. We call it Isaac ROS. In addition, we have several robotics algorithms available; we call them Isaac GEMS. There has never been a better time to build robots, with all of the NVIDIA Isaac software tools now available. And we continue to add to and improve the software stack. NVIDIA Orin has set a new bar for edge AI, as evidenced by the recent MLPerf results—up to 5X measured performance over the previous generation, Xavier, while maintaining 100% software and form-factor compatibility.
Powered by the Ampere Tensor Core GPU and twelve Arm Cortex-A78AE CPU cores, it delivers up to 275 trillion operations per second (TOPS). Basically a server in the palm of your hand. The Jetson AGX Orin developer kit is available now at distributors worldwide. Production modules starting at $399 will be available starting in July. The Orin NX module is just 70mm x 45mm for the full computer: CPU, GPU, networking, memory, and power management.
Production systems from partners with Jetson Orin are available now. Many partners are announcing their products this week at Computex; more than 10 of these partners are in Taiwan. These systems come in various form factors tailored toward specific industries and use cases: fanless or with a fan, with varying degrees of I/O, ruggedized or commercial, and other such options.
Autonomous mobile robots are one of the fastest growing segments of robotics due to the growth of e-commerce, supply chain challenges, and a shortage of labor. In addition to warehouses, AMRs are being deployed in hospitals, retail stores, factories, campuses, and airports. The technology challenge is that AMRs operate in highly unstructured environments, even though they move slowly. To accelerate the development of AMRs, we created Nova Orin. Nova Orin is a reference design for state-of-the-art compute and sensors for AMRs. It consists of two Jetson AGX Orin modules and supports multiple sensors such as two stereo cameras, four wide-angle cameras, two 2D lidars, one 3D lidar, and up to eight ultrasonic sensors.
The reference architecture will be available later this year. The software stack on Nova Orin will consist of the navigation stack along with additional NVIDIA software application frameworks, such as DeepMap, cuOpt, and Metropolis. DeepMap provides an accelerated framework for 3D map creation, deployment, and dynamic updates of the deployment space. cuOpt provides accurate, dynamic route planning and scales out to hundreds if not thousands of AMRs in a single large warehouse or factory. Metropolis brings outside-in perception and situational awareness, while the AMR itself can only do inside-out perception. And finally, NVIDIA Fleet Command provides secure fleet management capabilities.
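cuOpt's solvers are far more sophisticated, but the routing problem it addresses can be illustrated with a minimal nearest-neighbor heuristic (toy coordinates, single vehicle, illustration only):

```python
import math

def nearest_neighbor_route(points):
    """Greedy routing heuristic: from each stop, visit the closest unvisited one.
    Real solvers (like cuOpt) handle fleets, time windows, and dynamic updates."""
    unvisited = list(range(1, len(points)))
    route = [0]                               # start at the depot, index 0
    while unvisited:
        last = points[route[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        route.append(nxt)
        unvisited.remove(nxt)
    return route

stops = [(0, 0), (5, 5), (1, 0), (0, 1), (6, 5)]   # depot plus four pick points
print(nearest_neighbor_route(stops))  # [0, 2, 3, 1, 4]
```

Dynamic replanning means rerunning an optimization like this every time orders, obstacles, or robot availability change, which is why scaling it to thousands of AMRs calls for acceleration.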
Today we talked about the NVIDIA robotics platform. The four pillars: AI training, simulation, building physical robots, and deploying robots. This is possible because we leverage NVIDIA investments in AI, high performance computing, and graphics from the bottom to top of the stack, starting with hardware systems and system software, to AI and Omniverse platforms, and finally to domain-specific application frameworks.
This is the industry's most comprehensive end-to-end robotics platform and we continue to invest in it. Happy roboting! And one more thing. DRIVE Hyperion is the computing and sensing architecture of our self-driving car.
It is central to our entire AV platform. It consists of sensors, networks, Chauffeur AV computers, a Concierge AI computer, a mission recorder, and safety and cybersecurity systems. Hyperion is designed for a full self-driving car with a 360-degree camera, radar, lidar, and ultrasonic sensor suite. Importantly, it's open for the entire industry, enabling them to build all types of vehicles.
This is a key reason why Hyperion is being adopted all over the world. Hyperion version 8 will ship in all new Mercedes-Benz vehicles starting in 2024, followed by Jaguar and Land Rover cars and SUVs in 2025. This platform evolves and has a great future roadmap. At our most recent GTC we announced Hyperion 9 for cars shipping starting in 2026. Hyperion 9 will have 14 cameras, 9 radars, 3 lidars, and 20 ultrasonic sensors.
It will process twice the amount of sensor data compared to Hyperion 8, further enhancing safety and extending the operating domains of full self-driving. Today we are excited to announce Foxconn, Quanta Computer and Desay as our newest DRIVE Hyperion supplier partners to help OEMs scale this breakthrough technology into production. Now, let’s talk about where it all began, NVIDIA Gaming. Here’s Jeff Fisher. Hi everyone. This is our 3rd virtual Computex, and like you, I am eager to get back in person to see all our great friends and partners. Taiwan is the birthplace of the PC ecosystem and the spirit of Computex is to celebrate the incredible journey that built this $500B industry.
NVIDIA's journey started here as well, and together with our amazing partners, we are delighting and empowering hundreds of millions of gamers and creators. Over the past 20 years, we launched multiple generations of gaming GPU architectures, each pushing the industry forward. Our latest, NVIDIA RTX, introduced real-time ray tracing and AI, once again reinventing graphics. We turned Max-Q into a game-changing approach to laptop system design. Beefy transportable desktops have transformed into thin, portable powerhouses.
And we looked beyond the PC into gaming monitors, showing the world buttery smooth gaming with NVIDIA G-SYNC and setting the standard for image quality. And let’s consider the market we have built together. PC Game Hardware is projected to be a $67B market this year and to grow double digits over each of the next several years.
100M new PC gamers were added to our ranks in just the past two years. 80M creators and broadcasters are fueling an economy of $100B. 920M people will watch game live streaming this year, up 1.5X in the last 3 years, and over a half a billion people will watch esports. To our partners, thank you again for the commitment and passion to deliver amazing products that inspire gamers and creators, year after year.
NVIDIA remains dedicated to building the ultimate platform for gamers and creators. At its heart is GeForce RTX, powered by our Ampere architecture. With 28 billion transistors, 40 shader TFLOPS, 78 RT TFLOPS, and 320 Tensor TFLOPS, it is the world's fastest GPU. Ampere features 2nd-generation RT Cores for real-time ray-traced cinematic graphics, and 3rd-generation Tensor Cores, which power NVIDIA DLSS, our groundbreaking AI rendering technology. For competitive gamers, we invented NVIDIA Reflex, providing the lowest latency and best responsiveness. For game live streamers, the RTX platform includes NVIDIA's advanced video encoder, engineered to deliver the highest quality video stream alongside maximum game performance. RTX is also your AI-powered home studio.
Our Broadcast app leverages Ampere's dedicated Tensor Cores to turn any room into a live streaming station. For digital artists, we built NVIDIA Studio, an RTX-powered platform that includes dozens of SDKs and accelerates the top creative apps and tools, including NVIDIA Omniverse. Finally, the RTX platform is constantly optimized and improved with Game Ready and Studio Drivers.
Our customers depend on GeForce to just work across every system configuration and thousands of games and apps, out of the box, every time. Our drivers are an invisible force that makes our platform like none other. We are so proud of this effort, I invited Thiru to tell you more. At NVIDIA, we want gamers and creators to get the best gaming and app experience on day zero. We work closely with developers to ensure our drivers deliver the best possible performance and the reliability you count on to game more and create faster.
For many years, we have been working very closely with the NVIDIA team. It's like a safety net of constant cooperation, making sure that the drivers support the latest updates and optimizations on both ends, and that secures the gameplay experience for all the players. If you really want to see the game at its best, the way we actually imagined it, the way we designed it, you need to get the Game Ready Drivers. Our engine is so purpose-built and finely tuned to give you the best experience playing Doom Eternal, and the driver team at NVIDIA is almost like part of our team, so we can achieve the vision that we have.
NVIDIA Studio Drivers provide our artists, creators, and 3D developers the best performance and reliability. As I always say, it's very hard math to get these things to work, and the companies are working together to ensure that we're doing the math, so our creators don't have to. Our mission with drivers is to be invisible, so gamers can just play and creators can just create—faster than ever. RTX momentum continues to build.
The cinematic look of Ray Tracing and performance-boosting AI are defining the next generation of content. Now there are over 250 RTX games & applications, doubling since last Computex. And GeForce Gamers continue to upgrade, with over 30% now on RTX and logging over 1.5 billion hours of playtime with RTX ON.
Agent 47 returned in HITMAN III, the dramatic conclusion of IO Interactive's HITMAN trilogy, which has sold over 50 million copies. I'm happy to announce that today, the most successful game of the franchise is getting a big update with ray-traced reflections, shadows, and NVIDIA DLSS. Let's take a look at HITMAN III with RTX ON. The popularity of Formula 1 racing continues to grow, as does Codemasters' F1 racing game. I'm excited to announce that the next season, F1 22, will launch July 1st with RTX ON.
Gamers will feel even more in the driver's seat with the cinematics of ray tracing and the performance of DLSS. HITMAN III and F1 22 add to the RTX momentum and join a number of new games that will be turning RTX ON. Now let's talk about NVIDIA Reflex. Immersive gameplay requires low latency. It connects your mind to the game. It's also critical for competitive gaming.
It seems obvious that low system latency helps all gamers, not just the pros. But by how much? To measure this, we recently conducted the System Latency Challenge, the largest study of its type. Partnering with KovaaK's, the popular aim trainer, we collected data from 20,000 gamers, measuring aim accuracy across a range of system latencies, from 25 ms to 85 ms. What we found was interesting. Highly skilled gamers, the top 25%, hit 2X the number of targets at low latency.
And the least skilled gamers, the bottom 25%, increased their shot accuracy by 2.5X. Here is Tion to tell you more about Reflex. Gaming performance isn't just about FPS; it also involves latency. We created NVIDIA Reflex to reduce system latency. By integrating Reflex directly into games, we cut latency in half. Over 35 games have adopted Reflex, including 8 of the top 10 shooters.
Now, over 20 million gamers play with Reflex on each month. With NVIDIA Reflex, the game and the graphics driver coordinate to dynamically reduce system latency. Gamers should definitely turn on Reflex. It's one of the first settings I turn on when I'm playing a game that supports it. And in Valorant, we turn it on by default on hardware that supports it.
When you think about it, targeting 60 frames per second, every frame is only 16.6 milliseconds long. On PC, competitive players are trying to get higher and higher frame rates, 100 frames a second, 200 frames a second. When you get to those numbers, frames happen in just a handful of milliseconds. So being able to reduce system latency by a few milliseconds, that's a really big impact for players.
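The frame-time arithmetic above is easy to check for yourself. This is a minimal illustrative sketch (the function name is mine, not anything from NVIDIA's tooling):

```python
def frame_time_ms(fps: float) -> float:
    """Duration of a single rendered frame in milliseconds at a given frame rate."""
    return 1000.0 / fps

# At 60 FPS each frame lasts ~16.7 ms; at 240 FPS only ~4.2 ms,
# so shaving a few milliseconds of system latency is worth a full frame
# or more to a competitive player at high frame rates.
for fps in (60, 100, 240):
    print(f"{fps} FPS -> {frame_time_ms(fps):.1f} ms/frame")
```

The point of the comparison: at esports-level frame rates, a few milliseconds of saved system latency is comparable to an entire frame of time.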
NVIDIA Reflex was really easy for us to integrate into Fortnite. And in fact, we've made it a plugin in Unreal Engine so that all Unreal Engine developers can easily enable it in their games as well. The Reflex ecosystem continues to grow.
In addition to games, it is featured in 22 monitors and 45 mice. This will soon include Icarus, a gritty PvE survival game where you explore a savage alien wilderness in the aftermath of terraforming gone wrong. Icarus, which already features DLSS, adds Reflex next month, so every gamer can better survive this hostile environment. Today, we are introducing the newest member of the Reflex family: the ASUS ROG Swift 500Hz Gaming Monitor, the lowest-latency, highest-refresh-rate G-SYNC Esports display ever created. It has been designed from the ground up for competitive gaming, featuring a brand-new E-TN panel for maximum motion clarity, G-SYNC Esports mode with adjustable vibrance, and of course the NVIDIA Reflex Analyzer.
Gamers, creators, and students have made GeForce high performance laptops the fastest growing PC category. Over 60 million people are gaming and creating on them. And this past year, over 35% more RTX laptops were sold, with 3X growth in Studio laptops. Today there are over 180 models for gamers and creators that feature our RTX 30-Series GPUs and Max-Q technologies.
Since we introduced Max-Q technologies at Computex five years ago, they have transformed GeForce laptops into thin, high-performance machines. This year, we announced 4th-gen Max-Q, bringing new innovations and even more power efficiency. CPU Optimizer allows the GPU to balance system performance and power. It utilizes a new low-level framework we developed with CPU vendors to improve power efficiency and boost frame rates. Rapid Core Scaling delivers more performance for creators on the go.
The GPU can sense the real-time demands of the application and scale to the optimal number of cores, for a performance boost of up to 3X. Battery Boost 2.0 enables gamers to play for longer while unplugged by finding the optimal balance of GPU and CPU power usage, battery discharge, and image quality. And Advanced Optimus delivers the smooth, stutter-free gameplay of G-SYNC while offering better performance, latency, and longer battery life. At Computex, we are showcasing new gaming laptops from MSI, ASUS, Gigabyte, and others.
These gaming laptops feature cutting-edge designs and the incredible performance of the RTX 3080 Ti and 3070 Ti. There are also exciting new NVIDIA Studio laptops for content creators, like the ASUS Zenbook Pro 16X, the Acer ConceptD 5, and the Lenovo Yoga Slim 7i Pro X. NVIDIA Studio is our platform designed to enhance and accelerate digital artist workflows.
It's an end-to-end approach, starting from our GPUs that include dedicated hardware to accelerate ray tracing for gamers and creators, AI features to simplify content creation, high performance encoders and decoders to accelerate video editing, and CUDA for compute-intensive tasks like image processing and simulation. There are now over 200 NVIDIA Studio-accelerated apps. We have also developed our own applications for NVIDIA Studio, including Broadcast for live streaming, Canvas for painting landscapes with AI, and Omniverse for advanced 3D design and collaboration. NVIDIA Studio and RTX laptops deliver a significant performance advantage in Ray Tracing and AI acceleration. These laptops are up to 6X faster in 3D rendering than the fastest MacBook Pros with M1 Max processors, and up to 3 times faster in AI processing, unlocking new workflows for artists. 3D Designers and artists are the builders of the next digital frontier.
Vast virtual worlds filled with factories, homes, shops, museums, and robots are now being built. Today, a 3D artist typically works sequentially across multiple applications, exporting and importing large files many times along the way. Omniverse was designed to unlock the potential of 3D design and allow creators to collaborate on large interconnected spaces.
Omniverse is an open platform, connecting the industry's leading 3D tools including Adobe, Autodesk, and Epic's Unreal Engine into a shared, single environment. And it’s fully accelerated by RTX Ray Tracing, AI, and compute. Over the past year, we have seen a 10X increase in Omniverse downloads, with 120K unique users creating or developing on the platform.
Omniverse is the future of 3D content creation and how virtual worlds will be built. And we continue to update Omniverse with new capabilities. Omniverse Cloud has added Simple Share. With one click, users can send an Omniverse scene for others to view. We’ve added Audio2Emotion, an AI-powered animation feature that generates realistic facial expressions based on just an audio file.
Omniverse XR is now available in beta. You can open your photorealistic Omniverse scene and experience it, fully immersive, in Virtual Reality. And Omniverse Machinima has been updated to make it easier than ever for 3D artists to create animated shorts. I'm in the Omniverse. And check out our Made in Machinima contest. It’s in full swing.
Create an animated short with Omniverse materials, physics, and game assets for a chance to win top-of-the-line RTX Studio laptops. Over the past 20 years, NVIDIA and our partners have dedicated ourselves to building the best platform for gaming and creating. Hundreds of millions now count on it to play, work, and learn. RTX has reinvented graphics and the momentum continues to grow. There are now over 250 games and applications. Gaming laptops are the fastest growing PC category, and Max-Q 4.0 is delivering a new level of power efficiency.
These are our most portable, highest performance laptops ever. Massive, interconnected 3D destinations are being built today. NVIDIA Studio and Omniverse are designed to enable collaboration and construction of these virtual worlds. Finally, I want to thank our partners for working so hard alongside us to bring new innovations to the market and helping build this amazing PC ecosystem. Thank you for watching. We wish you—and everyone around the world—safety, peace, prosperity, and good health.