NVIDIA Executive Keynote | COMPUTEX 2021
We miss Taiwan and wish we could be there in person for Computex. So we created Taipei City in Microsoft Flight Sim and flew in virtually on a GeForce RTX 3080. Thousands of mods have been created for Flight Sim, and they’ve been downloaded over 10 million times. This mod will be available just after the keynote. Welcome to Computex 2021! It’s always great to talk directly to our partners, and this year we have a lot to talk about.
I’m going to kick it off with Gaming, and then I’ll hand it to Manuvir Das to talk about AI and our Enterprise platforms. So let’s get started. Every person born this year will be a gamer, 140 million of them. In fact, Gen-Z prefers video gaming as their favorite entertainment activity over just about anything.
In Taiwan, 2/3rds of the population plays video games. So it is no surprise that gaming has transformed into one of the largest and fastest growing forms of entertainment. And it is here to stay. Gaming revenue grew to $180B, bigger than cinema, music, and streaming video combined. We added 60 million PC gamers to our ranks, and over 10,000 games were published for them to enjoy.
In April, the number of concurrent gamers on Steam was at an all-time high of 27 million, 1.5 times that of 2019. Like any sport, people love to watch others play. 100 billion hours of gaming content were viewed on YouTube, and more than 430 million people tuned into esports competitions. And it doesn’t stop there.
Minecraft’s open world is estimated to have grown to 4 billion square kilometers, or 8X the surface of the planet. Valve just announced that the 2021 DOTA Championship will feature a $40 million prize pool. Roblox had 42 million daily active users in the first quarter, up 80% year over year. And it is not just about playing. There are 45 million professional and freelance creators fueling an explosion in digital content. 30 million broadcasters.
And 75 million higher-ed STEM students. All relying on the latest technology to improve their productivity and quality of work. Over the past 20 years, we have built GeForce into the number one gaming platform. With 1.5 billion GPUs shipped, GeForce has fueled the growth of PC gaming.
And we continue to delight our customers with new ways to game and create. Our scale and pace of innovation were possible only by partnering with the best companies in the world. Together, we have transformed PC gaming. Ten years ago, we introduced Optimus, dramatically extending battery life in gaming laptops.
All told, Optimus has saved 5.8 billion kilowatt hours of energy. Enough to power over half a million homes for a year. Max-Q changed the way laptops were built.
The performance only available in big, heavy gaming machines was engineered into thin-and-light laptops. Max-Q has shed over 8 million pounds from gaming laptops sold. G-SYNC introduced stutter-free gaming. With over 20 trillion buttery-smooth pixels now shipped. Once you game with G-SYNC, there is no turning back.
Last year, we introduced NVIDIA Reflex, which changed the game for esports competitors. In Call of Duty Warzone alone, Reflex gamers have over 50 million wins. Finally, NVIDIA Studio is our platform for Creators.
Engineered for the needs of creative professionals, our laptops have accelerated over 25 billion hours of content creation. Thanks to all our partners who are just as excited as we are about reinventing this market, and are joining us in the next major leap forward. RTX is a huge reset to computer graphics. A decade in development, RTX introduced the world to real-time ray tracing and AI for graphics. RTX is now available in the full range of desktops and laptops, in every country, from every OEM, and even streaming from the cloud.
With over 100 top games and apps available today, RTX is the new standard. For creators and broadcasters, RTX is accelerating the #1 photography app, the #1 video editing app, and the #1 Broadcast app. For gamers, RTX is powering the #1 Battle Royale, the #1 RPG, and the #1 Best Selling game of all time. Last month, we added Outriders, and Call of Duty Warzone to the list. And today, we have more exciting games to announce. Dying 1983 is the second game in the Dying franchise from the Chinese developer NEKCOM.
It’s a Japanese-inspired horror-themed puzzle game. Its stark and frightening world is brought to life with RTX ray tracing and DLSS. Let’s take an exclusive look at DYING 1983 with RTX ON. Tom Clancy’s Rainbow Six Siege is one of the most popular esports games in the world, with over 70 million players worldwide.
In March, Rainbow 6 gamers gained the competitive edge of NVIDIA Reflex. Today, we are announcing that RTX gamers will get even more performance with the addition of DLSS. Let’s take a look. New Zealand-based developer RocketWerkz was founded by Dean Hall, best known as the creator of DayZ. Their highly anticipated new game, Icarus, is a stunning, multiplayer survival game set on a savage alien planet.
Today, we are announcing that Icarus will come alive with RTX ray-traced global illumination and DLSS when the initial chapter, First Cohort, launches this Fall. Let’s take a look at some exclusive new footage. Red Dead Redemption 2 is one of the most critically acclaimed games of all time, with more than 275 perfect scores, 175 Game-of-the-Year Awards, and more than 37 million copies sold to date. We are happy to announce that Rockstar Games will be adding DLSS to Red Dead Redemption 2. Coming soon, every RTX gamer will see a free boost in performance.
VR is taking off. VR game revenue is up 70% on Steam in 2020. The installed base of PC-compatible VR headsets is expected to exceed 30 million in the next 5 years. With almost twice the resolution of a desktop monitor, and unforgiving FPS requirements, VR is very demanding on the GPU. And RTX is now coming to VR. No Man’s Sky, the popular open-world space game, now features DLSS. VR gamers can soon play with ultra-graphics at 90 fps.
Wrench, a mechanic simulator where you repair and maintain race cars in VR, now features RTX ray tracing and DLSS. And the VR survival shooter, Into the Radius, adds DLSS to boost performance and keep you immersed in this dystopian world. 75% of GeForce gamers play esports. We invented NVIDIA Reflex for them. The difference between winning and losing is a matter of milliseconds. On average, Reflex reduces system latency by 20ms. It gets the PC out of the way, so a gamer’s real skill can come into play.
And gamers love it: more than 90% of them compete with Reflex ON. Today we are announcing even more NVIDIA Reflex games. Gaijin's War Thunder, the most popular air-land-and-sea battle arena with 40 million gamers. Naraka: Bladepoint, the highly anticipated Battle Royale slasher from 24 Entertainment. In the last closed beta, it broke into Steam’s Top 5 games played.
Tencent Games’ CrossFire HD is a remastered version of the original with over 560 million gamers in China. And Escape from Tarkov, the intense survival shooter from Battlestate Games, is one of the top 10 most-played competitive shooters. Coming soon, 12 of the top 15 competitive shooters will feature NVIDIA Reflex. Let’s take a look at Reflex in Escape from Tarkov. For serious competitors, reducing system latency starts by measuring it, from end to end. This requires an ecosystem of partners.
Last year, we introduced the Reflex Latency Analyzer, a measurement tool built directly into gaming monitors. Today, we are adding Lenovo, Viewsonic, and EVGA. 13 partners are offering a total of 15 monitors and 21 compatible mice.
Now, more gamers can simply plug a mouse into their monitor, and accurately measure end-to-end latency, from click to photon. Almost 10 years ago, we introduced our Kepler architecture and created a new class of laptops for gamers. While the market reception was great, these laptops were large, bulky and had limited battery life. More transportable than portable. In 2016, we introduced the Pascal architecture and Max-Q technology, a full-system-design approach to bringing high performance to thin-and-light laptops. Performance increased over 4x, and for the first time, high performance gaming laptops were less than 20 millimeters thick. This year, we launched Ampere, our 2nd generation RTX.
It featured our 3rd generation Max-Q, and delivered twice the performance of Pascal. But the real magic is DLSS. Gamers have taken notice. And GeForce laptops are the fastest growing gaming platform. Laptops are a marvel of engineering. No effort is spared to deliver the highest performance in a thin-and-light device.
In laptop design, system power determines everything, including performance, battery life, thinness, and acoustics. What if you could get a massive boost in gaming performance, without expending any more power? Let’s say you are playing Control at 47 frames per second. At 1440p, the GPU is rendering almost 4 million pixels and drawing 80 watts of power. NVIDIA DLSS uses AI-powered Tensor Cores to boost framerates. With DLSS, the GPU only needs to render a fraction of the pixels and then uses AI to complete the frame. The result is an image just as beautiful as native 1440p, and you get 1.5 to 2 times the performance at the same power.
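As a rough sketch of that pixel arithmetic (the internal render resolution below is an assumption based on typical DLSS quality-mode scaling, not a figure from the keynote):

    # Rough, illustrative pixel math for DLSS at 1440p. The 1707x960 internal
    # resolution is an assumed quality-mode setting, not a keynote figure.
    native_pixels = 2560 * 1440           # ~3.7 million pixels shaded every frame
    dlss_internal = 1707 * 960            # ~1.6 million pixels shaded, then AI-upscaled
    print(native_pixels / dlss_internal)  # roughly 2.2x fewer pixels to render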
Alienware was one of our earliest partners, designing high performance GeForce PCs for gamers. For the past 2 years, we’ve been working together to create an amazing new laptop. Today, we’re announcing the Alienware x15. An Ultra Thin GeForce RTX 3080 Laptop. Powered by Max-Q technologies including Dynamic Boost, WhisperMode and Advanced Optimus, and featuring a 1440p display.
It is the world’s most powerful sub-16mm 15-inch gaming laptop. Tune in to the Alienware Update event on June 1st to learn more. This year brought a record launch for RTX laptops, with over 140 models from every OEM. Starting at $799, and featuring Max-Q, there is now an RTX laptop for every gamer. We developed NVIDIA Studio to address the growing needs of 3D designers, video editors, and photographers. These are specially configured systems, optimized and tested for creator workflows, and supported with a monthly cadence of Studio Drivers.
Today, we are announcing new Studio laptops from HP and Acer. The 14-inch HP Envy brings the capabilities of RTX to an ultra-portable laptop. Great for students and creators on the go. The new Acer ConceptD offers a variety of traditional clamshell options, and an Ezel sketch board design, to give creators even more flexibility. Gaming has been an inspiration to digital creators and artists for decades. Machinima, an art form started in the 90’s, uses real-time 3D technologies with game assets, visual effects, and character animation to create short, humorous clips or full-length movies. For these artists, we created Omniverse Machinima, making it easy to produce cinematic, animated stories with advanced real-time ray tracing and AI.
Users import their own game assets, or draw from our growing library. They can then apply advanced physics effects, like destruction or fire, with NVIDIA’s Blast and Flow. Animate movements using only a webcam with wrnch AI Pose. And use simple audio tracks to animate character faces with Audio2Face. The final scenes are rendered in the Omniverse RTX renderer. Machinima was just released in Beta. Let me show you how you can create your own masterpiece.
RTX has been a huge success. And today we are announcing a new addition to the family. The GeForce RTX 3080 Ti is our new flagship gaming GPU. Based on Ampere, with 2nd generation RT Cores and 3rd generation Tensor Cores, Ampere is our greatest generational leap ever. The 80-Ti class of GPUs represents the best of our gaming lineup. The GTX 1080 Ti, released in 2017, could tackle all the games of its time.
But this well-loved GPU simply can’t keep up with the demands of modern games. The RTX 2080 Ti was introduced as the only way to play in 4K with RTX ON. But the production value of games continues to march forward.
New titles like Cyberpunk 2077 and Watch Dogs: Legion have elevated realism, demanding even more of the GPU. The RTX 3080 Ti is 1.5X faster than its predecessor and tears through the latest games with all the settings cranked up. The RTX 3080 Ti is a powerful GPU with 34 Shader-TeraFLOPS, 67 RT-TeraFLOPS, and 273 Tensor-TeraFLOPS. It comes equipped with 12 gigabytes of ultra-fast G6X memory and a 384-bit memory interface. Availability will begin on June 3rd starting at $1199.
To show you what this beast can do, we have a special announcement. Doom is a storied franchise known by a generation of gamers for pushing the boundaries of speed, power and visual effects. Doom Eternal, the latest in the series, takes this even further. Based on the id tech 7 engine, it is blistering fast and beautiful. Today, we’re announcing Doom Eternal is adopting RTX ray tracing and DLSS. Here is the first look at Doom Eternal with ray tracing at 4K running on a 3080 Ti. And that’s not all.
The RTX 3070 quickly became our most popular Ampere GPU. Now, we’re adding a Ti to this class of GeForce, with more cores and superfast G6X memory. Today we’re announcing the all new RTX 3070 Ti.
It’s 1.5x faster than a 2070 Super, and availability will begin on June 10th, starting at $599. Thanks, everyone, for your time. Before I hand it off to Manuvir, let me summarize. Gaming is transforming entertainment, and RTX is changing everything, not just for gamers, but for 150 million creators, broadcasters and students. With over 100 of the top games and creative apps accelerated, RTX is the new standard. GeForce laptops are the fastest growing gaming platform, now fueled top to bottom by RTX, 3rd generation Max-Q, and the performance-boosting magic of DLSS.
And Studio, specially designed for Creators, extends their reach beyond gaming. Finally, the RTX Desktop family just got better with our new flagship gaming GPU, the GeForce RTX 3080 Ti and the GeForce RTX 3070 Ti. Every person born today is a gamer. We have an amazing future ahead, and we look forward to building it with all of you. And here to tell you how we are putting AI into the hands of every company is Manuvir Das, Head of Enterprise Computing.
Thank you Jeff, and thank you everyone for joining us today. Our journey at NVIDIA began with gaming, but today we are also the driver of the most powerful technology force of our time, Artificial Intelligence. Let’s take a look at just some of the ways in which AI, powered by NVIDIA, is improving our world.

I am a creator
Blending art and technology
To immerse our senses
I am a healer
Helping us take the next step
And see what's possible
I am a pioneer
Finding life-saving answers
And pushing the edge to the outer limits
I am a guardian
Defending our oceans
And magnificent creatures that call them home
I am a protector
Helping the earth breathe easier
And watching over it for generations to come
I am a storyteller
Giving emotion to words
And bringing them to life
I am even the composer of the music
I am AI.
Brought to life by NVIDIA, Deep Learning, and brilliant minds everywhere.

And that was just a hint of what AI can do.
Taiwan has always been a very special place for NVIDIA. From system builders and solution providers, to universities and the government, all have partnered with us. On behalf of Jensen and NVIDIA, thank you to all of our many friends in Taiwan, who have worked alongside us to enable these amazing AI breakthroughs. Together, we have laid the foundation for a new computing model based on AI, software that writes software. This foundation includes GPUs deployed in the cloud and in data centers, AI software and SDKs tailored to the world’s largest industries, and an ecosystem of startups and developers.
Now it’s time to build the house on top of this foundation. It is time to democratize AI, by bringing its transformative power to every company and its customers. There are three fundamental shifts that will bring AI to every company. The first shift is that every application used by any company will be infused with AI. AI is essentially a two-step process.
First, a rich set of existing data, and known outcomes from that data, are used to train a model. Then, the model is applied to fresh data to make fresh judgments. The model is effectively a piece of software that can be embedded into any application. As the application runs, it queries the model to make judgments, giving it intelligence. As new data becomes available, the model is re-trained and improved, automatically making the application better, without human developers having to create a new version.
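To make that two-step pattern concrete, here is a minimal, hypothetical sketch using a generic open-source library; the data and model are placeholders, not NVIDIA’s or Microsoft’s actual pipeline:

    # Hypothetical sketch of the two-step pattern: train on existing data with
    # known outcomes, then embed the model in an application that queries it.
    from sklearn.linear_model import LogisticRegression

    # Step 1: train on historical data with known outcomes (placeholder values).
    historical_features = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]
    historical_outcomes = [1, 0, 1, 0]
    model = LogisticRegression().fit(historical_features, historical_outcomes)

    # Step 2: the running application queries the model on fresh data.
    def handle_request(fresh_features):
        return model.predict([fresh_features])[0]

    print(handle_request([0.15, 0.85]))  # expected judgment: 1

    # When new labeled data arrives, retraining yields an improved model that
    # drops into the same application without changing its code.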
A great example of this infusion of AI is in Microsoft Office, the world’s most popular productivity application. Microsoft is adding Smart Experiences such as smart grammar correction, Q&A, and text prediction to Office. With NVIDIA GPUs, Microsoft was able to improve responsiveness to less than 1/5th of a second, enabling real-time grammar correction. And at 1/3rd the cost of the same work on CPUs. Going forward, every application will have to be built in this way, or be left behind.
Now this is not the first time we’ve seen this kind of shift. For example, consider the introduction of the graphical user interface. The GUI. Once the transformative value of GUI became evident, every command-line application had to adopt GUI, or be left behind. In the same way, every application will need to be infused with AI. And all of these applications will need the most performant, power-efficient, and cost-effective systems to run on, regardless of form factor or location, from edge to cloud.
Every system will need a GPU. The second shift is that every datacenter used by any company will be infused with AI. Computing is increasingly being driven by large amounts of data.
Neither the data, nor the computation done on it, can fit on one server. Therefore, the flow of data through the network becomes crucial to both the capability and the security of the datacenter. A new kind of hardware is needed that sits on the data path and intelligently optimizes, inspects, and protects the data, and protects the applications from one another. This new hardware is the Data Processing Unit, or DPU. Every server will need a DPU. The third shift is that the lifecycle of every product produced by any company will be optimized by AI.
Consider the very common and old-fashioned example of soap. Yes, soap. Going forward, a soap company will design and tailor its soap product portfolio by analyzing rich data on the usage and experience of the customer base. The soap manufacturing process will be automated and optimized with AI. Customers who use soap, pretty much the entire population of the world, will be engaged socially, and sold to online.
Smart soap dispensers will track usage and automatically order refills. And all the data from the customer experience will be fed back to guide the design and supply chain of the soap. This soap company, and every other company, will need a new form of IT, and IT infrastructure to operate the AI-optimized life cycle of its product. These are fundamental shifts that will require massive amounts of accelerated computing. Now let’s talk about how you can build on the technology foundation of NVIDIA to make this a reality.
How you can democratize AI. We start with the hardware. NVIDIA DGX is the instrument of AI.
Since its launch in 2016, DGX has been used by companies across industries at the forefront of AI. The reason for this is that DGX was designed from the ground up as the optimal system for AI. Every component and every interconnect was designed or chosen with great care.
And today, DGX incorporates learnings from hundreds of millions of hours of usage on thousands of systems across the planet. The pinnacle of AI capability is the DGX SuperPOD, where multiple DGXs are clustered together and tuned for AI, based on all of NVIDIA’s learnings. So the first step to democratization is to make this best-of-breed machine more accessible, and more obtainable. To make it more accessible, we are providing a software stack called Base Command Platform. Using Kubernetes, this software allows administrators to share this powerful supercomputer across an organization of data scientists, and a mix of workloads. It provides users with a simple interface for AI. And it allows organizations to understand and optimize their usage of this valuable equipment. We have been using Base Command within NVIDIA for many years, sharing SuperPODs across thousands of data scientists, who have run over a million jobs.
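To give a flavor of how GPU capacity is typically shared on a Kubernetes-managed cluster, here is a generic, minimal sketch using the standard Kubernetes Python client and the NVIDIA device-plugin resource name; this is not Base Command Platform’s actual interface, and the image, namespace, and job name are hypothetical:

    # Generic sketch: submit a GPU job to a shared Kubernetes cluster.
    # Not Base Command Platform's actual interface; names are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()  # use the credentials in ~/.kube/config

    container = client.V1Container(
        name="train",
        image="nvcr.io/nvidia/pytorch:21.05-py3",   # example training image
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "8"}           # ask the scheduler for 8 GPUs
        ),
    )

    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="resnet-training", namespace="team-vision"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(containers=[container], restart_policy="Never")
            ),
            backoff_limit=0,
        ),
    )

    client.BatchV1Api().create_namespaced_job(namespace="team-vision", body=job)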
To make it more obtainable, we are introducing a subscription model for SuperPOD. Customers can now experience the capability of a SuperPOD, or a smaller part of a SuperPOD, for months at a time.
This is a hosted offering in which NVIDIA will manage the infrastructure, reducing the operational burden on customers. The offering includes powerful all-flash storage and integrated data management from NetApp, the pioneer of Enterprise NAS storage. NetApp is synonymous with enterprise-grade storage, with over 38,000 customers today, on-premises and in the cloud. The subscription model is in early access now. For example, Adobe is already using the infrastructure to train StyleGAN, a powerful image-generation AI. The subscription model will open up for paid access this summer.
With this offering, we expect that many more customers will be able to experience the unique capability of a SuperPOD without the up-front investment. From there, they can graduate to more permanent infrastructure at scale, either their own SuperPOD, or in the public cloud. Speaking of the cloud, for some time now we have worked with our cloud partners to offer powerful GPU instances in the cloud, leveraging the internal components of DGX. We are proud to announce today that Amazon and Google are working with us to enable Base Command Platform for clusters formed from these cloud instances. This work offers the promise of a true hybrid AI experience for customers.
Write once, run anywhere. But even as we democratize access to DGX, the larger objective is to decompose DGX into its smallest AI-optimized parts, so that system manufacturers can compose those parts into different form factors for different scenarios, while adding unique value-add capabilities for their customers. First, we productized the GPU board that combines multiple GPUs into a tightly interconnected computing fabric. This is HGX. Then, we offered the core GPU from the board as a standalone PCIe card.
This is the A100. Now, we have further broken down the A100 into smaller form factor GPUs such as the A30, that draw less power, and are lower cost, while still providing powerful acceleration. And finally, we have productized the same BlueField-2 DPU that is inside DGX SuperPOD, now usable in any server. With this, we have now enabled a complete ecosystem of form factors, from supercomputers, to pizza-box servers, workstations, and edge devices. These form factors can be used in a variety of scenarios, from compute to graphics to virtual desktops to data center infrastructure.
All made better by AI. HGX boards are the engine of supercomputing, A100 and A30 are optimized for compute, A40 and A10 are best for a mix of compute and graphics, and A16 combines multiple small GPUs into a great form factor for VDI. In order to help system manufacturers create AI-optimized designs, and to ensure that the systems can be relied on by customers, we created NVIDIA-Certified, a program for servers that incorporate GPU acceleration.
NVIDIA-Certified provides blueprints for system design, as well as test suites, so that system manufacturers can validate their designs. Today, we are announcing the expansion of NVIDIA-Certified to systems with NVIDIA BlueField DPUs. Going forward, the DPU will be an essential component of every server, in the data center and at the edge. This is because, as shown in the diagram, the DPU offloads, accelerates, and secures functions that must otherwise be performed by the host CPU.
Last year we announced BlueField-2, our state-of-the-art DPU. NVIDIA is working closely with VMware to use BlueField-2 as the host for their ESXi hypervisor, as part of Project Monterey. This year, we released the first version of DOCA, the SDK of BlueField. We expect that DOCA will do for the DPU what CUDA has done for the GPU.
Enabling millions of developers with a long-lasting, consistent SDK that they can use from one generation of BlueField to the next. And we are especially pleased to announce that the world’s system manufacturers are joining us in the DPU journey by producing state-of-the-art mainstream servers accelerated by NVIDIA BlueField-2. Three of these companies are right here in Taiwan: Asus, Gigabyte, and Quanta. BlueField-2 is especially well suited to inspecting the network for security breaches. The typical approach is to run an agent on the host CPU.
This approach consumes host CPU cycles, and handles only a narrow window of the network traffic. BlueField-2 allows full inspection with no host CPU overhead. NVIDIA has created a software platform called Morpheus, which uses BlueField-2 and AI to automatically detect and address security breaches.
Today we are announcing that Red Hat is working with NVIDIA to provide Morpheus developer kits for both OpenShift and Red Hat Enterprise Linux, or RHEL for short. RHEL is the most commonly used version of commercial Linux in Enterprise data centers today. Cybersecurity companies will now be able to use Morpheus on RHEL and OpenShift to bring advanced security to every Enterprise data center. But this journey is just beginning. We are already working on BlueField-3.
22 billion transistors. The first 400 gigabits per second networking chip. 16 Arm CPU cores to run the entire virtualization software stack. BlueField-3 takes security to a whole new level, fully offloading and accelerating IPsec, TLS cryptography, secret key management, and regular expression processing. Whereas BlueField-2 offloaded the equivalent of 30 CPU cores, it would take 300 CPU cores to secure, offload, and accelerate networking traffic at 400 Gbps the way BlueField-3 does.
This is a 10x leap in performance. BlueField-3. The next generation, but the same DOCA SDK. Today, we are also announcing the expansion of NVIDIA-Certified to accelerated systems with Arm-based host CPUs.
As the GPU and DPU accelerators take on more of the compute workload for AI, it becomes useful to view the host CPU as an orchestrator, more so than as the compute engine. Energy-efficient Arm CPUs are well suited to this task. And the open-licensing model of Arm inspires innovators to create products around it. An exemplar of this approach is NVIDIA’s own Arm-based Grace CPU, which will be available in 2023. Grace is purpose-built for accelerated computing applications such as AI, that process large amounts of data.
Two years ago, we announced that we were bringing CUDA to Arm, simplifying the development of AI and HPC applications on Arm. Today, along with our partner, Taiwan-based Gigabyte, we are happy to announce a devkit. This devkit can be used by application developers to prepare their GPU-accelerated apps for Arm. So there it is. Democratizing the hardware for AI, with a wide range of accelerators, turned into myriad form factors by a broad ecosystem of system builders. 103 different server and workstation form factors, from 16 different system builders.
More every day. Now let’s turn to the software of AI. Every AI workflow can be thought of as a four-step process, in a loop. The first step is to take a large amount of unstructured data and prepare it, by extracting and organizing its features.
The second step is to use the prepared data to train the models. In many cases, the models are then validated or simulated before being used in production or in the physical world. And finally, the actual use of the models. Which in turn generates more data that is then fed back to the process.
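Purely as an illustrative skeleton of that four-step loop, with trivial placeholder functions rather than any NVIDIA SDK:

    # Illustrative skeleton of the four-step AI workflow; every function is a
    # trivial placeholder, not part of any NVIDIA SDK.
    def prepare(raw):             # 1. extract and organize features
        return [x / max(raw) for x in raw]

    def train(features):          # 2. train a model on the prepared data
        threshold = sum(features) / len(features)
        return lambda x: x > threshold

    def validate(model):          # 3. validate (or simulate) before production
        return model(0.9) and not model(0.0)

    def deploy(model, fresh):     # 4. use the model, which generates more data
        return [x for x in fresh if model(x)]

    raw_data = [3, 7, 1, 9, 4]
    model = train(prepare(raw_data))
    if validate(model):
        flagged = deploy(model, [0.2, 0.8, 0.6])
        print(flagged)            # these outputs feed the next round of data prep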
This is the recipe of AI. Over the years, NVIDIA has applied this recipe to create application frameworks for a variety of use cases, including conversational AI, drug discovery, self-driving cars, robotics, virtual collaboration, and recommender systems, the engine of the Internet. These frameworks help companies jumpstart their adoption of AI. For example, NVIDIA Clara is our application framework for AI-powered healthcare.
It includes SDKs and reference applications for drug discovery, smart hospitals, medical imaging and genomic analysis. One customer that uses Clara is AstraZeneca. With Clara, they created an AI model for generating new drug candidates that target a given disease, while maximizing safety. This method has seen recent success, with Insilico Medicine using AI to find a new drug in less than two years.
Companies across the world are using these AI workflows from NVIDIA to improve their businesses, and to help their customers. A great example from Taiwan is TSMC. We all know TSMC as a giant of the technology ecosystem, producing millions of wafers every year for semiconductor and technology companies across the planet. Analysts have referred to TSMC as “the Hope Diamond of the semiconductor industry”. In fact, NVIDIA itself is a very large partner of TSMC.
Proactively identifying defective materials, and classifying them to minimize further defects, is a critical aspect of the chip production process. Engineers at TSMC have developed an AI-powered system to automate inspection and defect classification. Their system is 10 times faster, and more accurate, compared to the previous human method. All of these AI workflows are built from the same essential software libraries and toolkits.
Now, we have brought all of these components together as NVIDIA AI Enterprise, a coherent, optimized, certified and supported software platform for AI. NVIDIA AI Enterprise is fully integrated with VMware vSphere, the de facto standard of Enterprise computing. It is licensed, priced and supported in the same way as vSphere, for a consistent experience. And it all runs on NVIDIA-Certified servers.
Mainstream, volume servers that are constantly racked into enterprise data centers. 71 different servers from 16 vendors, and more every day. But the components of NVIDIA AI are useful for more than AI.
NVIDIA AI includes RAPIDS, a set of libraries that accelerate machine learning on GPUs. The same libraries can also accelerate the core engines of big data processing, such as Apache Spark.
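For a feel of what that acceleration looks like to a developer, here is a minimal, illustrative RAPIDS cuDF snippet; the file and column names are hypothetical, and a compatible NVIDIA GPU is assumed:

    # Minimal, illustrative RAPIDS cuDF snippet: dataframe-style code executed
    # on the GPU. File and column names are hypothetical.
    import cudf

    transactions = cudf.read_csv("transactions.csv")            # load data onto the GPU
    by_region = transactions.groupby("region")["amount"].sum()  # GPU-accelerated aggregation
    print(by_region.sort_values(ascending=False).head(10))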
Cloudera is the de facto provider of Apache Spark to enterprise data centers around the world, with over 5 exabytes of data under management. As the technology landscape has changed, Cloudera has developed Cloudera Data Platform, a hybrid data cloud platform that brings customers forward to new technology, without having to reinvent their big data pipelines. As we speak, the 2,000 customers of Cloudera are migrating to Cloudera Data Platform. We are pleased to say that NVIDIA and Cloudera have partnered to add transparent GPU acceleration using NVIDIA RAPIDS. A fully integrated solution from Cloudera will be available starting this summer, with the release of CDP version 7.1.7. Customers do not need to understand RAPIDS, or change their workflows in any way, in order to obtain the benefits of this GPU acceleration. An example of an early customer is the United States Internal Revenue Service. With zero changes to their fraud detection workflow, they were able to obtain three times the performance, simply by adding GPUs to their mainstream big data servers.
This performance improvement translated to half the cost of ownership for the same workload. But, even as the world is transformed by AI, another shift is underway, amplified by the global pandemic. It is becoming more and more important for virtual teams of people in different locations to collaborate in real time. This is especially true for 3D workflows, from visualizing architecture, to creating photorealistic content for games and movies, to simulating physically accurate worlds. This is a domain NVIDIA is very familiar with.
And the same GPUs and systems we talked about in the democratization of AI are in fact the ideal foundation on which to build a platform for real-time collaboration and simulation of 3D designs. This is NVIDIA Omniverse. Omniverse makes it easy for teams to collaboratively design and simulate in 3D. We built Omniverse with open standards, based on Pixar’s USD (Universal Scene Description), a powerful file framework and scene description language. With open standards, Omniverse enables teams to connect and work across multiple design applications, or asset libraries, in real time and at the highest fidelity.
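For a sense of what USD looks like to a developer, here is a minimal sketch using Pixar’s open-source USD Python API; the file path and prim names are hypothetical, and this is plain USD rather than an Omniverse Connector:

    # Minimal sketch with Pixar's open-source USD Python API (pxr).
    # File path and prim names are hypothetical; plain USD, not an Omniverse Connector.
    from pxr import Usd, UsdGeom

    stage = Usd.Stage.CreateNew("factory_scene.usda")   # a new USD scene file
    UsdGeom.Xform.Define(stage, "/World")               # a transform prim as the scene root
    robot = UsdGeom.Cube.Define(stage, "/World/RobotBase")
    robot.GetSizeAttr().Set(2.0)                        # author an attribute on the prim
    stage.GetRootLayer().Save()                         # other tools can now open and edit this layer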
Just as we did with AI, we have put the functionality of Omniverse into an Enterprise-grade software platform, NVIDIA Omniverse Enterprise. Omniverse Enterprise can run on any NVIDIA RTX-powered infrastructure, from professional laptops, to workstations, to GPU-accelerated virtualized or bare metal servers. We are excited to partner with leading global system providers, who will offer NVIDIA Omniverse Enterprise later this year, starting at just $14,000 per year per company. The Omniverse subscription comes with Connectors, which are plug-ins for the world’s most important third-party design applications, allowing multiple designers and creators to use their preferred tools, while simultaneously editing a scene together. Companies around the world and across industries are already using Omniverse to collaborate in amazing ways. Let’s see how Omniverse is ushering in a new era of virtual worlds.
So there you have it. It is time to put accelerated computing to use for every company. This is a giant opportunity, and responsibility, for all of us. What we’ve talked about today are three essential ingredients built by NVIDIA. The hardware foundation from which to build any system.
The software platform for artificial intelligence. The software platform for collaborative design. Together, we can use these three ingredients to transform every company. And yet, this is the same hardware and software technology that is at the heart of GeForce, the gaming GPU, which is where it all began. I’ll close with a simple thank you.
On behalf of my co-presenter Jeff Fisher, our CEO Jensen Huang, and all of NVIDIA, thank you. Thank you for joining us today; thank you for being with us every step of the way; and thank you for continuing forward with us on this amazing journey.