NVIDIA Special Address at SIGGRAPH 2021

Show video

It all began a lifetime ago, when we first discovered the unity of computing and graphics. Born from visions of wonder in its most basic form. Explorations of color and unlimited new possibilities fueled by curious minds and the science of expression. We were awakened to a new need, the need to create, to feel, to explore, inspired by light, by nature, by science, and by pure entertainment.

New realities took shape as quickly as we could imagine them, taking us to amazing places and new worlds, giving us new perspectives in the blink of an eye, driving us forward, embracing discovery, and inspiring lifetimes of exploration yet to come. Welcome to SIGGRAPH 2021. We just got to see some amazing computer graphics history, where art, science, and research come together. You know, the very first SIGGRAPH was in 1974 in Boulder, Colorado. Twelve years later was my first SIGGRAPH, in San Francisco. And in 1993, NVIDIA's first SIGGRAPH was in Anaheim, California.

And since that time, we've had close to 150 papers presented, over 200 talks and panels, countless demos, and major product launches that we've dedicated to this important conference. The most important announcement we've had was in 2018, when we launched RTX: the world's first ray-tracing capability in a desktop computer. This had been a dream for many, many years: the idea of doing photo-realistic ray tracing in real time.

Since that time, more than 125 creative apps and games have adopted RTX technology, and more are coming all the time. Our research teams have taken the Turing architecture and extended it into all sorts of areas of AI, whether it's scientific visualization, medical imaging, or brand new breakthroughs in computer graphics. We now have a research team of over 80 people dedicated to advancing this important architecture. Let's take a look at some of the amazing work they're doing. Hi, I'm Sanja Fidler, a senior director of research at NVIDIA, focused on 3D deep learning for Omniverse.

It's super exciting to share some of the recent research being done at NVIDIA. At the heart of graphics are content creation, meaning assets, motion, and worlds, and rendering, or more generally, simulation. Today, we'll look at NVIDIA's latest research in these two key areas. First, we'll look at advances in real-time rendering: visualizing virtual worlds where light and other elements of physics work exactly the way they do in the physical world, while being simulated in real time.

As rendering becomes more powerful, the graphics community looks for ways to create more advanced content. At the same time, AI is finding its way into every single area of graphics. It's becoming particularly essential for content creation.

Researchers on either side of these technology advancements in graphics and AI are craving better results for neural rendering and more control over content creation. Here are some of the works from the graphics research team. Neural radiance caching allows real-time rendering of complex dynamic lighting effects such as light shining through a tiger's fur. It introduces live AI to rendering, which brings Tensor Core training and inference into the heart of real-time path tracing.

Neural reflectance field textures (NeRF-Tex): these are neural textures that can represent far more complex materials than traditional textures, including fur, hair, woven cloth, grass, and more. ReSTIR GI: this discovery makes path tracing up to 160 times faster by sharing light between pixels. Real-time path tracing: NVIDIA researchers combined all the latest innovations in real-time path tracing, including neural rendering, and can now path trace dynamic worlds made of billions of triangles, millions of lights, and rich materials in real time. StrokeStrip enables the next generation of artists' drawing tools by letting computers reconstruct curves from rough, overdrawn strokes in the same way that humans draw.

UniCon is our reinforcement learning motion controller that scales physics-based animation from a handful of realistic motions to thousands. We're on the path for AI to be THE powerful tool that brings us closer to one hundred percent realism with incredible ease, making content creation accessible to everyone. Continuing on with NVIDIA's research on GANs, let's take a look at a variety of cutting-edge AI tools for content creation, both 2D and 3D.

StyleGAN has revolutionized the quality of AI-generated imagery, and its architecture has become the standard for cutting-edge GAN research worldwide. GauGAN creates photorealistic images from segmentation maps, which are labeled sketches that depict the layout of a scene. GANcraft is a neural rendering engine that can convert user-created Minecraft worlds into realistic-looking worlds, turning Minecraft gamers into 3D artists. GANverse3D turns photographs into animatable 3D models without ever seeing any 3D examples during AI training.

It learns about 3D purely from 2D imagery, such as photos found on the web. DriveGAN is a data-driven neural simulator that learns to simulate driving given user controls. It also allows for interactive scene editing, such as adjusting objects and weather.

Vid2Vid Cameo is a neural GAN-based talking-head rendering engine that can animate an avatar using facial cues as the source. Using my facial cues as a guide, watch as AI animates a digital avatar right in front of your eyes. These two areas of research, real-time graphics and AI, will start blending together in amazing ways in the near future. As a researcher, it's super exciting to see this synergy.

Our creative team has made a video of all the great work NVIDIA research is doing in the world of graphics and AI. Let's take a look. I am a visionary, bringing history to life and adding motion to memories. I am painting with the sound of my voice, a peaceful lake, surrounded by trees, and letting art speak for itself.

Did you know that white rhinos and black rhinos are both gray? I am changing the way we see the past, imagining the present in a new light, and creating new dimensions. I am redefining storytelling, paving a new path to innovation, and driving toward a better future for everyone. I am AI brought to life by NVIDIA and brilliant creators everywhere. That was some incredible work from our partners and researchers. You know, it's our goal to expand RTX into as many industries as possible.

Today, we're announcing the RTX A2000. This will expand RTX power to millions of designers and engineers. The innovative design of the A2000 brings several industry firsts to a wide range of professional graphics solutions, including a compact, low-profile, power-efficient design that can fit into a wide range of workstations, including the rapidly growing segment of small-form-factor workstations. And it does this while delivering up to five times the application performance of the previous generation. The A2000 is powered by the NVIDIA Ampere GPU architecture.

This brings the power of RTX-accelerated ray tracing, AI, and compute to this segment for the very first time. This will enable millions of engineers and designers to incorporate rendering and simulation right into their existing workflows. This amazing form factor is small. How small?

This small. It fits in the palm of your hand. This amazing card will be available in October of this year. Neal Stephenson, in his 1992 novel Snow Crash, described the metaverse as a collection of shared 3D spaces and virtually extended physical spaces, extensions of the internet. Today, we have massive but disconnected virtual worlds and digital twins, from content creation and gaming to virtual training environments for AI and factories. We built Omniverse to connect these worlds.

One day, more content and economic activity will be virtual than physical. We will exist in both the physical and virtual worlds. NVIDIA Omniverse is connecting the open metaverse. Omniverse is a platform that connects existing workflows, using familiar software applications, into a world where they can share the latest technologies from NVIDIA, like AI, physics, and rendering. On the left, collaborators are working in different software tools, each composing their part of the scene, whether it's modeling props, building the environment, texturing, painting, lighting, or adding animation and effects.

They're connected into Omniverse via Omniverse Connectors, which bring them into the platform live. In the center, we see the core of Omniverse: Omniverse Nucleus, the database and collaboration engine that enables the interchange of 3D assets and scene descriptions. Finally, on the right, users can portal in and out of Omniverse from workstations or laptops, teleport into the environment with VR, mix it with AR, or simply view the scene by streaming RTX rendering to their device.
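To make that architecture a little more concrete, here is a conceptual sketch of how a client application talks to Nucleus. It assumes the Omniverse client library's USD asset resolver is installed (the same plumbing the Connectors rely on); the server name and file paths are placeholders, and USD itself is introduced just below.

```python
# Conceptual sketch only: with the Omniverse USD resolver available,
# a scene stored on a Nucleus server opens like any other USD stage.
from pxr import Usd, UsdGeom

# Placeholder server and project path.
stage = Usd.Stage.Open("omniverse://my-nucleus-server/Projects/factory/scene.usd")

# Read what collaborators have contributed so far...
for prim in stage.Traverse():
    print(prim.GetPath(), prim.GetTypeName())

# ...and author a small contribution of our own, which Nucleus can then
# serve to the other connected clients.
UsdGeom.Xform.Define(stage, "/World/Layout/NewProp")
stage.GetRootLayer().Save()
```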

Omniverse is based on USD: Universal Scene Description. It is the enabling technology that makes this platform come alive. You can think of USD as the HTML of 3D. It was originally developed by Pixar as a way to unify their production pipelines and assets across different software tools and their massive production teams. Even though it started in media and entertainment, it's quickly being adopted in other industries like architecture, design, manufacturing, and robotics. What makes USD unique is that it's not just a file format. It is a full scene description, allowing all the complexities of a 3D world to be unified and standardized.
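As a rough illustration of what "scene description" means in practice, here is a minimal sketch using Pixar's open-source USD Python bindings: two layers, as if authored by different tools, composed into one stage without either file overwriting the other. File and prim names are purely illustrative.

```python
from pxr import Usd, UsdGeom, Sdf

# Layer 1: a modeling tool contributes geometry.
model_layer = Sdf.Layer.CreateNew("model.usda")
model_stage = Usd.Stage.Open(model_layer)
UsdGeom.Cube.Define(model_stage, "/World/Prop")
model_layer.Save()

# Layer 2: a layout tool sublayers the model and adds its own opinions
# (here, a translation) without ever editing the modeling file.
shot_layer = Sdf.Layer.CreateNew("shot.usda")
shot_layer.subLayerPaths.append("model.usda")
shot_stage = Usd.Stage.Open(shot_layer)
prop = UsdGeom.Xformable(shot_stage.GetPrimAtPath("/World/Prop"))
prop.AddTranslateOp().Set((0.0, 5.0, 0.0))
shot_layer.Save()

# The composed stage shows layer 1's geometry with layer 2's edit applied.
print(shot_stage.ExportToString())
```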

We based Omniverse on USD, and other companies support USD as well, like Apple, which supports it across all of its products. And like the journey from HTML 1.0 to HTML 5.0, USD will continue to evolve from its nascent state today to a more complete definition for virtual worlds, as the community comes together to ensure a rich and complete standard. Apple and NVIDIA had each been working on a definition of physics for USD. We came together with Pixar and decided to work together so that there would be one standardized definition of physics in USD.

And I'm happy to report that that first step, Rigid Body Dynamics, has been ratified. Now, there's a long journey to go and there are more elements to get done, but it shows that the community will come together to make this a rich and comprehensive platform.
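For a sense of what the ratified rigid-body schema looks like to a developer, here is a minimal sketch using the UsdPhysics Python schema classes. The prim paths and values are illustrative, and the exact attribute names should be checked against the published schema.

```python
# Minimal sketch: tagging a prim as a dynamic rigid body with a collider
# using the UsdPhysics API schemas (prim paths and values are illustrative).
from pxr import Usd, UsdGeom, UsdPhysics, Gf

stage = Usd.Stage.CreateNew("physics_example.usda")

# A physics scene prim holds simulation-wide settings such as gravity.
scene = UsdPhysics.Scene.Define(stage, "/World/PhysicsScene")
scene.CreateGravityDirectionAttr(Gf.Vec3f(0.0, -1.0, 0.0))
scene.CreateGravityMagnitudeAttr(9.81)

# A cube that should fall and collide: apply rigid-body and collision schemas.
cube = UsdGeom.Cube.Define(stage, "/World/Cube")
UsdPhysics.RigidBodyAPI.Apply(cube.GetPrim())
UsdPhysics.CollisionAPI.Apply(cube.GetPrim())
UsdPhysics.MassAPI.Apply(cube.GetPrim()).CreateMassAttr(2.0)

stage.GetRootLayer().Save()
```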

We've been doing a lot with physics on the Omniverse platform, and I'd like to show you some of the work our physics team has done. Take a look. Another two elements of the Omniverse platform that are incredibly important are materials and path-traced rendering. We want to take rendering to the next level and make it available to all the workflows that work with Omniverse. It's one thing to see static imagery that's been rendered beautifully, and teams have done a great job of that. But where it really gets complex is when you take on a challenge like a digital human. How do you take things that are perfectly imperfect and make them believable? So what do I mean by perfectly imperfect? Well, we know what a human looks like. We are familiar yet different.

And one very challenging area is creating the materials and rendering of humans. High-quality, realistic rendering of digital humans has always been labor-intensive and time-consuming. Modeling the realistic appearance of skin, hair, eyes, clothing, and every other part of a digital human is extremely challenging. This is Digital Mark.

He's our first reference digital human, and he was used to develop a few of the technologies we're making available today in Omniverse. We want to simplify how realistic digital human appearances are created so that anyone can easily achieve these effects. And later this week, there's a talk dedicated specifically to digital humans in Omniverse.

Be sure to check it out. Materials are a key element of getting the rendering right. Today, we are introducing OmniSurface, a physically based uber material for rendering complex surfaces, built on MDL. MDL, the Material Definition Language, is a core component of the Omniverse platform for describing physically correct materials. It is a portable language that layers and mixes BSDFs to physically model a diverse range of materials, including plastic, metallic car paint, foliage, human skin, fabric, and much more. It is designed to simplify look development for end users, even for subjects as demanding as digital humans.
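As a rough sketch of how such an MDL material might be wired into a USD scene for Omniverse, here is an example using the UsdShade API. The module path, subidentifier, and input name are assumptions made for illustration; the exact OmniSurface parameter names should be taken from its documentation.

```python
# Sketch: authoring a USD material that points at an MDL module (names assumed).
from pxr import Usd, UsdGeom, UsdShade, Sdf

stage = Usd.Stage.CreateNew("omnisurface_example.usda")
sphere = UsdGeom.Sphere.Define(stage, "/World/Head")

# Material prim with an MDL shader prim underneath it.
material = UsdShade.Material.Define(stage, "/World/Looks/SkinMat")
shader = UsdShade.Shader.Define(stage, "/World/Looks/SkinMat/Shader")
shader.SetSourceAsset(Sdf.AssetPath("OmniSurface.mdl"), "mdl")   # assumed module path
shader.SetSourceAssetSubIdentifier("OmniSurface", "mdl")         # material inside the module
shader.CreateOutput("out", Sdf.ValueTypeNames.Token)

# "diffuse_reflection_color" is a hypothetical input name for illustration.
shader.CreateInput("diffuse_reflection_color",
                   Sdf.ValueTypeNames.Color3f).Set((0.8, 0.6, 0.5))

# Route the shader through the material's "mdl" render context and bind it.
material.CreateSurfaceOutput("mdl").ConnectToSource(shader.ConnectableAPI(), "out")
UsdShade.MaterialBindingAPI(sphere.GetPrim()).Bind(material)

stage.GetRootLayer().Save()
```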

Using AI to make 3D easier, let's take a look at how research paves the way for next-generation workflows. GANverse3D was developed by our research lab in Toronto. A GAN trained purely on 2D photographs is manipulated to synthesize multiple views of thousands of objects, in this case cars. The synthesized dataset is then used to train a neural network that predicts 3D geometry, texture, and part-segmentation labels from a single photograph, using a differentiable renderer at its core. For example, a single photo of a car can be turned into a 3D model that can drive around a virtual scene, complete with realistic headlights, taillights, and blinkers.
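This is not NVIDIA's implementation, just a tiny conceptual sketch of the inverse-graphics training pattern described here: a network predicts geometry offsets and a texture from an image, a differentiable renderer re-renders that prediction, and the image-space difference drives the gradients. The network layout, the `diff_render` callable, and all shapes are placeholders.

```python
# Conceptual sketch of training through a differentiable renderer (placeholders).
import torch

class PredictionNet(torch.nn.Module):
    """Placeholder network: image -> per-vertex offsets and a texture map."""
    def __init__(self, num_vertices, tex_res=64):
        super().__init__()
        self.tex_res = tex_res
        self.backbone = torch.nn.Sequential(
            torch.nn.Conv2d(3, 32, 4, stride=2, padding=1), torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
        )
        self.to_offsets = torch.nn.Linear(32, num_vertices * 3)
        self.to_texture = torch.nn.Linear(32, tex_res * tex_res * 3)

    def forward(self, image):
        feat = self.backbone(image)
        offsets = self.to_offsets(feat).view(-1, 3)
        texture = torch.sigmoid(self.to_texture(feat)).view(3, self.tex_res, self.tex_res)
        return offsets, texture

def train_step(net, diff_render, template_verts, image, camera, optimizer):
    """One step: render the predicted 3D shape and match the input photo."""
    offsets, texture = net(image)
    verts = template_verts + offsets                  # deform a template mesh
    rendered = diff_render(verts, texture, camera)    # differentiable rendering
    loss = torch.nn.functional.l1_loss(rendered, image)
    optimizer.zero_grad()
    loss.backward()   # gradients flow back through the renderer into the net
    optimizer.step()
    return loss.item()
```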

This accelerates 3D modeling workflows and gives new capabilities to those who don't have experience in 3D modeling, and GANverse3D is available today in Omniverse. Let's take a look at this research in action. Here is one of our artists' experiments, featuring an AI- and Omniverse-accelerated workflow using the GANverse3D extension, Omniverse Create, Reallusion Character Creator 3, and Adobe Photoshop.

First up, using GANverse3D, they were able to quickly create car assets for the scene. Next, Omniverse's real-time physics allowed them to blow up those assets and stack them on top of each other. Then, using Character Creator 3, they were able to pose and concept out the character for the scene. Once that was finalized, they used the Omniverse Connector to easily bring the character back into the scene. Finally, they used our real-time ray-traced renderer for the highest-quality rendering, lighting, and atmospherics. Let's talk about the future of digital twins and virtual worlds.

In particular, I want to start by talking about training robots in a virtual world compared to the physical world. In the physical world, you can plug a robot into a computer and train it to do certain things, and that robot will learn how to do those things. But in a virtual world, you can have hundreds or thousands of robots, and using AI, you can train those robots in the virtual world so that when you take all of those learnings and download them into the physical robot, it's going to be many thousands of times smarter. True-to-reality simulation achieves faster time to production, allowing software to be developed before the hardware exists and hardware changes to be prototyped and validated before ever being built. Developing robots in the virtual world before the physical world has three main advantages. First, training the robots.

Autonomous machines are built on AI engines that require large datasets, which can be time-consuming and costly to acquire. Second, testing the robots. Automated testing in simulation ensures software quality and allows corner cases to be validated that would be impossible to cover in the real world.

And finally, scale: multiple robots in large environments demand a scalable solution. At our recent GTC, we showed a vision of the factory of the future with BMW. As one of the premier automotive manufacturing brands, BMW produces over two and a half million vehicles per year. 99% of those vehicles are custom, from tailored performance packages to custom trim and interior options. BMW must continually strive to exceed the expectations of their most exacting customers.

Producing custom vehicles at this scale requires a tremendous amount of flexibility and agility in their manufacturing process. The NVIDIA Omniverse platform has allowed BMW to take a completely new approach to planning highly complex manufacturing systems. Omniverse integrates a wide range of software applications and planning data, and allows for real-time collaboration across their teams and geographic locations. Autonomous vehicles and other autonomous machines need AI to perceive the world around them. The world for an autonomous vehicle is a complicated one. There are street signs, other vehicles, pedestrians, obstacles, weather, and so on.

Training the AI agents to handle all of the objects and scenarios they might encounter on the road requires a massive amount of properly labeled data. And even when collecting and labeling this real-world data, some situations are so dangerous or so infrequent that you can't capture them in your dataset. Photorealistic synthetic data generated in Omniverse can close that data gap and help deliver robust AI models. At GTC, we announced that Bentley is building their digital twin platform, called iTwin, on the Omniverse platform.

Now, this integration allows engineering-grade, millimeter-accurate digital content to be visualized with photo-realistic lighting and environmental effects on multiple devices, including web browsers, workstations, tablets, and virtual and augmented reality headsets, from anywhere in the world. The combination of Bentley iTwin and NVIDIA Omniverse provides an unmatched high-performance user experience at a scale that had previously not been possible. Let's take a look at the progress being made.

Next, I want to talk about a new project called the Cognitive Mission Manager, or CMM, for wildfire suppression. The CMM group is an AI visualization research team at Lockheed Martin. They have started a long-term project to develop, on the Omniverse platform, an AI mission manager that performs prediction and makes suppression recommendations for wildfire management. This is an extremely important topic and something that we have all become acutely aware of over the past few summers. Omniverse will connect new and existing AI technology to predict how quickly fires will spread, in which direction, and how environmental variables like wind, moisture content, and the type of ground cover being burned will affect the behavior of the fire, so firefighting teams can better respond and reduce the damaging impact of these fires. Core design elements include a federated system that enables decision-making both centrally and at the edge, tunable AI that allows for varying levels of human intervention, explainable predictions, and course-of-action recommendations.

This is important work that can save forests and, more importantly, lives. We announced Omniverse Enterprise at our recent GTC, and today I'm thrilled to say that we're now moving into limited early access of that platform with a few of our key partners, partners who are helping us test and validate Omniverse in their workflows. Partners like ILM, Industrial Light & Magic, have been actively evaluating Omniverse in their workflow and providing incredible feedback. BMW has been working with us to visualize the factory of the future. Foster + Partners, one of the leading architectural firms in the world, has been evaluating Omniverse to collaborate across their different geographic locations.

Omniverse is for everyone, everywhere. We've had over 50,000 individuals download the Omniverse open beta and over 400 companies now actively testing the platform across major industries, industries like architectural visualization, media and entertainment, game development, research visualization, robotics, and of course, autonomous driving. Omniverse is everywhere. Omniverse is a platform that enhances existing workflows. It doesn't replace them. It makes them better. By connecting your existing software products to Omniverse, you get to take advantage of the latest and greatest technologies that NVIDIA is developing. And by working together with our pioneers, who are helping to evaluate the platform, and our partners who are certifying their hardware to run the platform, and of course our ISV partners who are building their products to connect to it, we are helping grow an open platform and connecting the worlds together.

Next up, I'd like to give you some updates on some of the things we're doing with our partners and Omniverse. NVIDIA and Adobe have been collaborating on a Substance 3D plugin that will enable Substance material support in Omniverse. This will unlock a new material editing workflow that allows Substance materials to be adjusted directly within Omniverse. These materials can be sourced from the Substance 3D asset content platform, or they can be created in Substance 3D applications. As an industry standard, Substance will strengthen the Omniverse ecosystem by empowering 3D creators with access to materials from Substance 3D Designer and Substance 3D Sampler.

And here's one we're really excited about. Tangent, Blender, and NVIDIA have collaborated to bring USD support to Blender 3.0, and this will be available in the main Blender branch. What's so amazing about this is that it brings a way to connect Omniverse to 3 million Blender artists. And along with that Blender announcement, I'm happy to tell you that Blender will be available directly in the Omniverse Launcher with the open beta.

And we will have support for the Omniverse Universal Material Mapper directly in Blender 3.0. And just as Omniverse is so important for our customers, it's equally important for our developer community. So today, we're extending the NVIDIA developer program to include Omniverse, with specific tools, examples, and tutorials on how to develop using the Omniverse Kit SDK.
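To give a flavor of what Kit development looks like, here is a minimal sketch of an extension skeleton based on the publicly documented omni.ext interface; the extension id and printed messages are illustrative, and the canonical template ships with the Kit SDK itself.

```python
# Minimal sketch of an Omniverse Kit extension (illustrative names).
import omni.ext

class HelloOmniverseExtension(omni.ext.IExt):
    """Lifecycle hooks that Kit calls when the extension is enabled or disabled."""

    def on_startup(self, ext_id: str):
        # Called when the extension is enabled in a Kit-based app.
        print(f"[hello.omniverse] startup: {ext_id}")

    def on_shutdown(self):
        # Called when the extension is disabled or the app shuts down.
        print("[hello.omniverse] shutdown")
```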

This extension of the developer program brings Omniverse to the more than 2.5 million developers currently in the program and makes it available to our Inception program for startups. We have trained over 300,000 developers across AI, accelerated computing, and accelerated data science. This is the first time DLI is offering free, self-paced, hands-on training for the graphics market. We are announcing DLI training for Omniverse, starting with "Getting Started with USD for Collaborative 3D Workflows," available today at nvidia.com/dli.

The self-paced course will take you through the important concepts of layer composition, references, and variants, and it includes hands-on exercises and live scripted examples. This is the first in a series of new Omniverse courses for creators and developers. There will also be a new teaching kit for 3D graphics and Omniverse, based on consultations with some of the top film and animation schools in our studio education partner program and designed for college and university educators looking to incorporate graphics and Omniverse into their classrooms. Sign up today for early access, again at nvidia.com/dli.
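To make the composition concepts the course covers a little more concrete, here is a minimal sketch, again using Pixar's USD Python bindings, of the two features beyond sublayering: references and variants. File, prim, and variant names are illustrative.

```python
# Sketch: referencing a reusable asset and switching variants (names illustrative).
from pxr import Usd, UsdGeom

# An asset layer containing a reusable prop with a default prim.
asset_stage = Usd.Stage.CreateNew("chair.usda")
chair = UsdGeom.Cube.Define(asset_stage, "/Chair")
asset_stage.SetDefaultPrim(chair.GetPrim())
asset_stage.GetRootLayer().Save()

# A scene that references the asset twice instead of copying it.
scene = Usd.Stage.CreateNew("scene.usda")
for name in ("Chair_01", "Chair_02"):
    prim = scene.DefinePrim("/World/" + name)
    prim.GetReferences().AddReference("chair.usda")

# A variant set on one prim switches between named alternatives.
prim = scene.GetPrimAtPath("/World/Chair_01")
vset = prim.GetVariantSets().AddVariantSet("color")
for variant, value in (("red", (1.0, 0.0, 0.0)), ("blue", (0.0, 0.0, 1.0))):
    vset.AddVariant(variant)
    vset.SetVariantSelection(variant)
    with vset.GetVariantEditContext():
        UsdGeom.Gprim(prim).CreateDisplayColorAttr([value])
vset.SetVariantSelection("red")
scene.GetRootLayer().Save()
```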

NVIDIA is a computing platform, with over 1 billion CUDA GPUs in the marketplace, over 250 exaFLOPS in the cloud, over 2,000 GPU-accelerated applications, and over 27 million downloads of CUDA. There are 150 SDKs, over 8,000 startup companies in our Inception program, and over 2.5 million active developers. With this computing platform, which brings together graphics and AI, along with our software and hardware partners, our pioneers, and our researchers, we are bringing you the reality of the metaverse. This week at SIGGRAPH, we have a lot in store for you, from papers and panels to incredible demos, an art show, and our very first Omniverse user group meeting.

So I invite you to go to our website and build out a schedule for a very busy week ahead. From all of us at NVIDIA, we'd like to thank our hardware and software partners, our researchers, our academic institutions, our friends, and especially our families. It's been a crazy year and a half, but we look forward to seeing you all in person real soon. Now, before we go, we have a special treat for you. At this past GTC, our creative team created a holodeck, a virtual kitchen, for the keynote.

That keynote was seen over 20 million times. It was a collaboration between 50 NVIDIANs across research, engineering, product groups, and creative teams. They used Omniverse Create and Omniverse View, along with third-party products like Substance Painter, Maya, 3ds Max, Houdini, and DaVinci Resolve. And all of the batch rendering was done with Omniverse Farm. They were able to accomplish in weeks what would have taken months. We made a documentary about this revolutionary keynote, and it'll be coming out later this week here at SIGGRAPH.

But I want to share with you a glimpse of the amazing work done by our creative team. Please take a look, enjoy SIGGRAPH, and thank you very much for being here. We're doing this awfully early.

Amazing increase in system and memory bandwidth, the base building block of the modern data center. What I'm about to show you brings together the latest in GPU-accelerated computing. Today, we're introducing a new kind of computer. Come along.

