NVIDIA SC22 Special Address

Supercomputing is the driving force of discovery in every field, scientific to industrial, allowing researchers to understand the behavior of the smallest particles and the furthest expanses of the universe, and to unlock the meaning of life. And with digital twins, it is giving industries superpowers to time travel, letting them explore an infinite number of futures through different lenses. With million-x higher performance powered by accelerated computing, data-center scale, and AI, supercomputing will unlock new opportunities for us all.

Computing is the instrument of scientific discovery, the engine of industrial automation, and the factory of AI. Computing is the bedrock of modern civilization, and it is going through a full-scale reinvention. For two decades, from the mid-80s to the mid-2000s, CPU performance scaled with transistors, an order of magnitude every five years, increasing ten thousand times over 20 years. But CPU performance scaling has plateaued; incremental performance now comes with disproportionate increases in cost and power. Meanwhile, demand for computing continues to grow exponentially across science and industry, driven by a broad range of applications, from digital biology and climate science to AI, robotics, warehouse logistics, and consumer internet services. Without a new approach, computing costs and, even more urgently, computing power will grow exponentially in the coming years. Data center electricity has already reached nearly two percent of global electricity use. Exponentially growing demand is coming up against moratoriums on the energy consumption of data centers and corporate commitments to achieve net zero. The industry has awakened to the need to advance computing in a post-Moore's-law world: a 100x growth in computing within the next decade is unsustainable without a new computing approach.

It is now broadly accepted that accelerated computing is that approach, but it takes work. Accelerated computing requires full-stack optimization, where software and hardware are co-designed. We optimize the entire stack, including the chip, compute node, networking, storage, infrastructure software, acceleration libraries, and the application, and this is done one application domain at a time. The work done for molecular dynamics differs from that of fluid dynamics, seismic processing, CT reconstruction, quantum chemistry, ray tracing, logistics optimization, data processing, and deep learning. The diversity and combination of applications, algorithms, and computing infrastructure are daunting.

NVIDIA is a multi-domain acceleration platform with full-stack optimization for a wide range of science and industrial applications. NVIDIA is in clouds, supercomputing and enterprise data centers, PCs, industrial edge devices, robots, and cars. Our dedication to architecture compatibility has created an installed base of hundreds of millions of GPUs for researchers and developers, and NVIDIA's rich ecosystem connects computer makers and cloud service providers to nearly every domain of science and industry. The results of accelerated computing are spectacular: accelerated workloads can see an order-of-magnitude reduction in system cost and energy consumption, and with those speedups and savings, applications from molecular dynamics to climate simulation to deep learning have scaled up one hundred thousand to a million times in the last decade. We have dedicated the full force of our company to advancing computing in this new era. Today we will update you on our latest work.
Ian Buck will announce next-generation data center platforms. Tim Costa will update you on our acceleration libraries and quantum computing work. Geetika Gupta will announce new platforms for remote sensing and edge computing. And Dion Harris will highlight our work applying AI to physics, called Modulus physics-ML, and AI for physical systems with Omniverse digital twins.

I want to take this opportunity to congratulate Jack Dongarra on receiving the Turing Award. Jack's seminal work on numerical libraries, MPI for scale-out distributed computing, and standard benchmarking to objectively measure the performance of a computer will continue to be the foundation of high-performance computing for years to come. I am very much looking forward to his Turing Award lecture. Have a great Supercomputing 2022.

Thanks, Jensen. One of the most exciting things about coming to Supercomputing each year is the amazing work being done by the HPC community. Scientific computing is a critical tool in solving some of the greatest challenges facing our world today, and over the last year there have been some amazing breakthroughs powered by HPC. A combined team from Stanford, Oxford Nanopore, NVIDIA, UCSC, and Google published a method in which an entire genome was sequenced in just seven hours; with over 7,000 rare genetic disorders known today, this new workflow offers hope for patients with undiagnosed or rare conditions. University of Pennsylvania researchers used convolutional neural networks to catalog and classify the shapes of over 27 million galaxies. Using data from the Sloan Digital Sky Survey and the Dark Energy Survey, they created a model that was 97 percent accurate in detecting even the faintest galaxies, resulting in the largest catalog of galaxies and their shapes to date. Early detection of COVID-19 variants is paramount for managing the ongoing pandemic. A team led by Karim Beguir used a large language model with diffusion DNNs to generate a theoretical COVID-19 genome variant, then simulated it using OpenMM and predicted its threat level using AlphaFold. Their results showed that this variant had the potential to escape immunity. This early-warning system could evaluate new variants in minutes and provide critical information to help contain the spread of COVID-19.
All of these amazing breakthroughs were powered by a diverse mix of systems across multiple sites, extending to the edge, connected to the cloud, and even to remote sensors. We see five major workloads powering breakthroughs in HPC. Simulation sits at the center and foundation of HPC and will continue to be the bedrock of science. In addition, HPC plus AI can improve scientific productivity by several orders of magnitude; over the last five years, the number of research papers published on AI-accelerated simulation has increased by 50 times. Supercomputers are being brought closer to edge experiments, turning data-collection instruments into real-time, interactive research accelerators. Digital twins use simulation, AI surrogate models, and observed data to create real-time digital twins that are revolutionizing industrial and scientific HPC. And finally, quantum computing: a ton of research is being conducted at supercomputing centers today to emulate the quantum accelerators of tomorrow. The modern supercomputer will leverage all of these technologies to solve the grand challenges of the 21st century.

Accelerated computing is a full-stack problem that requires coordinated innovation at all layers of the stack. It starts with amazing hardware: our CPUs, GPUs, and DPUs are integrated into hardware platforms from the edge, to on-premises, to the cloud. NVIDIA Mellanox networking is the connective tissue of the accelerated data center, making the data center the new unit of compute. With SDKs, frameworks, and platforms, we aim to give researchers the technology to build the software the world needs for the next discovery. These include Holoscan, our edge computing and AI platform that captures and analyzes streaming data from medical devices and scientific instruments; Modulus, a framework to help build AI models that learn from simulation data and physical equations; the NVIDIA HPC SDK, which simplifies application development by providing GPU-optimized acceleration libraries and support for ISO standard languages; NVIDIA's quantum computing technologies, cuQuantum and QODA, which are helping bring quantum computing and quantum applications closer to reality; and Omniverse, which provides an open, collaborative environment to co-locate multiple heterogeneous types of data and visualize it all.

As an accelerated data center platform company, we rely on our partners to build integrated solutions that power the workloads of the modern data center. The NVIDIA H100 GPUs are in full production, and our OEM partners are unveiling dozens of new HGX and OVX systems at Supercomputing this year; be sure to stop by their booths to see the incredible work we're doing together.

The foundation of the NVIDIA stack is the processors themselves, and we couldn't be more excited about Hopper and the NVIDIA H100. Earlier this year, NVIDIA announced the H100 GPU, built with a custom TSMC 4N process. It features five groundbreaking inventions. A faster, more powerful Tensor Core, six times faster than its predecessor in Ampere, is built to accelerate Transformer networks, the most important deep learning model today. The second-generation MIG, or Multi-Instance GPU, partitions the GPU into smaller compute units and can divide each H100 into seven separate instances, greatly boosting the number of GPU clients available to data center users. Confidential computing allows customers to keep data secure while it is being processed, maintaining privacy and integrity end to end on shared computing resources.
Our fourth-generation NVLink allows GPUs to communicate faster than ever before, at 900 gigabytes per second of bandwidth between server nodes, scaling up to 256 GPUs to tackle the massive workloads of the AI factories of the future. And the DPX instructions speed up recursive optimization problems like gene sequencing, protein folding, and route optimization by up to 40 times.

The Golden Suite is a tool we use inside NVIDIA to measure progress across HPC, AI, and data science workloads. It is a combination of common applications like AMBER, GROMACS, NAMD, Quantum ESPRESSO, ICON, Chroma, and VASP, along with common AI training benchmarks like BERT-Large and ResNet-50, plus random forest data analysis. You can see the performance of CPU-based servers starting with a dual-Broadwell system, common in 2016. Over the past six years, CPU-only servers have improved by less than about 4x, while a P100-equipped server was already 8x faster than that Broadwell baseline. Today we're very pleased to show that the flagship of our product line, the NVIDIA H100, achieves nearly 250 times the performance of that original 2016 system.

Every day more applications take advantage of GPU acceleration, but many hundreds of legacy applications have yet to adopt it. In addition, we're entering an era where performance and efficiency must grow together, and the cost of energy is more uncertain than ever. We are very excited about what our first data center CPU will enable. The Grace CPU Superchip pairs the highest-performance Arm Neoverse V2 CPU cores with one terabyte per second of memory bandwidth, and it's designed for the compute-intensive and memory-bound applications common in HPC. While Grace's performance excels, its focus is on energy efficiency: it can provide up to 2.4x better performance per watt than today's CPUs.

Even applications that have adopted accelerated computing have large portions that remain CPU-limited, either because the cost of communication to the GPU is too high or because refactoring the vast lines of code still running on the CPU just hasn't been taken on. The Grace Hopper Superchip offers a first-of-its-kind NVLink chip-to-chip interconnect, so both the CPU and GPU have coherent access to a combined 600 gigabytes of memory. This capability bridges the gap for legacy CPU applications and makes accelerating HPC in ISO standard languages truly possible. Compared to A100-based solutions, Grace Hopper will deliver two to five times more HPC performance. Just like the Grace CPU, Grace Hopper is very energy efficient: depending on the needs of the workload, Grace Hopper can dynamically share power between the CPU and the GPU to optimize application performance, making it an excellent choice for energy-efficient HPC centers. Assuming a one-megawatt data center with 20 percent of the power allocated to CPUs and 80 percent to accelerators, data centers using Grace and Grace Hopper can get 1.8x more work done for the same power budget compared to a traditional x86 deployment. Grace and Grace Hopper are the path forward to maximize performance and save on energy costs.
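As a rough back-of-the-envelope reading of that claim (my arithmetic, not a figure from the address): assume the one-megawatt budget splits into 200 kW for CPUs and 800 kW for accelerators, and write $s_{\text{cpu}}$ and $s_{\text{acc}}$ for the performance-per-watt gains of each partition over the x86 baseline. Work done at fixed power then scales roughly as

$$W \approx 0.2\,s_{\text{cpu}} + 0.8\,s_{\text{acc}},$$

so with the quoted $s_{\text{cpu}} \approx 2.4$, an accelerator-side gain of about $1.65\times$ would be consistent with the stated $1.8\times$ overall.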

In the future, every company will have AI factories, and supercomputers need to become cloud native. The NVIDIA cloud-native supercomputing platform brings together several technologies to provide performance, storage, and security: one, accelerated performance with programmable in-network computing; two, security and isolation, including job isolation and performance isolation; three, computational storage functions, including compression and file system management; and four, enhanced telemetry for smart scheduling to improve utilization of supercomputing resources.

The combination of NVIDIA's Quantum-2 400-gigabit-per-second InfiniBand switch and the BlueField-3 DPU brings amazing in-network computing capabilities. The NVIDIA Quantum-2 switch includes integrated hardware engines for data reduction operations, covering both small and large messages, all at 400 gigabits per second. BlueField-3 includes hardware engines for MPI tag matching and all-to-all communications, along with Arm cores and the datapath accelerator. This combination enables MPI and NCCL offloads from the host to the network and a new DPU computing platform, increasing the performance of AI and scientific simulations. As a result, we're accelerating a variety of applications by 20 percent or more.

Microsoft ventured very early into machine learning and artificial intelligence with NVIDIA, but it has also been a long-time pioneer in traditional HPC and has leveraged the performance and flexibility of InfiniBand across its HPC instances. At Supercomputing, Microsoft is announcing two new instances with NVIDIA Quantum-2 400-gigabit-per-second InfiniBand networking that will turbocharge HPC workloads from CFD to FEA to molecular dynamics and weather simulation.

We have already entered the era of exascale AI. In the last year, five new exascale AI systems based on H100 GPUs, Grace Hopper, and Grace CPU Superchips have been announced. In 2021 we announced Alps at CSCS, and earlier this year at ISC we highlighted Venado at Los Alamos. The most recently announced AI supercomputers are MareNostrum 5 at the Barcelona Supercomputing Center, powered by H100 GPUs, and Shaheen III at the KAUST Supercomputing Center, powered by Grace Hopper Superchips. We look forward to seeing how researchers take advantage of these new exascale AI systems. Next up, we have Tim Costa to share more about our accelerated libraries and the work we're doing in quantum computing.

Thanks, Ian. Today's grand challenges require simulation to occur at unprecedented scales. At the same time, domain scientists want to focus on reducing time to science rather than on increasingly complex computer science. Enabling developers to address this scale and technology with velocity requires a complete bottom-to-top ecosystem of software libraries, tools, frameworks, and applications, with innovative features for data-center-scale accelerated computing. Enabling developers to focus on science starts with an investment in accelerated libraries, from linear algebra to signal processing, quantum simulation, communication, data analytics, AI, and much more. NVIDIA libraries provide a foundation upon which scientists can build applications that get the best performance from the accelerated data center while transcending hardware generations, all with drop-in ease of use. And the scope of NVIDIA libraries continues to grow, with new libraries that enable novel compute applications and groundbreaking features like multi-node, multi-GPU support.
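To give a flavor of that drop-in ease of use, here is a minimal sketch using Thrust, one of the CUDA acceleration libraries in this family. The choice of a sort workload and the file name are illustrative assumptions, not an example from the address.

```cpp
// thrust_sort.cu -- illustrative only; compile with: nvcc thrust_sort.cu
#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <thrust/sort.h>
#include <cstdio>
#include <cstdlib>

int main() {
    const int n = 1 << 20;
    thrust::host_vector<float> h(n);
    for (int i = 0; i < n; ++i)
        h[i] = static_cast<float>(std::rand()) / RAND_MAX;

    thrust::device_vector<float> d = h;   // copy the data to the GPU
    thrust::sort(d.begin(), d.end());     // sorted on the device, no hand-written kernels
    h = d;                                // copy the sorted result back

    std::printf("min=%f max=%f\n", h.front(), h.back());
    return 0;
}
```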
Over the past 20 years, NVIDIA's investment in enabling scientists has transformed the field of programming languages. The launch of CUDA in 2007 ushered in a revolution in the accessibility of accelerated computing, and in 2020 we announced another breakthrough with support for standard languages running natively on GPUs. With the NVIDIA HPC SDK, C++, Fortran, and Python developers can now write parallel-first code directly in their language of choice and benefit automatically from GPU acceleration.

But the true measure of these software investments is application impact. Magnum IO's NCCL optimizes collective communication patterns for multi-node, multi-GPU execution. VASP is one of the most widely used HPC applications in the world; we've worked with the VASP developers to integrate NCCL into the application, resulting in never-before-seen scalability. FFTs are an essential component in many scientific domains. The cuFFT library now supports multi-node, multi-GPU execution, enabling large FFTs to scale to full data-center sizes. When integrated into GROMACS, this results in a 5x performance improvement over the existing code, as well as the ability to scale out to significantly larger problems than before.

The roadmap for stdpar, or standard language parallelism, is rich. The C++ committee is now working on adopting a new model for asynchrony called senders, which gives C++ programmers a way to express asynchrony and concurrency. This feature is expected in C++26, but we believe it is too important to wait, and we've created an implementation of this proposal that's available in our HPC SDK. With our senders implementation, the Palabos application for lattice Boltzmann simulation has been ported to stdpar using senders. We were able to strong-scale Palabos to 512 GPUs with near-perfect scaling, and remember, this is pure standard C++: no additional programming model for the GPU, for the node, or for multi-node communication. This is just the start of what's possible for developers adopting standard parallel languages in their applications.
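As an illustration of what parallel-first ISO C++ looks like, here is a minimal stdpar-style sketch. This is my example, not the Palabos code; the `nvc++ -stdpar=gpu` flag is the commonly documented way to target GPUs with the HPC SDK and is treated here as an assumption.

```cpp
// daxpy_stdpar.cpp -- a DAXPY-style loop in pure standard C++.
// With the NVIDIA HPC SDK this can be built for GPUs roughly as:
//   nvc++ -stdpar=gpu daxpy_stdpar.cpp
#include <algorithm>
#include <execution>
#include <vector>
#include <cstddef>
#include <cstdio>

int main() {
    const std::size_t n = 1 << 24;
    const double a = 2.0;
    std::vector<double> x(n, 1.0), y(n, 3.0);

    // y = a*x + y. The execution policy lets the implementation run the loop
    // in parallel on whatever hardware the compiler targets.
    std::transform(std::execution::par_unseq,
                   x.begin(), x.end(), y.begin(), y.begin(),
                   [a](double xi, double yi) { return a * xi + yi; });

    std::printf("y[0] = %f\n", y[0]);  // expect 5.0
    return 0;
}
```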
Quantum computing is a fundamentally new model of computing, with many unique challenges and opportunities to impact a broad range of applications. The progress made by the community over the past decade has been impressive: from one- and two-qubit devices at institutes of higher education a decade ago, to systems with tens to hundreds of qubits available to general users in public clouds today. This is remarkable progress. However, to reach the point of useful quantum computing, tremendous work remains. On the hardware side, quantum processor builders need to continue improving qubit scale and fidelity. It is generally understood that quantum computers will be ready to act as accelerators for some important applications when they reach fault-tolerant quantum computing, with thousands to millions of qubits error-corrected down to hundreds or thousands of logical, fault-tolerant qubits. Just as important is the work to be done in software, applications, and algorithms. With improved classical-quantum integration and innovation in algorithms and applications, the scale and fidelity required for useful quantum computing can be reduced, accelerating the path toward useful quantum computing. To meet that challenge and prepare for a quantum-accelerated future, governments, institutes of higher education and research, and industry are investing heavily across the board in hardware, software, and algorithm development.

GPU supercomputing is essential for quantum computing in two areas. The first is quantum circuit simulation. With quantum circuit simulation on the NVIDIA platform, researchers can develop algorithms at the scale of valuable quantum computing long before the hardware is ready; on the NVIDIA platform, we're already simulating quantum algorithms with tens of thousands of perfect qubits, representing the future state of fault-tolerant quantum computing. The second area where GPU supercomputing is essential is hybrid quantum-classical computing. As we move past basic algorithm R&D and work on building full quantum applications with tight classical-quantum integration, a platform for hybrid quantum-classical computing with emulated quantum resources is an essential research platform. Long term, all useful applications of quantum computing will be hybrid, with quantum computers acting as accelerators for key kernels alongside GPU supercomputing.

In 2021 we introduced cuQuantum, an SDK for accelerating quantum circuit simulation. cuQuantum is built to accelerate all circuit simulation frameworks and is integrated into Cirq, Qiskit, PennyLane, Orquestra, and more. With cuQuantum, researchers can simulate ideal or noisy qubits with a scale and performance not possible on today's quantum hardware or with unaccelerated simulators. cuQuantum has been adopted by a broad range of groups spanning the entire quantum ecosystem, including supercomputing centers, academic groups, quantum startups, and some of the largest companies in the world. BMW is leveraging cuQuantum to optimize pathfinding and routing for robots. Large consultancies like Deloitte and SoftServe are developing quantum machine learning applications for materials and drug discovery to address their customers' most pressing problems. Fujifilm is leveraging cuQuantum to explore tensor network methods for materials science simulation with thousands of qubits.

The cuQuantum Appliance is a container consisting of leading community frameworks, accelerated by cuQuantum and optimized for the NVIDIA platform. Coming in Q4 of this year, the cuQuantum Appliance will provide native multi-node, multi-GPU quantum simulation. This means scientists can leverage an entire accelerated supercomputer as a single quantum resource, through a software container with the same familiar interfaces they are using for their quantum work today. Recently, researchers leveraged the cuQuantum Appliance to participate in the ABCI Grand Challenge. A variety of problems, including quantum volume, quantum phase estimation, and the quantum approximate optimization algorithm, were run across 64 nodes on up to 512 GPUs. The performance achieved was up to 80 times better than alternative multi-node quantum circuit simulation solutions, enabling problems at scales that would otherwise be time-prohibitive.

We're really excited about the results our partners are seeing with cuQuantum, and I'd like to point out a few recent highlights. We recently partnered with Xanadu to integrate cuQuantum into PennyLane, the leading framework for quantum machine learning, and with AWS to make that available to customers through their Braket service. Combining these tools, AWS saw speedups of over 900 times on simulating quantum machine learning workloads, along with a three-and-a-half-times reduction in cost for their users. Xanadu is also leveraging cuQuantum for research into novel quantum algorithms at supercomputing scale. In general, the ability to simulate quantum circuits is limited by system memory, with the world's largest supercomputers limited to qubit scales in the mid-to-high 40s.
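A standard estimate makes that memory ceiling concrete (my arithmetic, not a figure from the address): a full state vector for $n$ qubits in double-precision complex amplitudes needs

$$2^{n} \times 16 \ \text{bytes},$$

which is about 16 TiB at $n = 40$, 0.5 PiB at $n = 45$, and 4 PiB at $n = 48$, so every additional qubit doubles the memory footprint.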
Using a novel circuit-cutting technique, Xanadu researchers were able to accurately simulate the quantum approximate optimization algorithm with up to 129 qubits, running on Perlmutter through NERSC's quantum information science initiative. And Johnson & Johnson, who recently spoke about this result at our GTC conference, has seen a 100x speedup from cuQuantum in their work applying a variational quantum eigensolver to the 7-mer protein folding problem, in a collaboration with Strangeworks.

A critical consideration as we look toward quantum-accelerated applications is that they will not run exclusively on a quantum resource; they will be hybrid, quantum and classical in nature. To transition from algorithm development by quantum physicists to application development by domain scientists, we need a development platform built for hybrid quantum-classical computing that delivers high performance, interoperates with today's applications and programming paradigms, and is familiar and approachable to domain scientists. To address this challenge, we recently announced the NVIDIA Quantum Optimized Device Architecture, or QODA. QODA is the platform for hybrid quantum-classical computing, built to address the challenges facing application developers and domain scientists looking to incorporate quantum acceleration into their applications. QODA is open and QPU-agnostic: we're partnering with quantum hardware companies across a broad range of qubit modalities to ensure it provides a unified platform that enables all hybrid quantum-classical systems. QODA integrates with today's high-performance applications and is interoperable with leading parallel programming techniques and software. It allows domain scientists to quickly and easily move between running all or parts of their applications on CPUs and GPUs, simulated quantum processors, and physical quantum processors.

And now I'm really excited to show our first proof point, running QODA with both an emulated quantum resource leveraging cuQuantum and a physical QPU, Quantinuum's H1-2 processor. In this experiment, we're running the variational quantum eigensolver. VQE is a hybrid algorithm for computing the ground state of a Hamiltonian and is key to both quantum chemistry and condensed matter physics. The plot on the left is simulated with QODA compiled to a cuQuantum backend running on an A100 GPU, representing what you would expect from perfect qubits. The plot on the right is the exact same QODA code, now compiled to execute the quantum kernels on the Quantinuum processor, simply by altering a compiler flag. This is a simple example, but it demonstrates how easy it is to use both quantum and classical computing resources with QODA. And this is just the start: we will continue to work to enable all quantum processors to be accessible through QODA, ensuring that researchers can leverage the best resources for classical computing, simulated quantum computing, and physical quantum computing.
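To make the hybrid loop concrete, here is a toy, classically emulated sketch of the structure VQE follows: a classical outer loop proposes circuit parameters, and a "quantum" expectation value is evaluated for each guess. This is an illustration only; it does not use the QODA API, and the single-qubit Hamiltonian and parameter scan are my choices.

```cpp
// toy_vqe.cpp -- a classically emulated sketch of the VQE loop (not QODA code).
#include <cmath>
#include <cstdio>

// Expectation value of Z for the state Ry(theta)|0> = [cos(t/2), sin(t/2)].
// In a real hybrid setup this step would run on a simulated or physical QPU.
double expval_z(double theta) {
    const double c = std::cos(theta / 2.0), s = std::sin(theta / 2.0);
    return c * c - s * s;  // equals cos(theta)
}

int main() {
    const double pi = std::acos(-1.0);
    // Classical optimizer stand-in: scan the variational parameter, keep the minimum.
    double best_theta = 0.0, best_energy = expval_z(0.0);
    for (double theta = 0.0; theta <= 2.0 * pi; theta += 0.01) {
        const double e = expval_z(theta);  // the "quantum kernel" call
        if (e < best_energy) { best_energy = e; best_theta = theta; }
    }
    std::printf("minimum energy %.4f at theta = %.4f (exact: -1 at pi)\n",
                best_energy, best_theta);
    return 0;
}
```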
And now Geetika will tell us about new platforms for remote sensing and edge computing.

Thanks, Tim. Scientists use instruments like electron microscopes, telescopes, X-ray light sources, and particle accelerators to further their knowledge and understanding of all kinds of matter. For example, researchers at Argonne National Lab are experimenting with enzymes to tackle plastic waste. Using AI models trained on 19,000 protein structures, they shortlisted specimens to examine and confirmed their hypothesis using an X-ray beamline at the Advanced Photon Source. These instruments playing detective at the nanoscale are often as big as a football field, and they're spread out all over the world, from Australia to the UK to South Africa. Millions and billions of dollars of investment are taking place right now to upgrade them. After the upgrade, these instruments and sensors will produce up to 500 times brighter X-ray beams, generating crisp, clear, high-resolution images. This huge jump in brightness translates to a thousand times more data, easily running into petabytes and exabytes, which poses a data analysis and data migration problem. The streaming-data software pipeline should be multimodal, easy to use, scalable, and flexible, and should include AI and ML techniques to filter out the most relevant bits of information in seconds instead of days.

Let's first look at how these experiment sites or labs are laid out. Be it Oak Ridge National Lab or a university like Purdue, the campus typically has centers for biosciences, materials science, nanotech, robotics, or advanced manufacturing, each with a small number of workstations and servers connected to the main computing center. The site may also be receiving data from other sister sites and remote sensors out in the field. For such a multi-disciplinary campus, investments in compute, storage, and connectivity are key to supporting open science projects. InfiniBand that can run end to end, data center to data center, enables researchers spread within or across sites to collaborate: data coming from an experiment at one end of the campus can feed into another instrument in a separate building in minutes instead of days or months.

Today we are announcing multiple products in which NVIDIA is investing to enable scientific discovery that spans from the edge to the data center. NVIDIA Holoscan is an SDK that data scientists and domain experts can use to build GPU-accelerated pipelines for sensors that stream data. Developers can use C++ or Python APIs to build modular blocks that are flexible and reusable, and for added performance they can also include JAX, machine learning, or AI. Holoscan sits on top of GXF, which manages memory allocation to ensure zero-copy data exchanges, so developers can focus on the workflow logic and not worry about file and memory I/O. The new features in Holoscan will be available to HPC developers in mid-December 2022.

The second piece of the solution is NVIDIA MetroX-3, or long-haul InfiniBand. InfiniBand is usually synonymous with server racks standing side by side, but MetroX-3 extends the reach of the InfiniBand network to up to 25 miles, or 40 kilometers. Taking advantage of native RDMA, users can easily migrate data and compute jobs from one InfiniBand-based mini cluster to the main data center, or combine geographically dispersed compute clusters for higher overall performance and scalability. MetroX-3 systems are managed by the NVIDIA Unified Fabric Manager, or UFM, which enables data center operators to efficiently provision, monitor, and operate all the InfiniBand data center networks. MetroX-3 systems will be available at the end of November 2022.
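For a sense of what a 40-kilometer InfiniBand span implies for latency (a rough estimate assuming the usual ~5 µs/km propagation delay in optical fiber, which is my assumption rather than a MetroX-3 specification):

$$40 \ \text{km} \times 5 \ \mu\text{s/km} \approx 200 \ \mu\text{s} \ \text{one way, or roughly } 0.4 \ \text{ms round trip,}$$

which is far above the microsecond-scale latencies inside a rack, but still small enough that RDMA bulk data movement between campuses remains practical.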
The third piece is the BlueField-3 DPU, using intelligent storage offloads for data migration. At Supercomputing, Zettar is demonstrating its data migration solution with BlueField-3 for InfiniBand-connected data centers and remote HPC sites: where an x86-based solution needs 13U of rack space, with BlueField-3 it can be accomplished in just 4U. UT Southwestern is one of the customers evaluating MetroX-3 to connect healthcare researchers who are using a dispersed compute infrastructure; as part of the initial research project, they are connecting the UT Southwestern West Campus with a cluster that is about 10 miles away. MetroX-3 customers can also take advantage of NVIDIA's full-stack architecture, including native RDMA, in-network computing, GPUDirect, and ease of management with UFM, to operate compute nodes that are miles apart as one large supercomputer.

Scientific discovery in this decade will come from end-to-end converged workflows, with data from various experiments at the edge feeding into simulation and AI model training running in the cloud or the data center. For HPC at the edge, the Holoscan SDK can be used to develop data streaming pipelines that convert raw data into actionable insights in seconds instead of days or weeks. By analyzing data as it's being generated, researchers can avoid errors, retries, and false starts; an added benefit is that they can steer and control the experiment as the data is being collected, making decisions and discoveries on the spot. Filtered and compressed data can be sent over MetroX to the main data center on campus or into the cloud, not only for archiving but for a whole set of interconnected applications. Observational data can be used to enhance simulations, train or refine AI surrogate models, or feed a digital twin, while offline simulations, AI surrogates, and what-if scenarios from digital twins can recommend parameters for the next experiment. To enable this converged workflow for scientific discovery, NVIDIA is investing in Holoscan, GXF, and UCF. UCF stands for Unified Compute Framework, which enables developers to combine optimized, accelerated sensor-processing pipelines as microservices from the cloud. Supporting this new era of scientific discovery will require workflow management systems that can orchestrate all the moving parts without being bottlenecked by data migration, disk copies, and file transfers. The HPC community, and those adapting workflow management systems, can pick up Holoscan and UCF to maximize the utilization of all their instruments, run simulations, do AI training, and even build interactive digital twins.

To elaborate more on AI and digital twins, Dion Harris will walk you through Modulus and Omniverse for HPC. Thanks, Geetika. AI is quickly becoming the fourth pillar of scientific discovery. For centuries, science has been built on the foundation of three pillars: observation, experimentation, and theory. More recently, HPC has become a critical tool in scientific research; HPC simulations are important because they allow us to study things that happen too slowly or too quickly to observe in real time. AI allows us to reshape and accelerate the scientific discovery process. By training a model on observed or simulated data, we can create a physically informed AI model to predict new scientific outcomes; these predictions are then verified by experimental observations, and the model continues to evolve over time. The power of AI is that it can automate the process of doing simulation by analyzing data.
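As a generic sketch of what a physically informed model optimizes (the standard physics-informed-learning formulation, which frameworks in this space broadly follow; the exact loss terms vary by problem): a network $u_\theta$ is trained on a combined objective

$$\mathcal{L}(\theta) \;=\; \frac{1}{N}\sum_{i=1}^{N}\bigl\|u_\theta(x_i) - u_i\bigr\|^2 \;+\; \lambda\,\frac{1}{M}\sum_{j=1}^{M}\bigl\|\mathcal{N}[u_\theta](x_j)\bigr\|^2,$$

where the first term fits observed or simulated data and the second penalizes the residual of the governing equations $\mathcal{N}[u]=0$ at collocation points.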
This means we can take on problems that were too big or too complex for traditional scientific methods. We can now use AI to help us find new cures for diseases, design new materials, or find new energy sources. AI is not just a tool for science; it's a new way of doing science. NVIDIA Modulus is an AI training and inference platform that enables developers to create physics-ML models to augment and accelerate traditional simulations, with high fidelity and in near real time. Today we are announcing that Modulus is available via NVIDIA LaunchPad and several major CSPs.

Modulus accelerates a wide range of scientific applications. Climate and weather simulations are critically important for developing policies to help fight climate change. NVIDIA collaborated with Lawrence Berkeley National Lab and researchers from the University of Michigan, Rice, Purdue, and Caltech to build an AI model called FourCastNet that performed extreme weather predictions 45,000 times faster, and 12,000 times more energy efficiently, than conventional numerical simulations. To fight climate change, consumer electronics companies can design more energy-efficient products to reduce their carbon footprint: NVIDIA Research used Modulus to optimize the design of our workstation GPU heat sinks, resulting in 35 percent higher power densities on the GPU compared to conventional vapor chamber and heat pipe combinations. Siemens Gamesa is using the NVIDIA Omniverse and Modulus platforms to improve wind farm simulations, enabling more accurate models of the complex interactions between turbines using high-fidelity, high-resolution simulations built from low-resolution inputs; this will help optimize the performance of its wind farms and reduce maintenance costs.

Finally, industrial plants and facilities are complex to plan, build, and operate, whether factories, warehouses, or even data centers. For the 7 million data centers worldwide, achieving optimal performance and energy efficiency is of paramount importance. Let's take a look. NVIDIA Omniverse can be used to build digital twins of high-performance data centers to help optimize every step of planning, building, and operating complex supercomputing facilities. In constructing NVIDIA's latest AI supercomputer, engineering CAD datasets from tools like SketchUp, PTC Creo, and Autodesk Revit are aggregated so designers and engineers can view and iterate on the full-fidelity, USD-based model together. With Patch Manager, port and connection topology, rack and node layout, and cabling can be integrated directly into the live model. Next, CFD engineers use Cadence 6SigmaDCX to simulate thermal designs, and engineers can leverage AI surrogates trained with NVIDIA Modulus for real-time what-if analysis. And with NVIDIA Air, a network simulation platform connected to Omniverse, the exact network topology, including protocols, monitoring, and automation, can be simulated and pre-validated. Once construction is complete, the physical data center can be connected to the digital twin via IoT sensors, enabling real-time monitoring of operations. With a perfectly synchronized digital twin, engineers can not only simulate common dangers such as power spikes or cooling system failures, but also validate software and component upgrades for CI/CD before deploying to the physical data center. With digital twins in Omniverse, data center designers, builders, and operators can streamline facility design, accelerate time to build and deploy, and optimize ongoing operations and efficiency.
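As a rough sketch of the kind of USD aggregation this workflow relies on, here is a minimal OpenUSD C++ example. The stage and layer file names are hypothetical, and this is illustrative rather than Omniverse-specific code.

```cpp
// aggregate_usd.cpp -- illustrative OpenUSD sketch (file names are hypothetical).
#include <pxr/usd/usd/stage.h>
#include <pxr/usd/usdGeom/xform.h>
#include <pxr/usd/sdf/layer.h>
#include <pxr/usd/sdf/path.h>

int main() {
    // Create a top-level stage that will aggregate the CAD exports.
    pxr::UsdStageRefPtr stage = pxr::UsdStage::CreateNew("datacenter.usda");

    // A root transform under which the aggregated content is organized.
    pxr::UsdGeomXform::Define(stage, pxr::SdfPath("/DataCenter"));

    // Pull in separately exported USD layers (e.g., architectural and mechanical CAD)
    // as sublayers of the live model.
    stage->GetRootLayer()->InsertSubLayerPath("architecture_from_revit.usd");
    stage->GetRootLayer()->InsertSubLayerPath("mechanical_from_creo.usd");

    // Persist the aggregated stage to disk.
    stage->GetRootLayer()->Save();
    return 0;
}
```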
Digital twins are evolving. NASA used physical twinning to duplicate spacecraft on the ground that matched spacecraft in orbit; on April 14, 1970, this approach was used to resolve the Apollo 13 oxygen tank explosion from 200,000 miles away. The term digital twin was coined by John Vickers of NASA in 2010, when NASA created digital simulations of spacecraft for testing. Batch-simulation digital twins are built on large amounts of data generated from complex simulations that can take days or even months to complete. The advancement of IoT, cloud computing, and AI has created a new class of interactive digital twins. Siemens Energy built an interactive digital twin of its heat-recovery steam generator plant to develop new workflows that reduce the frequency of planned shutdowns while maintaining safety; by simulating and visualizing the flow conditions in the pipes, Siemens Energy can understand and predict the aggregated effects of corrosion in real time.

Virtual-physical steering digital twins are on the horizon. Bill Tang at the Princeton Plasma Physics Laboratory and researchers from Argonne National Laboratory have developed an AI model called SGTC to simulate the plasma physics of a fusion reactor. When paired with NVIDIA Modulus, SGTC will be included as part of a real-time control system to optimize the operation of fusion reactor experiments. Operated in NVIDIA Omniverse, a researcher generates simulation data to measure the state of the fusion plasma, visualizing it in ParaView, while two more researchers use particle-code simulations and surrogate models to further simulate and explore the state of the plasma. However, scientific research and simulations don't exist in a vacuum: the infrastructure design and 3D CAD elements are critical to understanding how the system will behave in full production. Today, each of these four workflows exists separately, with no way to see the combined interactive model. But now, with NVIDIA Omniverse, researchers can connect ParaView and 3D CAD datasets to see their aggregated model in real time, and once ingested into Omniverse, their visualization workloads are accelerated by core Omniverse technologies, including RTX ray tracing, IndeX, NeuralVDB, PhysX, and Modulus, creating a real-time, end-to-end workflow. Omniverse is an open development platform bridging the physical and virtual worlds, so teams building these virtual simulations can connect them to physical systems to build a virtual-physical steering digital twin.

The scientific community is in the early stages of adopting AI and digital twin technologies, and NVIDIA is committed to partnering with researchers to advance science and make these technologies more accessible. Today we are announcing Omniverse for HPC. Omniverse now supports batch rendering and synthetic 3D data generation workloads on NVIDIA A100 and H100 systems, and with Omniverse Nucleus also supported on A100 and H100 systems, the scientific community can take full advantage of more streamlined, collaborative workflows across multiple applications, multiple teams, and even multiple continents. To achieve full-fidelity, interactive digital twin simulations, customers can adopt NVIDIA OVX, powered by L40 GPUs, a computing system purpose-built for developing and operating persistent virtual worlds. And if you have an NVIDIA A100 system, you can get started with Omniverse today.

All over the world, extreme weather events are becoming disturbingly common; the last decade has seen some of the most damaging floods, storms, and wildfires in recorded history. About a year ago, we announced our Earth-2 initiative to help the scientific community tackle climate change, one of the greatest challenges of our time.
A challenge of this magnitude will require long-term commitment and collaboration across government, researchers, and industry. Today we are announcing that Lockheed Martin and NVIDIA have been selected by the National Oceanic and Atmospheric Administration to deliver the foundational technologies for the Earth Observation Digital Twin project. When completed, this project will lay the groundwork for building the Earth observation digital twin of NOAA's next-generation ground enterprise system.

We are at an inflection point. The ability to create digital twins of physical systems, and then use AI to control and optimize them in real time, is revolutionizing many industries and scientific domains. We are just scratching the surface of what's possible, and I'm incredibly excited to see what the future holds. Ian, back to you to wrap things up.

NVIDIA's multi-domain acceleration platform is at the forefront of the accelerated computing era and is the engine of the modern supercomputer. We are excited to see how H100, Grace Hopper, and Grace CPU Superchips deliver unmatched performance and, most importantly, energy efficiency, and how our cloud-native supercomputing platform delivers scalable, secure, high-performance infrastructure for developing scientific computing applications. Our HPC and quantum computing SDKs allow users to scale performance and accelerate both traditional and quantum computing simulations. Supercomputing is reaching to the edge with the help of Holoscan, while Modulus and Omniverse are accelerating today's HPC workflows and introducing a new era of fully live, interactive digital twins for scientific use cases. The future of supercomputing is exciting, and I hope to see all of you at SC22.
