Nvidia's New Computer Released A Terrifying WARNING To ALL Humans!


Minutes ago, Nvidia's new AI computer sent out a shocking alert that has every other computer on edge! This beast, armed with the Hopper architecture, is turning older tech into relics. Giants like Microsoft and Google are among the only companies that have touched this level of technology, and Nvidia's value has skyrocketed to two trillion dollars almost overnight. What exactly is this alert about, and how will it change the game for everyone else? Let's dive into what this urgent message from Nvidia's AI could mean for the future of technology.

How Nvidia's Hopper Redefines Computing

Nvidia's newest AI processor is creating a lot of excitement in the tech world, breaking through limits that were once thought impossible. This isn't just a small improvement; it's a major shift in how we see technology. The buzz is all about the Hopper architecture, which has reshaped the competitive landscape. With performance leaps that dwarf its predecessors, Hopper is setting a new standard and leaving other processors struggling to keep up. But what if this is just the beginning? Could this unprecedented power reshape industries overnight, rendering entire technologies obsolete? Stay tuned, because the future of computing is about to take an unexpected turn.

This AI processor is a game-changer, challenging assumptions about what hardware can physically do. Only a handful of companies in the world have reached this level, among them Microsoft, Apple, and Google, and the California-based chipmaker saw its market value skyrocket from one trillion to two trillion dollars in just eight months. That huge jump came from soaring demand for its cutting-edge technology, which is leading today's AI revolution. It's remarkable that a company founded in 1993 to make video game graphics better has become a major player in the AI world of the twenty-first century. In March 2022, the Hopper architecture, designed especially for data centers to support AI workloads, was revealed.

This launch created a lot of excitement in the AI community and led to strong demand. But the real surprise came in 2023, during the AI boom, when prices for these products shot up due to shortages and heavy demand. Customers who ordered H100-based servers had to wait between 36 and 52 weeks to receive them. Despite the delays, the company still managed to sell 500,000 H100 accelerators in the third quarter of 2023 alone.

The company's strong position in the AI market and the success of its Hopper products played a big part in boosting its market value. However, this wasn't the end of the big moves. Looking to the future, the company introduced the Blackwell architecture, named after the American mathematician David Blackwell. Blackwell made groundbreaking contributions to fields like game theory, probability, and statistics, areas whose mathematics underpins how modern AI models are built and trained. He was also the first African American inducted into the National Academy of Sciences.

In October 2023, the company revealed updated plans for its data center technology at an investor event, introducing the B100 and B40 accelerators as part of the new Blackwell architecture. This was a change from earlier roadmaps, which had simply called the step after Hopper "Hopper-Next." Later, on March 18, 2024, Blackwell was officially introduced at the GPU Technology Conference (GTC). More than 11,000 people attended the event, including software developers, industry experts, and investors. It was held over four days at a pro hockey arena in San Jose, and the main speaker was the CEO, Jensen Huang.

Huang explained that Blackwell is more than just a chip; it's a full platform. The company is famous for its GPUs, but Blackwell takes things a step further.

At the core of the platform is the Blackwell GPU, which packs an incredible 208 billion transistors. What makes it unique is its architecture: for the first time, two dies are combined into one chip and communicate at a mind-blowing 10 terabytes per second, acting as a single large GPU without memory or cache headaches. Some people doubted such ambitious goals could be reached with Blackwell, but the company pushed forward and built a chip that fits into two different systems. One slots in alongside Hopper for smooth upgrades, while the other, shown on a prototype board, demonstrates its full capabilities and future possibilities. Imagine a system with two Blackwell chips, four Blackwell dies in total, all connected to a Grace CPU through super-fast links. This setup could change the world of computing. Let's see how this new technology packs powerful computing into a small space.

A Glimpse into Next-Gen AI Computing

Huang highlighted that this system is groundbreaking because it concentrates so much computing power in so little space. But the company didn't stop there. To push the boundaries further, they added features that strain the limits of physics: a new Transformer Engine and fifth-generation NVLink, which is twice as fast as Hopper's and enables computation inside the network itself. This matters because when several GPUs work together, they need to share and synchronize data efficiently. With this innovation, the company is setting a new bar in technology.

The new AI supercomputer is built with incredibly fast interconnects that let it handle data right inside the network, making it far more capable. Officially rated at 1.8 terabytes per second, in practice it performs even better than that, comfortably outpacing the Hopper generation. The new chip improves training speed by roughly two and a half times compared to Hopper, and it introduces new low-precision number formats: FP6, plus FP4, which makes inference tasks such as quick responses and predictions roughly twice as fast.
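
To make the low-precision idea concrete, here is a minimal Python sketch of 4-bit quantization. This is not Nvidia's FP4 format (that is a floating-point encoding handled in hardware); it is just an illustration of why storing a weight in 4 bits instead of 16 cuts memory and bandwidth needs by roughly a factor of four.

```python
import numpy as np

def quantize_int4(weights: np.ndarray):
    """Symmetric 4-bit quantization: map floats to integers in [-7, 7]."""
    scale = np.abs(weights).max() / 7.0    # one scale factor per tensor
    q = np.clip(np.round(weights / scale), -7, 7).astype(np.int8)
    return q, scale                        # real formats pack two 4-bit values per byte

def dequantize_int4(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the 4-bit integers."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)   # toy weight matrix
q, s = quantize_int4(w)
w_hat = dequantize_int4(q, s)

# FP16 storage: 2 bytes per weight; 4-bit storage: 0.5 bytes per weight (4x smaller).
print("max quantization error:", np.abs(w - w_hat).max())
```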

These improvements are not just about speed. They save energy, reduce the amount of data that has to travel through the network, and save time, which matters more and more as AI systems grow. The company calls this phase generative AI because it represents a big shift in how technology works. The newest processor is designed for that shift, using FP4 to generate content quickly, and as part of the new AI supercomputer it can produce five times more output than the older Hopper model.

But that's not the most impressive part. The company is already working on an even bigger and stronger system that goes beyond what's possible now. A companion chip, the NVLink Switch, has 50 billion transistors, almost as many as a Hopper GPU, and carries four NVLink ports, each running at 1.8 terabytes per second, allowing all the connected GPUs to work together at full speed.

This is a huge step forward, pushing the boundaries of what computers can do today. Look back six years and the first DGX-1 could handle 170 teraflops, or 0.17 petaflops. Fast forward to now, and the company is aiming for 720 petaflops in a single system, approaching one exaflop for training, an achievement it bills as the world's first exaflop-class AI machine in one system.
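
As a rough sanity check on those numbers (a back-of-the-envelope calculation using only the figures quoted above, not an official benchmark), the jump from the DGX-1 works out to a speedup of more than four thousand times:

```python
# Back-of-the-envelope comparison of the performance figures quoted above.
dgx1_flops = 170e12        # DGX-1: 170 teraflops
system_flops = 720e15      # quoted target: 720 petaflops per system
exaflop = 1e18             # 1 exaflop = 10^18 floating-point operations per second

print(f"speedup vs. DGX-1: {system_flops / dgx1_flops:,.0f}x")   # ~4,235x
print(f"fraction of an exaflop: {system_flops / exaflop:.2f}")   # 0.72
```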

To give you some background, only a few machines in the world today reach exaflop levels of computing, and this DGX isn't just another AI tool; it's a powerhouse packed into a single, sleek rack. What makes that possible is the NVLink backbone, which provides an incredible 130 terabytes per second of bandwidth. To put that in perspective, the company claims this exceeds the aggregate bandwidth of the entire Internet. And it does this without needing expensive optics or transceivers, which also saves a lot of energy: about 20 kilowatts in a system that draws roughly 120 kilowatts.

The system is liquid-cooled rather than air-cooled: water enters at about 25 degrees Celsius, roughly room temperature, and comes out at around 45 degrees Celsius, close to hot-tub temperature, flowing at about two liters per second. This setup keeps everything cool and running smoothly.
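
Those coolant figures are enough for a rough estimate of how much heat the loop can carry away (a back-of-the-envelope calculation based only on the numbers quoted above, not a vendor specification):

```python
# Rough heat-removal estimate from the quoted coolant figures: Q = flow * c * dT.
flow_kg_per_s = 2.0       # ~2 liters of water per second is roughly 2 kg/s
c_water = 4186.0          # specific heat of water, J/(kg*K)
delta_t = 45 - 25         # temperature rise across the rack, in kelvin

heat_watts = flow_kg_per_s * c_water * delta_t
print(f"heat removed: ~{heat_watts / 1000:.0f} kW")   # ~167 kW, same order as the ~120 kW rack draw
```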

Now, let's talk about an important part of this system: the GPU. Some might see it as just another component, but for Nvidia it's a game-changer. The days of simple graphics cards are over; a modern GPU system like this one is incredibly complex, made up of about 600,000 parts and weighing around 3,000 pounds, roughly one and a half tons. Now, let's see how these upgrades make training AI faster and more efficient.

Powering AI with Less

Training a GPT-style model with 1.8 trillion parameters is a huge challenge. Not long ago, the process could take several months and consume enormous amounts of energy. But thanks to advances in GPU architectures, first Hopper and more recently Blackwell, things have changed dramatically.

With the Hopper architecture, training a model of this size required about 8,000 GPUs and roughly 15 megawatts of power, and the whole run took about 90 days. That was already a big improvement over older setups that needed even more resources. Then came the Blackwell architecture, which took things further: the same task can now be done with about 2,000 GPUs and only 4 megawatts of power, making everything far more efficient.

The Blackwell architecture includes several upgrades focused on reducing energy use while boosting performance. Its second-generation Transformer Engine can represent parts of the neural network with as few as four bits per value, which doubles the effective compute bandwidth and lets large language models be processed faster without increasing energy consumption.
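
Putting the Hopper and Blackwell training figures quoted above side by side shows how large the claimed efficiency gain is. A quick calculation from those numbers (assuming, since the transcript does not say, that the Blackwell run also takes about 90 days):

```python
# Energy comparison from the training figures quoted above.
# Assumption (not stated above): the Blackwell run also lasts ~90 days.
hours = 90 * 24

hopper_energy_mwh = 15 * hours      # 8,000 Hopper GPUs drawing ~15 MW
blackwell_energy_mwh = 4 * hours    # 2,000 Blackwell GPUs drawing ~4 MW

print(f"Hopper:    ~{hopper_energy_mwh:,.0f} MWh")     # ~32,400 MWh
print(f"Blackwell: ~{blackwell_energy_mwh:,.0f} MWh")  # ~8,640 MWh
print(f"energy reduction: ~{hopper_energy_mwh / blackwell_energy_mwh:.1f}x")  # ~3.8x
print(f"GPU count reduction: {8000 // 2000}x")         # 4x
```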

But this wasn't the most impressive part. Blackwell's NVLink Switch also improves GPU communication: it can handle up to 1.8 terabytes per second of traffic in each direction and can connect up to 576 GPUs, compared with Hopper's limit of 256. The raw power of the Blackwell B200 is also noteworthy, delivering up to 18 petaflops for certain low-precision workloads and offering a memory bandwidth of 8 terabytes per second, which makes data transfer and processing extremely fast.
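
One way to read those two B200 figures together is as a ratio of peak compute to memory bandwidth. The back-of-the-envelope sketch below is my own interpretation of the quoted numbers, not a published specification:

```python
# Ratio of the quoted peak compute to the quoted memory bandwidth.
peak_flops = 18e15      # ~18 petaflops at low precision
mem_bw = 8e12           # 8 TB/s of memory bandwidth

flops_per_byte = peak_flops / mem_bw
print(f"~{flops_per_byte:,.0f} floating-point ops per byte read")   # ~2,250
# A workload doing fewer operations per byte than this is limited by memory
# bandwidth rather than raw compute, which is why high memory bandwidth
# matters as much as peak FLOPS for large models.
```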

These GPUs aren't just about raw power, though. They are also designed for complex scientific computing and simulation. For example, Blackwell GPUs can run simulations of heat, airflow, and power use in virtual models of data centers, speeding up such simulations by up to thirty times compared with traditional CPUs and making the resulting designs more energy-efficient and sustainable.

And there's more. The high-bandwidth memory and advanced tensor cores in these GPUs let them handle even the toughest AI training and inference tasks, which makes them suitable for a wide range of applications, from scientific research to enterprise AI.

The real breakthrough here is not just the reduction in the resources needed to train large models, but how quickly AI hardware is evolving to keep up with the growing complexity of AI tasks.

At the center of Project GR00T is Isaac Lab, a platform the company created for training robots. Built on Omniverse Isaac Sim, which provides the simulations, Isaac Lab gives robots a virtual space to practice and refine their skills, helping them get better at handling real-world challenges. Working alongside Isaac Lab, Blackwell is set to become the company's biggest product launch yet, and as we explore the world of robotics, it's clear this technology is about to make a huge impact.

The company is stepping into the fascinating world of physical AI, where machines can interact with the real world. Up until now, AI has mainly lived in the digital space, confined to systems like DGX. But imagine a future where AI goes beyond that, where robots have intelligence of their own and can move around the physical world independently. This is what the company calls the ChatGPT moment for robotics. It has been hard at work on advanced robotics systems, from AI training on DGX to AGX, which it describes as the world's first robotics processor, designed to handle high-speed sensor data in an energy-efficient way. Next, we'll explore how these advancements are shaking up the world of robotics.

Bridging Virtual Realities and Robotics

The company's Omniverse is a platform that connects the virtual and real worlds through Azure cloud services. Imagine a warehouse that runs on its own, where people and machines work together smoothly. In this setup, the system acts like an air-traffic controller overhead, making sure everything moves safely and in order. What's even better? You can interact with this modern warehouse in real time, and each machine has its own robotic system, making the whole operation more efficient. With Blackwell leading the way, robotics feels closer than ever. But the journey doesn't end here.

Let's take a closer look at the progress being made in robotics. We are getting closer to the age of humanoid robots, and robots are expected to take on work across different industries, offering a safer and more efficient way of working. One industry about to see a big change is the car industry. Next year, the company plans to team up with Mercedes, and later with JLR, to introduce its latest technology. The company's CEO, Jensen Huang, also announced that BYD, the world's largest electric vehicle maker, will use the company's newest creation: Thor, a powerful computer system designed for advanced machines. This could change the world of robotics and might even bring us closer to humanoid robots. But this isn't the most exciting part yet.

Nvidia's Project GR00T is one of the most exciting efforts in the world of robotics. It aims to change how robots learn and interact with their surroundings, and the company isn't just talking about these ideas; it is making them a reality. GR00T is a huge step forward, giving robots the ability to follow complex instructions and learn from past experience. With advanced algorithms, these robots can decide what to do next on their own, tightening the connection between human instructions and robotic actions. But that's not the end of it; there's still more to come.

Osmo is a new platform that helps organize and run training sessions and simulations more efficiently, using powerful DGX and OVX systems to keep everything running smoothly. One of the most impressive things about GR00T is how little human help it needs in order to learn. By watching just a few examples, robots equipped with GR00T technology can perform everyday tasks and imitate human actions with impressive accuracy. This is made possible by systems that interpret what people do and turn those actions into tasks robots can carry out.

But that's not all. The technology doesn't just make robots move; it also helps them understand and respond to spoken commands, making them more useful and easier to interact with. Whether it's simple gestures or more complicated tasks, robots using GR00T technology show a lot of intelligence and flexibility.

This is thanks to the Jetson Thor robotics chips, which were designed specifically to power these advanced robots. GR00T's impact goes beyond cool features: it is leading the way in robotics, bringing us closer to a future where robots are a normal part of daily life, transforming industries and how we use technology. The commitment to cutting-edge technology promises a bright future for humanoid robots, with endless possibilities.

Looking back, the journey started in the 1990s, when technology was quite different. Personal computers were just becoming popular, but their graphics were very basic, often limited to text and simple images unless they were used for special purposes like animation or engineering. For better graphics, people relied on video game consoles. However, three engineers in California, Jensen Huang, Chris Malachowsky, and Curtis Priem, had a different vision. They wanted to create a special chip that could handle more complex graphics on personal computers. Let's step back and see how it all started with a game-changing idea in GPU tech.

The Birth of Nvidia and the GPU

After many long brainstorming sessions, often at a local diner, they realized that while regular computer processors (CPUs) were good at handling one task at a time, creating 3D graphics for games needed something that could handle many tasks at once. Their solution was a new type of chip that processed tasks in parallel, and it changed the world of computing. This chip, later known as the GPU, wasn't meant to replace the CPU but to work alongside it, especially for graphics-heavy workloads.
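
To illustrate the serial-versus-parallel idea in the simplest terms, here is a small Python sketch. It is only an analogy (nothing Nvidia shipped): a plain loop touches pixels one at a time the way a single CPU core would, while a vectorized NumPy operation expresses the same work as one data-parallel step, the style of computation GPUs are built for.

```python
import numpy as np

# One "frame" of a million pixel brightness values between 0 and 1.
pixels = np.random.rand(1_000_000)

# CPU-style: handle each pixel one at a time, in sequence.
brightened_serial = np.empty_like(pixels)
for i in range(pixels.size):
    brightened_serial[i] = min(pixels[i] * 1.2, 1.0)

# GPU-style (by analogy): express the work as a single operation over all
# pixels, letting the hardware apply it to many elements at once.
brightened_parallel = np.minimum(pixels * 1.2, 1.0)

assert np.allclose(brightened_serial, brightened_parallel)
```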

But that wasn't the hardest challenge. In its early days, the focus was on making PC gaming better, a market that was growing quickly. The company was founded in 1993 in a small condo in Fremont, California, with the goal of bringing parallel-processing GPUs into ordinary household computers. The name 'Nvidia' comes from 'NV', short for 'next version', and 'Invidia', the Latin word for 'envy'; the green in the logo was chosen because their powerful chips were meant to make others envious.

The company was started by three skilled engineers. Jensen Huang, a Taiwanese-American electrical engineer, brought experience from serving as Director of CoreWare at LSI Logic and designing microprocessors at AMD. Chris Malachowsky contributed valuable engineering skills from his time at HP and Sun Microsystems. Curtis Priem, for his part, had designed graphics chips at IBM and Sun Microsystems. Even with all that expertise, starting a new company wasn't easy.

In 1993, the three founders reached a point where they weren't sure how to move forward. To help them navigate the legal side of things, they decided to hire a lawyer. Jensen Huang only had two hundred dollars at the time, but he chose to invest that small amount to officially start the company. The investment not only got the company incorporated, it also gave Huang a twenty percent stake. But this was just the beginning of the journey.

The next big challenge was raising enough money to turn their ideas into reality. Convincing investors wasn't easy, because many venture capitalists preferred to back founders who already had successful businesses and a clear, exciting vision. Still, the company eventually caught the attention of Sequoia Capital and Sutter Hill Ventures, securing twenty million dollars in funding.

Looking back, it's clear how important those early investments were. Huang's connection with the CEO of LSI Logic came in handy, helping secure a meeting with Sequoia Capital, the same firm that had invested in LSI Logic. Although Sequoia was initially unsure, it eventually recognized the potential of the graphics card market and decided to invest.

They first put in two million dollars, which was later topped up with another eighteen million, putting the company on the path to success. But this wasn't the hardest part yet. Back then, the investment was seen as risky: out of roughly eighty-nine companies chasing similar goals, only AMD and the company survived the intense competition. By the time it went public in 1999, its value had shot up to around six hundred million dollars, proving the early investors had made a smart bet. But we're getting ahead of the story; at this point the company was still a small group of engineers working hard to launch their first product.

With the funding secured, it took two more years to build a team and create that first product, the NV1, which came out in 1995. During this time, the company made a deal with Sega to work on chips for its gaming hardware, and these chips powered popular games like Virtua Fighter and Daytona USA. Interestingly, the NV1 chips were also compatible with PCs, letting players enjoy Sega Saturn games on their computers, an exciting concept for its time.

The company took a bold step in how it designed the NV1, using a rendering method based on quadrilaterals instead of the more common triangles. The goal was to speed up rendering by lightening the load on the CPU: in theory, this would use fewer polygons and capture rounded shapes better, giving game designers more creative freedom.
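
For context, the triangle-based pipelines that eventually won out handle quads by splitting each one into two triangles. The toy Python sketch below is my own illustration of that conversion (it assumes a convex quad and has nothing to do with actual NV1 code):

```python
def quad_to_triangles(quad):
    """Split a convex quadrilateral (4 vertices in order) into two triangles.

    This is the standard trick in triangle-based pipelines: the quad
    a-b-c-d becomes the triangles (a, b, c) and (a, c, d).
    """
    a, b, c, d = quad
    return [(a, b, c), (a, c, d)]

# A unit square described by its (x, y) vertices in order.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(quad_to_triangles(square))
# [((0, 0), (1, 0), (1, 1)), ((0, 0), (1, 1), (0, 1))]
```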

But as technology advanced and memory became cheaper, the quadrilateral approach became less effective and even caused compatibility problems, since it didn't work with OpenGL.

Could quadrilateral rendering make a comeback with today's tech? Like, comment, and don't forget to subscribe for more discussions!

