New Computer Breakthrough: Light-Speed Unlocked

Lightmatter has just released a new kind of computer, one based on light, and this is a big deal for the entire industry. Let me explain why.

Today, computing demand is growing faster than silicon chips can keep up with. To get more performance, chip makers nowadays just throw more silicon at the problem: double the area, double the RAM, double the cost. That has been working so far, but there is a catch, because the rule of the game in semiconductors is that you pay per unit of silicon area used, and costs are skyrocketing. A single GPU now costs way more than your rent. One thing is clear: we can't keep doubling down on silicon, we have to rethink how we compute.

If we think about it at the data center scale, it comes down to three main aspects: compute, interconnect, and memory. Let's start with compute. For the past decade, the main engine behind AI, from transformer models to reasoning models, has been accelerated matrix math. GPUs pushed it, TPUs refined it, and then ASICs squeezed out every drop of efficiency. Now the spotlight is shifting to computing with light. But why? We may not know much about dark matter, but light matters a lot: it is a source of energy, and now it's not just powering life on Earth but also computing.

You've likely heard the idea that light-based computers are faster because light travels much faster than electrons. That might be partially true, but it misses the real point. Take a regular chip, an AMD or NVIDIA GPU for example. It's built from hundreds of billions of transistors, tiny switches that constantly turn on and off to perform computation. And here it gets even more interesting: in digital chips, every time we want to switch from 0 to 1 or from 1 to 0, we have to stop the data and take time to either charge or discharge a capacitor. Think of it like filling a tiny basket with electric charge just to flip a switch. This takes time, and now imagine doing it billions of times per second. This is where the real slowdown comes from, and this is exactly where photonic computers shine.

Photonic chips are analog, not digital, and that makes all the difference. In light-based computers we are using light waves, and light doesn't have to stop to charge anything up; there is no capacitance like with silicon. This means we can process data on the fly, without any delay for switching, and that's why photonic chips are so much faster.

Now, one very interesting thing to understand about light-based computers is that they are governed by Maxwell's equations, and Maxwell's equations are linear. This has a huge effect on what these kinds of computers can actually do. It turns out that the core of modern AI workloads is additions and multiplications, which are linear operations, and that's exactly what photonic computers excel at.

Let's put a spotlight on it and see what happens. Say you want to do a 128 by 128 matrix multiply-accumulate. If you run it on the Lightmatter photonic processor, you get the result back in roughly 200 ps. How does this compare to a conventional GPU? On a conventional GPU you would need roughly 100 cycles, and if we take 1 ns per cycle, that's roughly 100 ns. In other words, the photonic processor can do the whole job in under 1 ns, which is very roughly 100 to 1,000 times faster. You see how much faster we can compute when we don't have to stop the data.
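To make that comparison concrete, here is a back-of-the-envelope sketch of the speedup. The 200 ps and 100-cycle figures are the ones quoted above; the 1 ns cycle time and the operation count are assumptions for illustration only.

```python
# Back-of-the-envelope comparison of a 128x128 multiply-accumulate,
# using the figures quoted above plus assumed GPU parameters.

N = 128
macs = N * N * N                   # multiply-accumulates in a 128x128 matmul

photonic_latency_s = 200e-12       # ~200 ps round trip quoted for the photonic engine
gpu_cycles = 100                   # rough cycle count assumed above
gpu_cycle_time_s = 1e-9            # assumed 1 ns per cycle
gpu_latency_s = gpu_cycles * gpu_cycle_time_s

speedup = gpu_latency_s / photonic_latency_s
print(f"photonic latency : {photonic_latency_s * 1e12:.0f} ps")
print(f"GPU latency      : {gpu_latency_s * 1e9:.0f} ns")
print(f"rough speedup    : {speedup:.0f}x")                    # 500x
print(f"photonic MAC rate: {macs / photonic_latency_s:.2e} MAC/s")
```

With these assumed numbers the ratio comes out around 500x, squarely inside the 100 to 1,000x range mentioned above; the real figure depends heavily on how a GPU pipelines and batches such small matrices.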
Another very interesting property of light is that it operates at much higher frequencies: here we are in the terahertz range, compared to the gigahertz range of electronics. In practice this means we can process much more data simultaneously, and by using different colors of light we can compute lots of data in parallel. We can achieve massive parallel computing without spending more area or more power. Just think about it; this is mind-blowing.

Despite all this glowing interest in photonics, there is a catch or two. First of all, analog chips are super efficient, but this comes at the cost of precision. Until now, analog chips have never achieved the precision we actually need. You don't want your banking transactions to run on a light-based computer, because so far such chips were nowhere near the precision of digital ones. Now Lightmatter has finally solved this with their new photonic chip in a very elegant way: they managed to achieve a precision very close to that of a 32-bit digital chip. I was lucky to get the opportunity to discuss it with the co-founder and CEO of Lightmatter, Nick Harris:

"We've built the first alternative computing system that doesn't use transistors that's able to run economically valuable and useful workloads, things that you would actually want to run. What we were able to do is we built a photonic computer that can play Atari video games, it can run Transformers, it can run large language models, and I think that's a historically significant milestone. You had to prove that an alternative form of computer, like a photonic computer, could run these workloads accurately, as accurately as a digital computer, and that's what we did."

Now let's have a closer look at the photonic engine. It's essentially an accelerator designed to speed up linear algebra, which boils down to adds and multiplies. If we break it down, this new photonic computer includes photonic tensor cores and electronic chips integrated vertically via high-speed links. First, there are two electronic control chips sitting on top of the package; their job is to communicate with the photonic tensor cores. Then there are four photonic engines, 3D-stacked underneath. All nonlinear math is offloaded to the digital chips, while all the heavy math, the multiply-accumulates, happens in the light domain. In total there are six chips in a single package, and overall it's 50 billion transistors coordinating 1 million photonic devices. We will dive deeper into how it works later on, but at a high level, a digital chip sends a request to the photonic engine and gets the result back in roughly 200 ps. One picosecond is one trillionth of a second, so this is happening very fast. Now think about how all of this is synchronized; it's rocket science.

If we take a closer look at the photonic engine, you can see the photonic core and the waveguides through which light propagates during the computation. And here is a close shot of the electronic part of the chip, which is of course a bit less interesting, because we can't see much due to the metal layers hiding all the beauty of the circuits underneath.
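The hardware split described above (nonlinear math on the electronic chips, multiply-accumulates in the light domain) can be sketched in a few lines of code. This is only an illustrative model: the photonic_matmul function below is a made-up stand-in for the photonic tensor core, not Lightmatter's actual API.

```python
import numpy as np

def photonic_matmul(x, w):
    """Stand-in for the photonic tensor core: in the real system this round
    trip is quoted at roughly 200 ps; here it is just a NumPy matmul."""
    return x @ w

def relu(x):
    # Nonlinear math stays on the digital (electronic) side.
    return np.maximum(x, 0.0)

def tiny_mlp(x, w1, w2):
    # The linear layers are the "heavy math" that would be offloaded to light;
    # the activation in between is handled digitally.
    h = relu(photonic_matmul(x, w1))
    return photonic_matmul(h, w2)

rng = np.random.default_rng(0)
x  = rng.standard_normal((1, 128))
w1 = rng.standard_normal((128, 128))
w2 = rng.standard_normal((128, 10))
print(tiny_mlp(x, w1, w2).shape)   # (1, 10)
```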
Now let's get laser-focused on how computing with light actually works. Say we have some data, an image, described by a vector that represents the pixels of the image, with values between 0 and 1 describing how much red, green, or blue each pixel contains. First we map this vector into the optical domain. Then the light travels through optical devices and gets multiplied by a weight using a so-called Mach-Zehnder interferometer (MZI); multiplying here means turning a weight into how bright the light gets. The light then arrives at the endpoints, which are all connected to a single electrical wire, so the signals get summed along that wire, meaning the additions happen naturally. This is the true beauty, the true power of light: no delays, no clock cycles, it all happens effortlessly, just pure speed. Let me know what you think in the comments.

Next we will explore the performance of this chip and what it means for the future of computing. But before that, have you ever wondered how much of your personal information is circulating around the web? Your name, your home address, phone number, and even information about your family members could be floating around online. This happens because data brokers collect and sell your personal information without you ever realizing it, which exposes you to data breaches and personal security risks. This is where Incogni, the sponsor of today's episode, comes in. Incogni helps you regain control by removing your personal information from the databases that data brokers rely on. I use it myself, and you will be surprised how simple it is: you sign up, authorize Incogni to act on your behalf, and they send data protection law compliance requests to these companies, forcing them to remove your information from their databases. And the best part: you can track every step of the progress in real time, right from your dashboard. As someone who values privacy a lot, I highly recommend you try out Incogni. It's a simple way to reduce unwanted spam and keep your data off the grid. Use my code INTECH at the link below to get 60% off an annual plan. Thank you, Incogni, for sponsoring this episode.

Analog chips have long been dismissed as too imprecise, and of course, if you can't get the math right, the computer is not useful. Now Lightmatter, for the first time, got it right, and I asked Nick to explain their elegant solution:

"We have a number format called ABFP16, and what we do is we assign a scale to a block of numbers. So we factor out a number, we save that in the digital processor, and then we query the photonic tensor core to do those adds and multiplies. We get the result, which is a vector, and we multiply it by the scale again."

But that's just one part of the equation. They're using other tricks as well. For example, they over-amplify small signals so that neural networks don't lose critical bits, in technical terms the LSBs, or least significant bits. To simplify it, think of it as zooming in on the most important math, the numbers close to zero. And it works: for the first time, a light-based chip has achieved precision close to that of digital chips, and it's not just a demo but a real, functional chip.

What we've witnessed over the last couple of years is that AI is moving in the other direction, from 16 to 8 bits and now to 4 bits. The 4-bit format is becoming the new standard because it allows us to reduce compute and memory requirements, and this shift represents a huge opportunity for photonics, because it turns out that each time we drop precision, efficiency increases exponentially. Photonic engines are extremely efficient at low precision.
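To make the block-scaling idea in the quote above more tangible, here is a minimal block floating-point sketch in the spirit of ABFP16. The actual format is not specified beyond the description above, so the block size, bit width, and rounding below are illustrative assumptions, not Lightmatter's implementation.

```python
import numpy as np

def quantize_block(x, bits=8):
    """Block floating point in the spirit of ABFP: factor out one shared
    scale per block (kept digitally), keep low-precision values for the
    analog engine."""
    scale = float(np.max(np.abs(x)))
    if scale == 0.0:
        scale = 1.0
    qmax = 2 ** (bits - 1) - 1
    q = np.round(x / scale * qmax).astype(np.int32)   # low-precision mantissas
    return q, scale, qmax

def block_dot(a, b, bits=8):
    qa, sa, qmax = quantize_block(a, bits)
    qb, sb, _ = quantize_block(b, bits)
    # The adds and multiplies on the quantized values are what the photonic
    # tensor core would perform; the rescaling happens back in digital.
    acc = np.dot(qa, qb)
    return acc * (sa / qmax) * (sb / qmax)

rng = np.random.default_rng(1)
a, b = rng.standard_normal(128), rng.standard_normal(128)
print("exact      :", np.dot(a, b))
print("block-quant:", block_dot(a, b))
```

The point of the scheme is that the analog hardware only ever sees small, well-conditioned numbers, while the digital side keeps the shared scale needed to recover the full dynamic range.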
"If you look at just the tensor core itself, so just the math engine, that's operating at a few hundred TOPS per watt, which is extremely high energy efficiency. It's also performing at that efficiency at a very high throughput, and that's a tricky point to be at. When you want to go fast with a digital computer, you spend energy, and it's nonlinear, so going twice as fast costs more than twice the amount of energy in many cases. With this system you're in this quadrant where it's very fast and it's efficient, and that's a very interesting spot to be in."

Of course, real-world systems have other components that eat into efficiency, but it turns out there is a lot of room to optimize further. For example, Lightmatter is currently using just one color of light for computation, but they could increase that to 16 or 32 and reuse all the components to perform massively parallel computations. Just think about it: if they go from 1 color to 16, they immediately get 16 times higher throughput, or higher computational density if you will, without significantly increasing the area. This could power the future of intelligence. Let me know what you think in the comments, and if you're enjoying this episode, remember to subscribe to the channel; it makes me and my team very happy.

Now, all of this sounds brilliant, but we know that exotic computing approaches either take decades to mature or never escape the lab. Take analog computers based on some sort of resistive device: such chips have huge potential, but they are very inflexible and struggle to run AI models, because you can't just run a model out of the box; it requires translating the models and even additional training. The new Lightmatter processor, by contrast, can already run DeepMind's Atari workloads and nanoGPT, a reduced GPT with around 100 million parameters, and it doesn't require any translation of the models or any additional pre-training.

That's great, but remember, at the very beginning of the video we discussed that photonic chips are governed by Maxwell's equations, and there is a second catch: they can easily accelerate linear operations, but unfortunately they can't handle logic. The challenge with light is that photons don't interact; beams pass through each other like ghosts. To make them feel each other, you need exotic nonlinear materials and deep, difficult physics, and that interaction is exactly the essence of logic. Light doesn't know how to play this game. That's why in the future we are likely to see photonic engines accelerating linear math, and probably financial trading, but it's unlikely they will run Linux or Windows, at least not anytime soon.
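A quick way to see why purely linear optics can't give you general-purpose logic: composing linear operations only ever yields another linear operation. The NumPy snippet below is a generic illustration of that point, not anything specific to Lightmatter's hardware.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((128, 128))   # one linear (photonic-style) stage
B = rng.standard_normal((128, 128))   # another linear stage
x = rng.standard_normal(128)

# Two linear stages in a row collapse into a single matrix: no matter how many
# purely linear stages you chain, the result is still just one matmul.
two_stages = B @ (A @ x)
collapsed  = (B @ A) @ x
print(np.allclose(two_stages, collapsed))   # True

# A nonlinearity (here ReLU, performed electronically) breaks that collapse,
# which is why the nonlinear math has to leave the optical domain.
nonlinear = B @ np.maximum(A @ x, 0.0)
print(np.allclose(nonlinear, collapsed))    # False (in general)
```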
Then there is one more fundamental problem with photonics, as one of you actually pointed out in the comments (it's really an honor that such bright, smart people are watching my videos, thank you for your comments). When it comes to computing, say you want to invent a new computing paradigm from scratch. You need to be able to manipulate signals, add them and multiply them, and then you must be able to remember intermediate results so you can use them in further computations or act on them. With photonics we can manage the first two perfectly, but there is no storage available. Remember those capacitances we discussed, the ones slowing down digital signals? That is exactly how intermediate results are stored in digital chips, and it does not exist in photonics. What typically happens in a photonic chip is that we convert the light signal into a digital one, back to ones and zeros, and this is the slow part; it also drains a lot of power. This means that computations which don't require memory truly shine in photonics, but we still have to figure out the memory part. To be honest, this is not the problem everyone is focusing on right now; we on this channel are a couple of steps ahead of the rest of the world.

Right now, everyone is figuring out how to efficiently link large GPU clusters together, and when doing so, interconnects matter, because in modern AI workloads no single chip does the job alone. Thousands of GPUs work in parallel and constantly exchange data, and even nanosecond delays in data exchange between GPUs have a huge impact on the time it takes to train an AI model. If we manage to solve that, we can release new AI models much faster. And not only that: there is a new class of models, so-called reasoning models like DeepSeek R1, or so-called Deep Research models. Those are very accurate, but it takes them 10 minutes to generate a solution for you. If we can solve the interconnect bottleneck and connect more GPUs together efficiently, we could reduce this response time from 10 minutes to, say, 10 seconds. That would be cool. The solution is to replace the copper currently used to link up racks with photonic interconnects, and Lightmatter is addressing this with their Passage product:

"So this is the big opportunity for photonics, and we're building the fastest photonic engines in the world. We announced M1000 at our event a couple weeks ago; M1000 is 114 terabit per second in a single optical engine. We've built platforms for customers that are 60 TB per second in a single optical engine. We announced L200, which is our standalone general-purpose IO tile for GPUs and for switches: 64 TB per second. So Lightmatter is really the bleeding edge on how fast these systems can be. We're about 8 to 10x faster than any of the companies that are out there; people are announcing 8 TB and 6 TB, and we're at 64."

As soon as this interconnect bottleneck is solved, everyone will start looking more into improving the efficiency of computing, and here photonics can enable the next big leap. Clearly the future is optical; optics will be everywhere. Let me know your opinion in the comments.

Now, please let this video see the light: share it with your friends, colleagues, and on social media. I really appreciate your support. Finally, if you're as obsessed with light as I am, check out the episode where I explain the new photonic chip from NVIDIA and the main trend happening in the industry right now; it's a must-watch. Thank you for your support, and I will see you in the next episode. Ciao!

2025-05-02 12:41
