AIX Exchange with Max Welling, VP Technologies at Qualcomm Technologies Netherlands


My name is Max Welling. I am currently a Vice President of Technologies at Qualcomm, and I am also a full professor at the University of Amsterdam. My job at Qualcomm is to lead the AI strategy for the company, and at the University of Amsterdam I lead a research lab. There's a very broad spectrum of topics that we study, mostly around deep learning these days, but causality is a big topic, reinforcement learning is a big topic, and sometimes we work on applications, for instance in medical imaging.

What excites me most is to work on new problems and new directions in machine learning and connect them to real-world applications. Within Qualcomm this is our bread and butter: I like to take problems, abstract them into mathematical or machine learning problems, and then work on those abstracted problems. So, for instance, we map a problem in the design of chips onto an optimization problem that can be tackled with reinforcement learning. There are many more of these examples we are working on at Qualcomm, not necessarily me personally: we work on source compression, which we do using variational autoencoders, with which I've been involved, and we work on compilation problems, which you can map onto graph neural nets. So I like to work on this interface, but I also like to work on completely novel directions within the field which do not necessarily have immediate applications. For instance, the thing that I'm very excited about now has to do with quantum computing: is there a possibility to apply the mathematics of quantum mechanics and quantum computers to the benefit of deep learning and machine learning?
This is of course a wide-open field, and quantum computers are not yet fully developed, so this field holds a lot of promise. It also connects, for instance, to symmetries. We've worked a lot on symmetries: the world is full of symmetries, and this can put structure into your neural network that's very beneficial for learning. In some sense what's happening is that ideas from physics, often very deep ideas from physics and mathematics, are slowly finding their way into machine learning, and that's interesting in two ways. First of all, I think these ideas will help make our machine learning algorithms better and more principled in the future, but we might also learn something about physics in some way. At least for me it's interesting that I can learn more about physics by studying these ideas in the context of deep learning. I think human-centric AI is actually very important, but what sometimes confuses the discussion, in my opinion, is that AI and machine learning are extremely broad topics. Machine learning is a subset of AI, and AI is a very large container term for all sorts of things. Sometimes when people talk about AI or machine learning, they talk about things where explainability and human-centric design are less applicable, and when other people talk about these concepts they are extremely important. I don't think everybody understands that there is this very broad spectrum of machine learning, and everybody talks about their own view of the world.
So let me give an example to make this clear. Think of a bank that needs to decide who will receive a loan. This is clearly a decision about people, and there it is extremely important that these decisions are made in a responsible way, in a fair way, so there I feel that regulation and human-centric design are extremely important. However, there are also many

applications, and you find many of those at Qualcomm, where it is much less important to think about these things. For instance, imagine I have a black box that will predict for me the best design of a chip, and I have no idea how it does it. I ask it to design a chip with these specs, and it provides the perfect layout of all the transistors and all the memory elements in a way that optimizes our metrics. Now, I can check that it is a really good design, so I can just accept it, build it, and be done with it. There are many examples like this: compilers are very similar, source compression is very similar. They're quite far away from the humans that actually use them, and you can very easily check whether they are good designs or not. So I think we really need to make this distinction when we talk about the importance of responsible AI: we first have to classify what particular kind of AI we are talking about, and then develop methods that are appropriate for each class. And so I truly support the moves from the European Union to design regulation like the GDPR, which basically says that any time an AI system decides something about humans or for humans, this needs to be explainable, and it needs to be regulated so that it is for the benefit of those people.
So that's my high-level view of this particular topic. I myself am mostly involved in the more technical aspects, which are somewhat further away from human decision-making about humans, and other people who talk about the importance of responsible and explainable AI are more often talking about applications of AI that have a more direct societal impact, I would say. To me, the biggest barriers are things like security, privacy, and trust. As we put lots of these AI devices in our homes right now, they start to interact with humans, and the more technology we use, the more vulnerable it also becomes to hacking from outside, from adversarial players, let's say. So how do we keep everything secure and safe? How do we make sure that the data that is being collected, or potentially being collected, by these devices is not shared with parties that we don't want to share our data with, so that all these things are private by design, so that we can truly trust that our data stays in our own safe and doesn't move to the cloud, to somebody else's system? And how do we certify things like this? These devices become increasingly complex and potentially even self-learning; how are you going to certify something that is changing? It's already hard to certify things which are highly complex, but we do manage to do this somehow: we certify airplanes and cars, which are already extremely complex pieces of engineering. But if these things become self-learning, that becomes increasingly challenging, so we need to overcome that barrier of certifying these things, so that we can guarantee that the device is safe and privacy-preserving for the people that will use it, and also fair, and all these other dimensions which are important. That's a difficult question. For every technology there are positive and negative applications; it's always the same, and it has always been the same. If we design an axe, you can use the axe to cut trees and build houses, but you can also use it to kill someone. So it has always been. So personally

I feel society evolves and becomes more technologically advanced; there's no stopping it. We as people should make sure that we put in the right regulation and laws, so that we mostly benefit from the positive side and avoid the negative side. But I'm realistic enough to know that that's not always true. First of all, some of it is done on purpose, which means that people make military weapons from AI, and it seems unstoppable; it's like an arms race, and it seems very difficult to somehow stop this. It's also not my field, so I don't understand it deeply, but it doesn't seem that very soon we will be living in utopia in that sense. So I'm realistic, and not always all that optimistic, about people and politicians and how they can handle these kinds of questions. I do think there are a lot of really good applications of AI, in particular in health care but also in many other fields, that are worth pursuing. In some sense we built the technology, and we need to explain to the politicians how this technology works and what the potential upsides and downsides of these technologies are, and then we need to involve other people: legal experts, humanities experts, philosophers, politicians, regulators, to make sure that this new technology that is emerging is used for the benefit of humanity rather than against it. Sometimes I feel that too much of that is put on the researcher: they often say the researcher should make these decisions already when they develop the technology, but I think that's somewhat unrealistic, in the sense that it's like asking a mathematician not to work on a fundamental theory of mathematics because in the future it might be used for
bad as well as for good. I work on the fundamentals of machine learning, closer to mathematics than to the actual applications, and it's when you actually start to work on applications that you're going to make these maybe tough choices. I never work on military applications; other people might, but I don't, and that's where you make these sorts of choices, I think. What I find really important, and I'm just going to repeat it, is that this is a truly interdisciplinary effort: we should not leave these questions to the technologists. I find this very important because we have a very limited view of the world. We develop these technologies, but we have a very limited view of what technologies do to our society, or even to our own sense of security. We will maybe adopt all sorts of technologies, and they might change our psychology, or they might change the way we function or the way we think, and that's just very hard to predict. I think we should really make sure that experts who think about this get involved, in order to decide whether certain developments are something that we want to happen.
So maybe I can give you one example that has worried me a little bit. We've seen for social media, which we use on our phones every day, that it does change people. The things that we read, the things that we get served by social media, do actually change the way we think, and maybe even the way we vote. I hadn't really thought about that very deeply; I've never worked on this problem, but I could imagine that I could have worked on fundamental technology for it, maybe improving a graph neural net somewhere, without ever thinking about what the impact would be along those lines. I think we need to listen to and involve people who do know how these kinds of things will actually impact our lives, to help us figure out how to make sure it works in a direction that is good for us. So here's one example that I worried about a little bit, and I still worry about, which is virtual reality. Right now we're looking at our phones all day; I can see kids walking and looking at their phones at the same time, all the time, and it seems very strange.

Now, if we develop these augmented reality glasses, then we will be connected to the internet, to our virtual worlds, all the time, and I think they will be adopted very quickly. These systems might get so good at capturing our attention that we will develop some kind of addiction to this: we will want to be in this virtual world all the time, and it may be stronger than ourselves. For some it might already be so; there's clearly the phenomenon of addiction to gaming and things like this, but this only gets worse, and it doesn't feel that we as humans evolve to create better defenses against it. So where does that end up at some point in the future? This is one device that we might have around us and that we might just choose to pick up, but we need to have people think very hard about how to make sure it doesn't go in a direction that will hurt a lot of people, and might even hurt our society, or our political system, or democracy, etc. But again, I want to emphasize that I think it's really important that we involve other disciplines in thinking about that.

Well, I think it's not that they need to be involved at a particular point; they need to be involved now and all the time. They need to be able to monitor what's happening and what the technology can do, so they also need to be technically literate: they need to understand the technology, and they need to think about it and advise about it all the time. But maybe you're thinking about a particular technology, like a self-driving car: when would you involve people from different disciplines in thinking about the impact of a particular technology? I would think as early as possible. As soon as you invest in a technology that might have a societal impact, you can start thinking about it, especially at the regulatory level, and involve experts. And the earlier you have regulation around a certain technology, the better it is also for the companies that are making that technology, because they know in which direction to steer the ship if the regulation is very clear at that point in time.
OK, so a big theme throughout the last years, first in my lab at the University of Amsterdam but now also at Qualcomm, is something that we call equivariance, or symmetries. The idea is that the world has certain symmetries, almost literally: the physical world has symmetries. For instance, take electromagnetism: something looks like an electric field for one observer that's standing still, but if you move past that electric field it becomes a magnetic field, just because you're an observer with a different speed. This transformation is typically encoded in what we call covariance, or maybe equivariance, of the mathematical objects that you're describing. Similarly, for a neural net, you could transform something about the input: you could rotate the input, or translate the input, or shear or scale it, or whatever. You might think that the prediction of the neural net should be either invariant (the object identity doesn't change as you rotate the object) or equivariant, which means that if you do some kind of segmentation, then the segmentation map should transform, rotate let's say, in the same way as you rotate the input. Building this kind of idea into neural nets in various ways has been a big theme. We now also do similar things in reinforcement learning: you're looking at a particular state of the system, and there could be a symmetry such that taking this action in this state should be the same as taking this other action in this other state. Thinking about these kinds of mappings and symmetries that might exist in reinforcement learning is another theme.
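The invariance idea can be sketched numerically. This is a toy illustration, not code from Qualcomm or the speaker's lab, and all names are made up: a plain readout of an image generally changes when you rotate the input, but averaging the readout over the four 90-degree rotations (the group C4) makes it exactly invariant.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))  # toy "network" weights for an 8x8 input

def readout(x):
    """A plain (non-invariant) scalar readout of an 8x8 input."""
    return float(np.sum(W * x))

def invariant_readout(x):
    """Group-average the readout over the four 90-degree rotations."""
    return float(np.mean([readout(np.rot90(x, k)) for k in range(4)]))

x = rng.normal(size=(8, 8))
# Rotating the input changes the plain readout...
assert not np.isclose(readout(x), readout(np.rot90(x)))
# ...but leaves the group-averaged readout unchanged (invariance),
# because rotating x only reorders the four terms in the average.
assert np.isclose(invariant_readout(x), invariant_readout(np.rot90(x)))
```

Group averaging is only the simplest way to get invariance; equivariant architectures instead build the symmetry into each layer, which is closer to the work described here.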
We've worked a lot on graph neural networks. This is when your input space is not an image or a sound, a one-dimensional or two-dimensional signal, but some kind of graph: how would you apply the general ideas of deep learning to these graphs? This is important, for instance, for studying molecules. We recently looked at how you could take the structure of a molecule as an input and then map it onto some properties of that particular molecule, like: will it cure Covid, or something like this? Of course these properties are more subtle than that. So it becomes

like a prediction about how a molecule will act in the body. And then you can combine these things: you can also think about what happens if I now add symmetries to this molecule. Clearly, if I rotate the molecule, maybe certain properties remain invariant, maybe others change, and you can build that into the structure of the graph; we've looked at that. Basically this whole field, thinking about symmetries, thinking about deep learning on manifolds, which I haven't really talked about, like curved manifolds, thinking about graphs, is called geometric deep learning. That's been a big theme, and what's interesting about it is that the mathematics you're using when you think about it is very close to the mathematics that you also use in physics. For instance, gauge field theory is a theory about symmetries, in this case local symmetries, which you would need if your variables live on a curved manifold, and that theory, as it describes physics, also describes these deep neural networks. There's a striking resemblance between the two. In some sense a deep neural net is just an iterated map: you transform things again and again and again. If you think about an image, it looks a lot like a space; the space can be curved, and it can have certain symmetries, and going through the neural net is like evolving time forward. So in some sense we are building a little mini-universe in this neural net, and you can apply the laws of physics to this kind of mini-universe. We recently extended these ideas to say: well, if we are describing a mini-universe in a physical sense in a neural net, could we also apply quantum mechanics? So this is on my mind right now; I work together with people
at Qualcomm on this, but also with people at the University of Amsterdam, and the idea there concerns quantum mechanics. First of all, if you can implement things on a quantum computer, you could get much more powerful neural networks, because a quantum computer is more powerful than a classical computer; that would be one way of looking at it, which is already very interesting. The flip side would be to ask: is quantum mechanics itself perhaps a language that is different from ordinary probability theory? Instead of having just positive probabilities, which can only add up, we now also have positive and negative amplitudes, and those can actually cancel each other out. This interference gives very different dynamics. Even classically, on a classical computer, could designing neural networks with the mathematics, or the statistics if you want, of quantum mechanics already give us a new modeling paradigm? That is another direction that we are looking into. Okay, so

the brain basically has two systems. One is a somewhat unconscious system that you can think of as bottom-up: information hits our senses and gets automatically processed through layers of biological neural nets in our brain, so that we see things, or hear things, in a completely unconscious way. If I hear you talk, I'm not consciously processing the translation from the sound that hits my ears to the words that I understand, and similarly, if I look around me, I see objects, but it happens instantaneously and completely unconsciously. So that is one type of process; you can think of it as fast processing, as a shortcut in a way: we have learned to do these things effortlessly. Now, there's another process which is much more conscious. I think Daniel Kahneman calls this System 1 and System 2 thinking, or fast and slow thinking. The slow thinking is much more iterative: you build an argument slowly over time, it's reasoning, it takes much more effort for humans to do, and it's also conscious; you have to use your consciousness to do this reasoning.
It's the kind of process you use when you try to solve a math puzzle: you're really thinking, you're building steps, you're chaining reasoning together, and that's the part that hasn't become automatic. I think that when we learn, when we practice something a lot, if you practice playing the violin a lot, then some of this conscious effort turns into more and more unconscious skill: playing some small little piece becomes much less conscious, and then you can start to focus on the bigger, holistic things about a musical piece, on how to put your interpretation on it, like a conductor would do. So okay, those are the two processes,

but of course, when we build neural networks, all the way back they are inspired by how the brain works, in particular this fast processing, these bottom-up neural nets. More recently, we have come to think about the slow thinking, the reasoning. There was actually a separate line of research, symbolic AI or logic-based AI, with this reasoning type of method, and people have started to ask: can we merge reasoning, causality, symbols, and all those kinds of things with the fast processing? Do we have a fast shortcut recognizer or classifier, and can we join it with a slower reasoning machine? That's one of the big questions the field is facing now: how do you merge these two things together? Now, that's one part of the equation. The part where I believe neuroscience is becoming increasingly relevant is that in one respect the human brain is a lot more efficient than the hardware that we're building. In fact, it's estimated that the human brain is about 100,000 times more power-efficient than hardware for the same task. Clearly, as AI models become more and more complex, like GPT-3, which is a huge model, it also costs a significant amount of energy to execute them. You cannot really execute such a model for every task, because at some point the revenue that you get, maybe out of your ads, is not enough to pay the energy bill, or if you ran it on a phone, your phone would get way too hot in your pocket. So there are ceilings where we basically cannot use the latest and greatest technology, because it's too energy-hungry.
So I think there will be increasing pressure to make these models, and the hardware, more energy-efficient. At Qualcomm we do things like model compression and quantization, so that we run these models on much less precise hardware, like 8-bit integer rather than 32-bit floating-point precision; that saves a huge amount of energy. Now, there's still a lot to win, because the brain

has a fairly different compute paradigm and is far more efficient. People nowadays think that that's because we're in this von Neumann type of compute paradigm, where the memory is largely separated from the registers where the compute happens, where you do the big matrix multiplications and so on. You have to move the weights of your neural net off-chip to the places where you do the computation, do the computation, and then move the results all the way back off-chip, and this movement is extremely power-hungry. So people are starting to think: can we design hardware more like a brain? Can we make the memory, for instance, sit distributed but very close to where the computation happens, so we don't have to move this information back and forth all the time? That kind of development I think is extremely promising, and it can bring down the power consumption of these networks on this new type of hardware. And then of course we haven't even talked about spiking neural nets. It's unclear yet whether spiking buys you something in the silicon hardware that we build, or whether it's something that's mostly useful in the wetware that's in our brain, but these ideas at least are very interesting, and we will be looking to the brain in order to make our AI more energy-efficient in the future.
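The 8-bit quantization mentioned above can be sketched as follows. This is a minimal post-training, symmetric per-tensor scheme for illustration only, not Qualcomm's actual toolchain, and the variable names are made up: weights are mapped to int8 with a single scale factor, cutting storage (and data movement) by 4x at a small accuracy cost.

```python
import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(scale=0.1, size=1000).astype(np.float32)  # toy fp32 weights

# Symmetric per-tensor quantization: one scale maps [-max|w|, max|w|]
# onto the int8 range [-127, 127].
scale = np.max(np.abs(w)) / 127.0
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# Dequantize to compare against the original weights.
w_deq = w_int8.astype(np.float32) * scale

# 8 bits instead of 32 bits per weight, with a small relative error
# (at most half a quantization step, i.e. about 1/254 of the range).
rel_err = float(np.abs(w_deq - w).max() / np.abs(w).max())
assert rel_err < 0.01
```

Real deployments typically quantize per-channel, calibrate the scale on activation statistics, or fine-tune with quantization in the loop; this sketch only shows where the 4x saving comes from.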
First, I wasn't necessarily talking about edge computing, because edge computing is a different dimension: some of the computing happens neither in the cloud nor on your device, but on servers which are at the edge of the compute network. Your device could also be thought of as the edge: some of the computation happens on your device and some at smaller edge clouds, and the 5G network moves data back and forth between them. That's more like distributed learning, or federated learning if you want; it's a whole different topic. Here I was talking about any chip that needs to do a computation, whether it's in a phone or in a cloud or wherever. How can we

make that particular chip more power-efficient? And is there an environmental angle? Yes, there is, but I want to say that I am not too confident that if we make our chips more energy-efficient, we will be using less power, because I think we will basically use as much power as is possible and as is economically viable. It is what I call the refrigerator effect: when I moved from the Netherlands to the U.S. to live there for 15 years, my refrigerator was a lot bigger, like four times bigger or something, and it took a couple of weeks to fill it, and then it was full again all the time. It's the same with computing: if I give you the same intelligence for four times less power, you will use all of it; you will use four times more intelligence in some sense, because your power is basically the budget. So efficiency is still very useful, but I think we will need different mechanisms to use less power. For instance, I believe we should make sure that the polluter pays: in other words, if you use power, you've got to pay for the carbon footprint you're creating by doing that computation. I think we should really tax pollution and the use of energy a lot more than we're currently doing, because right now our future generations are paying for the pollution that we cause, whereas we should really be paying for it right now.
So to come back to your question of whether I think there is an issue with AI using a lot of energy: my answer is yes. As AI becomes increasingly important, it will use an increasing fraction of the available power, and since there is a very strong economic driver behind it, it won't just stop by telling companies or people, "Hey, you're polluting, you should stop that." That's just not going to happen. You should basically put a climate tax on the use of energy, and I'm a very strong proponent of doing just that. But it is a big issue, the amount of energy that's being used by AI, certainly in the future.

To me it's easiest to answer the question in terms of an application I've been working on myself that I'm very excited about. It has to do with the fact that there are nowadays machines that can irradiate tumors at the same time as they image the body with an MRI machine. This basically means that you get very fast imaging of the body as it moves, let's say as you breathe, and at the same time there is a radiation beam that destroys the tumor. Of course, as you're breathing, this tumor is moving around, and you want your beam to adapt to that movement. Nowadays this is not possible: you first make an image, and then you immobilize the patient, hoping that the patient doesn't move at all, but of course he or she does, and then when you radiate, you often hit healthy tissue, which is very damaging, and you don't hit the cancerous tissue, which means it isn't destroyed. So I was very excited about seeing that application, but what it means is that you will have to do far faster MRI reconstruction. MRI works as follows: you observe measurements in the frequency domain, the Fourier domain as it's called, basically certain frequencies, and from those you need to reconstruct the image. Typically you need, let's say, N of these measurements, but measuring all of them takes too long.
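The acquisition-and-reconstruction setup described here can be sketched numerically. This is a toy illustration, not the actual fastMRI method: keep only a fraction of the Fourier-domain (k-space) samples and do a naive zero-filled inverse FFT, which is the baseline a learned reconstructor would improve on.

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.normal(size=(64, 64))   # stand-in for an MR image

# Fully sampled "measurements": the 2D Fourier transform of the image.
kspace = np.fft.fft2(img)

# Fully sampled k-space reconstructs the image exactly (up to float error).
full = np.real(np.fft.ifft2(kspace))
assert np.allclose(full, img)

# Undersample: keep ~25% of the measurements (4x faster acquisition)
# and zero out the rest, then reconstruct naively with an inverse FFT.
mask = rng.random((64, 64)) < 0.25
recon = np.real(np.fft.ifft2(kspace * mask))

# Information was lost, so the naive reconstruction has error; a neural
# net trained on many past reconstructions learns to fill in the gaps.
err = float(np.linalg.norm(recon - img) / np.linalg.norm(img))
assert err > 0.0
```

In practice the sampling mask is structured (e.g. dense at low frequencies), and methods like the neural augmentation approach mentioned below combine a learned model with classical signal processing instead of this zero-filled baseline.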
So now we're going to do it with 10 times fewer of these measurements, so we can get 10 times faster imaging, yet you still need to be able to reconstruct the image. One way to do it is to learn from many images that have been reconstructed in the past: with these types of measurements, this is what the image should look like, and with that type of measurement, this is what it should look like. So you basically learn to reconstruct a high-resolution image. There was a particular competition, the fastMRI competition, organized by Facebook and NYU, and we participated in it with a technique called neural augmentation, where you use a neural net but also a more classic signal-processing technique, and you combine the two to get the best of both worlds. We applied this kind of technology, and we won in one of the tracks, in collaboration with Philips. So that, I thought, was a very neat application of AI to healthcare, and we are still advancing it further, to now also think about which of the measurements you actually want to do: not just doing well with a small number of measurements, but actually choosing which sequence of measurements to take. So you do the first measurement, reconstruct, look at the result, and then decide:

okay, what is the next measurement that we need to do. So instead, you sequentially reason your way through which measurements you should do, which could give another boost in the efficiency of these kinds of methods. In healthcare I think there's enormous opportunity, but of course there are many other areas. In mobility, too, there's clearly huge opportunity to build safer cars and safer ways to move from A to B. Right now there are about 1.2 million traffic deaths per year, and we should be able to get that down a lot by applying AI technology.

Yeah, that's a great question. I think it's important to think about machine learning as a horizontal technology, and by this I mean: don't think of it as just another application, like autonomous driving or even wireless, but as a technology that will permeate everything that we do, all other technologies. This is what we are currently seeing: slowly, every field is being transformed by AI and machine learning. It started with speech recognition, then image recognition and video and all these kinds of things, and then it entered the medical field, medical image analysis. So it's much more like the internet or the computer. There's no way you can avoid it; it's a basic technique that will be part of every other application that you build in the future. So where will it make a big impact? That's a hard question, because it will make an impact everywhere, but I can give you a few examples of where I think it's going to make a very interesting impact.
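The sequential measure-reconstruct-decide loop described a moment ago can be sketched as a toy greedy acquisition. Here an oracle that sees the ground-truth image picks each next Fourier row; that oracle is purely illustrative, since the whole point of the learned approach is to train a policy that makes this choice without access to the ground truth.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((32, 32))              # stand-in for the true anatomy
kspace = np.fft.fft2(image)

def reconstruct(keep):
    """Zero-filled reconstruction from the k-space rows measured so far."""
    return np.real(np.fft.ifft2(np.where(keep[:, None], kspace, 0)))

keep = np.zeros(32, dtype=bool)
keep[0] = True                            # start by measuring the DC row
errors = []
for _ in range(7):                        # budget: 7 further measurements
    # Oracle greedy choice (illustration only): try every unmeasured row and
    # take the one that reduces reconstruction error the most. A learned
    # policy would have to predict this choice from the current estimate.
    candidates = np.flatnonzero(~keep)
    best = min(candidates,
               key=lambda r: np.linalg.norm(
                   reconstruct(keep | (np.arange(32) == r)) - image))
    keep[best] = True
    errors.append(np.linalg.norm(reconstruct(keep) - image))
```

Each measurement can only add information, so the reconstruction error shrinks step by step; the efficiency gain comes from choosing the order of measurements well rather than fixing it in advance.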

So the first one is perhaps materials design and drug design. There, I think, with machine learning techniques we will be able to model molecules, and the way molecules interact, a lot better, and we will be able not only to predict which molecules will have the right type of properties that we're looking for, either as a drug or as some other material, but we can even specify the properties and then let the system generate the molecules directly. So I can envision a tool where I could say, I want a particular material that does this and this and this, and then the system will just spit out possible molecules that you can test in the lab, molecules that potentially have these desired properties. So I can see that kind of application is on a very interesting trajectory.

Similarly, in agriculture, I can imagine a self-regulating greenhouse or something like this. In the future we may not want to use large swaths of field to grow our vegetables and other crops. We want to maybe, like we do in the Netherlands at a very large scale, build greenhouses, or maybe huge multi-level factories, where exactly the right amount of carbon dioxide, exactly the right climate, exactly the right moisture, and exactly the right type of light is used at every phase of a plant's growth, everything optimized and self-regulated, so that we will grow food in these factories at unprecedented efficiency. I think that might

help us feed the world, and it also might be much more independent of the actual environment in which you place these things. It could be in the Netherlands, but it could also be in Africa or some arbitrary place; you could put these things there and run them almost automatically, like a self-driving car, where the software would self-learn and self-adapt.

And then within Qualcomm you can also see a couple of directions where AI, or I would say machine learning, is making a profound impact, for instance in wireless communications; of course Qualcomm is a company that is all about wireless communication. Up to 5G, I would say, many things were hand-designed: engineers, building on decades of knowledge, designed these systems, these modems and all these kinds of things, by hand. What you see now is that machine learning is going to optimize these kinds of systems much further than humans ever could. You generate a huge amount of data; think about wanting to send data over a noisy channel. You don't know precisely what the channel looks like, but you can create huge data sets of data in and noisy data out, and then you can learn a system that will learn to decode, or what's called demodulate, the signal on the other end of the channel much better than the hand-designed systems that we currently have. So

what you see is that machine learning, again as this horizontal technology, will help replace the traditional signal-processing tools with these self-learning tools and improve these systems dramatically. In the next generations of wireless communication systems I think machine learning will play a very important role. And then maybe one other example is chip design. In chip design we're also seeing that techniques such as reinforcement learning can take over where usually teams of hundreds of engineers would sit together and build these chips over months and months; it's a very, very complex engineering process. What's happening now is that bits and pieces of that process are being taken over by machine learning tools, and it's not hard to predict that eventually machine learning tools will take over the entire process, and since this is such a complex problem they will do a much better job than these human teams do. Now here it's also interesting that humans will have to work together with the machine learning algorithm, certainly in the beginning; these become tools that humans will use to do a better design. But at some point I think it will be mostly automated, and it will be clear that the automated procedure is actually better than the procedure where the humans do the work, because it's just too complex. So these are just four examples, but there are many, many more; basically everything you can think of will probably at some point be revolutionized by machine learning in one way or another. That's my prediction.
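The data-in, noisy-data-out idea for learning a demodulator can be made concrete with a deliberately tiny sketch. Here the "channel" applies a gain, offset, and Gaussian noise that the receiver does not know, and the "learned" demodulator is just a nearest-mean classifier fit from labeled examples, a minimal stand-in for the neural demodulators being described; all parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Channel unknown to the receiver: gain, DC offset, additive Gaussian noise.
GAIN, OFFSET, NOISE = 0.7, 0.2, 0.3

def channel(bits):
    symbols = 2.0 * bits - 1.0            # BPSK mapping: 0 -> -1, 1 -> +1
    return GAIN * symbols + OFFSET + NOISE * rng.normal(size=bits.shape)

# Training data: known bits in, noisy channel observations out.
train_bits = rng.integers(0, 2, 10_000)
train_obs = channel(train_bits)

# "Learn" the demodulator from data: per-class mean observations, then a
# nearest-mean decision rule. No hand-modeling of gain or offset is needed;
# the decision boundary is inferred entirely from the data.
mean0 = train_obs[train_bits == 0].mean()
mean1 = train_obs[train_bits == 1].mean()

def demodulate(obs):
    return (np.abs(obs - mean1) < np.abs(obs - mean0)).astype(int)

# Evaluate the bit error rate on fresh data from the same channel.
test_bits = rng.integers(0, 2, 10_000)
ber = (demodulate(channel(test_bits)) != test_bits).mean()
```

The same recipe scales up: replace the nearest-mean rule with a neural network and the scalar channel with a realistic one, and the demodulator adapts to channel quirks that a hand-designed decision rule would have to model explicitly.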
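And the placement step of chip design can be caricatured in a few lines: assign cells to grid slots so that connected cells end up close together. Real systems use reinforcement learning over enormous design spaces; this toy uses plain swap-based hill climbing on an invented random netlist, just to show the shape of the optimization problem being described.

```python
import numpy as np

rng = np.random.default_rng(4)

N_CELLS, GRID = 16, 4                      # 16 cells placed on a 4x4 grid
# Random two-pin netlist: pairs of cells that must be wired together.
nets = [tuple(rng.choice(N_CELLS, size=2, replace=False)) for _ in range(24)]

def wirelength(slots):
    """Total Manhattan distance between connected cells,
    where slots[c] is the grid position (0..15) assigned to cell c."""
    xy = np.stack([slots // GRID, slots % GRID], axis=1)
    return int(sum(np.abs(xy[a] - xy[b]).sum() for a, b in nets))

slots = rng.permutation(N_CELLS)           # random initial placement
start = best = wirelength(slots)
for _ in range(2000):                      # hill climbing: propose a swap,
    i, j = rng.choice(N_CELLS, size=2, replace=False)
    slots[[i, j]] = slots[[j, i]]
    cost = wirelength(slots)
    if cost <= best:                       # keep swaps that don't hurt,
        best = cost
    else:                                  # otherwise undo the swap
        slots[[i, j]] = slots[[j, i]]
```

A reinforcement learning placer replaces this blind local search with a learned policy that proposes placements directly, which is what makes the approach viable at the scale of real chips.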

2021-01-12
