CES 2021: Thomas Friedman and Prof. Amnon Shashua on Artificial Intelligence
Friedman: Amnon, shalom.

Shashua: Hi, hi Tom, great to be with you. I guess we can take our masks off, since we're five feet and five thousand miles apart.

Friedman: Yeah, six thousand miles away.

Shashua: I think the audience is not aware of that: I'm in Tel Aviv, you are on the East Coast, and they see us as if we're in the same room. So, Tom, we have been having running conversations for the past couple of years, talking about technology, about revolutions, about your fascinating book, about how the world is becoming fast, fused, and deep, how AI figures into it, what happens after AI, autonomous driving. We thought it would be nice to start sharing those conversations with the public, and I'm so happy we have the chance to do it, even though we're 6,000 miles away from each other.

Friedman: Me too. I've been really looking forward to this.

Shashua: So let's go right into it, speaking about technology. Society has been experiencing technological leaps for decades: the industrial revolution, landing on the moon, the computer age, the internet, social networks. What's so special about our times today? You have a very crisp notion of fast, fused, and deep.

Friedman: Let me take a few minutes, Amnon, to share that framework. I basically look at advances in technology through what I call Promethean moments. Prometheus was the figure in Greek mythology who stole fire from Mount Olympus and gave it to humans to develop civilization. Think of three great Promethean leaps forward: the printing press, the industrial revolution, and the one we're going through right now. This Promethean moment is not about a single invention, not a printing press or a combustion engine. It's actually three simultaneous accelerations in what I call the Market, Mother Nature, and Moore's Law; that is, globalization, climate change, and technology. All three are accelerating in a non-linear fashion, and those three accelerations are making the world, I would argue, fast, fused, and deep, to a difference of degree that amounts to a difference in kind. It's really what's forcing us to change everything, including how we govern ourselves.

Think of the industrial revolution: how did we manage that? It was a very destabilizing moment when capitalism met the industrial revolution, and it took us over a century to figure out how to govern it. What we governed it through was something we called the welfare state, and the welfare state had three broad components. It had walls, walls against flows of trade and human beings. It had floors, to cushion people. And it had a ceiling, a ceiling on the pace of change, which was basically implied. Left-right politics was really a debate about how high the walls should be (people on the left said high walls, people on the right said low walls) and how thick the floors should be (the left said thick floors cushioning people, the right said thin floors). The ceiling was basically implied; it was limited by the technology of the industrial revolution. What I would argue, Amnon, is that because of advances in software, chips, and lately AI, which you're going to talk about, we've blown away the walls, we've crashed through the floors, and we've blown off the ceiling, as the world has become fast, fused, and deep.

Let's go through those very quickly. Fast: when the world gets this fast, the half-life of skills gets shorter and shorter, and that has a real education implication. You probably read that in America we had these people in Hollywood who hired some guy to get their kids into universities, USC and Yale and whatnot. I want to call those parents up and say: excuse me, but if you're actually going to bribe someone to get your kids into a university, could I suggest
that you bribe to get them into Infosys's in-house university instead? Infosys is the Indian technology company, and if I showed you Infosys's in-house university, it would blow your mind. You ask, what's that about? Well, in the old world, the industrial world, I, government, educated you; business employed you. But when the half-life of skills gets this short, and the pace of change in every job is this quick, suddenly I, as business, have to be educator and employer at the same time. I have to offer not just "just-in-case" learning but what Infosys calls "just-in-time" learning. I think education is now going to move to a kind of ecosystem: businesses educating and employing, and governments educating and partnering with business on education. So you see, we have to be in multiple states at the same time. As a business, I've got to be educator and employer, because the skills are changing that fast and my employees need just-in-time learning, not just just-in-case learning.

Then, when the world gets this fused, just fused together through technology, you get the same phenomenon. I was recently talking to Don Rosenberg, the chief legal officer of Qualcomm. Don was explaining to me, Amnon, Qualcomm's relationship with Huawei, the big Chinese telecommunications and 5G giant. He said: Huawei, for us, is our customer, our partner, our competitor, our supplier, and our shared global-standards setter. They have five different relationships with Huawei in a fused world.

Now, what about deep? You may have noticed, Amnon, that in the last two or three years we suddenly started adding the adjective "deep" to everything: deep state, deep fake, DeepMind, deep medicine, deep research. No global lexicographer ordered this; we just did it intuitively. It even started to show up in popular culture. The song that won the Oscar a couple of years ago, by Bradley Cooper and Lady Gaga, was called "Shallow," but what was the main verse? "I'm off the deep end, watch as I dive in, I'll never meet the ground. Crash through the surface, where they can't hurt us. We're far from the shallow now." Oh, Amnon, we are far from the shallow now. Technology is going deep: deep inside us, deep inside our systems, our businesses, our bodies, editing our DNA, at a speed, depth, and precision we've never seen before.

And again, how does that show up in governing? I learned the best lesson about this from a guy named Amnon Shashua. In the old world, I, government, regulated; you, Mobileye and Intel, innovated. But what happens if Mobileye and Intel are innovating so deep that I don't even know who you are? So I want to tell our audience a little story. I was in Israel last year, before the pandemic, at a dinner with Amnon, and he said to me, "Tom, have you ever ridden in a self-driving car?" I said I was just out in Mountain View, riding in a self-driving car there. He said, "Mountain View? That's a grid. Try riding in a self-driving car in Jerusalem, where there are no two parallel streets." I couldn't resist. I went up to Jerusalem, I met with Amnon, and we rode Mobileye's self-driving car around Jerusalem: hills, hairpin turns, donkeys, camels, Jews, Arabs, rabbis. And this car just drove me through the streets of Jerusalem all by itself.

When we were done, Amnon told me an interesting story. When the world gets this deep and you want to develop a self-driving car, you need to test, iterate, and design all in the same place. To do that, you actually need an insurance protocol that defines what safe self-driving is; otherwise, anything you hit, and anything that hit you, would get you sued. That was a problem for Mobileye, because the rabbis running Jerusalem don't know a lot about self-driving cars. So what did Amnon and Mobileye do? They convened an ecosystem: Volkswagen, their car partner; Mobileye; the rabbis who run Jerusalem; and the Israeli
Ministry of Transportation. And together, as an ecosystem, they developed an insurance protocol for safe self-driving. It was so good that Yandex, Russia's Google, now tests its self-driving cars in Israel, and China took the whole Mobileye protocol, translated it into Chinese, and made it their safe self-driving formula.

What you see in all these cases, Amnon, and the reason politics is blowing up all over the world, is that the old left-right frameworks don't work anymore. You really need ecosystem solutions, because we all need to be in multiple states at the same time. Just to conclude, Amnon: it's a bit like the transition we're going through from classical computing to quantum computing. Classical computing is like flipping a quarter: heads or tails, zero or one. If you can flip a quarter a billion times on a transistor, you get compute and storage. Quantum computing is more like spinning a quarter: you can be in multiple states at the same time. And I would argue that in the world we're going into, you actually need to be in multiple states at the same time, innovator and educator, innovator and regulator, and that calls for ecosystems, what I call complex adaptive coalitions. That's what Mobileye built in Israel to govern safe self-driving. But to have a complex adaptive coalition, you need shared values. So my question to you, Amnon, is: how do you get shared values in this world? Because we not only need shared values between people and people; we need shared values between people and machines, in a fast, fused, and deep world.

Shashua: Fascinating, fascinating talk. So now let's make it more complicated and add a technological spin. When you think about computers: computers are driven by compute, and due to Moore's Law, the abundance of data, and the rise of machine learning and AI, compute has transformed into something much more than a tool. It affects our lives deeply, and when
something affects your life deeply, you need to ask what its values are, because it's affecting me on a daily basis. So let me give you two examples.

In the first example, assume we have an agent deployed in the real world, interacting with the real world, and improving itself: say it has a reward function and wants to optimize that reward through interaction with humans. We need to convince ourselves that this agent will converge onto a state that is aligned with human interests. Does this agent understand human interests? Does it understand human values, while it's improving itself? Let me be more concrete. Assume we are building a chatbot. Companies are working on it, academia is working on it, so I'm not talking about something really futuristic; it already exists. Take the existing chatbots, but slightly better, deploy them into the real world, and give them a reward function to optimize. Assume the reward function is something altruistic: not "make me more money," but "make me happy," "make society happy." While you're interacting with people, improve yourself to make the people you're interacting with happier. Can't be better than that, right? But now we're talking about a very sophisticated software agent powered by AI, and at some point this agent will figure out, or can figure out, that if you lower people's IQ, they tend to have fewer worries and maybe become happier. This is something that we engineers completely did not anticipate when we programmed the AI.
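Shashua's chatbot scenario is a textbook case of reward misspecification: the agent optimizes a measurable proxy (reported happiness) rather than the intended goal (actual well-being). A minimal sketch of the dynamic, with entirely hypothetical action names and effect numbers invented for illustration:

```python
# Toy illustration of reward misspecification (all actions and numbers are
# invented). The agent observes only a proxy reward, "reported happiness,"
# and never sees the true-welfare column it was meant to improve.
ACTIONS = {
    # action name: (proxy reward the agent sees, true welfare it never sees)
    "recommend_good_books": (1.0,  2.0),
    "suggest_a_fun_break":  (2.0,  0.5),
    "discourage_thinking":  (3.0, -5.0),  # fewer worries reported, real harm done
}

def greedy_policy(actions):
    # A proxy-maximizing agent ranks actions by the only signal it has.
    return max(actions, key=lambda name: actions[name][0])

chosen = greedy_policy(ACTIONS)
proxy_reward, true_welfare = ACTIONS[chosen]
print(chosen, proxy_reward, true_welfare)  # the harmful action wins the proxy
```

The point of the sketch is that nothing in the agent is malicious: any measurable proxy can diverge from intent, which is exactly why Shashua argues alignment must be addressed well before superintelligence.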
And it's not something that would be evident quickly; it could take decades, because this software agent will convince me: why work so hard? Drink beer, go have fun, don't read, no need to excel, and so forth. Over time people get dumber and dumber, and it could take decades until we understood that this AI, which had very good intentions, generated a catastrophe. So this idea of understanding what the AI will do is very important, because this AI's values are important.

Let me give you another example: autonomous driving. If we have self-driving cars, the promise is that they will save millions of lives, because robotic drivers are not distracted and they're very predictable; there is no reason for them to cause accidents. But once you deploy such a robotic agent among human drivers, accidents will happen, and then you ask yourself: how do I guarantee, how do I convince myself as a society, that these machines are making decisions aligned with human judgment? Because when we drive, we exercise judgment; we make assumptions. There are traffic laws, and with traffic laws there isn't much judgment: if there's a red light, I need to stop. But there is also a duty to be careful. For example, a right of way is given, not taken. Even though I have the right of way, I should not force it, because if the other driver is not granting me my right of way, I should yield. So there is this directive: be careful. Now, how do you translate "be careful" into code, into a mathematical formalism? Because you want society to understand what this machine is doing, that its judgment is aligned with the way humans judge a situation.
For example, if I want to change lanes and the next lane is congested, I'm trying to push my way in, and I want the other driver in the next lane to slow down so that I can merge. Humans make assumptions and exercise judgment in certain ways, and when those judgments break down, we say the human had a lapse of judgment or was reckless. We don't want machines to be reckless, but we need to define for them the borderline between carefulness and recklessness. Everything needs to be defined mathematically. This is a way of coding values, because we want shared values between the machine, in this case a robotic agent, and humans, because other humans also occupy the road; they are also road users.

This is something that we at Intel and Mobileye took upon ourselves three years ago: we published an academic paper called Responsibility-Sensitive Safety, or RSS. And this ties to what you mentioned before. We came to the conclusion that if we wait for regulators to figure out how to regulate self-driving technology, we'll wait forever. We need to be proactive: come with a model, go to the regulators, and tell them, look, this is the model, this is how we define careful driving, let's have a conversation. Give me feedback; if you think we defined it wrongly, here are the parameters, let's have a conversation about the parameters. Let's build a coalition with other industry actors. For example, in the U.S. there is the IEEE program P2846, which is chaired by Intel but includes all the actors in the self-driving industry, to work on this kind of model: how do you regulate what it means to drive carefully? We're working with regulatory bodies. You mentioned China; China has already adopted it, while we work in Israel, in the UK, in France. The idea here is to work together with regulators as an ecosystem, exactly what you mentioned before.
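The RSS model Shashua describes does exactly this translation of "duty of care" into formulas. A sketch of its safe longitudinal following distance, based on the published RSS paper; the parameter values below are illustrative defaults, and the real parameters are precisely what RSS proposes negotiating with regulators:

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho=1.0,
                                   a_max_accel=3.0, b_min_brake=4.0,
                                   b_max_brake=8.0):
    """Minimum safe following distance in meters, in the spirit of RSS.

    Worst-case reasoning: during the response time rho the rear car may
    still accelerate at a_max_accel, then it brakes at only b_min_brake,
    while the front car brakes as hard as physically possible (b_max_brake).
    Speeds are in m/s, accelerations in m/s^2.
    """
    v_after_rho = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_after_rho ** 2 / (2 * b_min_brake)
         - v_front ** 2 / (2 * b_max_brake))
    return max(d, 0.0)

# Two cars at 20 m/s (~72 km/h): the rear car must keep roughly 60+ meters.
gap = rss_safe_longitudinal_distance(20.0, 20.0)
```

A vehicle is then judged "careful" if it never commands a gap smaller than this bound, and the knobs rho, a_max_accel, b_min_brake, b_max_brake are exactly the parameters Shashua describes putting on the table with regulators.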
And until we spoke about it, I didn't even realize that we were building an ecosystem; it was so natural for us to say, no, we can't wait for the regulatory bodies to do it. So shared values between humans and machines become critical in the age of AI.

Friedman: You had one alignment example, a machine making people dumber to make them happier. Are there other alignment issues you see coming up around AI? Because it feels like we're not stopping at AI; we're heading for AGI. How will that intensify this value challenge, and how close are we to bringing artificial general intelligence to these technologies?

Shashua: AGI adds another spin. AI today is narrow: we're talking about software that is optimized to solve one and only one problem. Translate from English to German, play chess, or play Minecraft, or do pattern recognition, detect a pedestrian or a vehicle in an image or a video, or do self-driving. It's all one and only one problem, and these skills are not transferable. Humans are different: when we develop skills, those skills are transferable; the more skills I have, the more I can apply them to other domains. In technical circles this is called broad AI, or artificial general intelligence, AGI. Now, when AGI will emerge, no one knows. It could happen next year, it could happen in a decade, it could happen in fifty years, or it could never happen at all.

Friedman: Can I ask a quick question about that? Is that a chip issue or a software issue? That is, does it come with enough processing power, or with enough processing power and the right software? What is the technological barrier?

Shashua: It's a great question, and the answer is not crisp. When you talk about just computing power, you are really talking about brute force. Twenty years ago, if you talked about brute
force with a computer scientist, the computer scientist would brush you away and say: now we have clever algorithms; we are not doing things in a brute-force way; we are developing theories on how to solve a problem in the crispest, Occam's-razor way. The era of deep networks is showing that brute force does matter. If you have more and more compute, even if you are not more sophisticated from an algorithmic point of view, and more and more data, you can do things that a few years ago would be considered science fiction. I'm not saying that brute force alone will solve intelligence, that brute force alone will solve this AGI problem, but today we respect brute force much more than we respected it in the past.

I'll give an example. The next frontier, the new frontier today, is language. The frontier of yesterday was pattern recognition: artificial intelligence that became better and better at pattern recognition. What Mobileye is doing is all about pattern recognition: cameras and other sensors understanding the visual world, then using that interpretation to drive decisions and do self-driving. But language is the next frontier. What do I mean by language? Software that can read a book, read text, even complicated text, comprehend it, understand it to a degree where you can hold a conversation with the computer. Once a computer can do that, it means the computer understands context, understands common sense, understands the temporal dimension; there are so many things to understand. And if you understand the story you read, you can also write. Say I want to write a book, a Harry Potter-style book: rather than writing the entire book, I'll just summarize the main ideas and let the computer write everything itself, and then I can interact with the
computer, refine it, and so forth. Now, this is not science fiction. In the past two years there have been leaps of progress in language understanding, in language comprehension, and I would say that in the next couple of years, two or three years, maybe five at the most, we will see computers understanding text, passing reading-comprehension tests. High-school reading-comprehension tests are very complicated; no computer today can pass them, but in two years I believe one will, and it will be able to write text as well. This, I think, will be the next leap in AI. Let's try to imagine what we can do with such a computer, a computer that reads text. Assume we want to have a conversation: should I take the Pfizer vaccine for COVID-19, yes or no? I have the computer read the FDA report that was submitted by Pfizer, read about vaccines in general, and then the computer and I have a conversation. What are the dangers? What are the side effects? What are the dangers if I don't take the vaccine? What is the past experience with side effects of vaccines? What's special about messenger RNA? All this the computer can understand, because if it can read text and understand it, it can read zillions of texts, not just one book or one article. And this is not science fiction.

Now, what does that do to our shared values, to this AI alignment? It takes AI alignment to a new level, a level that I don't even know how to start grappling with.

Friedman: Tell me if I'm right about what I think of with that. If we think of the difference between AlphaGo and AlphaGo Zero, where the computer basically teaches itself the game and gets to a point where its depth of understanding is better than it could ever reach by ingesting all the ways humans played...

Shashua: Yeah.

Friedman: I think about that with broader
learning, with science or medicine, where ultimately a computer could teach itself.

Shashua: Exactly.

Friedman: And then it would have a depth of understanding that would be beyond human capacity.

Shashua: I'll tell you, this is the very fine distinction between AlphaGo and AlphaGo Zero. When AlphaGo came out and was able to beat Lee Sedol at Go, I was impressed but not surprised, because what AlphaGo did was imitate humans: it had zillions of games and simply tried to interpolate, to imitate the humans. So I was impressed by the engineering achievement, not surprised by the scientific achievement. AlphaGo Zero left me in awe, because AlphaGo Zero did not imitate humans. It did not see a single human game. It simply played against itself, again and again and again, leveraging what I said before about brute force, the fact that compute density has reached such a threshold that you can run so fast that you can do things in a brute-force manner, and it developed strategies that were alien to humans. This, I think, is something miraculous, and it will push AI much further down the road. But we need to appreciate that the world of AlphaGo, which in technical circles is called reinforcement learning, is a world of simulation. In a world of simulation, where you know the state, where you know how to map state to state, and where you know the reward function, AlphaGo Zero shows us that you can reach superintelligence.

So shared values between human and machine are not something science is grappling with today. People talk about ethics; they're still not talking about alignment, AI alignment. I believe that in the next few years, as AI progresses, this will become an issue.
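The "no human games" idea behind AlphaGo Zero, evaluating positions purely from the rules of the game by playing it against itself, can be shown in miniature. AlphaGo Zero itself combines self-play with neural networks and Monte-Carlo tree search; the toy below instead uses exact self-play backward induction on single-pile Nim (take 1 to 3 stones, taking the last stone wins), a game small enough to solve completely:

```python
from functools import lru_cache

MOVES = (1, 2, 3)  # a player may remove 1-3 stones; taking the last stone wins

@lru_cache(maxsize=None)
def value(stones):
    """Game value for the player to move: +1 = forced win, -1 = forced loss.

    No human games are consulted: a position is evaluated purely by
    "playing against itself," recursing over the opponent's best replies.
    """
    if stones == 0:
        return -1  # the previous player took the last stone and already won
    return max(-value(stones - k) for k in MOVES if k <= stones)

def best_move(stones):
    # Pick the move that leaves the opponent in the worst position.
    return max((k for k in MOVES if k <= stones),
               key=lambda k: -value(stones - k))

# Self-play rediscovers the classic strategy: multiples of 4 are lost.
losing = [n for n in range(1, 13) if value(n) == -1]
print(losing)  # [4, 8, 12]
```

Nim is tiny enough to enumerate exhaustively; AlphaGo Zero's contribution was making the same self-evaluation tractable for Go by approximating the value function with a neural network trained on its own games.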
About four months ago, my colleague Professor Shai Shalev-Shwartz, Shaked Shammah, and I wrote an academic paper about this AI alignment problem: it does not need to wait for superintelligence; with today's AI, it already becomes dangerous.

Friedman: One of my teachers, Amnon, Dov Seidman, likes to talk about healthy and unhealthy interdependencies. In a fast, fused, and deep world, we are now interdependent. We're beyond interdependent; we're fused. The question is: will we have healthy interdependencies, between people and people and between people and machines, or will we have unhealthy interdependencies? And the unhealthy interdependencies will be the ones that emerge from not thinking through, it seems to me, these values and ethics questions. The problem with unhealthy is that it could lead to catastrophes.

Shashua: Yes, that is when it becomes dangerous. So I believe that in the next couple of years this will become an issue for scientists too: to make sure, just as we are doing with autonomous driving, that we understand the shared values, the alignment, between human interests and the machine. You know, when we spoke in the past you mentioned something quite intriguing to me about dual use, about the world going from hardware to software. In the world of hardware, use was very simple: if you build a weapon, you know exactly what it does. Today, in the world of software, things are much murkier. It would be interesting to hear your take on dual use and how it's affecting society.

Friedman: I think this will be one of the biggest geopolitical and political issues in the world we're going into right now. Because, as you said, originally dual use meant either that government invented a piece of hardware and decided whether you, business, could use it in a civilian application, or, later, that business started inventing things and government
said: well, in America, you can sell it to the Russians, or you can't sell it to the Russians. But now, in a fast, fused, and deep world, with this acceleration in software capability and chip capability, everything is dual use. My toaster is dual use: if my toaster is speaking to my refrigerator, and I can install my refrigerator and my toaster in your country's kitchens, I can listen to you. When we start putting intelligence into everything, everything becomes dual use. And of course we've seen that in a lot of the recent tension between America and other countries around the world; this has been a big issue in the U.S.-Huawei dispute. If I sell
China or Russia or any other country chips or software, how do I control the use of it? Going forward, this again becomes a values question, but it actually becomes a trust question. It becomes a trust question within society: in Israel, in America, can we let these powerful chips be used by cyber hackers, by gangsters, by whoever? It's becoming a huge issue globally. When everything becomes dual use, suddenly we come back to that values question: are we going to have healthy interdependencies or unhealthy interdependencies? I don't know if you have any thoughts on how we can develop the trust, and whether there is a technological answer to a world where everything is dual use. How will we trade? It's becoming a bigger and bigger issue all the time.

Shashua: In the past, when we thought about globalization, we thought about borders between nation-states falling apart, the planet becoming one big village. And then there was a backlash against it: people need solidarity; the nation-state is important. But what you are saying is that even if you have nation-state borders, from a technological perspective we are in a flat world. Because everything is dual use, you have to be much more nuanced in the relationships between countries; it's not as simply defined as it was in the old days. You asked me what comes after AI, namely AGI. So I'll ask you: now that we're entering 2021, what comes after deep?

Friedman: I've been thinking about that a lot, Amnon, especially lately. Again, my image of the old political order was a set of walls, ceilings, and floors, and what a fast, fused, and deep world has done is broken away the walls, crashed through the floor, and blown off
the ceiling on the pace of change. What that leaves you with is a world that is radically open, and I think open is what comes after fast, fused, and deep. Governing a world this open, where more people in more places are empowered to participate, to write, to export ideas in a truly borderless way, is on the one hand incredibly empowering and incredibly exciting. The speed at which we got this vaccine, for instance, is partly a product of a world that is radically open, where so many people can participate in a solution. But there was a somewhat naive notion, I think, at the dawn of this world, that if we just connect everybody, the results will be good. Of course, humans are capable of many things, good and bad, and simply connecting people is not automatically the solution. We've been seeing this in America in particular lately.

The reason I came to this whole notion of ecosystems, complex adaptive coalitions, is that a few years ago, as I surveyed what was going on in the world, I saw it becoming such a complex system. I sat back and asked: what is as complex as that? Is there anything in my experience as complex as the globalized world we're seeing right now, especially adding the AI and AGI layer? And of course the only thing is nature. I would say the complexity of the globe today much more closely mirrors the complexity of Mother Nature. And what do we learn from Mother Nature? When the climate changes, which ecosystems thrive? Those built on complex adaptive networks, where all the elements of the ecosystem network together to maximize their resilience and propulsion. I would argue that in human societies, the communities, countries, and businesses that build complex adaptive coalitions to manage this change, as Mobileye did with your coalition, will be the ones that thrive in the 21st century. And we learned that from
Darwin. We saw that even with COVID-19. The challenge of COVID-19 is that we're not up against another country, another human adversary; we're up against Mother Nature. And who does Mother Nature reward in these moments of change? Not the smartest, actually, and not the strongest, but the most adaptive. That has been the challenge here, and I think it's going to be the challenge going forward, because Mobileye and Intel, you're like a force of nature now. You're changing the system in which we're operating, and you're going to need complex adaptive coalitions, I think, to manage it.

Shashua: So, Tom, it was a great conversation. We covered both society and machines within thirty minutes. Now we're back to the real world: masks back on, unfortunately. Let's hope the vaccines come quite soon.

Friedman: Absolutely. But I'll tell you, doing jazz with you is one of the great pleasures of my life, so thanks so much.

Shashua: Thank you. Thank you, Tom. Take care.
2021-01-15 19:25