Seems like AI has kind of bubbled into our consciousness in the last year or so, and the question is: who do you talk to who can give you the best possible perspective? Today I'm talking to Dario Gil, who's the big AI honcho at IBM, and we're going to be talking about the future of AI. IBM, you know, has been at the center of AI research for decades now, going back to Deep Blue and Watson and all these kinds of things; I'm sure we'll talk about that today. So there's sort of no better place to start than with someone who's been at the center of that work for a long time. That's one of the reasons I'm so excited about this conversation.

I'm just excited about demystifying the technology, removing a lot of the lingo that is associated with this topic, and trying to bring it to a point where we can have a better understanding of what it is, and what it can and cannot do. [Music]

I wanted to say before we get started, and this is something I said backstage, that I feel very guilty today, because you're inarguably one of the most important figures in AI research in the world, and we have taken you away from your job for a morning. It's like if Oppenheimer's wife in 1944 said, "Let's go and have a little getaway in the Bahamas." It's that kind of thing. What do you say to your wife? "I can't, we have got to work on this thing I can't tell you about."

I do interviews for a living. That's what I do: I generate hours and hours and hours of transcripts of interviews, of tape of interviews. It used to be we would send the tapes out to be transcribed; now they're transcribed in literally five seconds. That's, like, the day-one trivial case. But multiply that out and extend it into the future, and you start to see how, wow, this makes a lot of ordinary operations a lot more efficient.

Well, I think the first thing is that even though AI as a field has been with us for a long time, since the mid-1950s, at that time AI was not a very polite word to say, meaning within the scientific community people didn't use that term. They would have said things like, you know, maybe "I do things related to machine learning," right, or statistical techniques in terms of classifiers and so on. But AI had a mixed reputation, right? It had gone through different cycles of hype, and also moments of a lot of negativity towards it because of lack of success. So I think that would be the first thing we would probably say: AI, what is that? You know, respectable scientists are not working on AI defined as such. And that really changed over the last 15 years only, right? I would say the advent of deep learning over the last decade is when that re-entered the lexicon, saying AI, and that it was a legitimate thing to work on. So I would say that's the first thing we would have noticed in contrast to 20 years ago.

For AI, you know, at the heart of it is the ability to build machines and systems that are able to learn, and to learn by example. So on the positive side, there's just so much digital knowledge that we have accumulated over the last number of decades that we have this tremendous potential to train these machines to learn from all the past knowledge that humans have accumulated, and then use those machines to help us with productivity, right, to in some way collaborate with us or automate things that we don't want to do, et cetera.

So at what point in your 20-year tenure at IBM would you say you kind of snapped
into present, kind of "wow" mode?

I would say in the late 2000s, when IBM was working on the Jeopardy! project, and just seeing the demonstrations of what could be done in question answering. Jeopardy! is this crucial moment in the history of AI. There had been a long and wonderful history inside IBM of these grand challenges. At the very beginning of the field's founding, at the famous Dartmouth conference that IBM actually sponsored, there was an IBMer there called Nathaniel Rochester, and there were a few others who, right after that, started thinking about demonstrations of this field. For example, they created the first program to play checkers, to demonstrate that you could do machine learning on that. Obviously we saw later, in the '90s, the very famous example of chess: that was Deep Blue, playing Kasparov. But those other ones felt like, you know, kind of brute force, anticipating moves ahead; this aspect of dealing with language and question answering felt different. And I think, for us internally and for many others, it was a moment of saying, wow, what are the possibilities here? And then soon after that, connected to the advancements in computing and with deep learning over the last decade, it's just been an all-out front of advancements, and I just continue to be more and more impressed. The last few years have been remarkable too.

My hope is that the good outweighs the bad. My real hope is that the benefits are distributed. If all it does is make the wealthiest nations wealthier, that's a good thing, but it doesn't solve the fundamental problem we have as a world, which is that there is a big gap between haves and have-nots. If AI ends up helping the have-nots more than the haves, then it becomes really interesting. That's actually one thing I really want to talk to Dario about: what's the shape of the impact? Is it widely distributed, or is it concentrated near the top?

The use of AI will be highly democratized, meaning the number of people that have access to its power to make improvements in terms of efficiency and so on will be fairly universal, while the ones who are able to create AI may be quite concentrated. So if you look at it from the lens of who creates wealth and value over sustained periods of time, particularly, say, in a context like business, I think just being a user of AI technology is an insufficient strategy. And the reason for that is: yes, you will get the immediate productivity boost of just making API calls, and that will be a new baseline for everybody, but you're not accruing value in terms of representing your data inside the AI in a way that gives you a sustainable competitive advantage. So what I always try to tell people is: don't just be an AI user, be an AI value creator. I think that will have a lot of consequences in terms of the haves and have-nots, as an example, and that will apply to institutions and regions and countries alike. So I think it would be kind of a mistake, right, to just develop strategies that are just about usage. So there's a lot of considerations in terms of equity about the data and the data sets that we accrue,
and about what problems we are trying to solve. I mean, you mentioned agriculture or health and so on. If we only solve problems that are related to marketing, as an example, that would be a less rich world in terms of opportunity than if we incorporate a much broader set of problems.

Yeah. What do you think are the biggest impediments to the adoption of AI as you think AI ought to be adopted? I mean, if you look, what are the sticking points?

In the end, I'm going to give a non-technological answer as a first one: it has to do with workflow, right? Even if the technology is very capable, the organizational change inside a company to incorporate it into the natural workflow of how people work is, and this is a lesson we have learned over the last decade, hugely important. So there's a lot of design considerations, a lot of: how do people want to work? How do they work today, and what is the natural entry point for AI? So that's number one. And then the second one, for the broad value-creation aspect of it, is the understanding inside companies of how you have to curate and create data, and combine it with external data sets, so that you can have powerful AI models that actually fit your need. And that aspect of what it takes to actually create and curate the data for this modern AI is still a work in progress. I think part of the problem that happens very often when I talk to institutions is that they say, "AI? Yeah, yeah, I'm doing it, I've been doing it for a long time." And the reality is that that answer can sometimes be a bit of a cop-out. It's like: I know you were doing machine learning, you were doing some of these things, but the latest version of AI, what's happening with foundation models, is not only very new, it's very hard to do. And honestly, if you haven't been assembling things at very large scale and spending hundreds of millions of dollars on compute, you're probably not doing it; you're doing something else that is in the broad category. And I think the lessons about what it means to make this transition to this new wave are still in the early phases of understanding.

One of the most persistent critiques of academia, but also of many corporate institutions, in recent years has been siloing, right? Different parts of the organization are going off on their own and not speaking to each other. Is a real potential benefit of AI that it's a simple tool for breaking down those kinds of barriers? Is that an elegant way of sort of saying what I really think?

I was actually just having a conversation with a provost very recently on exactly this topic: all this appetite, right, to collaborate across disciplines. There are a lot of attempts towards that goal, creating interdisciplinary centers, creating dual-degree programs or dual-appointment programs. But actually, a lot of progress in academia happens by methodology too, right? When some methodology gets adopted, and the most famous example of that is the scientific method, it also provides a way to speak to your colleagues across different disciplines. And I think what's happening in AI is linked to that: within the context of the scientific method, as an example, the methodology of how
we do discovery, the role of data, the role of these neural networks in how we actually find the proximity of concepts to one another, is fundamentally different from how we've traditionally applied it. So as we see people across more professions applying this methodology, it's also going to give them some element of a common language with each other. And in fact, in this very high-dimensional representation of information that is present in neural networks, we may find amazing adjacencies or connections between topics in ways that the individual practitioners cannot describe, but that will be latent in these large-scale networks. We are going to suffer a little bit from causality, from the problem of, hey, what's the root cause of that, because I think one of the unsatisfying aspects of this methodology is that it may give you answers for which it doesn't give you good reasons for where the answers came from. And then there will be the traditional process of discovery of saying: if that is the answer, what are the reasons? So we're going to have to do this sort of hybrid way of understanding the world. But I do think that common layer of AI is a powerful new thing.

I would say my favorite movie for AI is 2001: A Space Odyssey, because it has shaped so profoundly, in this case, kind of the bad side of AI; it has shaped the way we talk about the topic.

Sometimes... In the writers' strike that just ended in Hollywood, one of the sticking points was how the studios and writers would treat AI-generated content, right? Would writers get credit if their material was somehow the source for a... but more broadly, did the writers need protections against the use of AI? I could go on; you're probably familiar with all of this. Had either side called you in for advice during that, and I don't know whether they did, had the writers called you and said, "Dario, what should we do about AI, and how should that be reflected in our contract negotiations?", what would you have told them?

The way I think about that is that I would divide it into two pieces. First is what's technically possible, and anticipating scenarios: what can you do with voice cloning, for example. Now it is possible... take dubbing, right? Let's just take that topic. Around the world there were all these folks who dubbed people into other languages. Well, now you can do these incredible renderings, I don't know if you've seen them, where they match the lips, it's your original voice, but speaking any language that you want, as an example. So obviously that has a set of implications around it, just to give an example. So I would say: create a taxonomy that describes the technical capabilities we know of today, and applications to the industry, and examples like, hey, I could film you for five minutes and I could generate two hours of content of you, and I don't have to... you know, if you get paid by the hour, obviously I'm not paying you for the rest. So I would say: technological capability, and then map, with their expertise, the consequences of how it changes the way they work, or the way they interact, or the way they negotiate, and so on. So that would be one element of it. And then the other one is a non-technology-related matter, which is an element almost of distributive justice: who deserves what, right, and who has the power to get what? And then that's a
completely different discussion. That is to say: well, if this is the scenario of what's possible, what do we want, and what are we able to get? And I think that's a different discussion, which is as old as life.

Which do you do first?

I think it's very helpful to have an understanding of what's possible and how it changes the landscape as part of a broader discussion, a broader negotiation. Because you also have to see the opportunities, because there will be a lot of ground to say: actually, if we can do it this way, we can all be that much more efficient in getting this piece of work done, or this filming done, and if we have a reasonable agreement about how both sides benefit from it, then that's a win-win for everybody. [Music]

This will remind us how much we like real interaction, and it will improve the nature of our person-to-person interactions by removing the onerous tasks that human beings are not very good at doing and were never meant to do in the first place. So when we were talking about doctors: I think when you go to the doctor and the diagnosis is really quick and easy, and the doctor can spend the rest of their time talking to you about what's really wrong with you, that's a much better interaction. And it's better not because AI is duplicating what the doctor does, but because AI is doing something completely different.

But one of your daughters, you said, is thinking that she wants to be a doctor. Being a doctor in a post-AI world is surely a very different proposition than being a doctor in a pre-AI world. Have you tried to prepare her for that difference? Have you explained to her what you think will happen to this profession she might enter?

Yeah, I mean, not in an incredible amount of detail, but yes, at the level of understanding what is changing: this information lens with which you can look at the world, what is possible, what it can do, what is our role and what is the role of the technology, and how that shapes things. At that level of abstraction, for sure, but not at the level of, "Don't be a radiologist, because this is what we want from you."

I was going to say, if you're unhappy with your current job, you could do a podcast called "Parenting Tips with Dario," which is just an AI person giving you advice on what your kid should do, based on exactly this: "Should I be a radiologist? Dario, tell me." It seems to be a really important question. Let me ask this question, I'm joking, but in a more serious way. I don't mean to use your daughter as an example, but let's imagine we're giving advice to someone who wants to enter medicine. A really useful conversation to have is: what are the skills that will be most prized in that profession 15 years from now, and are they different from the skills that are prized now? How would you answer that question?

Yeah, I think this goes back to how the scientific method, in this context the practice of medicine, is going to change. I think we will see more changes in how we practice the scientific method and so on, as a consequence of what is happening with the world of computing and information, how we represent information, how we represent knowledge, how we extract meaning from knowledge as a method, than we have seen in the last 200 years. So what I would strongly encourage is not about, hey, use this
tool for doing this or doing that, but about the curriculum itself: understanding how we do problem solving in the age of data and data representation and so on. That needs to be embedded in the curriculum of everybody, I would say actually quite horizontally, but certainly in the context of medicine and science, for sure. And to the extent that that gets ingrained, it will give them a lens so that, no matter what specialty they go into in medicine, they will say: actually, the way I want to tackle improving the quality of care, in addition to all the elements we have practiced in the field of medicine, is this new lens. Are we representing the data the right way? Do we have the right tools to be able to represent that knowledge? Am I incorporating that with my own knowledge in a way that gives me better outcomes? Do I have the rigor of benchmarking and quality of results? So that is what needs to be incorporated.

I really can't assign an essay anymore, can I? Can I assign an essay?

Yeah.

Can I say, "Write me a research paper and come back to me in three weeks"? Can I do that anymore?

I think you can.

How do I do that?

I think you can. Look, there are two questions around that. I think that if one goes and explains the context, why we are here, why this class, what the purpose of this is, and one starts by assuming an element of decency in people, that people are there to learn and so on, then you just give a disclaimer: look, I know that one option you have is to just put in the essay question, click go, and get an answer, but that is not why we're here and that is not the intent of what we're trying to do. So first I would start with the norms of intent and decency and appeal to those as step number one. Then we all know there will be a distribution of use cases: people for whom that will come in one ear and go out the other, and who will do it anyway. So for that subset, I think the technology is going to evolve in such a way that we will have more and more of the ability to discern when something has been AI-generated. It won't be perfect, but there are some elements you can imagine: inputting the essay and saying, hey, this is likely to have been generated. And, for example, one way you can do that, just to give you an intuition: you could have an essay that you write with pencil and paper at the beginning, so you get a baseline of what your writing is like, and then later, when something is generated, there will be obvious differences between the kind of writing that has been produced one way and the other.
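What Dario sketches there, writing one essay with pencil and paper to establish a baseline and then comparing later submissions against it, is essentially a stylometric comparison. Purely as an illustration of that intuition (none of this comes from the conversation, and it is a toy rather than a real detector), a minimal sketch in Python might look like the following; the feature set, the placeholder essay strings, and the threshold are all assumptions and would need real calibration in practice.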
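```python
# Toy sketch: compare a student's known baseline essay against a submitted
# essay using a few crude stylometric features. Illustrative only; the
# features, threshold, and placeholder texts are assumptions.
from collections import Counter
import math
import re

def stylometric_features(text: str) -> dict:
    """Compute a handful of simple style features for a piece of text."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "type_token_ratio": len(counts) / max(len(words), 1),
        # Frequencies of a few common function words, which tend to be
        # fairly stable for an individual writer.
        **{f"freq_{w}": counts[w] / max(len(words), 1)
           for w in ("the", "and", "of", "to", "that")},
    }

def style_distance(text_a: str, text_b: str) -> float:
    """Euclidean distance between the two feature vectors."""
    fa, fb = stylometric_features(text_a), stylometric_features(text_b)
    return math.sqrt(sum((fa[k] - fb[k]) ** 2 for k in fa))

if __name__ == "__main__":
    baseline_essay = "..."   # essay written in class with pencil and paper
    submitted_essay = "..."  # essay turned in three weeks later
    THRESHOLD = 5.0          # illustrative cutoff, not calibrated
    if style_distance(baseline_essay, submitted_essay) > THRESHOLD:
        print("Style differs a lot from the baseline; worth a closer look.")
    else:
        print("Style is roughly consistent with the baseline.")
```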
Yeah, everything you're describing makes sense, but in this respect, at least, it seems to greatly complicate the life of the teacher, whereas the other two use cases seem to clarify and simplify the role, right? Suddenly, reaching students, prospective students, sounds like something I can do much more efficiently; I can bring down administration costs. But the teaching thing is tricky.

Well, until we develop the new norms, right? Again, I know it's an abused analogy, but calculators: we dealt with that too, right? It was, well, with a calculator, what is the purpose of math, how are we going to do this, and so on.

Can I tell you my dad's calculator story?

Yes, please.

My father was a mathematician; he taught mathematics at the University of Waterloo in Canada. In the '70s, when people started to get pocket calculators, his students demanded that they be able to use them, and he said no. They took him to the administration, and he lost. So he then changed completely: he threw out all of his old exams and introduced new exams where there was no calculation. It was all deep thinking: figure out the problem on a conceptual level and describe it to me. And the students were all deeply unhappy that he had made their lives much more complicated. But, to your point, the result was probably a better education, right? He just removed the element that they could game with their pocket calculators.

I suppose it's a version of... I think it's a version of that, and I think people will develop the equivalent of what your father did. People will say: you know what, if everybody's doing these kinds of things generically and none of it has any meaning, because all you're doing is pressing buttons, and the intent of this was something else, which was to teach you how to write or to think, then there may be a variant of how we do all of this. Obviously some version of that has already happened: okay, we're all going to sit down and do it with pencil and paper, with no computers in the classroom. But there will be other variants of creativity that people will put forth to say, you know what, that's a way to solve that problem too.

I'm really interested in the pace. How quickly does he think we go from here to something even more dramatic? When people talk about the AI-driven future, are they talking about 5 years, or 10 years, or 20 years? That's one question. I'm also curious to find out his level of optimism about AI. There's a band of people who think that it could have really destructive effects and bring all kinds of dangers, and others who point out the positive aspects. How does he balance those two sides? That's the other big question I have for him.

I think we're at a significant inflection point. It feels like the equivalent of when the first browsers appeared and people imagined the possibilities of the internet, or, more than imagined, experienced the internet. The internet had been around for quite a few decades; AI has been around for many decades. I think the moment we find ourselves in is that people can touch it. Before, there were AI systems that were behind the scenes, like your search results or translation systems, but people didn't have the experience of what it feels like to interact with this thing. So that's why I think maybe that analogy of the browser is appropriate, because all of a sudden it's like, whoa, there's this network of machines, and content can be distributed, and everybody can self-publish, and there was a moment we all remember that. And I think that is what the world has experienced over the last nine months or so. But fundamentally, what is also important is that this is a moment where the ease, the number of people that can build and use AI, has skyrocketed. Over the last decade, technology firms that had large research teams could build AI that worked really well, honestly. But when you went down to ask, hey, can everybody use it, can a data science team in a bank go and develop these applications, it was more
complicated. Some could do it, but the barrier to entry was high. Now it's very different.

What struck me, Dario, throughout our conversation is how much of this revolution is non-technical. That's to say, you guys are doing the technical thing here, but the revolution is going to require a whole range of people doing things that have nothing to do with software, things that have to do with working out new human arrangements, talking about values. I keep coming back to the Hollywood strike: you have to have a conversation about our values as creators of movies. How are we going to divide up the credit, and the like? That's a conversation about philosophy.

You know, it is, and it's in the grand tradition of why a liberal education is so important, in the broadest possible sense. There's no common conception of the good, right? That is always a contested dialogue that happens within our society, and technology is going to fit in that context too. So that's why, personally, as a philosophy, I'm not a technological determinist, and I don't like it when colleagues in my profession start saying, "Well, this is the way the technology is going to be, and by consequence this is how society is going to be." I'm like: that's highly contested, and if you want to enter the realm of politics, or other realms, go and stand up on a stool and discuss whether that's what society wants. You will find that there is a huge diversity of opinions and perspectives, and that's what makes, in a democracy, the richness of our society. And in the end, that is going to be the centerpiece of the conversation: what do we want, who gets what, and so on. I don't think that's anything negative; that's as it should be, because in the end it is anchored in who we want to be as humans, as friends, family, citizens. We have many overlapping sets of responsibilities, right? And as a technology creator, my responsibility is not only as a scientist and a technology creator; I'm also a member of a family, I'm a citizen, and I'm many other things that I care about. I think that sometimes, in the debate, the technological determinists start butting into what is the realm of justice and society and philosophy and democracy, and that's where they get the most uncomfortable, because it's like, "I'm just telling you what's possible," and when there's pushback it's like: yeah, but now we're talking about how we live and how we work, and how much I get paid or not paid. So technology is important, technology shapes that conversation, but we're going to have the conversation with a different language, as it should be. And technologists need to get accustomed to that: if they want to participate in that world, with its broad consequences, get accustomed to dealing with the complexity of that world of politics, society, institutions, unions, all that stuff. And you can't be whiny about it, "they're not adopting my technology"; that's what it takes to bring technology into the world. [Music]

I think one of the challenges that we have in this conversation is that there is a lot of attribution to AI of actions that are inherently about humans and about institutions. It's energy: it's adding a huge jolt of electricity to a lot of things that we do. You can put that electricity to use in a dangerous way, or
you can use it to light homes and make cars go. When AI might be at its best is when we don't even notice that it's there; it's just making whatever we're interacting with better. [Music]
2023-12-15