Live From Davos 2023: Risks and Rewards of AI | Global Stage | GZERO Media


I'm Ian Bremmer, president of Eurasia Group and GZERO Media. And I'm Brad Smith, vice chair and president of Microsoft. Our Global Stage series gives you a front-row seat at some of the most important gatherings around the world, from Davos to Munich to the UN General Assembly in New York, and we host critical conversations about the biggest challenges the world is facing right now at the intersection of technology, politics, and society. You'll hear from public- and private-sector leaders and innovators on topics like cybersecurity, climate change, and the ongoing war in Ukraine. Join us for livestreams, podcasts, and more throughout the year. Head to gzeromedia.com, Global Stage, and learn more.

Hello, everyone. Welcome to a special GZERO livestream. We're coming to you live today from Davos, Switzerland, site of the 2023 World Economic Forum. I'm Nicholas Thompson, CEO of The Atlantic. This program is part of the award-winning Global Stage series, produced in partnership between GZERO Media and Microsoft. Global Stage brings you conversations about issues at the intersection of technology, politics, and society, like today's discussion, which is a great example of that: we're talking about artificial intelligence and the risks and rewards disruptive technologies present to the world. I'm joined today by Brad Smith, vice chair and president of Microsoft; Eileen Donahoe, executive director of the Global Digital Policy Incubator at Stanford University and former U.S.
ambassador to the UN Human Rights Council; Azeem Azhar, tech expert and founder of the influential newsletter Exponential View; and Ian Bremmer, founder and president of Eurasia Group and GZERO Media. Welcome to you all. Let's start with Ian. Hello, lovely day. How are you?

Doing great.

You just published a Top Risks report, all the things we need to worry the hell about, things that are going to disrupt the world. You said it's a tipping point for AI. What did you mean? What's up?

Well, first of all, it's the first time in the history of the firm that AI has actually written the name of a risk. We put the risk in, and ChatGPT, which I'm sure we're going to talk about, came up with "weapons of mass disruption," which frankly was pretty damn good. We've been talking about climate change here at the World Economic Forum for decades; it's a big supertanker, we know it's getting worse. We've talked about geopolitical tectonics and the growing challenges of Russia and China and the US. AI has been kind of bumping along the bottom of the road, and then suddenly, wow, it's taking off like a shot. This does feel like a transformative moment: a transformative moment for productivity, for hope, for efficiency, for connectivity, and also, of course, a transformative moment for danger, for disruption in the hands of bad actors. And this is an environment, geopolitically, where rogue actors are more powerful than they've ever been in the history of the world. We have a number of individuals who have enormous power concentrated in their hands and the willingness and capacity to make really bad decisions on the basis of really poor information, without checks and balances around them. We see that in Iran, we see that in Russia, we increasingly even see that in China, empowered by tools like artificial intelligence that can become weapons in the wrong hands. That, and Brad talks about this, is something that can transform the geopolitical environment. It's never been a risk in 25 years of Eurasia Group;
it is this year.

All right. Brad, there are two ways that Ian could be wrong: he could be wrong that it's a risk right now, and he could also be wrong that this is the tipping point. Is he wrong in either of those ways, or is he right that this really is the tipping point?

I do think Ian is right in both of these respects.

Too bad, makes it less interesting.

If we think about the history of technology, there are certain inflection points, certain inflection points when a technology really is embraced by the public, and then, frankly, life is not the same again. The most recent was probably 2007, when the introduction of the iPhone transformed the movement towards mobility. It's easy to forget that until 2007, Microsoft was actually the leader in smartphones, and Apple took this leap forward. The same thing was true in 1995, when Netscape's web browser suddenly pushed us all into this new internet era. Phones existed before 2007; the World Wide Web existed before 1995. AI has existed; it's been a topic of discussion now for six or seven years at Davos. But this is the year. I do believe 2023 will be the year that is remembered as the inflection point, because these large language, or foundational, models are enabling people to do things they didn't believe would be possible even this decade. And it's going to be used in so many ways for good, and in ways that create new risks and challenges as well.

All right. So, Azeem, do you believe that the reason we're at a tipping point is actually because the AI advanced substantially and these large language models finally figured it out? Or is it just because OpenAI built a really simple-to-use interface, and we started to use it, and actually we're overestimating the technological advance that we've made in the last year?

I think it's a technological advance. I think ChatGPT and the work they've done, and other companies are building similar sorts of things, do represent new advances in technology. But
there's a second thing that's happened, which is that firms like Microsoft and others over the last seven or eight years have been helping large corporates get ready for AI. They've been getting them tooled up; they've been getting the data in place; they've been getting skills in place. So all the big firms who control the way we interface with our bank accounts and our travel schedules are now much, much better placed than ever to implement AI systems, thanks to the help of Microsoft and others, and now they've got a really, really great technology on which to do that. And I think those two things combine to make 2023, sorry to agree with Ian, a tipping point.

Yeah. Can I ask, and maybe I'll put this to anybody: why did this huge advance which has created this tipping point, which is OpenAI, why did it come from a reasonably small company and not one of the gigantic companies that have had thousands of engineers in AI? I've heard a couple of hypotheses in Davos, one of which is that the large companies actually do have it but can't release it because of regulators, and the other is, well, it's easier for a small company to be innovative. Brad, you can answer if you want, or somebody else can take a crack at it.

Well, I'll offer a few thoughts. First, OpenAI has had the ability to move extraordinarily quickly because they have this relatively small group of 300 people who are extraordinarily talented, very focused, and unencumbered by, frankly, what you often have in a large organization, while at the same time they have had the benefit of a huge tech company behind them. It's not as if this OpenAI model, or any AI model with a large language model, gets constructed on a PowerPoint slide or a piece of paper. It's literally created with the benefit of an AI supercomputer, in this case an Azure supercomputer that was built in a dedicated and very expensive way by Microsoft. And what we should really recognize, in my view, at this point is that there are three
institutions that are at the forefront of these large language models, and there will be others as well, but you have OpenAI, with the benefit of the support of Microsoft; you have DeepMind, which is part of Google; and you have the Beijing Artificial Intelligence Institute, and then others as well. It is easy to create apps that harness the power of an AI model, but creating an AI model with billions of parameters, that is an extraordinarily computationally intensive endeavor. Now, I would also say OpenAI created a better-to-use interface. Think about the smartphone and what the touch interface did: it transformed it; it made the power of the technology more accessible. I think this is a liftoff point, because it's true everybody's been getting ready to use this type of thing and become more familiar, but it has been a genuine technological advance, a non-linear advance, in my view, in the size and the capability of the model itself. And when was the last time you really heard anyone in the tech industry, when have you ever heard anyone in the history of the tech industry, say: I have a product, it is so good, I just can't show it to you, because regulators won't let me?

I have heard a number of people in the tech industry say that: we had the product, we just couldn't release it because of the regulators. But let's go to someone who knows a lot about regulation. What is a good framework? We can talk about the specifics of ChatGPT and AI, but how do you think about getting regulation right when there's an emerging technology that could just be awesome, or just be so destructive?

Yeah. So let me go backwards and just say: I haven't heard that it's because of regulation. I've heard that it's the proprietary interest of Google and DeepMind wanting to keep it in their own hands, as opposed to this experimental, iterative release approach that OpenAI has taken. And that's a really interesting question to me, the pros and cons of, you know, some companies releasing
it into the wild, with the risks that entails; some, like OpenAI, that have done this iterative, partial pilot; and then others that keep it in-house. The pros and cons of that, I think, are worth debating. In terms of a framework: I'm a human rights person, and from the human rights point of view I think it's interesting to look at, Brad said the past decade, or you said six or seven years here at Davos, the human rights conversation around the implications of AI, and then I can touch a little bit on this moment and all the energy around generative AI and this release of ChatGPT. From a human rights point of view, there are sort of three levels of conversation happening. The first, which is the most dominant and obvious, is: what are the actual implications of things that have been integrated already? There's civil and political rights; that's the part everybody knows about. The obvious concerns are equal protection, non-discrimination, the right to privacy, the exercise of civil liberties, panopticon-level surveillance from AI, those kinds of things. And I would say on the civil and political rights side, most of it has been about risk, less yet on reward. The economic, social, and cultural rights conversation is about who enjoys the benefits, and even though there's been a lot of concern about jobs and the displacement of people from work, I think there's been a bigger emphasis on the potential rewards of inclusion in the AI revolution. So that's kind of an interesting tension. The second level is this more speculative conversation about AGI and the implications for humans and humanity. Obviously the concern is, well, what do we even mean by replicating human intelligence? Are we talking about human consciousness or human awareness? So that's one level. And then the third level, which Brad, you talk about, and Ian, you too, all the time, is the geopolitical. And this is where the human rights community really has to step up and wake up, because AI as
a foundational technology has had, and will continue to have, dramatic implications for economic superiority, military superiority, and really the power to shape global norms. And so it is really important that dominance in this foundational technology remain in the hands of democratic stakeholders and people who care about human rights.

Well, that leads so nicely into our poll. But I will add, on AGI: I was just at a panel where we talked about the most interesting risk I've heard of AGI, which is that if we have a chatbot that you can really emote with, people will fall in love with their chatbots, will stop having babies, and then there will be total demographic decline. Okay, but that is not the biggest risk according to the survey. GZERO surveyed lots of folks across social media channels; a ton of people responded, and we got some really good data. So, the first question: the biggest threat that AI presents. Potential for job loss came in at 23 percent; privacy and security risks came in at 35; and danger to democracy was the big winner. Congratulations, danger to democracy, winning with 42. Ian, how did you vote?

I would have voted danger to democracy. In our own polls I don't tend to vote; you want to keep the data clean. But look, it seems to me that we're in an environment where, for the last 30 years, you come to the WEF and people are talking in various ways about the digital divide, ever since the internet created who's on and who's off. You already talked about inclusivity. This year so far, and it's almost the end of the WEF, I have not heard anyone talking about the digital divide: first of all because the world is getting connected; secondly because the pandemic's only speeding that process up; but third because increasingly that's not the problem. Increasingly the problem is stuff you know that ain't really so, and we've got people all over the world dealing with disinformation that is
fundamentally ripping at the fabric of civil society in democracies everywhere. Now, last year at our Global Stage, Brad and I were talking, this was back in May, only a few months after the Russians invaded Ukraine. We were talking all about cyber, and we were worried about whether or not the Russians were going to be able to destroy Ukraine with their offensive cyber capabilities. And over the course of only a few months, we're feeling a lot more confident, Brad can talk about this, that cyber defenses, properly applied, are actually doing a really good job of beating away the strongest cyber attacks the Russians have to deploy against Ukraine. That's awesome. But we are nowhere close to that if you want to talk about disinformation, if you want to talk about influence campaigns, nowhere close, not only on Ukraine but on Brazil, on elections all over the world, you name it. Thirty-seven percent of Brazilians right now are saying that they want military intervention to overthrow the Lula government, in the largest democracy in South America. You know why? Because they know things that are completely untrue. And when you suddenly take those populations and they can no longer tell the difference between a real human being they're interacting with and a bot, you have threatened democracy. The only way I would change my response to that question is I wouldn't just say threat to democracy; I'd say threat to free markets, because we all remember meme stocks, we all remember the GameStop scandal, and how you had all these crazy folks on Reddit saying "GameStop to the moon" even though there's no underlying value in the company whatsoever. Okay, what happens when, on top of that, you have all of these individual speculators who should not be speculating suddenly being driven by thousands, millions of bots? I don't know how we deal with that. I don't think we're ready, and again, according to the people dealing with the defensive technologies, we aren't yet prepared for what's coming.
Azeem, is Ian right that the good guys are winning on cyber and the bad guys are winning on disinformation?

Well, I think the problem with disinformation is that it's not just the bad guys. These really powerful AI models that have come out this year, the large language models, are adept liars. So even with the greatest of good intentions, you use one of these models and it persuades you, but there's a mistake in it.

Now, do they ever do the opposite? When you're trying to lie, do they tell you the truth?

Well, I haven't tried that one; my heart is too pure for that sort of thing. A leading tech news site had been using one of these models to author articles over the last few months, and they've had to start pulling down dozens of them, because there are lots of factual errors in them, lots of mistruths. And part of the problem with that is that if, even in a good source, you can't tell what's really valid and what isn't, that starts to tear away, nibble away, at our sense of what we can trust and what we can't. And I think that creates a sort of foundation for the sorts of terrible situations that Ian paints.

And Eileen, do we need policies around disinformation, or do we just need social norms and education?

Why the binary choice? We haven't done enough on the cultural level, and we haven't done enough in terms of civic education, and I think the power of generative AI will take us to the next level in that category. I 100 million percent agree with Ian that this technology is a dramatic risk to democracy; we depend on some level of integrity in the information realm. We have been dealing with the misinformation and disinformation problem for the last decade, or really since 2016 we've been facing it. This takes it to the next level and puts it on steroids. I think the harder question, though, and this is for regulators, is: is the instinct of democratic governments going to be to ban these
technologies, which, I think you would say, is not even possible.

It's not in the realm of possibilities.

But there is some instinct at least to ban some applications, or really put a stop to it. I know my partner at Stanford, Larry Diamond, the world-renowned democracy scholar, he doesn't like the fact that this stuff is being released. On the other side, you've got people at OpenAI, Sam Altman, Reid Hoffman, both really concerned about and committed to the future of humanity and democracy, and they are convinced, and we're all trusting them, that releasing this stuff, iterating on the release, and staying in the lead of development is really essential for the good guys. And so I'm trying to have confidence in this.

They may have a stronger economic incentive for their opinion than Larry Diamond does for his.

Correct. But that doesn't mean they're wrong.

It doesn't. Okay, so, Brad?

Yes. I would say, look, let's zoom out. First, I'll give you one area where I disagree with what Ian said.

Excellent, thank you.

If people at Davos are not talking about the digital divide, that's a weakness at Davos. There are still three billion people who do not have access to the internet, and let's remember there are 770 million people, more than twice as many as live in the United States, who don't yet have access to electricity, the greatest invention of the 19th century. So we have a lot of work to do to get the world up to a common position where it can use this technology. Then I would say all of these risks, I agree, are real, and we need to take them very seriously. One thing I would say, my phrase for this week in Davos when it comes to AI, is that 2023 is not only an inflection point; it is the year when we should be curious, not judgmental. Let's go into all of this with our eyes wide open. Let's recognize that we're seeing a non-linear improvement in a technology in terms of its power, but it's also an iterative advance in terms of
everything it can do. Generate a false image? Yes, you could do that as soon as you had Photoshop, and then it keeps getting better. Generate text that is false and deceptive? Yes, unfortunately that goes back to the invention of writing, and it keeps getting better. All of this is a tool that can be used to inflict harm; it can also be a tool that is used to protect against that harm being inflicted. And it really means two things. First, it does go to when and how and for what purpose and under what terms it is released, so that you have responsible AI controls. And second, how do you develop the capability to use it as a tool to combat the harms we should all worry about?

Anyone want to jump in there?

I agree completely that this is a year to be curious and not judgmental. I also think that at an inflection point, being curious also means being hyper-aware of when something is potentially an opportunity and hyper-aware of when something is potentially a threat.

Absolutely.

I remember last year when there was an engineer, a well-employed, thoughtful engineer at Google, who suddenly went rogue because he was convinced that Google had created a sentient intelligence.

He was in love.

He was in love. And I brought that up because when Nick said that's the main risk, everyone talks about, oh, people are going to fall in love with their bots. And Brad and I both know some people developing AI right now who are putting together these AI chatbot helpmate-type things that are meant to be very productive, but everyone is developing relationships with them in beta, which is kind of crazy. But the other side of that is, when you can't tell the difference between a person and a bot, it's not just about developing a relationship with a bot that is human-like; it's also that your relationships with human beings will become bot-like. In other words, right now, when we engage with people, even though there are some places
that are like a hellscape on social media, the fact is that we do think that our fellow human beings deserve a basic level of common humanity. We really do. We do better in person than we do online, but still, it's a person at some level, and you can break through. When you no longer can tell, the brain is going to wire itself to start treating human beings, by default, like bots, and the impact that will have in divided societies, in divided democracies, I think is one of these areas where we should not be judgmental, we should be curious, but we should be ready, we should be alert to that threat.

Very good point. I tell my kids not to kick the Roomba.

Good point. I mean, we can turn to the world's regulatory superpower, which is of course the EU, and the new AI legislation will, if it gets passed, ban any AI system that mimics a human: you'll have to disclose that you're talking to a bot. Now, there are all sorts of questions of enforcement. How would you tell? How do you chase down the person? This technology will be highly, highly diffused. But there is at least some sense of some sensible hard lines that we can imagine regulators bringing out. But I also think that we're so aware of this risk, the adept-liar risk that I described, that the developers of the technology themselves are aware of it and trying to tackle it. And for me, one of the most interesting things that's happened with AI in the last week has been Microsoft's announcement that it's going to support these chat tools through its cloud platform, because in my experience of Microsoft, it thinks very, very hard about these implications. I mean, I trust Brad's team to have been thoughtful about this, and I'm quite curious, sorry to take your role for a second, Nicholas, but I'm quite curious about the thought process that went on at Microsoft, given that we're all sitting here saying this could destroy democracy, to make this thing broadly available. Because I trust your thinking, I'd just
love to hear what it was.

Well, we have a responsible AI infrastructure that is similar to what we have for privacy, security, and digital safety. By that I mean we have principles and policy; we have an organizational infrastructure; there is training; there is testing; there are engineering tools; there is work that goes on before a product is released and testing of it afterwards; there's a whole compliance regimen. So we do feel good, for example, not only about the API we released, or announced on Monday was coming, including with access to the same technology as ChatGPT, even though that's going to advance very substantially very quickly this year, but also about what it will mean, as Satya Nadella said, to integrate this into all of our first-party products: the operating system, applications, search, all of these things. And we'll continue to have all of these controls. And really, building on your point before, which I think is quite important: one of the huge advances that comes from this is the ability of, say, enterprise customers, governments, NGOs, businesses, to use it with their own data sets. And I think in some ways ChatGPT has gotten everybody so focused on one aspect of this, the generative AI, which I think should be a great tool for creative expression. There's the other side of this, call it fact-finding: a great tool for critical thinking, so that people can find new insights, discover new facts. And all of that requires controls. And, you know, we've all gotten so excited, which is amazing on the one hand, with a beta release; we haven't yet seen the finished product. And because I get to work every day with the internal builds at Microsoft, I'll be the first to say that in October, when I was using what you are using today, I was like, whoa, I'm not sure we've thought through everything. And now it's the middle of January, and I'm feeling
like we're in much better shape to address the kinds of concerns that you're talking about, and we'll be in even better shape when these things go live.

All right, that's a very nice segue from fears to happiness. Let's go to our second poll, which is: what is the biggest threat of AI? We've got four options here. Benefit! Sorry. Did I say threat?

You did.

Because you're just in that mode.

I'm usually a constructive optimist, but clearly my brain has been warped. What is the biggest benefit of AI? We've got four options: economic growth, health and medicine advances, improved efficiency, or better data analysis. Let's go around the room and just quickly say which one you'd vote for. Ian?

Economic growth. Yeah, productivity.

I would say all of the above as a benefit. I think they're all benefits, but if you're making me pick one, I'll go with the first as well, in terms of growth.

Well, I would have said improved efficiency first, followed closely by health and medicine advances; I thought a little bit about better data analysis; and last, economic growth. Just my opinion.

Let's see. Oh, the audience matches that exactly: 34 percent for improved efficiency, 33 for health and medicine advances. Ian, why do you not like your followers?

I love my followers, but as you know, I specifically say in my pinned tweet that you should follow people you disagree with, so I'm really aligning with that.

Why did you pick economic... okay, let me ask you this. Don't answer why you picked economic growth, but why do you think such a small percentage of the people who responded picked it?

Well, I think there's overlap between improved efficiency and economic growth; in part it's the construction of the data set. When you talk about AI, you talk about productivity gains, and there's no question, I mean, that's what everyone's been talking about in a recessionary cycle, an inflationary cycle. So it could be that people are thinking, in the next year, they know that
we are heading into a contraction. The IMF has said that for 2023 they expect a global recession, two percent global growth; that is not where we want to be. And in that environment, trying to convince people that, oh, AI is coming with economic growth, there may be cognitive dissonance with that. I'm thinking about what AI is going to unlock in terms of so many new technologies: in terms of distance learning, in terms of agriculture, in terms of data analysis around climate change. I'm more concerned than anything about the impact, the tens, the hundreds of trillions of dollars of damage, that climate will wreak upon humanity and on the global economy, and I think AI is the best shot to make meaningful impacts on that in the near to medium term, given how long we've waited. Because we can use it to plant crops more efficiently, to site the solar cells, to understand exactly what the metrics of the planet really are, to get us the efficiency so that the gains we get from the investments we make are massive. So for me, AI means not just economic growth but avoiding economic meltdown. For me, that's number one.

Well, there's a thing called the productivity J-curve. With new technologies, we don't know how to use them at the beginning, so we waste a lot of time, and then eventually we figure out what to do with them and it races away. We saw this with the typewriter, we've seen it with IT, and we're going to see it with AI. Companies have now got used to it; they will make use of these technologies. Within my own firm, we are already using these generative AI tools. One of my favorite uses of ChatGPT: I'll write an analysis, I'll send it to ChatGPT and say, critique this as if you were Professor So-and-so, or critique this as if you're Professor X or Y, and I'll get this critique back of my writing, and I'll go back in and say, these are relevant points, now I should improve it. And if I had to do that without ChatGPT's assistance, I
would have had to email the professor; she wouldn't have had any time; I'd have had to chase her up; she still wouldn't have had any time. And now I've got this slightly dumb professor helping me improve the quality of my outputs. And then we use it to generate the images for the PowerPoint presentations, sorry for those of you who have to watch my PowerPoint presentations, they're all created by AI systems. But these are real productivity improvements in my work, which is white-collar.

I like the productivity J-curve. I was trying to think of what the productivity curve for most social media is, and I think it's a peak: you go around in a circle and then you plummet.

Could I just say, that point, to me, really takes us back to what I see as the two fundamental roles of this technology: a tool for critical thinking and creative expression. The notion of, can I get somebody to look at what I've written and give me feedback so I can think more critically, it's helping you use your mind in new ways, and then you're able to use that, and a variety of people can, to be more creative with your own expression, to write better, to integrate concepts. And one of the things that I think is so interesting about this, that makes me quite enthusiastic, is that this is a tool that I hope, it should be our goal, will help reach everybody, regardless of your education level, regardless of your income level. And technology, broadly speaking, hasn't really done that. In the 30 years of this enormous technology diffusion, you've seen a widening of the income divide between people with more education and people with less. If we can equip people with these two capabilities, just maybe we can put ourselves on a path to address one of the problems of our time.

I've got to jump in. I completely agree with that, and the idea that you get the technology into the hands of people and they use it and develop critical thinking is how you solve the disinformation problem
that has exploded and that will explode. I think that's the only way: if it's normalized, if people know how to think about what these tools can do and how people are using them. That's a big part of cultural education. Similarly, on the economic disenfranchisement problem, or concern: you've got to get the technology into the hands of everyone, and if you fail to do that, then you really exacerbate the global income divide. So one of the things that makes me optimistic about AI on the productivity and growth side is that either Brad's right, in which case it is going to be this incredible efficiency tool that's going to allow us to actually really level up, even though I hate that term, all these people that otherwise don't have access to the kinds of education and critical thinking that they really need, or this is going to be incredibly displacing of, like, everybody's labor. And that doesn't worry me. It would worry me if it displaced the bottom 10, the bottom 20 percent, if it was like that, because then you've got a whole bunch of people that are really in power that are like, okay, we'll say we're going to do something, but we don't really care. But if AI turns out to displace everybody's jobs, if it starts to actually hit the top 10 percent, the people who actually have influence over policy, and their kids, God forbid, then those are people that are going to make regulatory changes. They will demand it, and you'll actually have the technology driving a change in how society and the social contract and governance work. That's okay too. It's only when you have technologies that nibble away at the disenfranchised, at the people that are at the bottom of the barrel, or the middle classes that we don't necessarily care about... Right, but that's not what's going to happen, right? I mean, if you think about the jobs that are at super high risk from AI right now: call center employees around the world, right? That job is going to go away, because AI is going to do it. Truck drivers: once we have self-driving cars, they're most
likely to come to truck drivers. You're going to have whole classes of jobs that are not at the very high end of the income levels that are going to be wiped away. That's not great at all. Well, some people, including Sam Altman, are saying that it's actually the creative jobs, not the drivers, that AI will get rid of. All right, yes, there are lots of... let's leave that aside. We're in trouble, my friend. Yeah, it's really all about Nick; that's why I'm pivoting to television, because then you have holograms here. All right, let's move to regulation, so we can make sure that I still have a job, which is the most important regulation to have. So the trickiest aspect of this issue is regulation: technological advances happen far too fast for governments and policy makers. So we asked a question; let's go to poll number three: who should regulate the development of AI? Governments, the private sector, multilateral organizations, or no one? And the winner was no one. Just kidding, the winner was governments, 45 percent. Oh wow. The private sector, seven percent. Complicated. Multilateral organizations, 36 percent; no one got 12.
There are some hardcore libertarians, yeah, the Silicon Valley group. I've got a point on this one, which is that Edelman just came out with their annual trust index, and they are trumpeting the fact that the private sector is trusted like never before, and people want to hear from their employers because they want to understand what's really going on; they're the people they're connected to. And that is true, and that is helpful. But that is very different from the people that make the rules. They do not want their companies making rules; they still want the governments and the multilateral organizations ultimately making rules. It's important to recognize that disconnect. Absolutely, yeah, it's got to be together. I mean, the regulations have to have a source of democratic legitimacy, so it has to be the governments, and then if you're in Europe it's the EU, and that's how it works there. And the other reason is that ultimately the state is the final arbiter; you know, the state is the one that has the judges and the prison cells and the enforcement mechanisms and the legitimacy, and the idea that you might... But the state's so bad, I mean, it's so slow at doing it, right? I mean, you disagree. I think you didn't have the right answer in the poll, which was a multi-stakeholder approach. You need all of the above, and you didn't even mention... I was waiting for the all-of-the-above. Well, it's not only all of the above; you didn't really mention civil society, and in technological regulation and policy development that has to be part of the equation, and government, of course, is at the table, and it's really important that it has democratic accountability, but doing it without the input of the technology companies, the technologists, or civil society is not right. And I completely agree, and I would pull on that thread in two ways. First, think about what is happening right now: companies are innovating, which is what companies do best; really, governments uniquely
make the rules; they're called laws and regulations. As companies, we'll have to have high standards and controls; we'll have to comply with government regulation. And I think that the role of civil society is of paramount importance, which, by the way, works best when it has access to the information and can use the technology itself, which is what happens when you open this up: you give everyone a voice. Now, then, I'll just say, think about what we're going to experience; this will be, I think, a very interesting experiment over the next three years. Take ChatGPT. Ask it the question: is the leader, the head of state of my country, an effective or good or bad leader? You can fill in any country. What you are almost certain to get is: different people have different views; here are the arguments about this leader being effective; here are the arguments about this leader being ineffective. That is a testament not only to the technology, the model, the values, but fundamentally to, I'll call it, democracy, deliberative democracy. There will come a day when there will be an AI model that is produced in China. We will all be able to compare: when the question is asked, is the leader of this country effective, will that model use the same approach and say some people say yes, some people say no, or will it be yes or no? That is when Western and democratic philosophy meets authoritarianism in a way that will be easier for people to compare and contrast than is often the case. Wait, so you're arguing that it is a good thing, because we will be able to see the clear difference in the AI models, not that it is a bad thing, because people in China will have a new mechanism for extremely effective propaganda? Well, I'm not going to predict what the Chinese model will be until we get to see it, but if the Chinese answer is yes, this person is good, or no, that person is bad, and the answer to a subjective question is an absolute, then the world is, as never before, I would argue, going to have this really interesting opportunity to talk
about where technology meets human rights. I mean, it'll have an opportunity, of course, but whether or not enough people are willing and capable of using that opportunity in ways that are constructive, or whether we still have incredibly divided populations that are not prepared to listen to, to ingest, the idea that there are two sides to this conversation... I mean, again, I think that the ChatGPT multiple-sides model... I mean, you can listen to a whole bunch of stuff on PBS, and if you're over 70 and you're, you know, sort of awake at that time, you can turn to it, but that is not the way most Americans are getting their news. And so I fear that this is not only about having the appropriate tools but also about having the structural environment that facilitates that, and of course the Chinese will ensure, as they roll this out, that that is the thing. I mean, we're in an environment right now where so many people ask me, well, Putin's failing so obviously, economically, geostrategically, I mean, diplomatically, in every way he's failing, so when's he going to be out? And the answer is, inside Russia the level of support for Putin may well be higher than it was before the war started. That information environment, with all the technology around it, is one that they are able to control, and I worry deeply that the Chinese will be more effective with their population in three and five years' time with this AI than perhaps the Americans and other democracies. But I think that Brad's thought experiment is a really powerful one, because you could put that question, and a whole class of other questions, into the system, and it's hard to see the Chinese state doing anything other than providing a yes or no answer. And then you could put analogous questions in, and China has an educated population; they'll see the weaknesses in those answers and those responses, and the clarity, the understanding, that this information system is being controlled
will be even higher than it is today. And so, in a funny way, it feels to me like it's a little bit of a threat to any kind of autocracy that's trying to control information, because once you control the information about the leader, you're also going to be visibly having to not control it around pain relief or something else that's trivial, so that's quite a hard position you've put them in. Exactly; that's why I say this is a huge opportunity, if we think it through and do it right, to make this a tool for critical thinking, to help people expand the spectrum of information they're getting. You don't want it ever to become something where people then take every answer at face value; to be honest, what you really want to do is get people exposure to both sides of an issue and then the ability easily to go out and learn more. And Ian, you're absolutely right: no one technology or tool can ever solve the problems of such a divided world, but can we use this as a tool to try to address some of these divides that are so important in the world? That is the goal. Eileen, do you agree? It seems like everybody is agreeing that actually ChatGPT would be a tool for information that can actually help. I think it can be a tool for both, and it depends on who's ahead in getting it into the hands of, I was going to say, users. I think Ian's comment about what China is going to be able to do in terms of social engineering with these kinds of tools, in control of the information realm, is because it's in the hands of the government. If it's in the hands of the people and citizens, I think it has the potential to lead to greater critical thinking and the ability to see when you're getting propaganda, and that's the question. And I don't want to be a controversialist about this, but I think the way the Chinese control their information system is inimical... Inimical, you got it right. Yeah, I got it right eventually; it's the altitude of Davos. ...inimical to open-ended research. I mean, open-ended research
means you should be able to ask any kind of question, and we are just at the foothills of what we can do with AI and other advanced technologies. So if you're in a society where there are certain things that you can't ask, but you don't know what you can't ask, and the penalty for asking those things you don't know that you can't ask is very high, I think it will start to limit the capabilities of researchers to explore. When I look at the US or Europe, I see much freer societies. I get worried about academic freedom closing down because of culture-war issues, because I think our real fundamental strategic advantage at this moment of exponential change in technology is that people have the freedom of thought to think and challenge in their process of research. And China may have all of these advantages, the demographics and the state direction, but that is for exploitation; it's not for exploration. We're in a phase of exploration, and I think it's advantage democracies. China has no demographic advantages, but that's not what we're talking about right now. But on that, I agree with you completely that this is going to be a serious problem for China's ability to economically innovate, if that's being driven by human beings, right? It's going to hurt entrepreneurship; it's going to hurt human capital development. But it is going to strengthen top-down political stability. That has been the direction of travel. What we've seen from Xi Jinping is that his willingness to deploy his political capital to ensure political stability at the expense of economic growth has thus far been high. Now, over the last two days, we've seen from the Chinese leadership that they are intending to change that message. Do we believe it? We'll see. I am skeptical. All right, let's move to our final concluding poll; we have one more poll today. GZERO asked their followers this final question: the long-term impact of AI on society will be, A, mostly positive; B, mostly negative; C, too soon to tell.
Here's how people voted: mostly positive, 30 percent; mostly negative, 23 percent, interesting, some optimism there; too soon to tell, which seems kind of where we all are, 47 percent. All right, so I want each of you to conclude with one thought on the long-term impact. I did notice that it is interesting that if you go through these three different sections: part one, the biggest risk is to democracy; part two, the biggest benefit will be the way we can all educate and learn; part three, there's a way it will challenge authoritarianism. There's a little bit of asynchronicity between the three of them, not that it can't both be the case that it's a threat to democracy but also can help in all these ways. In any case, one big thought on the ultimate effect that AI will have, and then we'll wrap this up. Mr. Bremmer? My biggest thought, I think, is that the speed of AI development and transformation makes me somewhat more pessimistic about the implications globally, because of the inability of political systems and institutions to react rapidly enough. This is not climate change, where we're eventually getting it right, but it took us decades; we have to act a lot faster on this one. And that's true, but if we can take the advantages of AI and make the changes that we need, which are hard changes, they're about agency, they're about subsidiarity, they're about localism, then I think we can take ourselves to a much higher energy level, where people feel more part of a society. All right. I would say this technology is not going away; lots of people are going to be developing it; we can't hold it back, and it had better be pushed by the right people, and therefore I try to stay optimistic that we will, and hold on to that idea that the good guys beat the bad guys. And I would say, I always feel, when people ask should we be optimistic or pessimistic, that we should be determined, and we should be determined in this instance to ensure that this technology is used and deployed and developed responsibly and ethically,
ensure that it really advances economic competitiveness and protects national security, that we find ways to spread the economic benefits broadly so that it equips more people to gain in their lives. And we need to think hard about what it means for, frankly, a younger generation that in some ways is struggling with a mental health pandemic, in part, I think, because of the impact of technology. Let's make sure that this doesn't replace interaction with others but is a tool that brings people together, to think more critically, to create more expressively. But it's only going to be what we make it, and nothing less. All right, well, that wraps it up. So everybody, please go run some queries on ChatGPT and then go hug your mom. All right, that does it. I'd like to thank Brad Smith, Eileen Donahoe, Azeem Azhar, and Ian Bremmer for being here. Thanks to all of you for watching around the world. You can follow GZERO's coverage of the 2023 World Economic Forum by heading to gzeromedia.com. I'm Nicholas Thompson. Have a

great rest of the week.
