CEO of Microsoft AI speaks about the future of artificial intelligence at Aspen Ideas Festival

Good morning, everybody. It is a privilege to be here to talk about AI: the future of it, where we are, the challenges of it, safety, and so much more. We are here with demonstrably one of the OGs of the AI world. Mustafa Suleyman is now the CEO of Microsoft AI. He spent more than a decade at the forefront of this industry before the rest of us even got to feel it in the past couple of years. He co-founded DeepMind back in 2010, which was later acquired by Google. In 2022 he co-founded a company called Inflection with Reid Hoffman, and recently moved that company inside of Microsoft, where he now heads Microsoft's AI efforts. So thank you so much for being here. And just as a side note on his personal story, if I could, for a second: his dad was a taxi driver, his mother was a nurse, and to see him now is quite a thing. So thank you for being here; it really is something very special.
Thank you. Thanks a lot.
I want to talk about the good, the bad, the ugly, everything in between, and where we're going. But let's just start very basically: if AI were a baseball game, what inning are we in? Where are we in this? Because so many of us have played with ChatGPT and other kinds of AI products, we all think we know what it is a little bit, and we all keep talking about what the future of our lives is going to be with AI. It might be helpful just to level set for us where we really are.
Look, I think the challenge is that as humans we are riddled with biases of all kinds, so we look back and it feels incremental, but in the moment it really is far more exponential than people realize. Two years ago nobody was talking about agents, AI systems that can take actions in your world; today everybody is excited that over the next couple of years agents are going to be everywhere, doing things on our behalf, organizing our lives, prioritizing and planning. Four years ago nobody was really talking about language models; today everybody is talking about language models. So it's a very surreal experience to be inventing technologies which at one moment feel impossible and then, just a few years later, are coming to pass. If you just think about it, it is now possible to ask any one of these chatbots any question you could possibly imagine and get an answer that is pretty much as good as any one of us in this room could give, on average. It's easy to take that for granted, but that is years and years of very slowly and steadily adding capability, researching, experimenting, testing things out, and in sum it really does, I think, create quite a magical experience. So if I were to look out over the next 10 years, it feels like we're right at the very beginning of this moment.
We've all played with ChatGPT, but others talk about AGI, this sort of world where artificial general intelligence can effectively do anything and everything. That is the thing that is both the great opportunity and also, I think, the thing that people fear in many ways. Is that on your decade-long road map?
I'm just not quite sure that's a helpful framing. We tend to get fixated on superintelligence, this moment when an AI could do every single task that a human can do, better than humans. If you really push me, I have to say that theoretically it is possible, and we should take the safety risks associated with that super seriously. I have been advocating for the safety and ethics of AI for almost 15 years, so I really do care about it a lot.
But people tend to lunge at it as though it is (a) inevitable, (b) desirable, and (c) coming tomorrow, and I just don't think any of those things are true. It is an unprecedented technology, and as a species we're going to have to figure out a new kind of global governance mechanism so that technology continues to always serve us, make us healthier and happier, make us more efficient, and add value in the world. Because there is a risk that the technologies we develop this century end up causing more harm than good; that is a kind of curse of progress, and we have to face that reality. These technologies are going to spread far and wide. Everybody is going to have the capacity to influence people at scales previously considered completely unimaginable, and so that's already going to be a question of governance and regulation.
Can I ask you a question about that? Leaders in this industry, including yourself and Sam Altman and so many others, have gone to Capitol Hill and elsewhere around the world and said, please, we need regulation, there are lots of potential problems with this technology. It's unique: in very, very few industries do the CEOs go to the regulators and say, please regulate me. Is that a sincere effort, or is that a protective effort to say, look, we told you there could be problems, we knew you were never going to regulate anyway, see social media, and here we are?
Again, I think this is also a false frame: is it a sincere effort, or is it actually about regulatory capture? Because if there were actual regulation, I have a funny feeling we would have a different conversation about which pieces were regulated and how, and there would be a lot of pushback all of a sudden. Maybe it's because I'm a Brit with European tendencies, but I don't fear regulation the way everyone here seems to, defaulting to the idea that regulation is evil. There is no technology in history that hasn't been successfully regulated. Just look at the car: we've been regulating cars since the '20s, and that has consistently, steadily made cars better. And it's not just the car itself; it's the street lights, the traffic lights, the zebra crossings, the culture of safety around developing a car and driving a car. So this is just a healthy dialogue that we need to encourage, and we should stop framing it as black and white, either evil regulation or something cynically minded to drive regulatory capture. I think it's a great thing that technologists and entrepreneurs and CEOs of companies like myself and Sam, who I love dearly and think is awesome, speak up. He is not cynical, he is sincere, he believes it genuinely, and so do the rest of them. I think that's just great and we should embrace it. And of course there will be downsides, right? I'm not denying that poor regulation can slow us down, make us less competitive, create challenges with our international adversaries, and it certainly shouldn't slow down the open-source community or the startups.
Let me ask you a more complicated question. Microsoft has a deep, deep relationship with OpenAI; that is how Microsoft, at least for right now, is pursuing its AI ambitions. I know there are lots of other efforts underneath it, and we're going to talk about those in a minute. But you referenced Sam Altman, who's going to be here speaking tomorrow.
There have been a lot of headlines, and I want you to help us make some sense of them. We have seen a number of people inside OpenAI who worked on their safety teams not just leave the company but openly object to the approach the company has taken. I'm going to read you a quote from one of the people on the safety teams who left, who said: "These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get them right." It is very unusual to see employees leave and speak out the way they have about what's happening inside OpenAI. What do you think is happening?
Look, I'm proud that we live in a country and operate in a tech ecosystem where there can be whistleblowers, and those whistleblowers are encouraged and supported. If those people feel like this is the moment where they need to make that statement, I celebrate and support that. I personally have enormous respect for everything OpenAI has achieved. I think they're going to go down as one of the defining technology companies in history, and I genuinely think they're grappling in a sincere way with the challenge of pushing forward with the technology as fast as possible while also putting safety on the table, front and center. They've been leading the safety research agenda, let's just be clear about that: publishing papers, committing to academic archives, attending the conferences, raising awareness of these issues, open-sourcing safety tooling and infrastructure.
Take us inside the room, though, when people are having this debate, to the extent we're seeing it play out. What is the debate over? Is it about resources, how much money is being devoted to the safety issue? Is it about the psychology and philosophy of how quickly to move forward? And where does Microsoft sit in that discussion?
Look, I think smart people can sincerely disagree about the same observations. Some individuals look at the exponential trajectory that we're on, and they will argue that for the last six years we have seen an order of magnitude more compute and more data applied to the same method: 10 times more compute and 10 times more data applied every single year, in fact for the last decade. Each time you add 10x more, you deliver very measurable improvements in capabilities: the models produce fewer hallucinations, they can absorb more factual information, they can integrate real-time information, and they're more open to stylistic control, so you can tune their behavior both from a product perspective and in terms of the personality of the model. The people who are most scared about this argue that they can see a path over the next three or four or five orders of magnitude, call it five to eight years, where that capability trajectory doesn't slow down; it just gets better and better and better. The counterargument is that we're actually running out of data, we need to find new sources of information to learn from, and we don't know that it's just going to keep getting better; there could be an asymptote. We've seen that many times in the history of technology. So I think it's a healthy debate to have.
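To make the arithmetic of that trajectory concrete, here is a minimal back-of-the-envelope sketch, assuming the steady 10x-per-year growth in compute and data that Suleyman describes; the growth factor comes from his remarks, while the time horizons below are illustrative assumptions rather than figures from the conversation.

    # Back-of-the-envelope sketch of compounding growth in compute and data.
    # The 10x-per-year factor is taken from the remarks above; the horizons
    # (1, 5, 8 years) are illustrative assumptions only.
    import math

    def orders_of_magnitude(growth_per_year: float, years: int) -> float:
        # Total orders of magnitude accumulated after compounding for `years`.
        return years * math.log10(growth_per_year)

    for years in (1, 5, 8):
        oom = orders_of_magnitude(10.0, years)
        print(f"{years} year(s) at 10x/year -> {oom:.0f} orders of magnitude ({10 ** oom:,.0f}x)")

At that assumed rate, the "three or four or five orders of magnitude" he mentions corresponds to roughly three to five more years of compounding, which is the window the debate he describes turns on.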
You mentioned the word hallucination, and anyone who's played with any of these products has probably experienced a moment where it appears the bot has hallucinated. One of the things we hear from people in the industry is, you know what, we can't really explain why it hallucinates and we can't really explain what's going on inside the box. Can you?
I think so, though not in a way that would satisfy you. But I also can't explain why you chose to wear a blue shirt this morning; I can explain very little about who you or I are. The requirement for explanation is, I think, again a little bit of a human bias. If I asked you to explain why you had scrambled eggs for breakfast this morning, you would creatively imagine an explanation in hindsight, and depending on your mood, the context you're in, and the other associated metadata of the explanation, you would probably say it slightly differently, and it's not clear that it was entirely causal. There is some basis in reality for it, but this one-to-one mapping is a very hyper-rational imposition on the way that we think as humans. In fact we just don't really operate like that; we operate, I think, far more by association.
Should we worry about that, though? It's one thing to not understand ourselves; it's another to not understand how the computer gets there. Most people think that all of this is just a big mathematical problem, right? Two plus two should always equal four, and if it doesn't equal four, we need to understand what's wrong with the computer.
Yeah, but that's just not how human reasoning and culture actually work, in my opinion. Human reasoning works as a result of behavioral observations. When you consistently do the same thing over and over again, I gain trust; you become more reliable. When you tell me that you are unsure about something, and then I look back and it turns out your uncertainty was a correct assessment of the outcome, that uncertainty gives me reassurance. So I learn to trust you and to interact with you by observation, and I think that's actually how we all operate in relation to one another, how we produce culture and society, and that's how we'll treat these models. There was a competition of radiologists three months ago: panels of radiologists in a live environment, tens of the very best consultants, going head-to-head against the machine. And obviously there's lots of skepticism: why is the machine producing this correct cancer diagnosis, why is it producing this correct eye examination and identifying that this is glaucoma? We can allude to some of the training data it was trained on, but the fact is it's doing it more accurately and more reliably than the human. I think that empirical observation of the facts is what we should rely on to be able to trust these systems.
One of the things you said just before was the idea that the training that AI does is running out of data to consume, and I want to talk in a moment about the idea of synthetic data, which is essentially newly generated digital data that doesn't otherwise exist. But I actually want to ask you about the data that does exist. There are a number of authors here at the Aspen Ideas Festival, and a number of journalists as well, and it appears that a lot of the information these models have been trained on over the years has come from the web, some of it the open web and some of it not. We've heard stories about how OpenAI was turning YouTube videos into transcripts and then training on the transcripts, and the question is whose IP that was, who was supposed to get value from that IP, and whether, to put it in very blunt terms, the AI companies have effectively stolen the world's IP.
Yeah, look, I think it's a very fair argument. With respect to content that is already on the open web, the social contract of that content since the '90s has been that it is fair use: anyone can copy it, recreate with it, reproduce with it. That has been "freeware," if you like; that's been the understanding. There's a separate category where a website or a publisher or a news organization has explicitly said, do not scrape or crawl me for any reason other than indexing me so that other people can find that content. That's a gray area, and I think that's going to work its way through the courts.
What does that mean, when you say it's a gray area? Because so far some people have taken that information; I don't know who hasn't.
That's going to get litigated, and I think that's right.
Do you think the IP laws should be different? As an author, I could go write a book, and in the process of writing my book I could go to the library or buy 40 other books on Amazon, read those books, put them in my bibliography, and hopefully produce a book, and I would owe the authors of those 40 books nothing more than whatever it cost me to buy them, and maybe the library had bought them once as well. Nobody ever imagined that I was going to be able to produce a million books every 10 seconds, or what the economics of that should be.
Look, the economics of information are about to radically change, because we're going to reduce the cost of production of knowledge to zero marginal cost. This is just a very difficult thing for people to intuit, but in 15 or 20 years' time we will be producing new scientific and cultural knowledge at almost zero marginal cost. It will be widely open-sourced and available to everybody, and I think that is going to be a true inflection point in the history of our species. Because what are we, collectively, as an organism of humans, other than a knowledge and intellectual production engine? We produce knowledge; our science makes us better. And so what we really want in the world, in my opinion, are new engines that can turbocharge discovery and invention.
Okay, now let's get philosophical then; I've got you on the couch here. Because I think you actually just raised the fundamental question, which is: who are we, and what are we? If this is as successful as I think you want it to be, what is our value? I have children who are in high school, or junior high, right now, and we're trying to figure out what they're supposed to study and how they're supposed to think about writing papers. Are they ever going to have to learn how to write a paper? Speak to that.
Look, when people ask me what their kids should study, I say: spend more time digitally, learning from everybody else as fast as possible. The traditional, textbook-based learning is still going to be valuable, and it is obviously going to be really important that you can deeply pay attention, read for two hours and then write an essay for two hours; of course that's valuable. But I think that generalists who can speak multiple social languages, who can adapt very quickly, will have the edge: that agility is going to be one of the most valuable skills, along with open-mindedness, not being judgmental or fearful of what's coming, and embracing the technology.
By the way, do your kids use it?
No, not yet. Not yet.
Okay. Here's my question for you, for the parents in the room, because you're saying we should embrace this stuff. One of the things I worry about, and I wonder whether you think there should be policies, whether Microsoft should implement them, or Apple (which, by the way, just put Apple Intelligence into its new phones), is that in Notes or Pages it will write the paper for you. These kids are going to go to school and they're going to have to write the paper. Are they really going to write the paper, or are they not? What kinds of tools or restrictions or other things do you think should be put in place for students, let alone adults?
Yeah, look, I think we have to be slightly careful about fearing the downside of every tool. Just as when calculators came in, there was a gut reaction of, oh no, everyone's going to be able to solve all the equations instantly and it's going to make us dumber because we won't be able to do mental arithmetic. Partly that's true: I've gotten worse at remembering telephone numbers since I've had my phone. When I was young I remembered probably 40 or 50 numbers, so I definitely am losing some skills, and maybe my map navigation skills are not quite as good now because I just go to Google Maps or Apple Maps straight away. So there are going to be some shifts that we have to make, but net-net, I believe in really embracing this technology and responding with governance in an agile way. This is super important: it's clear that some of our kids were getting phone addiction, it's clear that they were spending far too much time on social media and that it was making them feel anxious and frustrated, and frankly it was probably obvious to those of us in technology sooner than we made a big fuss about it. So what we have to do is make sure we're reacting with our interventions and our governance as fast as we're creating the technology.
Okay, let me ask you a question. There are a number of entrepreneurs in this room, some in the digital space, some who are trying to get into AI, and others who are thinking about it. What would you advise them in terms of where to go and what to do, and what industries or places are going to offer opportunity? In a world in which AI should be able to write all of these apps and other programs for me at any moment, if there is no moat, no defensive moat, around a business, because AI is going to be able to just do it for you, what would anybody invest in anyway?
Look, I think there is going to be this bifurcation where, at the very largest scales, with the greatest capital infrastructure, we're going to be producing models which are eye-wateringly impressive. But equally, knowledge spreads faster than we've ever seen in the history of humanity, and so the open-source models, which are 100 percent free, are within 18 months of the quality and performance that the proprietary models had just a moment ago. That's a very, very big deal, and I think that trajectory is going to continue. GPT-3 cost tens of millions of dollars to train, and that level of capability is now available free and open source; you can run it on a single phone, certainly on a laptop. GPT-4, the same story. So I think that's going to make the raw materials necessary to be creative and entrepreneurial cheaper and more available than ever before.
Okay, so I know you're now at one of the big giants in tech, but let me ask you this: how do you think, very honestly, about the concentration of power in technology, especially as it relates to AI? Because one of the things that's fascinating about AI, at least for these large language models, is that you need an enormous amount of compute, which means you need an enormous amount of money; this is not stuff that's being built in a garage anymore. And so the true power has actually gone back to Microsoft (you ended up back at Microsoft); there is Google, Amazon is trying to build an AI, and even OpenAI, which was on its own, ultimately decided to partner with Microsoft. Is this a good thing? Is this a bad thing? Lina Khan and others have looked at it; it's not that there have been wholesale mergers of these things, but even the partnerships unto themselves raise the question.
Look, it makes me very anxious. The reality is that everywhere you look we see rapid concentrations, whether it's in news media and the power of The New York Times, the Financial Times, The Economist, the great news organizations; or in cities, the concentration of power around a few big metropolitan elite cities; or in technology companies. The practical fact is that over time power compounds. Power has a tendency to attract more power, because it can generate the intellectual resources and the financial resources to be successful and outcompete in open markets. So while on the one hand it feels like an incredibly competitive situation between Microsoft, Meta, Google, and so on, it clearly is also true that we're able to make investments that are just unprecedented in corporate history.
Can you speak to what I imagine is a friendly, or could turn into a frenemy, situation with OpenAI? You have this partnership now, but you are developing your own tech, your own AI, as well, and in fact they helped you get to the lead. I want to read you something from Semafor; I thought it was a fascinating analogy. Do you know it?
Of course.
Okay. "With the Tour de France coming up, here's a bike analogy that I think everyone will understand. OpenAI and Microsoft are in a two-person breakaway, far ahead of the peloton. By working together now, they might be able to keep the lead; if either goes solo, they may both fall behind. Both companies have to think about their finish-line strategy: when they will have to ditch the other." You're on the bike.
Again, I don't buy the metaphor that there is a finish line. This is another false frame. It's the same framing you hear about our race with China: we're going head-to-head, it's zero-sum, there will be a finish line, when we cross it we will have an AGI, and if we get it three years before them we'll be able to disempower them. Just stop it. We have to stop framing everything as an adversarial race. It is true that we have ferocious competition with OpenAI. They are an independent company; we don't own or control them, we don't even have any board members, so they do entirely their own thing. But we have a deep partnership. I'm very good friends with Sam, and I have huge respect for, and trust and faith in, what they've done, and that's how it's going to roll for many, many years to come.
You mentioned China. Speak to where you think we are. I know you don't like the frame, but I think people want to at least understand what's going on, which is to say, where are we relative to where they are?
If we approach this with a default adversarial mindset, which, with all due respect to my good friends in D.C. and the military-industrial complex, is the default frame, that it can only be a new Cold War, then that is exactly what it will be, because it will become a self-fulfilling prophecy. They will fear that we're going to be adversarial, so they have to be adversarial, and this is only going to escalate and will end in catastrophe. So we have to find ways to cooperate and be respectful of them whilst also acknowledging that we have a different set of values. Frankly, when I look out over the next century, I think that peace is going to be a product of us, as in the West, and particularly America leading at the tip of the spear, knowing how to gracefully degrade the empire that we have managed over the previous century, because this is a rising power of phenomenal force with a different set of values from ours. We have to find ways to coexist, without judgment, without going to war with them unnecessarily, because I think that would be terrible for both of us.
You may differ with my next framing, but I'm going to try.
Great.
There is a debate in the industry, whether you like it or not, about this idea of open-source versus closed-source AI. Closed source is a lot of what OpenAI, and you at Microsoft, are actually doing; open source is available to the public, you can see inside of it, and right now you would argue that Meta, the owner of Facebook, is the leader there with its Llama models. There are questions about both models. Elon Musk thinks everything should be open source, and I know a whole bunch of people who think that. There's a fear that if you allow the open-source stuff into the wild it could be misused, and that's why there are advocates for closed source.
Okay, so again, obviously I object to the framing. At Microsoft, in my fourth week, we open-sourced Phi-3, the strongest model for its weight class in the world, hands down, and it still is. You can use it on mobile and on desktop; it's the best open-source model. So we totally believe in open source as well; again, it's a misframing. But we also believe in creating very powerful, very large, very expensive models, some of which we may open source over time. It's not that we're against open source at all; we just see that you have to have a mixture of approaches.
All of these large language models (and I know there are smaller models as well, but the large language models in particular, as I mentioned at the beginning) require a huge amount of computing power, and that also means they require a ton of energy. There are a whole number of conversations happening here at the Aspen Ideas Festival around energy use and climate and the like, and I am curious what you think, given all of the pledges that technology companies made even only two, three, and four years ago around climate and getting to net zero by 2030 and the like. It appears that on this trajectory all of those things would go out the window.
No. Look, Microsoft will already be 100 percent renewable by the end of this year, and by 2030 it will be fully net zero. We're actually 100 percent sustainable on our water by the end of next year. So we're very committed to keeping up with that, and the good news about new demand is that it comes with a new opportunity to reinforce sustainability, whereas old supply, I think, is much harder to move. So I think we're actually in a very good place on it. It's true that it's going to put an unprecedented burden on the grid, because, scale-wise, we're definitely consuming far more energy than the grid has previously managed.
But the grid is long overdue for a radical uplift. These are the kinds of infrastructure investments that we need to be making in our countries in the West for the future of our societies: spending less on war and more, hundreds of billions of dollars, on upleveling our grid so that we can manage all of the spare battery capacity that's going to come from local generation in your homes, small businesses, and cars.
You've spent a lot of time in Washington. Do you think the folks in Washington, who I think struggled to even understand social media, understand this, whether it comes to the implications of AI or the energy implications and everything else?
I think to some extent the conversation is shifting now toward the renewable energy question. I think people do realize that we're consuming vast amounts of energy and that this has to shift, but it isn't going to shift just because Microsoft is 100 percent renewable, which of course Google is as well, and the other companies are heading in that direction. So I think it should be celebrated that we're setting a bit of a standard, but it needs national infrastructure to support that transition.
How do you think about the competition with others and the idea that these large language models (maybe you'll disagree with this framing) could become commoditized? Which is to say, you've got one, Google has one, Apple is now partnering with OpenAI but apparently going to partner with lots of other people, and Amazon, which owns a piece of Anthropic, you could argue has one and is building its own. Will everybody have one, and if everybody has one, how valuable is it anyway?
Look, as knowledge proliferates, everything essentially becomes commoditized. App development used to be a super unique, highly skilled thing that only a tiny group of people could do; now everyone can spin up an app instantly. Web development, do you remember when that was a real thing? Now you can just plug and play and drag buttons around, and you need no technical skills whatsoever, other than being able to point a mouse, to build a pretty decent website. So over time, as the knowledge and capability proliferate, it does become commoditized.
How do you think about AI and how fully it should get integrated into everything? This goes to a privacy question and also, by the way, to an antitrust question. For example, right now the EU, as you know, has accused Microsoft of breaching antitrust rules, and Apple has delayed introducing its own AI features in the EU because of the regulatory environment. I think there's a real question about how connected all of these products need to be, frankly, both to work and, therefore, for what they do to the rest of the business.
Yeah, I think that's a good question.
So you like the framing?
Yes, finally. Friction is going to be our friend here, and that is a different reality to what we've experienced in the past, where every second that we could gain in putting technology out into the world was always producing net benefit. I just think these technologies are becoming so powerful, they will be so intimate, they'll be so ever-present, that this is a moment where it's fine to take stock and think. And if it takes six months longer, or 18 months longer, or maybe even longer than that, it's time well spent.
I want to open it up to questions in just a moment, but before we do, I want to talk about the idea of emotional intelligence and AI. This was something you actually spent a lot of time thinking about and working on when it came to Inflection and what Inflection was about. Inflection's original AI has a remarkable emotional intelligence, or EQ, component to it, and there were some surveys and other things that said people actually thought that having conversations with it was better than having a conversation with a human. Maybe this gets back to some of the philosophical conversations we were having before, but we've all seen, or a lot of people have seen, the movie Her. Is that realistic?
I think for the longest time in Silicon Valley and in technology we have been obsessed with functional utilitarianism: you want to efficiently get somebody into an app, solve their problem, help them buy something, book something, learn something, and then get them out. We now have a new kind of design material, a new clay, if you like, as creatives, to be able to produce experiences which speak to the other side of our brains, and that's an amazing opportunity to design with real intent to be kind and respectful and even-handed. You're right that in a bunch of the studies people found that Pi, the AI that I built previously, was just very kind and caring. It asked you questions, it listened, it responded with enthusiasm, it was supportive, it challenged you occasionally, and it was always even-handed and non-judgmental. Even if you came with, quote unquote, judgment, or with a sort of racist or discriminatory view, if you were talking about how you were afraid of new immigrants arriving in your area taking your jobs, or afraid of your child marrying a Black person, it wouldn't just shut you down and say you shouldn't be saying that, and it wouldn't add more toxicity to the equation. It would actually engage with you, talk, and help you explore your feelings, whilst also being hyper-knowledgeable, teaching you and providing you with access to information in real time on the web. So I think it was a proof point that technologies designed with intentionality really can make a difference and are possible now with these new LLMs.
It's an interesting point, though, because of the controversial topics that all of us may ultimately end up talking to an AI bot about: who is the god, if you will, that's going to tell us what's right and wrong? This is something we've heard about, by the way, from Elon Musk, who said he believes there should be multiple AI bots, and that if you want to have a racist AI bot, that should be available to you from a free-speech perspective. I don't know if you think that's true.
Look, that's a very good question. At the very least, I think the default position is that it should facilitate your own thinking and exploration in a non-judgmental way. Now, there will be boundaries there, because if you want to facilitate more extreme experiences, I'm not going to build a platform or a product that enables you to reinforce ideas that could be potentially harmful to society more generally. The question is who gets to define where that boundary is, what that harm is, and I think Elon raised a good point.
We're all thinking about exactly that question. In social media there really wasn't an independent voice that said, well, this kind of conspiracy theory is legitimate as a source of active public inquiry and discussion, but this kind of conspiracy theory, quote unquote, is over the line and we should remove it. We've had 10 or 15 years of experience with social media, and we still don't have good proposals for how society more generally, not necessarily just the politicians or the journalists or the elites, but collectively, gets to influence where that boundary of moderation is. We're about to see that boundary be contested even more acutely and dynamically with these models, and so I think that is a much more important question than when the Singularity will happen.
Okay, final question for me, and then we're going to open it up; I know there are microphones in the room. There's a whole bunch of people in this room who are going to go to the watch party for the debates on Thursday night, and we haven't really talked about politics in this election. Just play it out for us: the role of AI in the election of 2024, and then play out the role of AI in the election of 2028.
The view that we've taken for Microsoft Copilot so far has been that AIs should not participate in electioneering or campaigning, even if they can successfully provide factual information. That's a view, and people can disagree with it, but we've taken a pretty hard line on it, because we know that the technology isn't quite yet good enough to articulate the boundary between what is a falsehood and what is true in real time, and this is just too sensitive for us to participate in. There's going to be some downside to that, because the AI does also provide quite balanced and factual information most of the time, but sometimes it gets it catastrophically wrong. So I've always been advocating that AI should not be able to participate in elections: democracy is sacrosanct, and for all of its faults and strengths it is something that humans participate in and AI shouldn't. In 2030 or 2035 we're going to have to face the reality that some people feel so attached to their personal AIs that they will advocate for personhood, just as there are some people who advocate for personhood for their animals, for example, in more extreme animal-rights cases. Again, this is a personal view, but it is something we all have to debate, and I think we should take a hard line on it: warts and all, democracy is for humans, and other kinds of beings shouldn't really be able to participate.
Okay, Tina, we'll have a panel in 2035 about humanhood for the bots. We've got a whole bunch of hands and not a lot of time, so we're going to try to go as fast as we can. Why don't we go right over here, and then we'll come down here, and we'll see if we can move around as quickly as we can. In the blue blouse.
Thank you both for being here today, very intelligent conversations here. My name is Monica Mayotte, I'm the former deputy mayor of the city of Boca Raton, Florida, and I'm also an old-school technologist. I'm going to date myself: IBM COBOL programmer. So I've watched technology through the last 30, 40 years, and I understand the significance, implications, and applications of AI. But for the last several years cryptocurrency has been the big buzzword, everybody talking about cryptocurrency this and that, until SBF's fraud conviction several months ago.
So now I feel like AI is the new buzzword.
We've got a very short amount of time, so you've got to put a question mark on it.
Okay: are there concerns that AI will only be cool until the next big technology revolution?
No, I don't think so. I've heard the comparisons, and I think that cryptocurrency, in my opinion, just didn't really deliver value even in the moment, whereas the value that these models already deliver is kind of objectively clear.
Can we get a microphone right down here? We're going to do every question in 20 seconds or less, literally, because I know we've got a lot of hands and very little time.
I'm Lynda Resnick. We educate 5,000 underserved children, and I'm grappling with this: we've forced STEM upon them for the last 10 years, and now I'm beginning to think that we have to teach more about the humanities, so that they can make a choice and understand, if we look at the great scholars of our history, as the Aspen Institute teaches us. I just wondered how you felt about that.
Great, I'm with you on that. I'm very much with you on that. I think, actually, if you look back in history, the greatest scholars have always been our multidisciplinary scholars, and in the past there really wasn't this acute distinction between STEM and the humanities. Again, it's quite a simplistic, adversarial opposition. I'm a great believer in both, and I think that multidisciplinary skills are going to be the essence of the future.
Okay, we're going to go over here.
Miriam Sapiro, CSIS non-resident senior adviser. I wanted to ask you a little more specifically about misinformation and disinformation and what you think should be done to address those concerns.
Yeah, on the disinformation front there are clearly bad actors who for many, many years have been actively trying to pollute the information ecosystem, and I think our big tech platforms could do a more aggressive job of weeding those out. At the same time, we're trying to walk the line of not damaging the free speech that we all value, so this is going to be a constant whack-a-mole game, especially as the cost of producing that disinformation goes down and less capable actors become able to produce vast amounts of that content with ease. At the same time, have faith in humanity: we're incredibly resilient and adaptive, and as we saw in the Indian election with the spread of disinformation and deepfakes, people adapt real quick, they can tell the difference, and they're skeptical. If it makes us all more inquisitive and questioning and engaged in the process, I think that's a good thing.
We've got a question over here.
Thank you. Again, going back to China: clearly national security is driving our policy on technology, and clearly we're at a point of potentially no return. Would that result in the development of two different technology standards, in your opinion?
Look, for all my talk about cooperation, I'm also very pragmatic about the balkanization which is already taking place, the fact that our export controls have now essentially forced China to develop their own chips, which they weren't necessarily on a path to do.
Again, smart people can disagree about the same facts. We took the view, or rather D.C. took the view, that we have to separate, and that's the path we're now on. They're building their own technology ecosystem and spreading it around the world, and we should really pay close attention. Much as we're obsessed over the question of the Middle East, and that is an important question, let's not neglect the fact that digitization is a new kind of shaping of values in Africa, and the provision of satellite communications and operating systems is really going to shape the next few decades in a much more profound way than boots on the ground.
Okay, let's get a microphone to the cheap seats, or the important seats, all the way in the back. I think I see a hand in the absolute back row, and just for sitting in the back row we're going to give you a question, but please make it quick, because we've literally only got about a minute.
Hello, as-salamu alaykum. My name is Sundas, and I have been an educator in New Jersey for eight years; shout-out to my student, she's right here. I've been arguing a lot with a lot of older teachers about using AI, because I'm a huge advocate for it. What would you say to those teachers, and how do you think teachers can actually utilize AI? I know you spoke briefly about students, but I want to know the educator's perspective.
Great question. I think asking AI to produce lesson plans is sort of the first 101 introduction, but performing alongside AI is an interesting new direction to take it. What would it look like for a great teacher or educator to have a profound conversation with an AI, live and in front of their audience, and then occasionally engage their students to get involved? Literally by the end of this year we'll have real-time voice-based interfaces that allow full dynamic interruption, and it feels like just talking, like I am right now with Andrew. It's a completely different experience, and I think that leaning into that, showing the beauty of it, and being creative with it is really the future.
Folks, I need to both thank you and apologize, because we are out of time. I want to thank you for your fabulous questions, but please join me in thanking this gentleman for a fabulous conversation.
I am genuinely very grateful. Thank you. Thank you, everybody.
Thanks for watching. Stay updated about breaking news and top stories on the NBC News app, or follow us on social media.
