Good evening, and welcome to the John F. Kennedy Jr. Forum at the Institute of Politics. My name is Christopher V., and I'm a sophomore here studying government. Before we begin, please take a moment to note the exit doors on both the park side and the JFK Street side; in the event of an emergency, exit at the door nearest to you and congregate in JFK Park. Now, before we begin, please take a moment to silence your cell phones and join me in welcoming Harvard College undergraduate Peter Jones.

Hello all, and welcome to this evening's John F. Kennedy Jr. Forum at the Institute of Politics. My name is Peter Jones, and I'm a senior in the College studying government and Spanish literature; additionally, I'm a member of the student-led Forum Committee. It's my honor tonight to introduce Carme Artigas and James Manyika for a conversation on AI and the geopolitics of emerging technologies. The conversation will be moderated by Meghan O'Sullivan. Carme Artigas was Spain's Secretary of State for Digitalization and Artificial Intelligence from 2020 to 2023, leading the country's efforts in digital transformation and cybersecurity. In her tenure she also chaired the Spanish Agency for AI Supervision and the National Cybersecurity Institute. Artigas played a pivotal role in European-level AI regulation negotiations during Spain's presidency of the Council of the European Union. James Manyika is the first Senior Vice President of Research, Technology and Society at Google, a role that includes overseeing Google Research and Google Labs and, more broadly, helping Google advance its wide breadth of AI innovations in a manner that is sustainable and responsible. Previously, Manyika served as vice chair of the United States Global Development Council in the Obama administration and as director and chairman of the McKinsey Global Institute. Together, Mr. Manyika and former Secretary Artigas chair the UN High-Level Advisory Body on Artificial Intelligence. The Advisory Body, which convenes 39 preeminent AI leaders from 33 different countries, serves as the UN's primary task force empowered to analyze and advance recommendations for the international governance of AI. The Advisory Body's flagship report, Governing AI for Humanity, was released about a month ago, in September. Having attended an intermediary briefing on this report in June during my summer internship, I can say with confidence that you're all very lucky to be learning about it in the lovely JFK Jr. Forum and not in the dark basement of UN headquarters. The conversation will be moderated by Meghan O'Sullivan, the director of the Belfer Center for Science and International Affairs and also the Jeane Kirkpatrick Professor of the Practice of International Affairs. Please join me in welcoming Carme Artigas, James Manyika and Meghan O'Sullivan. [Applause]

Good evening, everyone. I'm thrilled to be here, thrilled to see all of you, and really excited to welcome two extraordinary people to our community, to Harvard, to HKS and to the Forum. Over the last year I've had a wonderful time getting to know Carme and learning a lot from her, and over the last many years I have learned a great deal from James, so it is great to have them here. We're lucky that they're sharing their time and expertise with us. James is a regular visitor, or at least that's my intention to make him one, and he's been here quite a few times, and Carme is a new senior fellow with us at the Belfer Center; she is right now charting out some areas where she'll be doing research in our community, so I'm looking forward to that. Having either one of them here would be a delight, but having the two of them together is really fantastic, and there are lots of reasons for that, as you're about to see. One of them is that the two of them have done something really extraordinary together: as Peter mentioned, they have been the co-chairs of the UN Advisory Body on Artificial Intelligence. In the words of the UN Secretary-General, António Guterres, this is, and I'm quoting, the first of its kind in the AI space, a geographically diverse, gender-balanced group bringing together experts from government, the private sector, civil society and academia, and all of this was to answer the question: how can AI be governed for humanity? That's the big question, so we'll talk some about that. But they've also demonstrated to us how two people who are very different, from very different backgrounds, can work together and do important things. As was already indicated in Peter's introduction, you probably got a sense that these are people from two different worlds. One of the things that wasn't mentioned is that Carme was the lead negotiator for what came to be the European Artificial Intelligence Act, the first really big piece of legislation regulating AI, in Europe and from a European perspective. James is the Senior Vice President at Google, and my guess is that not everybody at Google is thrilled with the European AI Act; I don't know, I'm just making a supposition. So we have people from government and the private sector, from different geographies, different backgrounds and different perspectives, working in tandem. I thought that in the time we have, we'll have a conversation among the three of us for about half an hour and then I'll open it up. I'd like to do a few things: talk a little bit about the UN report and its recommendations, then indulge and ask them a lot of the AI questions that are on my mind and on the minds of many of you who are dabbling in this or maybe deep into it, and then open it up to everyone here.

So let me begin with you, James, just to set the stage about the UN Advisory Body and the mandate that you were given, and even more broadly, how you perceive the stakes of this endeavor. How significant is this? I mentioned the UN Secretary-General saying this is the first of its kind; how weighty an endeavor has this been?

Well, first of all, thank you, Meghan, for having us here. It's such a pleasure to be back at Belfer and to be in this conversation, and it's always a real pleasure any time I get a chance to do anything with Carme, so thank you. This was quite a daunting task. We were assembled in October 2023 and given this monumental task. The body itself is made up of the 39 of us from 33 countries, and we couldn't be any more different: academia, private sector, government, civil society, all of the above; it's about as diverse a group as you can imagine. In fact our first task, if you can believe this, Meghan, was that in two months we had to come up with a draft report that we were going to share with all the member states at the UN, all 193 of them, so you can imagine. We worked feverishly hard, and as different as Carme and I are, I think the one thing we did have in common was that we both really wanted to get this done.
The work itself was very complicated because, as you might imagine, the world's a big place. We needed to listen to all the members and their different views, and we also had to consult with all the member states. We would routinely have briefings of the 193 member states, who would give us feedback, often written feedback. We consulted something like over 2,000 experts around the world, we did surveys, and we got hundreds of written submissions, as you might imagine, because everybody knew we were doing this. So it was quite interesting, but there's nothing like having to face the member states and the countries of the world to focus the mind. We did get to an interim report; I won't comment on the final report because I'm sure we're going to get into that. What was remarkable about that interim report, because I worried, and I think Carme did too, that this was going to be a very watered-down, consensus kind of thing with such a diverse group of people, was that the reaction was that it was quite substantive, in the sense that we were able to get agreement on some foundational principles that everybody agreed to: that this technology should be grounded in the public interest; that it should benefit everybody inclusively; that it had to be based on fundamental human rights law and international law, which countries have largely agreed on; and the idea that we should center a lot of the benefits around the SDGs. The SDGs are useful, and there are 17 of them, because if nothing else they are probably the best expression by everybody, since 193 countries signed up to them, of what it is the world wants to improve about the world, everything from gender equality to poverty to climate change, a whole bunch of things. So we were able to center those things and get agreement. We were particularly pleased, by the way, that when we gave this feedback to the member states they supported it, and in fact on the back of that the US put together a draft resolution, got 125 other countries to co-sponsor it, and when it was presented it was unanimously endorsed by the UN. So this was a good foundation. We also highlighted something like seven areas, which I won't go into now, of what governance had to solve for, because in that first instance we didn't actually have any recommendations yet, but we at least said there are some functions that really have to be dealt with at an international level, and we identified what those were. So that was quite a task. And by the way, I don't know how Carme did this: while we were doing this she was negotiating the EU AI Act at the same time, because that also came out in December, if you remember. She can speak to that; maybe she was doing it with an avatar.

You know, James and I, I think, are like partners in crime, because sometimes he could be present at some of the in-person meetings and sometimes I couldn't, and sometimes at the presentations to member states I was replacing him, or the other way around. We really had to coordinate the work and support each other, because we were not necessarily able to be present at all the meetings together. I would just add that it's very important that each of us, each of the 39 members, who by the way were selected from among more than 2,000 applications received by the Secretary-General, 39 of us had the privilege to be chosen and I the honor to become one of the co-chairs, has been acting in our own personal capacity. So I'm not representing Spain and he's not representing Google. Of course everyone comes with their own background, their own principles, values and beliefs, but the important thing is that we had all the freedom to work independently, with no interference in our work. And I think we also had the great support of the Secretary-General, who, when we were sometimes hesitating because some of the feedback said we were being too ambitious, told us no, go further, be as ambitious as you want, because we don't know how the political process will end up. So we had the independence to do this from a scientific, evidence-based point of view, and to bring together all this complexity of visions. It was very enriching for me to hear from people from the global South, and even from activists who were there giving us a totally different perspective from the one I was getting from, I would say, the first world. The conversations sometimes got heated, because the stakes are very high, and I think it was very smart, I would say it was our merit, to split the report into two parts. The first one, the interim report, was just answering two questions: why do we need to govern AI globally, and why the UN, because we were even questioning that ourselves, and what exactly do we want to govern. Then, once we had gained consensus on that, the last nine months, which have probably been the hardest, were about: okay, now we know why and what; how do we do it? That has been the most difficult part, but separating those two things also helped us generate consensus that the work we were doing was going in the right direction.

Yeah, great. If I could add one quick comment on that interim report, because it did set the stage: in addition to the principles, if I could summarize it, it recognized a few specific things. One was that this technology represents an opportunity for the world to tackle a whole range of societal issues; at the same time it recognized that there are some incredible complexities and risks associated with it, and we went into that quite a bit. We also recognized that there are some huge gaps, both gaps in capacity and enablers around the world, to participate in and do any of this, and also gaps in governance, so there's a governance deficit that we also identified. These were some of the key things in that initial report, and I think perhaps because we were quite direct, and I hope some of you have looked at it, we were quite direct about those things, it gave the work credibility, because we didn't shy away from any of the core issues, whether to do with countries, companies, the risks or the opportunities; we were very direct.

Yeah, excellent. The Secretary-General said that your body, quote, tackled its complex mandate with remarkable effectiveness, which is not usually what you hear about UN bodies. But I think, Carme, you somewhat answered why that was the case:
you were all there in a personal capacity, so you didn't have to go back and negotiate things, you didn't have to have congresses and others come on board. Let me go to the recommendations a little bit, and I'll go back to you, James. You mentioned the SDGs, and it's true that, for those of you who haven't yet looked at the report, and I encourage you to do so, there are seven recommendations, and a lot of them are tied to the SDGs, as you said, as the list of things that we might like to see AI help improve upon. But the mandate of your body was to look at things explicitly, and correct me if I'm getting this wrong, from the perspective of those who are underrepresented and left out, to make sure that there was a global South perspective brought to bear. And that gets us to this whole question that you and I have talked about on many occasions, of equality and inequality, and how AI could exacerbate or diminish the gap. Could you say a little bit more about the connection of these things and how you tackled it in the report?

Yeah. I think the question of inclusion came out of this presumption, this view that we had, that this had to benefit everybody. That was an important starting point, and by the way, even though of course we talked a lot about the global South, "everybody" also included communities in developed countries too, so it was really everybody in that sense. The reason we liked anchoring a lot of the benefits around the SDGs is that it at least gave us a list of things that the world has generally agreed on already, so we didn't have to go renegotiate what the benefits are or what the problems to solve are, because arguably it's the best list the world has. You can always critique it and so forth, but it's at least a list that everybody has agreed to, that we need to improve these things, so we centered on that. The global South question was interesting, because one of the things that came across in a lot of our work, and it was fascinating to see, was just the attitudinal differences around the world with respect to AI. I'm about to give a gross, very high-level summary here, but we found that often the attitudes towards AI were more positive in the global South, because people saw this as a way to transcend challenges in health and in a whole bunch of other things, relative to, say, Western Europe and North America. However, that's not to say everyone in the South was happy, not at all; there are some major gaps. They are deeply concerned about these capacity questions: we don't have access to any of this, to the technology, to compute, to the tools, even to the enabling infrastructure that allows participation in any of this, broadband, electricity; there's a huge expertise and capacity gap. There was also a gap in the global South's participation in all the conversations that have been taking place about the governance of AI. In fact, one of the fun exercises we did: as you know, in the last three years there's been a flurry of summits, initiatives, executive orders, regulations and all sorts of things. We actually looked around the world, and across all of those things only seven countries in the world had participated in all of them, and there were another roughly 118 that had not participated in any of them, any of them at all, and these were mostly countries in the global South. So these gaps around capability, capacity and participation in the development, use and even the governance conversations were a huge gap, and then of course you had the actual governance deficits themselves. That also reinforced this idea, and our members reminded us of this, and the member states reminded us of this, that we had to pay special attention to the global South. I'll say one more quick thing, Meghan, on this: the worry we had was that there's already a digital divide, regardless of AI, and what we didn't want was for the digital divide to also turn into an AI divide. So how could we make sure that we tackled that as well?

Great. So Carme, I want to come to you on a different set of recommendations, and I have several of my colleagues who work on climate and energy sitting here in the front row, and all of us would have noted there's sort of an echo of some of the UN work on climate. Two of the recommendations out of the seven in particular: one is to establish an international scientific body on AI to make regular reports to the international community about the state of the technology, and the other is to set up a global fund for AI. These are very evocative of some of the climate recommendations and infrastructure that have existed, and I wondered, was that conscious in your mind, were these models, and what were the lessons that you took from how the world has done on the climate side, and what are your aspirations when it comes to AI? Do you think the scientific body is going to create some kind of metric that the world looks at? I think about keeping global temperature rise beneath two degrees or 1.5; do you have an aspiration for these bodies to set that kind of benchmark?

Yes, you're absolutely right. James has mentioned one of the three global governance gaps, which is inclusion, but there are another two: one is the lack of transparency and accountability, and the other is the gaps in the implementation of AI. The gap in transparency and accountability has to do with the fact that most of these technologies have been developed mostly by the private sector, which does not have the obligation to publish its results as academia would, and therefore even regulators are regulating from a dark spot, because we don't have transparency into how these models have been built, and therefore there is a lack of trust in using and adopting them, because we are not aware not only of the risks but not even of the opportunities; there's no transparency at all. So I think the idea of the international scientific panel came from the fact that during the climate change conversation it was like a theoretical exercise: is there climate change or not? Some people thought one thing, others another; some thought it was going to be two degrees, others ten degrees, others 0.5 degrees. Unless there was data and scientific evidence, people could not agree on the diagnosis, and therefore we could not agree on the solutions. So I think that is going to be very, very important. We were trying to make everything we recommended actionable, so we are not inventing new mechanisms but building on mechanisms that we know already work and that are within the UN system.
The only difference, I would say, from the scientific panel on climate change is that we don't expect a report every six years; we expect reports every year, every six months, because that's the pace of AI evolution. We need it more than ever: we need to know what the state of the art of the technology is every year, as well as reports on specific deep dives that we have also tackled in the report, like how this is affecting intellectual property rights, health, children, women, human rights; that's part of the discussion. And on the fund, we probably have a different perspective here. We consider that it needs to be linked to another recommendation, which is the capacity-building initiative, because there are different starting points and different levels of maturity in different countries. We think we need development programs to achieve a minimum in terms of governance, mostly from the governmental point of view, and there are initiatives like the UNESCO readiness assessment that are helping us establish the starting point. But we need funds to allow new entrepreneurial ecosystems to emerge in these underdeveloped countries, because otherwise we are bringing in technology which has been 100 percent developed with global North data, because it has not been trained on data from the global South; nobody has spent anything on gathering first-party data from the global South, so the data sets are not inclusive. The same with computing capacity: out of 110 high-performance supercomputers in the world, there is not a single one in the global South, not in Latin America, not in Africa, not in parts of Asia. So we need to deploy this triad of data, computing and talent to create those ecosystems, and for that we need investment, and we also need investment to develop AI solutions to tackle the SDGs. And that global fund is totally different in the way we consider it will be managed. First of all, we don't specifically recommend that it sit under the UN umbrella; it could be within the UN or outside it, made up of contributions from member states, of course, development funds, philanthropy, but also in-kind contributions from private companies. It's probably more interesting that Google provides computing capabilities or Nvidia provides chips than that they bring actual money. It can be a fund of funds, something that complements existing funds and scales solutions. That is part of the work that needs to be done in the following year: how to operationally land these recommendations, because, as James said, we could only advance the design principles; we have not gone into the detail of the operational nuts and bolts.

Yeah. So before we get into some geopolitics here, I have to ask a personal question: given your backgrounds and how different you are, there must have been areas where you disagreed. What were those? What were the things that you had to leave on the table and just agree to disagree on, or that will be for a different panel, a different body?

Well, I'll mention a few. First of all, one of the things Carme and I were very united on: we wanted to get this done, as I said, and we wanted to get to a productive, useful outcome as opposed to something watered down. As for the debates within the body itself, there were quite a few. There were some, for example, who thought there ought to be a new UN agency created out of this; others didn't think so, so that was one debate. There was a debate around whether this should be done by just member states; others didn't think that, they thought this had to be a multi-stakeholder conversation, because when member states meet to negotiate at the UN they are negotiating as governments, and often that doesn't include other stakeholders, whether it's civil society, academia or even the private sector. So we had all those debates. We had discussions where some cared deeply about the risks of this, and some cared deeply about democratizing it and weren't as worried about the risks; they felt the right way is to make this accessible to everybody, and that that is the way to actually de-risk it. So we had all these debates and tensions. What was also very helpful, by the way, was not only having debates amongst ourselves but also the conversations with the experts. As I said, we had something like 2,000 people who gave us written submissions, we talked to people, we had meetings, we did surveys, and that was very helpful. But we had been encouraged, as Carme said, by the Secretary-General to be independent and not get bogged down by what the member states were saying, because, for example, there were debates around whether we should talk about military use of this technology or not, whether that is in or out. So we had all these debates and discussions: what's the right time frame to think about these things, should we focus on what we should do now, or should we be trying to design something for the next 20 years? Those are some of the debates and discussions.

Fantastic. I'd like to test an assumption and see if either of you agree or disagree with it. Often we'll hear people talk about the development of AI and large language models as something that increasingly is going to be just in the purview of one or two states, given the enormous cost associated, hundreds of millions, maybe even a billion dollars, to train a sophisticated model in a relatively short period of time. This would suggest, if this is where the activity is going to be, that ultimately it's going to be American companies and maybe Chinese companies that are able to make these kinds of investments. Does this really boil down to an American-Chinese competition on this front, or are there other dimensions to the development of the technology that are left out by that portrayal?

Okay, you can talk about this as Google, or as an American; I'll talk as a European. From a European perspective, of course, it can seem like we can only aspire to be a follower, because it's framed as a battle, like a tennis game between the two of you in which we are just the referee, and that would mean there is no space for us. I don't see it that way. First of all, of course there is the leadership of the technology and the industry, but don't forget that this is a general-purpose technology, I would say the meta-technology: it's a technology that will allow all other technologies to emerge.
What I consider, very simply, is that this is like a new operating system. The fact that you're using iOS or Linux or some other operating system doesn't make you more competitive than another company that is using a different one. Of course the operating-system players get their piece of the cake, and that's very important because they are the enablers, but imagine what happened with the telcos: the telcos are enabling all of the digitalization, and the benefits don't go to the telcos. AI has been built on the evolution of other industries, like broadband, fixed and mobile, the cloud and so on, and these new evolutions are not always distributed proportionally across the value chain in terms of benefits. What I'm saying is that what will make some companies or countries more competitive than others is how they are going to use AI to be more competitive, to be more productive and to develop solutions to real problems. So where I see the opportunity is not only in the large models but in the small models that need to be developed with more curated, more precise data to solve real problems for real industries, and the level of adoption is probably what matters: probably in the States 55 percent of industries are already using generative AI, versus 65 percent of industries in China and only 25 percent in Europe. That is what will make the difference. European industry won't be more competitive just because we have our own large language model; of course that's nice to have, that's a piece of the industry, but I think the real thing is to find the real use cases that make our economies and means of production more competitive. I don't know if you agree with that.

Yeah, but I think it's worth saying there are a couple of things that are true so far. It is the case that most of the technical and scientific breakthroughs around this technology have mostly been led by American researchers, and by European researchers who have moved from Europe, that's true (and why is that? yes, because there are many more opportunities there). That is the case whether you look at the most cited papers or the most read papers. It is also the case that the acceleration in the development of this technology that we've seen in the last few years has mostly come from these Transformer-based architectures; the original paper, "Attention Is All You Need," which introduced the Transformer architecture, was actually published by my colleagues at Google Research. It is a characteristic of that architecture that it is very computationally intensive, it really is, and because of that computational intensity, as you make the models more capable and bigger, the compute requirements go up dramatically, and because they go up dramatically it becomes very expensive. So if you're looking at what's happening at the frontier, it has ended up being a few research teams, mostly in the private sector, that have been at the frontier, at the level of advancing the underlying technology itself. Now, several things will probably change that. I think you're going to see other breakthroughs that introduce different kinds of architectures; if there are computer scientists in the room, you know that there are all these new models, like state space models, that don't scale quadratically. So that picture may change, but that's the picture as it is today on the frontier side of this.
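A rough back-of-the-envelope illustration of the scaling point above (my sketch, not figures given on the panel): writing n for context length and d for model dimension, both my own notation, standard self-attention costs on the order of n squared times d operations per layer, so doubling the context roughly quadruples the attention cost, whereas linear-time alternatives such as state space models grow roughly in proportion to n:

C_{\mathrm{attn}}(n,d) = \Theta(n^{2} d) \;\Rightarrow\; \frac{C_{\mathrm{attn}}(2n,d)}{C_{\mathrm{attn}}(n,d)} \approx 4, \qquad C_{\mathrm{ssm}}(n,d) = \Theta(n\,d) \;\Rightarrow\; \frac{C_{\mathrm{ssm}}(2n,d)}{C_{\mathrm{ssm}}(n,d)} \approx 2.

That quadratic growth, compounded across larger models and longer contexts, is one way to see why frontier training has become so expensive.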
Now, I agree with Carme in the sense that if you then look at the building of applications, that's a much wider field, because you don't need to be training frontier models. In the application part of this you can use much smaller models, you can use open-source models, and you've got lots of startups and lots of companies doing that. But the frontier part of it, to come back to your question: if you look at what's happening at the frontier, so far it is the case that US teams have been far ahead of China. There's a legitimate debate to be had about how far ahead, whether it's by a year, by two, by five; it depends what part of the field you're talking about, whether it's computer vision, natural language processing or these foundation models, and that gap is closing. So it has largely been in two places. I think it's still the case that Europe and other parts of the world need to participate in this, and what it's going to take for everybody to participate in the development and use is, partly, having more accessible models, large ones and small ones, in open source, that allow people to participate; partly having the other enablers, including, to Carme's point, rich ecosystems of entrepreneurs and others in many more places; and then, particularly in the global South, having the enabling infrastructure requirements around broadband and other things. Until we do all of that, I think you'll continue to have only a few places be at the frontier of this technology, and I don't think that's good for everybody.

I just want to ask a follow-on question, James, and then I'd like to turn to the audience, so if you have questions, please come up to the microphones. I have a lot of questions for you, but I'm just going to do one follow-up and then we'll see what kind of conversation we have from the audience. James, you mentioned this lack of clarity about just how far ahead the US is of China on this. So how do you respond to this question: if one of these countries, the US or China, reaches AGI first, does that allow that country to perpetuate that advantage indefinitely, or is it just a matter of time before the other country catches up? Does getting there first, whatever that means, and I'm sure you'll want to define it, become an absolute imperative from a strategic perspective?

Well, I think the term AGI is probably one of the most ill-defined terms there is, and we're all throwing it around quite a bit. There tend to be two categories of definitions, if I may: one that is problematic in the way you're asking the question, and one that I actually don't think is. One end of the definitions for AGI simply refers to the idea that you have a generally capable model or system that can do most cognitive tasks as well as humans can do them, whether it's reasoning or biology or mathematics, etc. It's much more of a capability definition: how general is that capability and how does it compare with what humans are able to do. In that version of the definition, I don't think the question of whether you then get acceleration necessarily applies, because, just as we know that having 100 Nobel Prize winners doesn't mean you've therefore transformed your economy, there are still constraints in the real world about implementation, deployment and so on. So I wouldn't worry about the acceleration you're suggesting there. But if you take the second kind of definition, which tends to include not only what I said in the first one but also things like self-improvement, self-learning, self-direction, all of those things, deception and so on, then you could imagine whoever gets there first getting further ahead, because then you have these other capabilities beyond simply knowing lots of subjects and topics. But I think it's important to think about those questions already today, whatever definition you've got, because one of the things we don't spend enough time thinking about is this: of course this technology still has a long way to go, we're still so early, there are so many limitations in these systems, so many things they can't do, there's lots of work to be done, and there are still all these risks. But imagine you address those capability limits and somehow make progress on the risks. I think it's worth asking: okay, then what? How do we think about how society works, how do we think about capabilities, about work, about institutions? Those are important questions, in addition to obviously working on safety and all those things: to think about what happens when this makes progress. And I would hope that the Belfer Center and the Kennedy School also help us think through those things: how do we think about governance, about democracy, about our institutions, how do all these things work? I think those are important questions.

Yes, my perspective is a little bit different, in the sense that it seems to me that when you frame this confrontation between the US and China as an arms race for who gets there first, it's a winner-takes-all mentality, and I don't think that's right. Just because you invented the first Tesla car doesn't mean the second one that comes along doesn't get its piece of the market and can't do it better; sometimes the followers in a market or industry do it better because they've seen the mistakes of the leader. So I think this is not a winner-takes-all situation, because no one, not the US, not China, not Europe, controls the whole value chain that is required. One controls the raw materials, another controls the chip design, another controls the chip manufacturing, we have the Taiwan plants, and so on. This is more complex than it seems; nobody has control of all the resources. And now we see that the battle for the leadership of AI has become the battle to acquire talent and the battle to acquire sources of energy, and if the AI battle becomes the energy battle, then there is a possibility that one side captures the top sources of energy in the world and that prevents the other from developing. But that's the geopolitics.

Yeah, so much there. I'm going to turn to our audience. Let me just say that James has contributed to something called the Digitalist Papers, which actually get at a lot of the questions that you just alluded to, about what we should be thinking about as we look at the development of this technology and how we might envision it successfully contributing to humanity over time, so I just want to encourage people to read those papers. So let me start here; please introduce yourself, be concise, and ask a question.
Hi, my name is Tom Nellan, I'm a sophomore at the College. You kind of mentioned it at the end there, but as AI grows in energy cost more and more, as it gets more utilized and more in depth, how do you see the reconciliation of the increased energy costs inherent in that with climate change as it exists now as a geopolitical reality?

Carme, why don't you take that one? Okay, yes, and then I will contribute to that; all right, I'm Spanish, I need to think twice, you know, I need to translate.

So, you're right to ask the question; I think it's an important question, because, as I said, one of the things about these particular foundation models we build is that they're very computationally intensive, and as we build bigger and bigger, more complex models, the energy requirements are going up and up and up, and I think we need to address that. I'd think about that question in a few parts. One is: how do we change that, make models more efficient, smaller, less computationally intensive? That's a combination of a lot of efficiency work going on, but also of developing new architectures that are not as computationally intensive; that work is going on. For example, one of the metrics we look at at Google is what it costs per million tokens, whether for training or for serving, and I can tell you that at least in the last year that has come down by more than 80 percent, so that work is happening. But at the same time it's also worth thinking about whether the technology also helps us tackle the effects of climate change. There have been lots of reports, not by us but by others, that show that the contributions, whether to understanding climate science, to mitigation or even to adaptation, far outweigh the cost on the energy front. That's not to say we shouldn't focus on solving the energy question, but just to put it in context: the last time I looked at this, data centers worldwide consume something like 300 to 350 terawatt-hours of electricity, which makes up about one percent of the world's electricity, and of that data-center consumption, about 10 percent is AI-related. So that's a very small number, but it's going up, so we have to bend that curve, while also making sure we're tackling some of the effects of climate change using the technology.
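Taking the rough figures above at face value, a simple worked product (my arithmetic, not the panel's) puts the AI share of global electricity at roughly one-tenth of one percent:

0.10 \times 0.01 = 0.001 \approx 0.1\% \ \text{of world electricity}, \qquad 0.10 \times 350\ \mathrm{TWh} \approx 35\ \mathrm{TWh\ per\ year},

small today, though, as noted, growing quickly.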
So now I can add. That is the real tradeoff: on one side we say that we rely on AI to be more efficient in terms of energy consumption, but at the same time the energy required to train those models, and therefore the data centers, is absolutely unsustainable if we expect growth in use, and I don't even want to think about the expansion of this model to the global South. So that is a real question. And when you talk about the impact, it is not only energy consumption; it is also water consumption and CO2 emissions. There was a paper in 2019 that said GPT-3 was producing CO2 emissions equivalent to five cars over their entire lifetimes. I think we need to put thresholds on that. This is not an open bar of energy just because the high-tech industry asks for it, because they are externalizing their R&D costs to us; that is moving private debt to public debt, because they expect the public system, and that's the beauty of talking about this here, to provide this energy just because they demand it. Why? Do we need a different energy policy? Do we put limits on that? Why are you not investing more in R&D so that these models are more efficient, so that they are in fact green algorithms? I think those are the type of things that we should answer at the Belfer Center.

Let me just add one other thing, and I agree with that, by the way. I think it's also important to keep in mind that in addition to making the models efficient, it's incumbent on us and everybody to think about renewable energy. Let me mention a couple of examples of things we're doing. We've made a commitment at Google that by 2030 we're going to be carbon-free, and I don't mean neutral, I mean free, and we're on our way to doing that. For example, we have the first operational data center that uses geothermal energy, in Nevada; it's been running for about a year. Recently we started investing in nuclear energy, for example. So part of the question is also thinking differently about the use of renewable energy, particularly if these technologies are actually giving us other benefits. We do work on wildfire boundaries, for example, and on flood forecasting that is enabling the world to think about adaptation. So even the use of energy and the sources of energy are part of the solution, in addition to obviously changing the efficiency, size and complexity of these models.

So many really interesting policy questions here; thank you for elucidating some of those. Could I take a question from here?

Yes, can you hear me? Okay, great. I'm Shenik Moore, an MPA candidate at the Kennedy School, and my question is from the perspective of a social worker. One of my professions outside of this is being a licensed social worker, and I guess I'm concerned about AI's ability to make decisions, specifically when they require intuition and judgment. I'll give an example of an AI software that was used for predictive analysis to determine whether domestic violence offenders would go back, reoffend and harm their partners. In one instance this AI software was not able to predict that, and it did lead to a number of partners losing their lives. These decisions are really costly, and I'm wondering in what ways your committee can create standards that include, and are inclusive of, professions like social workers and humanists who can help co-design these algorithms. Thank you.

Yes, I can answer this. I think one of the unique characteristics that AI has, that no other technology until now has had, is that it can continue evolving without human agency, and that raises the question of how we keep a human in the loop, how we make sure that for these decisions, at the end of the day, there is a human considering those scenarios and making the final decision, and therefore accountable for it. That raises the challenge of what the maximum level of automation or hyper-automation is that we can assume or authorize, and what happens with agency. As a European, and as the lead negotiator of the AI Act, we took that absolutely into account, and this is why we regulated those uses as high-risk use cases of AI. So, contrary to the general opinion, the EU AI Act does not regulate AI; it regulates only the high-risk use cases of AI. It's a product-based, risk-matrix-based regulation. We cannot regulate the general-purpose technologies; however, what we do require of the general-purpose technologies is transparency, because if you expect me to adopt this black box and put it to a final use in healthcare or social work, I need to know what is in the black box. In the particular case of the European AI Act, we define something as high-risk when it affects safety, health or fundamental rights in what we call high-risk domains, and among the high-risk domains are legal and judicial, social and also educational environments. These are things that can change over time, because technology can evolve, so a risk that exists today may no longer be a risk tomorrow. But some things we are prohibiting outright: predictive policing, social scoring like in China, and general mass surveillance in open spaces. And of course there are law enforcement exemptions: if I know that somebody is a criminal, then the police can use those mass surveillance tools, but not by default. So I think that's a balance, and I think that's where regulation can bring solutions to these, for me, high-risk environments.

Great, thank you. Over here, please.

Hello, good afternoon. My name is Mark, I'm a mid-career MPA, and I would like to address my question to Carme Artigas. Carme, you were just mentioning the AI Act; I was in Brussels when it passed, and I was happy to see that it happened under the Spanish presidency, very proud, thank you for your work there. You were just mentioning some of the things that were starting to get regulated, but I was wondering: what is the role of the EU, and Europe in general, in this tennis game, as you put it, that we're kind of watching and where we don't have the national champions? What is our role in how AI will be governed? What have been the European values or contributions that have been brought forward, and that we can bring forward, to regulate or govern this space? And more concretely, what kind of soft power can we mobilize to make sure they are effective?

So I think Europe and the US share the same values as liberal democratic societies, and I think that we are in fact the guardians making sure that, in everything being done from the technological point of view, we don't have to pay the price of eroding, in the process, rights, freedoms and guarantees that we have taken centuries to achieve. We use different tools to do that. Usually the EU tries to put in place regulation to give certainty to consumers and citizens. We are much more concerned with present risks to fundamental rights than, as I witness it, the US, which is more concerned about safety risks from frontier AI models; we say, okay, yes, that will come, but we already have real problems to tackle. And given the way we are organized, we have not done it very well in the past, because we are 27 different countries, and we cannot have 27 different legislations, because that does not make us competitive in a digital world which is only profitable with economies of scale and standardization. I think we have learned from the past, and the AI Act is a good example, in which we have one legislation for 27 countries, one that has already been in force since August 1st. I see the opposite happening in the US, trying to approve different laws in different states, while the ideal situation would be one federal law. So the fact that we are good at regulating, and I feel proud of that, with a good balance of regulation and innovation, doesn't mean that you cannot achieve the same results by other means. This is why we make a clear differentiation between ethics, governance and regulation in the UN work.
For me, governance is the instrument you need to put in place to ensure that corporations, the ones that develop the technology, the ones that implement it and the ones that use it, including governments, behave ethically, and legislation is one of those tools, but it is not the only one. We should find other tools, like market incentives, oversight, monitoring, transparency and reputation, and sometimes you achieve through reputation things that you cannot achieve through legislation. In this particular field, I think we need regulation; we need to regulate, for example, and I think we can be an example on this, prohibited uses of AI. That's the elephant in the room: nobody is daring to say, even though this is technically feasible, I don't want this to happen. We don't want social scoring to happen in Europe, while it is happening in China; it's technically feasible, but we don't want it to happen in Europe. I think we can be a beacon of how to safeguard fundamental rights, and I hope that you join us in the endeavor, even if with different tools.

If I could add just two quick points to this. I think one of the interesting questions about regulation is at what level you do it, what the unit of analysis is: is it the state, in the case of the United States, state by state, or federal; is it regional, as with the EU Act; is it global? This is one of the tensions we dealt with quite a bit on the UN high-level body: at what level do you need to do it, and how far do you go to make it global before you start to affect the local choices you'd want to enable for countries and nations? That's an interesting tension; again, I'll leave it to all of you here at the Kennedy School. The other thing I would say, and at least many of us on the body felt this, is that at some level regulation, particularly when you look at the full global picture, should do two things, not one. One is that it should obviously address all the things you want regulated, for accountability and risk and safety and all of those kinds of things; you should do that. But it should also, in addition, enable the things you want. The reason the enablement part is important is that if I think back to the UN work and the things in the public interest, there are lots of things that commercial interests alone won't get to. Think about trying to address challenging health issues in the global South; you may never make money doing that. Or tackling climate change. So the presumption that you will get to all the good things with commercial models alone, I don't think is true, and I think we should think about that. So regulation should do both things. I'll also say one final thing, if I may, and Carme and I haven't talked about this recently: recently the former Italian Prime Minister Mario Draghi published a report on competitiveness for the European Union, and one of the most striking things in it was that he pointed out that if you look at the difference in economic productivity between Europe and the United States, you can explain 80 percent of it by the differences in the adoption and use of technology. So therefore, in addition to the incredible work of regulation, it should also do this other thing: enable the innovation, the growth, the tackling of the societal challenges that you want. You should do both things.

Just one more comment: when we talk about governance in the UN report, we clearly say we don't intend for the UN to become a global legislator; that is not the aim, forget about that. We see governance as an enabler, so it's not about telling people what they cannot do, but explaining to them how to do things right the first time.

So, I have signaled to this person that he can ask the last question, and I'm going to ask for us to be very quick on it, because we're right up against the clock. Please.

All right, thank you so much. Good evening, my name is Brian Mion and I'm a student here at the Kennedy School. Right before we broke out to the audience, we were talking about how first-mover advantage, when it comes to creating the technology, may actually create a gap, a runaway situation, and that narrative is frequently about China and the United States, as we addressed. But when it comes to legislation, I'm curious about first-mover advantage as well, because it was not China or the United States that came up with the first capstone legislation; it was the EU that came up with the AI Act. So is there an advantage to being the first player, in the sense that they set the tone, or is there also an advantage to maybe not being the first to set that legislation? And how do we think about this as we move forward beyond AI? Quantum computing is coming, and there are a lot of other fields, so I'm curious: is first-mover advantage a positive, or do we maybe want to see what our competitors do first?

Great closing question. If I could ask you both just to reflect on that in closing, and then we'll be on our way.

I don't know if it will be an advantage or not. What we felt is that we had to put our house in order, because if we don't provide a framework that gives trust to consumers and trust to citizens, people and companies will not adopt the technology, and that's the end game, and we want to preserve fundamental rights. For example, the law is not only intended to control companies; it's also intended to control the potential use of AI by authoritarian regimes trying to erode fundamental rights or human rights. So we cannot pretend that we won't have isolated islands of legislation; we will never have the same legislation as China, but we need to agree at a global level on a bare minimum, which is international law, the UN Charter, human rights, and probably common standards like the SDGs. Thank you.

Yeah, I don't have much more to add to that, other than to say that I think whether the first mover has an advantage on regulation or not depends on the framing of the regulation. If the regulation is one that, in this case, tries to stop everything, I don't think that's an advantage. If the regulation is one that thinks through both addressing the challenges and enabling the opportunities, gives room, and is also quite adaptive, because keep in mind that the questions and the regulation are changing; the things I would have worried about with this technology five years ago are not the ones I worry about now, the questions have changed, so adaptability even in regulation is also quite important. So I think whoever figures out how to get that balance right is the one who is going to have the advantage.

Great. I want to thank everyone, such great questions; apologies to those we didn't get to. I want to thank James and Carme for bringing such a rich discussion to us and giving us so many ideas of where we can be contributing to this rapidly evolving landscape. Please join me in thanking them. Have a good evening.
2024-10-30