Leadership for Society: Governing Tech, International Cooperation and Competition


Welcome to the Leadership for Society speaker series. The theme this year is tensions: business, civil society, and politics. I'm Brian Lowery, professor of organizational behavior at the Stanford Graduate School of Business, and today I'm speaking with Mariano-Florentino Cuéllar, also known as Tino. Can I call you Tino?

You absolutely can. Thank you.

Thank you. So Tino is the president of the Carnegie Endowment for International Peace. So good to have you on, Tino.

Thank you, it's a pleasure to be here, Brian. Good to see you again.

Yeah, good to see you too. So you have a really interesting background. Can you tell us a little bit about your background, and maybe a bit about what you did before Carnegie?

Sure. I was born in northern Mexico, in a region that is known for a lot of back-and-forth trade, legal and otherwise, between the United States and Mexico. It's also frequently visited by hurricanes that come up from the Gulf of Mexico. Sometime in my teenage years I had the great opportunity to immigrate to the United States and grew up along the border, but on the American side of the border, eventually in places like Calexico, California. I was lucky to come to California, because it is a complicated, multi-layered, massive experiment in democracy; I feel like there are so many aspects to it that I'm still wrapping my mind around them entirely. I went to school on the East Coast, decided to get interested in the intersection of government, psychology, and economics, and that led me eventually to law school. Actually, at the tail end of my college years, before going to law school, I got interested in artificial intelligence, but that was a long time ago, in the early 1990s you could say. And then I had a kind of triple career, where I focused on being in academia and getting tenure at Stanford Law School, teaching administrative law, international law, executive power, citizenship and migration, tech policy and the law, that kind of stuff. But then I also had a career in government, where I worked at the Treasury Department, at the White House, and eventually on the California Supreme Court as a justice. And then I've also had a career in nonprofits, where I served on the board of the Hewlett Foundation, and now I lead a complex organization that's all about doing research to inform policymakers and to help them figure out how to cooperate effectively without being naive.

You've been a busy guy.

Thank you. I'm sorry my answer was a little on the long side, but it's a very dangerous thing when you ask somebody to tell you a little bit about themselves.

No, it was fantastic, it was fantastic. And I'm going to focus in on a couple of things: one is the AI interest you had, and the other is kind of the governance and international interest, because you are the president of an organization that has International Peace in its name. So I'm curious to hear your thoughts about the most important effects of emerging technologies like AI on international relations, because we know it's having a huge effect, certainly domestically, in how people are thinking about work and what the future of AI might be, but there are also concerns about how it might affect engagements across nations. So I'm wondering, as you think about AI, what are the things that are most top of mind for you in terms of international relations?

So I would say we should start by recognizing that throughout most of the history of what we now call artificial intelligence, there's been this process where the field, the
enterprise of doing AI, of making progress on AI, is defined as trying to understand the processes that will let you automate what is normally understood to require human intelligence to do. But notice that that definition carries with it a kind of implicit moving line, because this notion of what is ordinarily taken to require intelligence means that what had previously been something you might have expected to require human intelligence, like decoding the scratches I might make on a page that only I could decode into writing, into character recognition that can become digital, is really old technology now; it's considered pretty commonplace, kind of boring, but at one point that was a kind of artificial intelligence. The idea that you might build a system that could play chess effectively would have been viewed as a perfect example of cutting-edge artificial intelligence at one point, and now chess, and even the game Go, which some would argue has more possible moves and positions than there are stars or even atoms in the universe, has fallen by the wayside as a frontier that has been breached.

So nowadays, to answer your question, I would note ways in which AI has had an effect in the past, is having an effect in the present, and will have an effect in the future. And I'll note that the future part is always tricky, because even if we're trying to be humble and careful about what predictions we make, the truth is that it's very hard to anticipate, in the middle of a period of rapid technological change, exactly how it's going to turn out.

How did we get here? So one way that AI has affected international politics is that nowadays probably very few people listening to this have ever gone a day without using some digital device, without checking your email, posting something on X (I guess that's what we're supposed to call it now, who knows), or doing something on social media, maybe having a kid look at a TikTok video. That whole infrastructure of information consumption that is built into our lives is driven not so much by the choices you and I make about what to see, but by algorithms that curate what is put in front of us. Nowadays that is not frontier, cutting-edge AI anymore, but it was in the past. So I would say one way AI has affected international relations is by being at the core of a system of information consumption that can often polarize opinion, make people upset or angry about things, show them videos that might be more extreme, and facilitate the possibility that another country could use these tools to polarize opinion in a country they view as their adversary, as the Russians attempted to do with respect to the US. In the very present, cutting-edge AI that is about recognizing visual stimuli and triangulating with fuzzy information, that lets you target what might have seemed like a really difficult-to-discern target, is being built into drones; the most advanced ones, if not already being deployed, will be deployed very soon. And in the future, of course, there are two ways in which AI might be drastically important for humanity. One is that its own capacity to improve itself might help us get technological and scientific breakthroughs to happen much more quickly. The other is that as we get closer to a kind of intelligence that appears, even if it doesn't actually, but appears to surpass human abilities, there will be more fighting and competition to control it, to shape it, and to decide what it does and how it does it.
There are a number of things you said that are really interesting to me, but I'm going to start with the last one: the fighting about shaping it and what we'll use it for. When you think internationally, what you have to think about is vastly different cultures, vastly different values, but the technologies, as they develop, don't stay within the bounds of any particular culture or nation. How do you think about that? How do we think about the challenges of overcoming these different value systems and ethical perspectives, when in China or India or Japan or Canada or the United States they don't all agree on what's right, what's ethical?

You have touched on something so important, Brian, but also, to mix my metaphors, you have touched the tip of the iceberg. I don't know exactly if that's physically possible, but let's go with it for a moment. If I follow that iceberg and look at what lies beneath, I would say two things. First of all, even inside one single country, think about Muslims and Hindus in India; think about the sheer scale of the difference between Oklahoma and California. You have these contending values even inside California itself. One reason I got interested in AI, actually, way back when I was in college, was because the very rudimentary tools available back then still might give you a way of modeling human cognition, getting a sense of how people decide things, which gets a little closer to your field. And I would say the irony now is that the AI systems that are becoming more commonplace, as recommendation engines, as the brains of a drone, as the potential ingredient in the business plan of an organization that wants to make billions of dollars, those systems are now shaping our own cognition; we don't just use those systems to understand and model human cognition. But more fundamentally, I would say the reason it is important to be practical and not overly idealistic about any truly global project to control and govern AI is because of those distinctions you mentioned. There are distinctions of religion, of culture and values, of income and politics, and, at the end of the day, of geopolitical priorities and economic interests. Now, my organization was started in 1910 with the vision of trying to use knowledge and ideas and research and rigorous thinking to help policymakers do better at planning how to cooperate without being naive, how to avoid war. So if I think about that paradigm and how it applies to AI, I would say that despite all the difficulties we've talked about, all those divisions, it is still possible, I believe, and my organization believes, to find some common ground among different countries, to have them work together, for example, to limit the risk that somebody is going to use a really sophisticated AI model to build a new cyber exploit or a new biological weapon. But it will require building trust, because many countries, even if they have somewhat aligned interests, also have some divergent interests, and they don't really trust each other. So we work on that in whatever ways we can, but it's a much larger project, and we probably need a lot of help in doing it.

So, I don't tend toward optimism, but I feel like you do. I get the sense that you're an optimistic kind of guy.

Risky business to be in these days, Brian, but let's see at the end of our conversation.

So can you give me an example, a historic example, where something like this happened? It's not AI, but we've had technological development since
we've been human. So as you think of the current kind of geopolitical system, where there's competition among countries, or go back to competition among empires or whatever, where there was cooperation on something of this importance, where gaining an edge would have tremendous value for your own country or empire or what have you, is there an example where something like this has been regulated or governed internationally in a successful way?

Yes, I think so. I mean, not that it's all perfect, but let me start by asking you a question: what is the farthest, most remote place you've ever traveled to?

That's a really good question. It depends what you mean by remote. I would say the hardest place I ever went was Machu Picchu; I hiked to Machu Picchu.

Oh, that is a tough thing. How did you get to Peru?

On a plane.

OK, thank you. You can get on a plane at SFO, fly maybe connecting in Mexico City, maybe through Miami, maybe nonstop, and land in Lima without really worrying that much that the plane is going to blow up on the way, that the parts are going to start flying off (I mean, these things do happen, of course), or that Peru is going to say, I'm not going to let this kind of plane land over there. And at the same time, of course, Boeing and Airbus are in a furious competition for the commercial air market; whether you fly United, American, or whatever other airline can take you there is a matter of fierce competition, and many of us who believe at some level in the market economy would say that's a good thing. And yet somewhere, and that somewhere is in Montreal, Canada, there is an International Civil Aviation Organization that stays up all night thinking about what is necessary to keep that system, which is embedded in a competitive world, from falling apart: making sure that if a plane is certified to fly in Europe and in the US, it can also fly in Peru and in Mexico, for example. Again, not perfect, but I think it is not a small thing that the world in our lifetime, Brian, has shrunk, because you can get on a plane at SFO and get off in Peru.

I would also add the example of nuclear weapons, and here I'll do it with care, because I'm not here to tell you that we've solved the problem of nuclear risk or annihilation. It's pretty scary business when you start to think about the sheer possibility that somebody might irresponsibly and recklessly risk the actual use of nuclear weapons, and let's remember we're not just talking about the US, China, and Russia; we're talking about India and Pakistan, we're talking about North Korea, we're talking about the Middle East. So, complicated realities. Yet we got to the point in this world where we had, you know, 80,000, 70,000, 60,000 nuclear warheads. Can you imagine that? Any one of those warheads is often much more powerful than the one used to destroy Hiroshima, or at least the core of it; I just saw that city, and was powerfully moved by a museum there, just a few days ago, actually. We have closer to, you know, 12,000 or 15,000 of these weapons now, depending on how you count them and whether you count the ones that are deployed or not deployed or whatever. But that's real progress: the world has gone from having many, many, many more nuclear weapons to far fewer ones. The world has some slightly functioning mechanisms to try to reduce the risk of nuclear programs turning into weapons programs without some international validation or understanding. But we've got a ways to go.

So those are interesting examples, and I'll take them as good examples. We could
quibble about both of them, but I think there's certainly a case for cooperation. But there's also clearly competition, right? So when you think about it, going back to AI, AI now, or you could talk about the Industrial Revolution, there was all kinds of espionage to steal whatever advances people had made in early technology during the Industrial Revolution. There's just been a long history of fierce competition, and lately with AI you could say there's been pretty good evidence that state actors are trying to destabilize competitors through the use of technology. And so I just wonder, how do you balance regulation? I'm just going to focus on the domestic side for a second. Let's say you want to regulate US technology in some way because you're concerned about negative effects on the society, but it hampers the speed at which the technology evolves in the United States, and some other country doesn't do that and now they gain an advantage. How do you balance the need to protect some population, let's call it the domestic population, or we could name whatever population we like, against the need to keep up with competitors' developments?

Yeah, it's an interesting set of questions. I would say, first of all, it's not always the case that rules set by a political jurisdiction that try to make technology better or more efficient are a detriment to competition. Just take fuel-efficiency standards in cars. If we as a jurisdiction say to these companies, sometimes you may not think right now that you can do this, but we've done the math, we've looked at the engineering, and we think you can, and besides, that's sort of what the democratic process is asking you to do, you might actually advantage your industry over time. That's one point. Second, I do think there are some examples where most countries in the world have been concerned enough about just completely flouting any kind of ethical or normative limit around, for example, bioengineering involving humans. I could imagine, I could tell a story, it would be kind of a sci-fi story, where a country gains an advantage by mastering that technology, experimenting with a bunch of humans, pushing the frontiers of what is possible, and then trying to reap the benefits in their soldiers and their engineers and their scientists. I think it's telling that, as far as we know, that hasn't really happened. Now, I put a little asterisk on that, because I do note that there may be some things we don't know, and the system isn't perfect, but I think you also have to factor in, to some degree, the normative human impulse not to do something that is crazy or destructive. The question then becomes how to build an institutional scaffolding around it to make it meaningful.

So, you know, the normative human impulse not to do something crazy or destructive is often, usually, backed by a fear of repercussions, let's say. So when you talked about the nonproliferation, or the reduced proliferation, of nuclear weapons, and you could point to the fact that they haven't been used since World War II, you'd maybe chalk that up to an understanding of the consequences if they were. So I'm going to now switch to a different kind of topic that you brought up earlier, about AI and the history of AI replicating human ingenuity, automating what humans can do; that's really what it was. One potential consequence of that is widening social inequality, and so I wonder how
do you think about managing that consequence of improved AI, or of further technological development in the direction in which we're going?

I would say with care and focus. I think you're right to note that technology frequently widens social inequity. I think Daron Acemoglu has a book out with a co-author that deals with this to some extent, and that has been a fairly constant reality over the course of human history. If you think about it, for example, the advent of agriculture was very much about instantiating a system that resulted in people living more unequal lives in many cases, and ones that they didn't enjoy as much as being hunter-gatherers, in certain situations. Even the term barbarian has a quirky history, around the efforts made by certain civilizations to effectively pull in folks who didn't want to be part of that kind of hierarchical agricultural milieu. That said, I think part of what makes technology so interesting, part of the reason I keep getting drawn back to these conversations, Brian, is that sometimes at the very same time that we are widening certain kinds of social inequality, we're reducing social inequality as well. And how then should the world make sense of some capability that lets you do both at the same time? What do I mean, what's an example? Well, I think if we were having this conversation 60 or 70 years ago, first of all it wouldn't be happening the way it is, digitally, right now. We would probably see a world where there was more inequality than there is now around whether people got their basic caloric needs to survive. Still to this day, in the 21st century, there are people dying of hunger, and that is painful to me, as I imagine it is to you too, but as a proportion of the global population it's a much, much smaller number now. How that translates into an economic system that provides for a ton of innovation globally, particularly in some countries like the US, and that yields things that have helped us reduce hunger, like the Green Revolution, which by the way was also helped by philanthropy, that is a tough one. And I think with respect to digital technologies, and eventually AI, we may face very similar questions. On the one hand, there is less inequality than there used to be with respect to how easily people get information, watch a movie, reach someone on the other side of the world; but if you look at the wealth distribution in some countries, there's growing inequality in terms of just how much raw wealth people have. And I think it's fair to say that some cutting-edge work, probably even done by your colleagues, suggests that it's not always a trade-off; perhaps you could have less inequality and just as much innovation. I don't know, though. I think it's a tough one, and it feels to me like that's where the conversation has to go a little bit: how do we balance some of those competing considerations?

And that's my question to you: how do we balance those considerations? Because you could argue that maybe it's not inherent in the technology, maybe it's in the inequality the technology produced, and it's a feedback cycle. So there are people who have more, and they have access to more, and as the technology improves they benefit more from those improvements, and they are the people who are making decisions about how those technologies will be used and regulated. So if that's true, that's kind of a cycle there; how does the average person, or how do civic organizations, play a role in
deciding how technology will evolve or be regulated?

Yeah, good question, and I think in some ways we can get there, or get a little closer to answering, by first doing a process of elimination. I think those among us who believe that fast access to technology with no constraints automatically translates into improved human welfare are either not reading history, or reading history but leaving it aside because they think that the outcome they're arguing for benefits them and should therefore be what happens. The truth is closer to this: we have a constant push and pull with technology, and we can go back literally to the advent of fire, if you want to, and hand tools, where there is pain and there is benefit, there is more nutrition and then there is more harm, at the same time, almost at every step. For many of us, that is consistent with generally supporting an economic ecosystem where we lean towards innovation and towards technological progress, but boy do I think it's naive to think that that comes without some need for regulation. So let's think for a moment about all the different ways that has been true in our lifetime. It was 1990 when I graduated from high school; that used to make me seem and feel young, now it makes me not young. But do you know, and I believe I'm getting this right from my climate team at Carnegie, that most of the human-made carbon emissions in history have come since I graduated from high school? Everything emitted in all of human history, from the dawn of time up to 1990, is less than what has come since. Now, if you do the math, maybe it's not that surprising; we've got all these developing countries coming online. But the point is, I don't think I can say with a straight face that fossil fuels have been an unmitigated disaster for human well-being. They have brought us this technology, the materials that we're using, the plastics around us, better lives for a lot of people, the fertilizers that in some cases people use as part of the efforts to feed the world and reduce hunger. But we've kind of backed ourselves into a corner now with respect to carbon and what's happening to the planet, and I think even people who are climate change skeptics have to kind of admit that you just don't want to pretend that there's no risk there, that we have something to actually worry about.

So anyway, all that is to say there are basically three tools we can use to deal with that reality, and I think the main point I want to make to your listeners is that all three of these tools can have a global implication and effect, but they start at the domestic level. The first is the liability system. You design a technology that works really great for you, and you sell it and become a billionaire, but then that technology ends up blowing up my house, hurting my kid, making my wife sick. I should be able to take you to court, and maybe the court's answer is, you know what, you assumed the risk when you bought that, Tino; maybe the court's answer is, actually, Brian acted unreasonably when he sold this, because there were some easy steps he could have taken to either warn you about it or make it safer. Right, liability. Second, regulation. Regulation gets a bad name, and there is such a thing as too much regulation, but if you design it right and target it right... let's take AI, since that's what you wanted to talk about. If there are sophisticated systems that we increasingly use to help us make decisions about how to run an organization, how to hire people, how to
allocate resources that we have to invest, or anything else that really matters, how to decide whether a patient should be getting emergency care right now or can wait a couple of days, there ought to be some testing of that system. That's not rocket science; that's not even science, it's common sense, to my mind. Third, norms of use. You know, when you're raising your kid and you see that every waking moment your kid is not eating or playing sports or doing homework, your kid is glued to the screen (I'm not talking about your kid, Brian, to be clear, but somebody's kid), you might think that's not a great thing, and you might want to intervene a little bit to set a different norm. All these things, if they happen somewhat globally, I think can help society move in the direction of getting to a better set of outcomes, where we get the upside but not as much of the risk.

That's fantastic. I love that as a kind of ending point, but I just want to open it up and see if there's anything you'd want to leave the listeners with, because this is a really big topic and we're obviously just scratching the surface. As you look forward, what is your biggest hope for the technologies we're seeing right now that are capturing people's interest? Can you paint us a positive vision of how it's regulated, how it's operating around the world?

So, I do think it is important to recognize we're living in a better world than we were 50, 60, 70 years ago: more clean water, more access to good nutrition, more electricity, which is pretty life-changing for a ton of people, more mobility, more freedom for people to decide where they're going to live in many cases, and, getting to the social stuff, better treatment of women, more decisions that people can make in many countries about who they want to be with. And that kind of cultural and social change has almost always been connected to the diffusion of technology; you have to acknowledge that, to my mind. By the same token, it feels to me like so many of the benefits that have come from the Green Revolution, from the internet, from electricity diffusion, from development more generally, are better dealt with if we're just honest about the reality that we need to be prudent, that we need to have some common sense. And to me that doesn't mean global governance of everything; it means more like countries trying to do the right thing for their people, but also occasionally being able to work together on a project where, if something is going off the rails, they work together, effectively share information, and create the right institutions.

I'll say one more thing about AI specifically. Pretty much any conversation about technology ends up being a bit of a reflection on human judgment and what's good and bad about it, right? So if I think about all the ways in which the internet has become complicated and difficult and dark, it's partly because that's a reflection of some aspects of human nature, as well as some of the good that we can do, which pulls us in the other direction, and it's kind of a battle we have with the algorithms to make sure that the better impulses that humans have get preserved. But why am I mentioning that right now? I just want to leave your listeners thinking about the complexity that is built into what AI is becoming, which is that to the extent it can reflect us humans and imitate us a little bit more, it will also be important for
us to judge what aspects of our own selves we actually want to instantiate and where we want to hold back. For example, we all know that we occasionally have real conflicts in terms of what we want: you want one thing at time one, you want a different thing at time two; you want to eat the chocolate, you don't want to eat the chocolate; you want to exercise, you don't want to exercise. How, then, a system designed to achieve our goals actually manages those inter-temporal utility conflicts is a huge question. It's a way in which political philosophy is becoming an engineering problem. I think that's really cool, and the more people are aware of that and understand it, the more they're likely to have a positive relationship with technology.

I like that. So technology is a reflection of us, and it's us trying to manage our better angels through technology and trying to kind of mute those other aspects of our humanity.

It calls the question of whether, if we want it to help us be somebody, we want it to help us be the person we are, the person we would like to become, or the person others would like us to be. That becomes an actual choice you can make, and I think that's new and exciting, but also strange.

I agree. And you know what's both exciting and terrifying about it? It requires us to see ourselves clearly. I think that's maybe the first challenge, Brian.

That is very well said.

Well, thank you so much. I really had a fun time talking to you, and I appreciate you taking the time to talk to me.

Me too, this was fun. Thanks for what you're doing, and let's connect again before long.

Let's do it.

2024-02-08
