How emerging technologies could affect national security | Helen Toner
Hi listeners. This is the 80,000 Hours Podcast where each week we have an unusually in-depth conversation about one of the world's most pressing problems and how you can use your career to solve it. I'm Rob Wiblin, director of research at 80,000 Hours. Today's guest is helping to found a new think tank in Washington, DC, focused on guiding us through an era where AI is likely to have more and more influence over war, intelligence gathering, and international relations. I was excited to talk to Helen because we think working on AI policy strategy could be an opportunity to have a very large and positive impact on the world for at least some of our listeners. And Helen has managed to advance her career in that field incredibly quickly, so I wanted to learn more about how she'd managed to actually do that.
It's also just a fascinating and very topical issue. Just today, Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher wrote an article warning that, quote, AI could destabilize everything from nuclear detente to human friendships. That article was out in The Atlantic. If you want some background before launching into this episode, I can definitely recommend listening to episode 31 with Allan Dafoe for a brisk, and I'd say pretty compelling, description of the challenges governments might face adapting to transformative AI. But you certainly don't need to listen to that one first. Before that, just a quick announcement or two. Firstly, if you're considering doing a philosophy PhD, our newest team member, Arden Koehler, just finished rewriting our career review of philosophy careers, so we'll link to that in the show notes. Secondly, last year we did a few episodes about operations management careers in high impact organizations, especially some nonprofits. I just wanted to flag that our podcasts and articles
on that topic have been pretty successful at encouraging people to enter that area, which has made the job market for that career path more competitive than it was twelve months ago. That said, we still list a lot of operations related roles on our job board, at current count 113 of them. In fact, speaking of our job board, I should add that it currently lists 70 jobs relating to AI strategy and governance for you to browse and consider applying for. And needless to say, we'll link to the job board in the show notes. Finally, in the interest of full disclosure, note that the biggest donor to CSET, where Helen works, is also a financial supporter of 80,000 Hours. All right, without further ado, here's Helen. Today I'm speaking with Helen Toner. Helen is the director of strategy at Georgetown University's new Center for Security and Emerging Technology, otherwise known as CSET, which was set up in part with a $55 million grant from the Open Philanthropy Project, which is their largest grant to date. She previously worked as a senior research analyst at
the Open Philanthropy Project, where she advised policymakers and grant makers on AI policy and strategy. Between working at OpenPhil and joining CSET, Helen lived in Beijing for nine months, studying the Chinese AI ecosystem as a research affiliate for the University of Oxford's Center for the Governance of AI. Thanks for coming on the podcast, Helen. Great to be here. So I hope to get into talking about careers in AI policy and strategy and the time that you spent living in China. But first,
what are you doing at the moment and why do you think it's really important work? Yeah, so I've spent the last six to nine months setting up this center that you just mentioned, the Center for Security and Emerging Technology at Georgetown. And basically the mission of the center is to create high quality analysis and policy recommendations on issues at the intersection, broadly, of emerging technology and national security. But specifically right now we are focusing on the intersection of AI and national security as a place to start and a place to focus for the next couple of years. And we think this is important work because of how AI is gradually reshaping all kinds of aspects of society, but, especially relevant to our work, reshaping how the military and intelligence and national security more generally function and how the US should be thinking about it. And we think that getting that right is really important and getting it wrong could be really bad. And the amount of work currently being put into analyzing some of the more detailed
questions about how that looks and what the US government should be doing in response, we thought was a little bit lacking. And so we wanted to bring together a team that could really look into some of those questions in depth and try and come up with more accurate analysis and better recommendations. Let's dive into actually talking about some AI policy issues and what people get right and what people get wrong about this. So a couple of weeks ago, you gave evidence to the US China Commission, which is, I guess, a commission that was set up by Congress to report back to them on issues to do with technology in the US and China.
That's right. And the title of your presentation was 'Technology, Trade and Military-Civil Fusion: China's Pursuit of Artificial Intelligence, New Materials and New Energy.' We'll stick up a link to your testimony there. Yeah, that was the title of the hearing that I testified at. Oh, that was the hearing. Okay. Right. You didn't write that yourself. How was that experience? Yeah, it was very interesting. It was a real honor to go and testify to that commission. And
it was in a Senate committee hearing room, which is a very kind of intimidating place to speak. It was encouraging as well. They sent some questions in advance that I prepared written testimony for, and then the hearing itself was mostly Q&A, and it was encouraging that the questions that they sent were very related to the types of topics that CSET had already been working on. So actually, while I was preparing, I was kind
of scrolling through our Google Drive, looking at the first and second draft reports that people had been putting together and just kind of cribbing all of their answers, which was really great. How is DC thinking about this issue? Were the people who were interviewing you and asking questions very engaged? It sounds like maybe they're really on the ball about this. Yeah, it's definitely a big topic. So a huge topic in the security community generally is the rise of China, how the US should relate to China, and AI is obviously easy to map onto that space. So there's a lot of interest in what AI means for the US China relationship. I was really impressed by the quality of the commissioners' questions. It's always hard to know in
situations like this if it's the commissioners themselves or just their excellent staff. But I would guess that at the very least, they had really good staff support, because they asked several questions where it's kind of easy to ask a slightly misinformed version of the question that doesn't really make sense and is kind of hard to answer straightforwardly, but instead they would ask a more intelligent version that showed that they had read up on how the technology worked, and on what sorts of things were concerning and what made less sense to be concerned about. That's really good. Is the government and the commission approaching this
from the perspective of, oh no, China is rising and threatening the US, or is it more an interest in the potential of the technology itself as well? So definitely different answers for the US government as a whole. I mean, it's hard to answer anything for the US government as a whole versus this particular commission. So this commission was actually set up specifically to consider risks to the US from engagement with China. So I believe it was set up during the process where China was entering the World Trade Organization and there was much more integration between the US and China. So I
believe this commission was set up to then be a kind of check to consider. Are there downsides? Are there risks we should be considering? So this commission and this hearing was very much from the perspective of what are the risks here? Should we be concerned? Should we be placing restrictions or withdrawing from certain types of arrangements and things like that? Yeah. So given that, what were the key points that you really wanted to communicate to the commissioners, make sure they remembered? I think the biggest one was to think about AI as a much broader technology than most sort of specific technologies that we talk about and think about. So I think it's really important to keep
in mind that AI is this very general purpose set of technologies that has applications and implications for all kinds of sectors across the economy and across society more generally. And the reason I think this is important is because I think commissions like the US China Commission and other parts of government are often thinking about AI the way they might think about a specific rocket or an aircraft or something like that, where it is both possible and desirable to contain the technology or to sort of secure US innovations in that technology. And the way that AI works is just so different because it is such a more general use technology, and also one where the research environment is so open and distributed, where almost all research innovations are shared freely on the Internet for anyone to access. A lot of development is
done using open source platforms like TensorFlow or PyTorch that for profit companies have decided to make open source and share freely. And so a big thing that I wanted to leave with the commission was that if they're thinking about this as a widget that they need to kind of lock safely within the US's borders, then they're going to make mistakes in their policy recommendations. So I guess they're imagining it as kind of like a tank or something, some new piece of physical equipment that they can control. And the temptation is just like, keep it for ourselves, make sure that no one else can get access to it. But that's just like a total fantasy in the
case of a piece of software or just a much more general piece of technology like machine learning. Yeah, especially in the case of machine learning, where it's not a single piece of software. I think it's likely that there will be, you know, there are already controls that apply to specific pieces of software doing, you know, militarily relevant things, for example. But if you're talking about AI or machine learning, that's just... I sometimes find it useful to mentally
replace AI with advanced statistics. I think I got that from Ryan Calo at the University of Washington. We have to keep the t-test for ourselves, right? Where whenever you're saying something about AI, try replacing it with advanced statistics and see if it makes sense. Yeah, I guess. I think there were some statistical methods that people were developing in World War
I and World War II that they tried to keep secret. Oh, interesting. What piece of analysis? Is that related to cryptography or something else? Oh, well, there's cryptography, but no, also other things. I think there's that famous problem where they were trying to estimate the number of tanks that Germany had produced, and the Germans were stupid enough, it turned out, to literally give them serial numbers that were sequential. And then they were trying to use statistics, to use the serial numbers that they observed on the tanks that they destroyed, to calculate how many existed. And I think that was a difficult
problem that they put a bunch of resources into. And then it was kind of regarded as strategically advantageous to have that. I think there are probably various other cases, although you wouldn't expect to be able to keep that secret beyond like a year or two, right? Yeah, well, they're in the middle of a total war there as well. It's a very different situation.
Very different. And also, I think that is more analogous to a specific application, so perhaps a specific machine learning model or something like that, or a specific data set that is going to be used to train a critical system of some kind. Yeah, I think protecting statistics more generally from spreading would be a much heavier lift.
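As a quick aside for readers, here's a minimal sketch of the arithmetic behind the tank story Rob is describing, assuming the standard textbook framing of the German tank problem; the function name and the serial numbers below are made up purely for illustration.

```python
# A minimal sketch of the serial-number estimate described above (the
# "German tank problem"). If you've seen k sequential serial numbers and the
# largest one observed is m, the standard minimum-variance unbiased estimate
# of the total number produced is m + m/k - 1.
def estimate_total_produced(observed_serials):
    k = len(observed_serials)
    m = max(observed_serials)
    return m + m / k - 1

# Example: five destroyed tanks carried serial numbers up to 220.
print(estimate_total_produced([19, 40, 68, 134, 220]))  # -> 263.0
```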
Yeah, I'll put up a link to that story about the tanks. Hopefully I haven't butchered it too badly. So what bad things do you think will happen if the US kind of does take the approach of trying to bottle advanced statistics and keep those advances to itself? Yeah, I think essentially, if you think of AI as this one technology that has military implications that you need to keep safely in your borders, then you would really expect that there are various things you can do to restrict the flow of that information externally. So two obvious examples would be restricting the flow of people, so restricting immigration, perhaps from everywhere or perhaps just from competitor or adversary nations. And then a second thing would be putting export controls on the technology, which would actually have a similar effect, in that export-controlled technologies, like aerospace technologies, for example, have restrictions on what's called a deemed export. Basically,
if you have a lab in the US doing something and a foreign national walks in and starts working on it, that's deemed as an export because it's kind of been exported into their foreign brain. We've both got foreign brains here right now. Indeed. Yeah, I'm working on it. So I think those kinds of restrictions make sense. First, if the technology is possible to restrict. And second, if you're going to get buy in
from researchers that it's desirable to restrict. So yeah, you can say, if you're working on rockets, rockets are basically missiles. You don't want North Korea to be getting your missile technology. You probably don't want China to be getting your missile technology. You probably don't want Turkey to, you know...
Whatever. It's very easy to build expectations in the field that it needs to stay in the country where it's being developed. And AI is different in two ways. One is that I think it just would be really hard to actually effectively contain any particular piece of AI research. And then second, and this reinforces the first one, it's going to be extremely difficult to get buy in from researchers that this is some key military advance that the US needs to contain. And so I think the most likely effect of anything that researchers perceive as restrictive or as making it harder for them to do their work is mostly going to be the best researchers going abroad. And so many American researchers, if they wanted to go somewhere else, would probably look
to Canada or the UK. But there are also plenty of people currently using their talents in the US who are originally Chinese or originally Russian, who might go home or might go somewhere else. And it just seems like an attempt to try and keep the technology here would not actually work and would reduce the US's ability to continue developing the technology into the future. I'm not sure if this story is quite true either, but I think I remember reading that there are some encryption technologies that are regarded as export controlled by the United States, but are just widely used by everyone overseas. So it's kind of this farcical thing where they've defined
certain things as dangerous, but of course it's just impossible to stop other people from copying and creating them. And so it kind of just is an impediment to the US developing products that use these technologies. Maybe I'll double check if that's true. But I guess you could imagine that it is. And that's kind of indicative of just how hard it is to stop software from crossing borders. Yeah, I don't know about that specific case. It certainly sounds plausible. A thing that is not the same, but is kind of analogous is that if you're speaking with someone who holds a security clearance, you can get into trouble if you share with them information that is supposed to be classified, but that actually everyone has access to. So things like talking about the Snowden leaks
can be really problematic if you're talking to someone who holds a clearance and who is not supposed to be discussing that information with you, even though that has been widely published. Yeah, I guess. Is that just a case where the rules are kind of set up for a particular environment and they don't imagine this edge case where something that's classified has become completely public and it hasn't been declassified, and they're like stuck? It's like everyone knows and everyone's talking about it, but you can't talk about it. I guess so. Again, I don't know the details of this case. I just know that it's something to look out for. Are there any examples of kind of software advances or ideas that people have managed to keep secret for long periods of time as a kind of competitive advantage? Yeah, I think the best, most similar example here would be offensive cyber capabilities.
Unfortunately, it's a very secretive area, so I don't know many details, but that's certainly something where we're talking entirely in terms of software and there do seem to be differences in the capabilities between different groups and different states. Again, each technique is perhaps more analogous to a single AI model as opposed to the field of machine learning as a whole. Yeah, and I guess the whole cyber warfare domain has been extremely locked down from the very beginning, whereas I guess machine learning is almost the exact opposite. It's like extremely open, even, I think, by the standards of academic fields.
That's right. And I think, again here, the general purpose part comes into play, where I think if computer security researchers felt like their work could make massive differences in healthcare and in energy and in education, maybe they would be less inclined to go work for the NSA and sit in a windowless basement. But given that it is in fact purely an offensive or defensive technology, it's much easier to contain in that way. So that was your main bottom line for the committee, that you're not going to be able to lock this down so easily. Don't put on export controls and things like that. Did
you have any other messages that you thought were important to communicate? Yeah, I think the biggest other thing would be to really remember how much strength the US draws from the fact that it does have these liberal democratic values that are at the core of all of the institutions and how the society works as a whole, and to double down on those. I think it's easy to look to China and see things that the Chinese government is doing and ways that Chinese companies relate to the Chinese government and things like that and feel kind of jealous, but I think ultimately the US is not going to be able to out-China China, and so instead it needs to do its best to really place those values front and center. Yeah. So what do you think people get most wrong about the strategic implications of AI? I'm especially wondering if there's kind of exaggerated fears that people have, which maybe you read about in the media and you kind of roll your eyes at in the CSET offices. Yeah, I think maybe a big one is around autonomous weapons and how, of all the effects that AI is likely to have on security and on warfare, how big a part of that is specifically autonomous weapons versus all kinds of other things. I think
it's very easy to think, to picture in your head a robot that can harm you in some way, whether it be a drone or some kind of land-based system, whatever it might be. But I think in practice, while I do expect those systems to be deployed and I do expect them to change how warfare works, I think there's going to be a much deeper and more thoroughgoing way in which AI permeates through all of our systems, in a similar way to how electricity in the early 20th century didn't just create the possibility to have electrically powered weapons, but it changed the entirety of how the armed forces worked. So it changed communications, it changed transport, it changed logistics and supply chains. And I think similarly, AI is going to just affect how absolutely everything is done.
And so I think there's an excessive focus on weapons, whether that be from people looking from the outside and being concerned about what weapons might be developed, but also from the inside perspective of thinking about what the Department of Defense, for example, should be doing about AI. I think the most important stuff is actually going to be getting its digital infrastructure in order. They're setting up a massive cloud contract to change the way they do data storage and all of that; thinking about how they store data and how that flows between different teams and how it can be applied, I think that is going to be a much bigger part of, when we look back in 50 or 100 years, what we think about how AI has actually had an effect. Do you think that people are kind of too worried or not worried enough about the strategic implications of AI? Kind of, all things considered, just people in general? Just all the people in DC.
I think that still varies hugely by people. I suspect that the hype levels right now are a little bit higher than they should be. I don't know. I do like that classic line about technology, that we generally overestimate how big an effect it's going to have in the short term and underestimate how big it'll be in the long term. I guess if I had to overgeneralize, that's how I'd do it. You mentioned that people kind of are quick to draw analogies for AI that sometimes aren't that informative. And I guess people very often reach for this
analogy to kind of the Cold War and nuclear weapons and talking about an AI arms race. And I have to admit I find myself doing this all the time, because when I'm trying to explain to people why you're interested in the strategic and military implications of AI, that's kind of like a very easy analogy to reach for. And I guess that's because nuclear weapons did dramatically change the strategic game for war studies or for relations between countries.
And we think that possibly AI is going to do the same thing, but that doesn't mean that it's going to do it in anything like a similar manner. Do you agree that it's kind of a poor analogy? And what are the implications of people reaching for an analogy like nuclear weapons? Yeah, I do think that's not a great analogy. It can be useful in some ways; no analogy is perfect. The biggest thing is this question of: to what extent is this a discrete technology that has a small number of potential uses, versus being this big umbrella term for many different things? And nuclear weapons are almost the pinnacle of the discrete end. It's very discrete: you can say, does this country have the capability to create a nuclear weapon or does it not? If it does, how many does it have of which types? Whereas with AI, there's no real analogy to that. Another way that I find it useful to think about AI is just sort of gradually
improving our software. So you can't say, is this country using AI in its military systems? Even with autonomous weapons, you run into the exact same problem of, like, is a landmine an autonomous weapon? Is an automated missile defense system an autonomous weapon in some way? And I think the strategic implications of this very discrete thing, where you can check whether an adversary has it and you can sort of counter this very discrete technology, are very different from just gradually improving all of our systems and making them work better or making them need less human involvement. It's just quite a different picture. Yeah, it does seem like there's something quite odd about talking about, or really emphasizing, an arms race in a technology that, as far as I can tell, is predominantly used now by kind of companies to suggest videos for you to watch and music that you're really going to like. Far more than it's being used for military purposes, at least as far as I can see at the moment. Do you agree with that? Yeah, I do agree. And I think also, in general, again with the point about overestimating the short
term effects: right now, the machine learning systems that we have seem so poorly suited to any kind of battlefield use, because battlefields are characterized by having highly dynamic environments, highly unpredictable. There's an adversary actively trying to undermine your perception and your decision making ability. And the machine learning systems that we have are just so far from ready for an environment like that. They really are pretty brittle, they're pretty easy to spoof. They do unpredictable things for confusing reasons. So I think really centering AI weapons as the core part of what we're talking about is definitely premature. Yeah, I think I've heard you say before that you expect that the first place where AI will really start to bite as a security concern is kind of with cybersecurity, because that's an environment where it's much more possible to use machine learning techniques, because you don't have to have robots or deal with a battlefield. Do you still think that?
Yeah, I mean, in general, it's much easier to make fast progress in software than in hardware. And certainly in terms of, if we're talking about states using it, then the US system for procuring new hardware is really slow. And then in software, I won't say that they're necessarily better, but the way that they basically handle cyber warfare, as far as I know, is pretty different. So I think it will be much easier for them to incorporate new technologies, tweak what they're doing, gradually scale up the level of autonomy, as opposed to saying, okay, now we're going to procure this new autonomous tank that will have capabilities X, Y, and Z, which is going to be just a much sort of clunkier and longer term process.
When people have asked me to explain why we care about AI policy and strategy, I've found myself claiming that it's possible that we'll have machine learning systems in future that are going to become extremely good at hacking other computer systems. And then I find myself wondering, after saying that, is that actually true? Is that something that machine learning is likely to be able to do, to just give you vastly more power to kind of break into an adversary country's computer systems? I expect so. Again, I'm not an expert in cybersecurity, but if you think about areas where machine learning does well, it's somewhere where you can get fast feedback, so where you can simulate, for example, an environment. So you could simulate the software infrastructure of an adversary and have your system kind of learn quickly how to find vulnerabilities, how to erase its own tracks so that it can't be detected, versus things like robotics, where it's much harder to gather data very quickly. I would expect that it will be possible.
There is already plenty of automation of some kind used in these hacking systems; it's just not necessarily learned automation. It might be hand programmed. And so it seems like fertile ground. Again, I would love to know more about the technical details so I could get more specific, but from the outside, it looks like very fertile ground for ML algorithms to gradually play a larger and larger role. Yeah. Do you know if ML algorithms have already been used in designing cyberattacks, or just like,
hacking computers in general, or is that something that's kind of yet to break into the real world? I don't believe that it's widely used. There was a competition run by DARPA. It was called the Cyber Grand Challenge or something, which was basically an automated hacking competition. This was in 2016. And I believe that the systems involved there did not use machine learning techniques.
So you mentioned earlier that electricity might be a better analogy for artificial intelligence. Yeah. Why is that? And how far do you think we can take the analogy? How much can we learn from it? Yeah, I think the reason I claim it's a better analogy, again, no analogy is perfect, is that it's a technology that has implications across the whole range of different sectors of society, and it basically really changed how we live, rather than just making one specific or a small number of specific things possible. And I think that is what we're seeing from AI as a likely
way for things to develop. Who knows what the future holds? I don't want to say anything definite in terms of how far you can take it; it's a little hard to say. One piece that I would love to look into more, and I was actually looking up books that I could read on the history of electrification just before the interview, is this question of infrastructure. Electricity is so clearly something where you can't just buy an electric widget and bring it in,
and now your office is electrified. You really need to sort of start from the ground up. And it seems to me like AI is similar, and it would be really interesting to learn about how that happened, both in kind of public institutions, but also in people's homes and in cities and in the countryside, and how that was actually rolled out. I don't know. I'll get back to you. So this analogy to electricity has become a little bit more popular lately. I think
Benjamin Garfinkel wrote this article recently, kind of trying to be a bit more rigorous about evaluating how strong the arguments are that artificial intelligence is a really important leverage point for trying to influence where the future goes. And I guess, yeah, when I imagine it more as electricity rather than as nuclear weapons, then it makes me feel a little bit more skeptical about whether there's much that we can do today to really change what the long term picture is or change how it pans out. You can imagine kind of an electricity and security analysis group in the late 19th century trying to figure out, how do we deal with the security implications of electricity, and trying to make that go better. I guess maybe that would have been sensible, but I guess it's not entirely obvious. Maybe it's just that looking at it from so far away creates the illusion that, well, everyone's going to end up with electricity soon, so this doesn't have big strategic
implications, but perhaps it did. Have you given any thought to that issue? Not as much as I would have liked to. And again, maybe I should go away and read some books on the history of electricity and then get back to you. I do expect that there could have been more thought put into the kinds of technologies that electricity would enable and the implications that those would have. And that is something that we haven't begun doing at CSET,
but that I would be really interested to do in the future. You know, so far, we've been focused on this kind of US China competition angle, but it would be really interesting to think through, beyond autonomous weapons, what types of changes might AI make and what would that imply? So, yeah, in the electricity case, that might be: if you have much more reliable, much faster communication between commanders and units in the field, what does that imply? How does that change what you can do? I don't know how much that was thought through in advance and how much it might have been possible to think through more in advance, but it would be interesting to learn more about. Yeah, it'd be really interesting to find out whether people at the time thought that electricity had really important security implications, and were worried that the country that gets electricity first and deploys it is going to have a massive advantage and have a lot of influence over how the future goes. I mean, I guess it kind of makes sense, I suppose. I think, yeah, it was rich countries at the time that probably electrified earlier on, and maybe that really did help them with their colonial ambitions and so on, because they just became a lot richer. Yeah, certainly. I think it also makes it clear why it's a little strange to say, like, oh, who's going to get AI first? Who's going to get electricity first? It's like, well, it seems more like: who's going to use it in what ways, and who's going to be able to deploy it and actually have it be in widespread use in what ways? I guess if you imagine kind of each different electrical appliance as kind of like an ML algorithm, then maybe it starts to make a little bit more sense, because you can imagine electronic weapons, which I guess didn't really pan out, but you could have imagined that the military would use electricity perhaps more than we see them using it today, and then people could have worried about how much better you could make your weapons if you could electrify them. Yeah, perhaps so,
yeah, if that's the case, if AI is like electricity, then it seems like the US government would kind of have to restructure just tons of things to take advantage of it. So it seems then kind of likely that actual application of AI to government and security purposes is probably going to lag far behind what is technically possible, just because it takes so long. Military procurement is kind of notoriously slow and expensive, and it takes a long time for kind of old infrastructure to be removed and replaced by new stuff. I think nuclear systems until recently were still using floppy disks that they'd totally stopped manufacturing, which actually... I think you're facepalming, but I think that's horrible.
No. Well, I'm not sure it is, because it had been proven to work. Like, do you really want to fiddle with something in nuclear systems? I think there was a case for keeping it, which they did point out. Anyway, the broader point is, yeah, government systems in general are replaced slowly, and sometimes mission critical military systems are replaced even more slowly. So,
yeah. Is it possible that it will just be a little bit disappointing in a sense, and the government won't end up using AI nearly as much as you might hope? Yeah, I think that's definitely possible. And I do think that the places where it will be implemented sooner, will be in those areas that are not mission critical and are not security critical. Things like all of the DoD is basically one huge back office. So all of the logistical and HR and
finance systems, there's an increasing number of commercial off-the-shelf products that you could buy that use some form of machine learning to streamline things like that. And so I expect that we'll see that before we see two drone swarms battling it out with no humans involved over the South China Sea or wherever it might be. Yeah, I suppose I wonder whether that can kind of tamp down on the arms race, because if both the US and China kind of expect that the other government is not going to actually be able to apply ML systems, or not take them up very quickly, then you don't have to worry about one side getting ahead really quickly, because they both expect the other side to just be slow government
bureaucracy. So, yeah, you don't worry about one side tooling up way faster than you can. Yeah, I think that definitely maybe tamps it down a little bit. I do think that the whole job of a military is to be paranoid and thinking ahead about what adversaries might be thinking. And there's also been a history of the US underestimating how rapidly China would be able to develop various capabilities. So I think it's natural to still be concerned and alarmed about what might be being
developed behind closed doors and what they might be going to field with little warning. Are there any obvious ways in which the kind of electricity-to-AI analogy breaks down? Any ways that AI is kind of obviously different than electricity was in the 19th century? I think the biggest one that comes to mind is just the existence of this machine learning research community that is developing AI technologies and pushing them forward and finding new applications and finding new areas that they can work in and improving their performance, and the fact that that community is such a big part of how AI is likely to develop. I don't believe there's an
analogy for that in the electricity case. And in a lot of my thinking about policy, I think considering how that community is likely to react to policy changes is a really important consideration. And so I'm not sure that there's something similar in the electricity case. I thought you might say that the disanalogy would be that electricity is a rival good, a material good, where two people can't use the same electricity. But with AI as software, if you can come up with a really good algorithm, it can be scaled up and used by millions, potentially very quickly.
Yeah, that's true as well. Definitely. I guess there's another way that it could be transformative perhaps a bit more quickly, because you don't necessarily need to build up as much physical infrastructure. Yeah, that could be right. People have also sometimes talked about data as kind of the new oil, which has always struck me as a little bit daft, because oil is this rivalrous good, where two people can't use the same barrel of oil, whereas data is easily copied, and kind of the algorithms that come out of training on a particular set of data can be copied super easily. It's like
completely different from oil in a sense. Yeah. Do you kind of agree that's a misleading analogy? I do, and I think it's for the reason you said, but also for a couple of other reasons, a big one being that oil is this kind of all purpose input to many different kinds of systems, whereas data is in large part very specific: what kind of data you need for a given machine learning application is pretty specific to what that application is for. And I think people tend to neglect that when they use this analogy. So the most common way that I see this come up is people saying that, well, I think Kai-Fu Lee coined the phrase that if data is the new oil, then China is the Saudi Arabia of data. And this is coming from the idea that, well, China has this really large population and they don't have very good privacy controls, so they can just kind of vacuum up all this data from their citizens. And then because data is an input to AI, therefore the output is like better
AI. And this is some fundamental disadvantage for the US. And I kind of get where people are coming from with this, but it really seems like it is missing the step where you say: who is going to have access to what kind of data, and what are they going to use it to build? I would love to see more analysis of what kind of AI enabled systems are likely to be most security relevant. And I would bet that most of them are going to have very little to do with consumer data, which is the kind of data that this argument is relevant to.
Yeah, I guess the Chinese military will be in a fantastic position to suggest products for Chinese consumers to buy on whatever their equivalent of Amazon is, using that data, but potentially it doesn't really help them on the battlefield. Right. And if you look at things like satellite imagery or drone imagery and how to process that and how to turn that into useful applications, then the US has a massive lead there. So
that seems much more relevant than any potential sort of advantage that China has. Oil is mostly the same as other oil, whereas data is not the same as other data. It's kind of like saying PhD graduates are the new oil. The thing is: PhD graduates in what, capable of doing what? They're all
like very specific to particular tasks. You can't just sub in, like, ten PhD graduates. Yeah, and I mean, there are definitely complications that come from things like transfer learning getting better and better, which is where you train an algorithm on one data set and then you use it on a different problem, or you sort of retrain it on a smaller version of a different data set. And things like language understanding: maybe having access to the chat logs of huge numbers of consumers has some use in certain types of language understanding. So I don't know. I don't think it's a simple story, but I guess that's the point. I think the story people are telling is too simple.
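As an aside, here's a minimal, illustrative sketch of the kind of transfer learning Helen describes: retraining only the final layer of a model pretrained on one dataset so it works on a different task. It assumes PyTorch and torchvision; the model choice, the five-class task, and the random stand-in data are all hypothetical details for illustration, not anything from the conversation.

```python
# Illustrative sketch of transfer learning: take a model trained on one
# large dataset (ImageNet) and retrain only its final layer on a small
# dataset for a different task. All specifics here (ResNet-18, 5 classes,
# random stand-in data) are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on the "first" dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and swap in a new final layer for a hypothetical 5-class task.
model.fc = nn.Linear(model.fc.in_features, 5)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for the "smaller version of a different data set".
images = torch.randn(32, 3, 224, 224)
labels = torch.randint(0, 5, (32,))

model.train()
for _ in range(3):  # a few quick passes over the small dataset
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```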
So let's push back on that for a second. Let's say that we get some kind of phase shift here where it's kind of, we're no longer just programming machine learning systems to perform one specific task on that kind of data, but instead we do kind of find ways to develop machine learning systems that are good at general reasoning. Yeah, they learn language and they learn general reasoning principles. And now it seems like these
machine learning algorithms can perform many more functions, eventually go into novel areas, and learn to act in them in the same way that humans do. Is that something that you consider at all? Is that a vision that people in DC think about or that people at CSET think about at this point? Not something that I consider in my day job. Definitely something that's interesting to read about on the weekends. I think in DC there's a healthy skepticism to that idea. And certainly
given that CSET is focused on producing work that is going to be relevant and useful to decisions that are coming up in the near term, it's not really something that's in our wheelhouse. So something I saw you were arguing in your testimony is that the kind of AI talent competition, inasmuch as there is one, is kind of the US's to lose. I guess a lot of people imagine that over time China is going to just probably overtake the United States in AI research, in the same way that it's kind of overtaking the US economy just through force of population. But I guess you think that's wrong. Yeah, I do. And I think it's because it's really easy to underestimate the extent to which the US
is just a massive hub for global talent. When I was in China, I had two friends who were machine learning students at Tsinghua University, a very prestigious Chinese university, and I was asking them about where they were hoping to get their internships over the summer. And it was just so obvious for both of them that the US companies were by far the best place to get an internship, and therefore it would be super competitive, and therefore they probably wouldn't get it, and so they'd have to go to a different place. And I think it's really easy to overlook from within the US how desirable it is to come here. And I included in my testimony at the end a figure that came from a paper looking at global talent flows. And the figure relates to inventors, so holders of patents, which is not exactly the
same as AI researchers, obviously, but I included it because it's just really visually striking. Basically, it's looking at different countries and their sort of net position in terms of how many inventors, where an inventor is a patent holder, they import versus export. And first off, China is a massive net exporter, so they're losing something like, roughly, I'm just eyeballing this chart, around 50,000 people a year net leaving China.
And then all these other countries, they're sort of around that same range, in the sort of thousands or maybe tens of thousands, and most of them are either exporting or they're very slightly importing. And then you just have this massive spike at the far right of the chart for the United States, where its net importer position is around 190,000 people, which is just sort of way off the scale of what all these other countries are doing. And I haven't seen a chart like that for AI researchers or for computer science PhDs, but I would guess that it would be pretty similarly shaped. And I think China is going to gradually do a better job of retaining some of its own top talent at home. But, short of massive political change, I really can't see it becoming such a hub for people from other countries. And certainly, if you think about the prospect of the
US losing 50,000 really talented people to go live in China because they think it's a better place to live, I just think that's completely ludicrous, really. And again, this comes back to the point of the United States leaning into the advantages that we do have, and those really do include political freedom of expression and association, and even just having clean air and good infrastructure. Maybe that last point, the good infrastructure, is one where China can compete, but everything else, I think the US is in a really strong position if it will just maintain that. Yeah, I think I have heard Tyler Cowen make the argument that it's clear that DC isn't taking AI that seriously because they've done absolutely nothing about immigration law to do with AI. There's no particular program for AI researchers to come into the United States,
which you'd think there would be if you were really worried about your competitive situation and losing technological superiority in that technology. If you think that the US government should do anything about AI, do you think changing immigration laws so that AI scientists can come to America is the no-brainer? Yeah, I definitely think that's the no-brainer if you ignore political considerations. And the problem is that immigration is just this hugely political issue here, and there is so much deadlock on all sides. And if you try to make some small, obvious-seeming change, then people will want it to become part of a larger deal. And one person I heard who worked a lot on immigration policy said that if you try to put any kind of immigration legislation through Congress whatsoever, it's just going to snowball and become comprehensive immigration reform, which is then this huge headache that no one wants to deal with. So I do think it's the obvious low hanging
fruit, aside from political considerations. But the political considerations are really important. So in our project on this, we are looking into changes that don't need legislation, that can just go through agencies or be done through executive action, in the hope that those could actually be achieved. I don't know. I think Tyler Cowen's quote is, like, cute, but not necessarily reflecting the way, you know, government actually works.
Yeah. You said in your testimony that you thought it would be pretty dangerous to try to close up the openness of the current AI ecosystem. How could that backfire on the US? The thing I'm most concerned about would be if the US government is taking actions in that direction that don't have a lot of buy in from the research community. I think the AI research community cares a lot about the ability to publish their work openly, to share it, to critique it. There
was a really interesting release recently from OpenAI, where they put out this language model, GPT-2, which could kind of generate convincing pieces of text. And when they released this, they deliberately said that they were going to only release a much smaller version of the model and not put out the full version of the model, because of concerns that it might be misused. And the reaction to this within the research community was pretty outraged, which was really interesting given that they were explaining what they were doing. They were saying that it was explicitly for sort of reasons of public benefit, basically, and still they got all this blowback. And so I think if the US government took actions to restrict publishing in a similar way, it would be much more likely to do that in a way that would be seen as even worse by the AI research community. And I do think that would prompt at
least some significant number of researchers to choose a different place to work, not to mention also slowing down the US's ability to innovate in the space, because there obviously are a lot of great symbiotic effects you get when researchers can read each other's work openly, when they're using similar platforms to develop on, when there are sort of shared benchmarks to work from. So, yeah, I guess an attempt like that to try to stay ahead of everyone else could end up with you falling behind, because people just jump ship and leave and want to go do research elsewhere. And then also your research community becomes kind of sclerotic and unable to communicate. Right. And so I do think there's plenty of room for, and maybe a need for, a conversation about when complete openness is not the right norm for AI research. And I really applaud OpenAI for beginning to prompt that conversation,
but I think it's very unlikely that the government should be kind of leading that. So let's just be a little bit more pessimistic here about the odds of CSET having a positive impact for a second. What reason is there to think that the US government is realistically going to be able to coordinate itself to take predictably beneficial actions here? Could it be that it's just better for the government to kind of stay out of this area, and companies that kind of aren't so threatening to other countries just lead the way in this technology? Yeah, I think I would not describe the effect we're trying to have as trying to get some kind of coordinated, whole-of-government response that is sort of very proactive and very large. Instead, I would think that there are going to be government responses to many aspects of this technology, some of which may be sort of application-specific regulation around self-driving cars or what have you, and some of which may be more general. So there's definitely been
a lot of talk about potential restrictions on students or restrictions on companies and whether they're able to work with US partners. So I think there are going to be actions taken by different parts of the government, and we would hope that our work can help shape those actions to be more productive and more likely to have the effects that they're intended to have, and better based on a real grounding in the technology, as opposed to trying to carry out some grand AI strategy, which I think I agree would be kind of dicey if you could get the strategy to be executed, and certainly extremely difficult to get to the point where any coordinated strategy is being carried out. At 80,000 Hours, we're pretty excited for people to go into AI policy and strategy and do the kind of thing that you're doing. But I guess the biggest kind of pushback I get is from people who are skeptical that it's possible to reliably inform policy on such a complicated topic in a way that has any reliable effect. Even if you can understand the
proximate effects of the actions and the things that you say, the effects further down the line, further down the chain of causation, are so hard to understand. And kind of the government system that you're a part of is so chaotic and full of unintended consequences. But it seems like even someone who's very smart and kind of understands the system as well as anyone can is still going to be at a bit of a loss to figure out what they should say that's going to help rather than hurt. Do you think there's much to this critique of AI policy work and kind of other difficult policy work? I think it's a good critique in explaining why it doesn't make sense to come up with grand plans that have many different steps and involve many different actors and solve everything through some very specific plan of action. But I also think that kind of the reality of how so much of policy works is that there are people who are overworked, who don't have time to learn about all the different areas that they are working on, who have lots of different things they're thinking about. Maybe they're thinking about their career, maybe they're
thinking about their family, maybe they're hoping to do a different job in the future. Given that, I do think there's a lot of room for people who care about producing kind of good outcomes in the world and who are able to skill up on the technical side and then also operate effectively in a policy environment. I just think there's a lot of low hanging fruit in slightly tweaking how things go, which is not going to be some long term plan that is very detailed, but is just going to be having a slightly different set of considerations in mind. An example of this:
This is kind of a grandiose example, but in the Robert Caro biography of LBJ there's a section where he talks about the Cuban missile crisis, and he describes Bobby Kennedy having a significant influence over how the decision making went there, simply because he was thinking about the effects on civilians more than he felt like the other people in the room were. And that slight change in perspective meant that his whole approach to the problem was quite different. I think that's a pretty once in a lifetime, once in many lifetimes experience. But I think the basic principle is the same. I guess it's the case that working with government, you get this huge potential leverage from kind of the power and the resources that the government has access to. And then on the flip side, you take this hit that it's potentially a lot harder to
figure out exactly what you should say, and there's a good chance that the actions that you take won't have the effect that was desired, and you've kind of just got to trade off these different pros and cons of using that particular approach to try to do good. Yeah, and I definitely think that there's a difficult thing of, when you're deciding how you want to shape your career, it's difficult to choose a career where you will predictably end up in some situation where you can have a lot of leverage over some important thing. And so it's more likely that you'll be able to find something where you can either be making slight changes often, or where there is some chance that some important situation will come up and you'll have a chance to play a role in it. But then the problem is, if you go with 'there's a chance that a big situation will come up and you'll get to play a role in it,' there's a much greater chance that it won't. And then you'll spend most of your career sort of doing much less important stuff.
And I think there's, like, a difficult set of prioritization and motivation questions involved in: is that the kind of career that you want to have, and how to feel about the fact that, looking back, probably, most likely, you'll feel like you didn't accomplish that much, but maybe ex ante there was a chance that you would be able to be part of an important time. So, all the way back in February 2017, there was this two day workshop in Oxford that led to this report, which we've talked about on the show a few times before, called 'The Malicious Use of Artificial Intelligence,' which had fully 26 authors from 14 different institutions kind of writing this, I guess, consensus view on what concerns you all had about how AI might be misused in future. Yeah. You were one of many authors of this report. Two years after it was written, how do
you think it holds up? And what might you say differently today compared to what was written then? I think it holds up reasonably well. The workshop was held in February 2017, and then the report was published in February 2018, building on the workshop. And something that was amusing at that time was that we had mentioned in the report the possibility that machine learning would be used to generate fake video. Essentially, I believe,
in the workshop, we talked about it being used for political purposes. And then in the meantime, between the workshop and the report, there were actually the first instances of deepfakes being used in pornography. And so that was interesting to see: that we'd kind of got close to the mark, but not necessarily hit the mark on how it might be used. I think the biggest thing, if we were doing it again today, the biggest question in my mind, is how we should think about uses of AI by states that, to me certainly, and to many Western observers, look extremely unethical. I remember at the time that we held the workshop,
there was some discussion of: should we be talking about AI that is used in ways that have kind of bad consequences, or should we be talking about AI that is used in ways that are illegal, or what exactly should it be? And