How USAID Uses AI for Good


- [Brandie Nonnecke] Many of you have probably noticed there have been quite a few advancements in the AI governance space. My head is literally spinning, and I'm waiting with bated breath until the EU AI Act text actually comes out. But on October 30th, the White House issued its executive order on AI outlining eight priority actions the federal government must take to ensure the responsible development and use of artificial intelligence. I'm also the host of a show called "TecHype," and in that show we do two things.

First, I sit down with experts where we debunk misunderstandings around emerging technologies, discuss the real benefits and risks, and analyze different technical and policy strategies we can implement to better ensure we harness emerging technologies for good. In addition, we've launched a series called "TLDR." Who in here knows what TLDR is? A few? Too long, didn't read. So I actually analyze and summarize those lengthy policies, laws, and regulations that you all might not want to analyze. I analyze and summarize them in short videos, and we have a series on the White House executive order, a nine-part series: one outlining the executive order in its entirety, and then eight additional videos going deep into each of those eight priority actions. So I really hope that you'll check out those TLDRs and share them with your colleagues.

You can watch them at techype.org, and that's with one H, techype.org. So today we are here for a very important conversation in light of the White House executive order on AI and its directives for federal agencies, including USAID.

I'm really honored to be able to introduce Counselor Clinton White for our introductory remarks today. Clinton White serves as a counselor for the United States Agency for International Development. He has more than 20 years of experience in the public sector and is a member of the Senior Foreign Service. He also serves as counselor to the administrator, Samantha Power. He is the seniormost Foreign Service officer, and supports USAID's 80 missions around the world, including spearheading innovative new ways to further the mission of USAID, such as ways to support digital democracy and development.

I'm very excited for Clinton's remarks about how USAID is looking at the White House executive order on AI, and its call for USAID to explore how it will responsibly design and deploy AI in support of its mission. Just a few more introductory remarks for Clinton. Your bio is very, very accomplished.

Just a couple of other things here. Mr. White also worked in national and municipal government and with a range of private sector and civil society groups, including those representing women, youth, and marginalized communities, and works very hard to improve the lives of families, communities, and countries. So with that, Clinton, I would love for you to come up and give us introductory remarks to set us up for an engaging panel discussion. (audience applauding) [Clinton] Thank you.

Thank you so much, Brandie, and I am so happy to be here today on the campus of Berkeley. This is the first time that I've actually been out here, so it is actually a great opportunity for me to also explore the campus. So one thing I will say, and then I want to digress a little bit off of my talking points, but first, I wanna thank everyone here for the introduction from Brandie, and also thank the UC Berkeley CITRIS Policy Lab for hosting this timely and important discussion. At USAID, we're proud of our longstanding partnership with UC Berkeley to open up access to the latest research innovation through the Development Impact Lab, co-managed by the Blum Center for Developing Economies and the Center for Effective Global Action at UC Berkeley. So I'm glad to be here with all of you, and when I say with all of you, I'm glad that we're also here as a larger team.

And so what you will see from our delegation that's visiting Berkeley today: we have members from our mission in Kenya who are focusing on digital and technology issues as we're also doing outreach on this particular road show here in California. We have members from our Pakistan mission as well as distinguished representatives of Fatima Jinnah Women University in Pakistan right here with us, as they've been traveling with us throughout the country, engaging and learning more to take certain things back to their country, but also seeing how they can engage and impart their wisdom to us at USAID, at universities, and in other communities. We also have members from our mission in Colombia. We have members from our mission in Fiji. We have members from our mission in Uganda.

So as you know, USAID is very international. We have brought the international here with you as well. And I also don't want to forget our DRC, Democratic Republic of the Congo, member who is also here with us. So just to say that we take this very seriously, and that we are all here to engage and work with you.

The rapid development and adoption of digital technology such as AI is transforming how people worldwide access information, goods, and services. Digital technology, and AI in particular, has the power to spur economic growth, improve development outcomes, and lift millions out of poverty. However, AI also opens up new avenues for authoritarians and other malign actors to suppress and surveil their populations. It equips malign actors with tools to propagate false narratives and spread online hate speech and harassment at scale. And it can be used to target civilians, silence dissenting voices, and manipulate public opinion when left unchecked.

The United States and other democracies have learned that our greatest assets in pushing back against authoritarianism are our core values rooted in democracy and human rights and equitable economic opportunities for all. And our technology must reflect that. AI can open up a world of potential, and it presents immense opportunities for good when the design, development, and deployment of AI tools are gender-sensitive and human-centered, and the potential harms are thought through and addressed well before and throughout implementation.

There are many ways in which AI can advance development and humanitarian outcomes and help us achieve the Sustainable Development Goals. I'd like to highlight some specific ways that AI can strengthen democratic processes and combat violent and authoritarian trends that can shred the social fabric of communities and countries. AI can be used to protect citizens' rights and to help prevent conflict. It can assess complex social and behavioral phenomena, from human trafficking and transnational crime to violence and extremist activity, rapidly and at a massive scale, to enable the creation of early warning systems to help protect civilians in conflict zones. It can provide avenues for civil society, investigative journalism, independent media, and human rights defenders to network and communicate more effectively. It can serve as a pivotal tool in strategically countering misinformation, disinformation, and hate speech.

And in times of conflict, it can help us document atrocities and hold perpetrators accountable. Let me give you one example of what we're currently doing. In Indonesia, for example, USAID has supported the development of a state-of-the-art AI system, the misinformation early warning system, to identify altered and manipulated social media content and disinformation, and determine how it is being spread to help counteract malign narratives. This program and others like it can help promote tolerance and harmony among diverse populations, which has the potential to decrease the risk of conflict.

I also just wanna mention a little bit about some of the ways that we are helping within the AI space. So USAID is also supporting the development of a civic space early warning system, which uses machine learning to forecast potential shifts toward repression of civil society and independent media and broader freedom of expression. By analyzing over 90 million news articles, the University of Pennsylvania showed that machine learning can predict the closing of civic space one, three, and six months in the future with statistically significant accuracy. This AI-powered platform is designed to provide policymakers and civil society with advanced warning of major changes in civic space across more than 50 countries.
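
The internals of the Penn system aren't detailed in these remarks, but the general approach, training a text classifier over country-level news to flag likely civic-space closures at a fixed horizon, can be sketched in a few lines. Everything below (the toy articles, the labels, the three-month horizon) is an illustrative stand-in, not the actual system:

```python
# Toy sketch of civic-space forecasting framed as text classification.
# The data, labels, and horizon are hypothetical stand-ins, not the
# University of Pennsylvania system described above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# One example per country-month: news text, plus a binary label for
# whether civic space closed within the next three months.
texts = [
    "parliament debates new media accreditation rules",
    "journalists detained after protest coverage",
    "court upholds NGO registration appeal",
    "government orders internet shutdown in capital",
    "civil society groups hold open policy forum",
    "opposition newspaper office raided by police",
]
closed_within_3_months = [0, 1, 0, 1, 0, 1]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(class_weight="balanced"),
)
model.fit(texts, closed_within_3_months)

# Score a new month of reporting; the real system would run over millions
# of articles across 50+ countries, with significance tests against a
# naive baseline before any "statistically significant accuracy" claim.
print(model.predict_proba(["security forces block press briefing"])[0, 1])
```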

So as I mentioned, and as you will hear a little bit more about from Vera, there is USAID's AI Action Plan, which gets to what you were mentioning about the executive order. This plan lays out steps which we can take to responsibly engage with the potential benefits AI has to offer while managing its significant risks. These steps include constructing appropriate safeguards against risk and potential harms to communities, investing in talent that prioritizes a responsible approach to AI, and understanding how AI or any new technology is connected to the broader digital ecosystem and its different stakeholders, and how we can strengthen these ecosystems to advance the responsible use of technology. And consistent with this plan, USAID is partnering with development agencies from the United Nations and Canada, along with the Bill and Melinda Gates Foundation, to invest in strengthening the responsible AI ecosystem in Africa. We're also supporting the Global Index for Responsible AI, a first-of-its-kind effort to map how over 140 countries are progressing in terms of responsible AI.

So I'm not gonna sit here and speak to you all day, because I know that we have a very engaging panel discussion coming, which I'm also looking forward to. But I would just like to say again, thank you so much for allowing us to be here with you, allowing us to have this discussion, which is so important as we think about the future, but also think about the present. And at USAID, as we're thinking about our business model, what is it that we have to reimagine? What is it that we have to change as development practitioners in the way that we're doing business? We're seeing so much that has changed. This agency is not doing the same development work that we were doing 10, 20, 30 years ago, and now we really do need to figure out how we embrace technology such as AI and other ways to effectively implement our programs and activities, but also how we're solving problems and creating solutions for the world. Because when it comes down to it, it is about relieving suffering. It is about giving people hope. It is about ensuring that everyone is represented and everybody's voice is heard, but it also comes right back down to human dignity, that everybody is valued in this country and around the globe. Thank you.

(audience applauding) That too. - Thank you, Counselor Clinton. Thank you. And if I could have the panelists come up and take your seats, I'll take this first seat here, the moderator seat.

And while they're making their way up to the stage, I just wanted to pick up on some of the points that Clinton addressed. We're at this very, you know, monumental time, right? Where we can harness artificial intelligence in ways that enable us to achieve the Sustainable Development Goals, to be able to promote democracy worldwide. However, in that same breath, if we don't do it in a responsible manner, we risk undermining everything we've worked for. So in our panel today, we're gonna talk about some of those strategies. So the way it's going to go: I'll first introduce each of the panelists.

I have some questions that I'll be asking, and we're gonna open it up for Q&A at about 11:40. I will run around with a handheld mic, so please wait to ask your question until I give you the mic, because we are recording this session. So first to my left is Ritwik Gupta, and Ritwik is one of our AI Policy Hub fellows here at UC Berkeley.

Ritwik wears many hats, but we'll highlight that you are currently an EECS PhD student. Are you getting close to graduation? [Ritwik] Almost. - Okay. - Hopefully. - Are you ready? Okay, all right. And then next to Ritwik, we have Mia Mohring Larsen, who is the tech advisor for human rights and global engagement in the office of Denmark's Tech Ambassador.

Are people in Denmark really excited about the EU AI Act? - I think we are. I need to. - Okay, I am asking you to speak on behalf of all of Denmark. - I think we are. - No pressure, you're fine. [Mia] I think we're pretty ecstatic, but we can come back to that. - Yes, sure. I really hope we do. And then next to Mia is Betsy Popken, and she is the executive director of the Human Rights Center at the UC Berkeley School of Law.

Also looking at the use of AI tools, right, in your work. And machine learning. So I hope that we get into that. And then last but definitely not least is Vera Zackem, and she serves as USAID's Chief Digital Democracy and Rights Officer.

So to kick us off, actually, I'd like to first go with you, Vera. I have some seeded questions, but just off the top of the head, I'm gonna take my own moderator's privilege. I'd love to hear from you: what is USAID's vision for how to responsibly develop and deploy AI systems? [Vera] So, no sweat. Not a simple question, right? But seriously, first of all, Brandie, to you and to the CITRIS Policy Lab, to CLTC, to the Goldman School of Public Policy, to UC Berkeley writ large, on behalf of USAID, and as the counselor just mentioned, we're just so delighted to be here to kind of showcase with our international delegation, and really bring a little flavor of the world and the various missions that are represented here to UC Berkeley.

Before I just answer your question, here is what makes my work, I think, particularly exciting. I've had the privilege and honor of serving as the Chief Digital Democracy and Rights Officer since May of '22. But more importantly, I've been able to get to the field on the ground, and see how things like artificial intelligence and emerging technologies are actually impacting people.

And fundamentally, how this actually impacts various segments of society in all the countries that we serve. Of course, I have not traveled the whole world, but I've been in a few pockets where I have seen on the ground what this actually really means. So beyond all of the policy conversations, I think what USAID offers is the ability to create this connective tissue between what we're doing at the policy level, what we're doing through development and diplomacy in the multi-stakeholder community, and what this actually means for the people, and how this impacts the people. So Brandie, as you rightly mentioned, the president signed the executive order on artificial intelligence, which reaffirms the United States' commitment to developing the world's most advanced AI technologies. It also clearly positions the United States as a global leader to make sure that we are advancing and we're deploying safe, secure, and trustworthy AI.

And I think at the core of it, human rights has to be at the center of it. And this is what we mean when we're talking about rights respecting AI. It is everything from human rights impact assessments to development, to deployment, to even thinking about what artificial intelligence needs.

Throughout the course of the life cycle, and really through various iterative stages, we want to make sure that we're putting human rights at the center of the design process and human rights at the center of conversations, and making sure that, as we're thinking about artificial intelligence, it is in adherence to international human rights commitments and principles. One of the things that we're doing in the development space for us at USAID is the USAID AI Action Plan. And that action plan has been developed based on years of learnings from USAID and our overall development sector on the use of AI to address both development and humanitarian assistance problems. It is everything from how artificial intelligence can help with crop diseases, to providing loans to financially excluded smallholder farmers, to addressing global health issues such as tuberculosis, to making sure that we're matching potentially at-risk youth with jobs. And yes, it is also about how we make sure that we're creating digital ecosystems that support democratic values and human rights, particularly in a world right now where we are experiencing democratic backsliding and civil unrest.

I'll just give a few examples, and I'll turn it over to our other colleagues and panelists. Clinton mentioned the Indonesia example, and I will tell you, it is so cool what they're doing with AI, particularly in strengthening information integrity and resilience. And again, focusing on the end user, how this is actually impacting the people in Indonesia. I think we've already seen really, really great results from that. But there's also Georgia, you know, a country that has also seen not insignificant unrest over the years, but one where strengthening information integrity and resilience is also incredibly important.

So in Georgia, what we've done with our partners is actually develop innovation competitions, partnering also with civil society, and deploy AI to create speech-to-text transcription tools for Georgian-language video and audio, which allow researchers who focus on information manipulation and on strengthening the resilience and integrity of the information environment to really detect, monitor, and respond. And you know, I'll just mention two other things. One is, you know, we know that artificial intelligence, particularly in developing countries, also can be used for good. It's also an opportunity, but it also can be used, if you will, negatively, particularly posing risks to women and girls through technology-facilitated gender-based violence. So one really cool example, I think, is coming out of our Latin America region: we've developed an AI chatbot called the SARA Chatbot. And it was developed and deployed through the Infosegura regional project, in partnership with the UN Development Programme and in collaboration with USAID.

And what this chatbot provides is 24/7, cost-free information and guidance to any woman who may be at risk of violence. It is strictly confidential, and it enables the victim to assess the situation and turn to government or civil society in her country. Right now it's deployed throughout Central America and the Dominican Republic. And then the other thing I'd mention, just on the development, diplomacy, and policy side of this space and how USAID is involved, is really thinking through questions like: how do we ensure that we do not overly automate decision making that might impact human beings and societies? How do we also ensure that we're avoiding the risk that a generative AI application will produce inaccurate information for a USAID beneficiary? And so to answer some of these questions, USAID is very actively engaged across the multi-stakeholder community and across the US government, partly through the AI executive order, and is also working with the Secretary of Commerce and the National Institute of Standards and Technology to develop the global development playbook for AI risks to complement the NIST AI Risk Management Framework.

So let me just pause there. I know that's a lot, but suffice it to say, we are really excited by the order. We're also, of course, eager to see how the EU AI Act will develop as well, and I'm sure you'll mention a little bit about that. But fundamentally, I think what we're looking to do and really be part of is seeing how AI can positively help society, thinking about human rights and rights-respecting AI throughout the development lifecycle, and ultimately how this actually impacts the people that we serve in our 80 missions around the world.

- Yeah, thank you so much, Vera. I mean, one theme I'm hearing in your remarks is this idea of the need for collaboration, collaboration among partners, and collaboration with those who are most affected by those technologies. So my next question is for Betsy and Mia, you both work in the human rights and tech space within academia and government. What role are those institutions currently playing in helping to develop rights respecting AI, and what roles do you see them playing in the future? - I can start. At the Human Rights Center, we're conducting a human rights assessment of generative AI with a focus on large language models.

And we have brought together lawyers, human rights experts, and computer scientists to do this together. We are doing a traditional human rights impact assessment, which essentially is an assessment of the human rights risks of LLM technology. It also identifies opportunities, but the focus is on risks.

And we're focusing on three fields in particular, three professional fields: the use of LLMs by lawyers, by educators, and by journalists. And by diving deep in those fields, we hope to elicit kind of more than the general platitudes that have been kind of identified thus far. And we'll dive deep both through kind of a scholarly analysis, but also we're heavily engaged in stakeholder engagement globally. We've learned from an expert in LLMs and law in Colombia that judges have used LLMs to help them make decisions in their cases in South America and Latin America. We've learned from an expert in LLMs and journalism in South Africa that journalists are using LLMs to help them write sports stories. We've learned from an educator and LLM expert in the UAE that educators are using LLMs to assess student work.

So you know, that just kind of shows we do the kind of literature review, the stakeholder engagement, and then kind of use our own assessment of the human rights risks and opportunities that are at play. The cool thing is we are pairing it with what we're calling a model evaluation, what Brandie has written about as a human rights algorithmic assessment. But for that we're focusing in on one particular area, investigative journalism. And we have a computer scientist on our team who is helping to run that. So we're pairing human rights experts up with computer scientists to develop a foundation pilot that can hopefully be built upon further, both by us and by companies developing this technology. So that's just one example of a way that an academic institution can get involved.

And I really think the multidisciplinary nature of what we're doing is so key to this. - Yeah, I could not agree with you more. You gave the example of judges using these LLMs to help them make decisions.

And we know full well that sometimes AI-enabled systems can perpetuate biases and discrimination. Now the EU AI Act is a risk-based approach, right? It's looking at high-risk areas such as sentencing, which this would fall under. How do they identify and mitigate those risks appropriately? So Mia, you're facing this head on right now with the EU AI Act. How are governments, including Denmark, setting up appropriate processes to ensure that developers of these systems in high-risk settings are doing adequate risk assessment and risk mitigation? - Yeah, thank you for that question, Brandie, and thank you so much for having us be part of this.

Even though I can't be the face of the whole EU here, I'm happy to talk to what we have been, I think, most excited about since at least the initial agreement on the EU AI Act. There's still lots more to come over the course of the next six months in 2024. But one thing we have been really excited about, one thing that was a primary concern for Denmark and that we really wanted to fight hard for in the EU AI Act, was this risk-based approach.

And I think it touches upon something I think you had asked us to consider before entering the panel. And it's something that we continuously, I think, consider when we deal with certain human rights, which is this supposed tension between innovation and regulation. And we really feel like the risk-based approach, well, you could ask, why not the rights-based approach? And I'm a human rights advisor on our tech team.

And so obviously the fundamental rights have been front and center, but the risk-based approach is what we feel will really help us strike that balance between, (mic clatters) oh, sorry, needing to have an innovation ecosystem that can deliver all of these amazing opportunities and the potential that we see AI has for so many different people to actually elevate their work, to help them do better, and to release some human cognitive resources to really advance humanity, to do more, while making sure that our citizens are safe, and that we have AI we can trust so that we have AI that we can actually deploy. Because if we start losing that, it's gonna be hard for us to even, I think, even start actually having AI in society. So the risk-based approach for us was really, really important. And I think we have seen that the agreement, at least from what we know now, there's still lots that we don't know, but from what we have seen so far and from what we understand, has achieved a really balanced approach via the risk-based approach to legislation.

- Yeah, I think that's very insightful, and I'm always happy when I hear somebody else push back on the trope that regulation stifles innovation; it's often untrue. Some of the most regulated sectors are simultaneously the most innovative, clean energy being one example. But there's also this argument that the risk-based approach is not touching on rights.

I feel like that's actually not true, because the risks are based in human rights, the risks to your individual rights to freedom of thought, freedom of employment. So I think it's a bit of a tenuous argument that we're hearing, because yeah, the risks are founded in human rights. Now Ritwik, to you. Ritwik, you have worked on research looking at how we harness dual-use technologies, of which AI is one. As many of you may know, machine learning models up to this point were primarily trained for a very specific task; now we have these foundation models that we can use across various domains and for various purposes. So Ritwik, in the work that you're doing, how do you think we can best harness these dual-use technologies and mitigate some of those risks? - Absolutely, and thank you, Brandie, for asking the question and for having us here today. My career is focused on AI for humanitarian assistance and disaster response, both building technologies to address use cases in this world of HADR, as well as addressing the integration and impact of these technologies into the user base, right? So firefighters, human rights investigators, et cetera.

And there's been an amazing sort of influx and onboarding of these tools across multiple sectors of this entire field. For example, some of my research includes how we assess building damage from space. We're some of the first people to do that. And now that's something that you see being used all around the world. The DOD does it, the Department of State does it, USAID is doing it to figure out how many buildings were damaged in, let's say, Gaza, you know, what has been done. But like you mentioned, all of these come with a very dual-use nature.

The models are pretty generic, you know. The exact same model that I use to do detection of critical infrastructure for targeting purposes, right, I go to GitHub, I download YOLOv8, is the exact same architecture I use to do damage assessment. The only thing that changes between those two things is the data that I trained it on. And even in those cases, the data is largely the same. It's just that the labels I gave the model were slightly different.
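
To make that concrete, here is a minimal sketch of the "same architecture, different labels" point using the open-source ultralytics YOLOv8 package that Ritwik names; the two dataset YAML files are hypothetical stand-ins, and in this scenario only the labels inside them would differ:

```python
# Sketch of the dual-use point above: one off-the-shelf detector, two tasks.
# Requires `pip install ultralytics`; the dataset YAML paths are hypothetical.
from ultralytics import YOLO

# The same pretrained checkpoint, downloaded off the shelf, in both cases.
damage_model = YOLO("yolov8n.pt")
target_model = YOLO("yolov8n.pt")

# An identical training call; only the data, really the labels, differs.
damage_model.train(data="building_damage.yaml", epochs=50, imgsz=640)
target_model.train(data="critical_infrastructure.yaml", epochs=50, imgsz=640)

# An identical inference call on the same overhead imagery.
damage_results = damage_model("overhead_tile.png")
target_results = target_model("overhead_tile.png")
```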

And so it raises a very important question: if we were to give, you know, some country in the world access to a very powerful AI model that is meant for humanitarian purposes, let's say we give some country a model that can do flood segmentation to tell you how much flooding has happened, which buildings were impacted, how much it will cost to reconstruct, can that same country then use that model for military purposes or for rights-suppressing purposes, right? Can they then start using that same model to start targeting vulnerable populations in those same flooded areas? It's a big question, and it's one that I don't think many people are currently looking at. - So what do you suggest we do? Because, right, your argument is absolutely spot on. This technology can be used for good and for bad. We see this also in surveillance software in countries where we're surveilling to identify terrorists. And it just so happens those governments then often turn that technology on political dissidents and journalists, Mexico being a big perpetrator of this.

So how can we mitigate this, the malicious intent? Or can we not? - To a degree. To a degree, I think we can. To a large degree, I think we cannot. But I think that, you know, at least from the perspective of some of the organizations creating these technologies, we can put measures in place to make sure that we have some sort of, you know, some sort of kill switch or some sort of reporting mechanism that ensures that we know what is being done with this model at all times. For example, when we distribute our models, we make sure that people sign up on a website to download them.

They say what organization they're with, we verify them, and then we let them download that. In the future, if they end up using it for bad things, we can cut off their access. The other thing I think that we can do on a larger basis is stop making sort of long-term policy decisions based on short-term tech trends. And maybe just to. - Amen.
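
As a side note on the access-control mechanism just described, verified signup plus download access that can be revoked later, a minimal sketch might look like the following; Flask, the endpoint names, and the in-memory token store are illustrative assumptions, not a description of any actual distribution system:

```python
# Minimal sketch of gated model distribution with revocable access.
# The framework (Flask) and storage choices are illustrative only.
import secrets
from flask import Flask, request, abort, send_file

app = Flask(__name__)
issued_tokens = {}   # token -> organization (a real system would use a DB)
revoked = set()      # tokens cut off after misuse reports

@app.post("/register")
def register():
    org = request.form["organization"]
    # In the described workflow, the organization is vetted by a human
    # before a token is actually handed out.
    token = secrets.token_urlsafe(32)
    issued_tokens[token] = org
    return {"download_token": token}

@app.get("/download")
def download():
    token = request.args.get("token", "")
    if token not in issued_tokens or token in revoked:
        abort(403)  # unknown or revoked: access is cut off
    return send_file("model_weights.pt")

def revoke(token: str):
    # Called when an organization is found using the model for bad things.
    revoked.add(token)
```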

- [Audience] Amen. - Exactly like, everybody, yeah, everybody's focused on generative AI. We got machine learning models causing damage already, yeah. - Yeah, actually, maybe just to highlight that, how many of you have heard of LLMs, show of hands? Most of you. - Y'all engage with them. - How many of you are currently working on integrating LLMs into your current workflows, or working on writing policy for how to make sure generative AI is used responsibly within your organizations? Another show of hands? Fantastic.

How many of you have heard of LMMs? None of you. They stand for large multimodal models. As it turns out, no one is working on LLMs anymore.

LLMs are dead, LLMs are no longer the hot stuff. LMMs are the new thing. If you look at GPT-4, GPT-4V specifically, if you look at some of the new capabilities coming out of Claude, if you look at all of the research that's now coming out of Berkeley AI Research, it's on large multimodal models. No cutting-edge AI researcher is even bothering to look at LLMs anymore.
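
For readers who haven't used one, what distinguishes an LMM from an LLM is that a single prompt can mix modalities, for example an image plus text. A minimal sketch against the OpenAI Python client of that era follows; the model name and image URL are placeholder assumptions:

```python
# Sketch of a large multimodal model (LMM) call: image + text in one prompt,
# which a text-only LLM cannot accept. Model name and URL are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Which buildings in this satellite image look damaged?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/overhead_tile.png"}},
        ],
    }],
)
print(response.choices[0].message.content)
```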

Yet our organizations, our governments, our NGOs are still trying to wrap their heads around what an LLM is and how they can be used responsibly, while the world has already moved on from them. And so if we make policy decisions about how to integrate these technologies now based on a short-term tech trend, we'll be locking ourselves into suboptimal decisions for years to come. - Yeah, I totally agree with you, and I'm gonna open up this question to the three of you. Focusing primarily on one type of artificial intelligence is extremely shortsighted; even in the state of California, we had the executive order issued by Governor Newsom with a heavy focus on generative AI.

Now, for the work that you're all doing, in light of what Ritwik is saying now, how can we best position ourselves to be able to harness these LMMs and the LMNOPs in the future? How might we, given our experience right now of setting up a responsible AI strategy, be better positioned to quickly and efficiently onboard new technologies in the future? - I can answer. You know, we chose LLMs because, in the three professions we were looking at, those are the most utilized. I could see in the future definitely expanding that to include LMMs as well. So I think in the academic space, we need to stay ahead of the curve.

You know, there's always a question of, like, when does the funding come? And so I think in that respect it can lag because of that, but I think academic institutions are in a unique space where we can stay ahead of the curve. - Yeah, and I guess I'll just speak for USAID and the US government. Honestly, one of the ways to stay ahead of the curve is investing in the workforce and in youth and in the people, to really understand this technology, and making sure also that we, in various countries around the world, are investing both in public interest technologists, right, in terms of workforce development, but also in the technologies themselves, because we know this stuff is moving way faster than we can even imagine. So I'll just give one plug and an example, and that is, during the first or the second Summit for Democracy, we announced our signature marquee initiative called the Advancing Digital Democracy Initiative. And in some respects, in various countries, we've done pilots in Serbia and Zambia.

Now it's gonna be in a number of countries by the end of '24. But the point is, it kind of helps with this as well. It's not just helping partner governments with legal and regulatory standards and reforms, and making sure we're upholding international human rights commitments, but also investing in public interest technologies and technologists, and making sure civil society is also equipped, if you will, to be able to understand and to learn. So I think this is not just at the technical level. And I also think this presents, it's not a challenge.

I actually think this presents an enormous opportunity around the world, making sure that we are investing both in the workforce, in education, and in understanding, back to my earlier point, that fundamentally we have to think about the end user and who this is impacting. And we know that this is impacting very different, diverse sectors of the population, from children to youth, to women, to people with disabilities, and, you know, racial and ethnic minorities. So I think that piece has to be factored into this. - Yeah, I think that's absolutely important.

And in the executive order, right, it initiated a large hiring initiative within the federal government to attract that talent. The Federal Trade Commission has had a technologist hiring program for years now. And one thing I wanna point out is that social scientists are included in that, because, as was said before, it has to be a multidisciplinary approach. We cannot just have technologists; we have to have those public interest technologists, those who understand the sociotechnical aspects. Within Denmark, let's say, well, I'll make you talk about all of the EU again, but are there similar initiatives to bring that tech talent into government? - So one thing we did, and maybe jumping from the multidisciplinary to the multi-stakeholder, one thing we really tried to champion within the Danish MFA, but broadly in our government, and something that we consider a Danish position of strength, is working through public-private partnerships.

So public-private-civil partnerships. So something we did a couple of years ago, Vera knows this well, we worked a lot on kind of similar issues there, but we launched this Tech for Democracy Initiative, which was our biggest contribution to the Summit for Democracy, where we said: what can Denmark do to kind of help build this momentum of saying that we need governments and companies and civil society speaking about these issues together? Not just defining the solutions, but defining the problems. And I think what we are still seeing, especially in AI, one thing is digital platforms, which was, I think, the primary issue we were speaking about like two or three years ago, and then enter AI. And I think it's true what you're hearing, what people are saying about governments: that we can maybe spell AI, but we can't really understand it.

And I don't know how far we've moved toward really understanding it. And even though I agree that we need to do a better job of that, we need to have more folks in governments that understand technology. That's part of the reason why we have a tech ambassador, and we have a team out here to solely focus on that, to bring that expertise to policymakers so we don't make the wrong policies.

But I also think that we need to recognize, especially within AI, that we're never gonna be, you know, we're not gonna be experts at the same level that the private sector or academia is gonna be. There is still a need for activists, human rights activists, social scientists, government folks that should be part of this conversation, even though we do not understand the technical aspects. Because in the end, it's not only a technical discussion; it is a discussion about what is the technology we want in society. And so while we're talking about upskilling, I think we need to remember that this is not just a technologist conversation; we do need those other voices too. - Yeah, Vera. - Oh yeah, if I could just add to what Mia said, I will plus a thousand what Mia said. We've been working together for a number of years now, and with Ann-Marie as well. I cannot underscore enough how important this multi-stakeholder approach is, and this is not just in all the different convenings that we have, okay, you know, and signing, you know, principles and commitments.

Yes, but this multi-stakeholder approach comes with the idea that no government, no private sector entity, no academic institution, no civil society organization can solve by itself this wicked problem of how we should be deploying AI and other emerging technologies responsibly. But I will also, if you will, as a senior representative of USAID, invite you to think about how this actually impacts development and other countries, particularly lower- and middle-income countries around the world. Because those multi-stakeholder coalitions and partnerships must happen also at the country level, at the local level, to make sure that we are designing the exact right solutions. I see my wonderful colleague, our mission director from Fiji, here; I had the pleasure of visiting Fiji, which is one of the newest missions for USAID, that the administrator launched in August of this year.

And when I was there just a few months ago, we were talking about that: how do we actually get rights-respecting technologists to focus on these issues? How do we get, you know, government and civil society organizations? And I think we need more of that in addition to the amazing work we're doing at the policy level, not just in this country, but also with our, you know, European colleagues and other partners around the world. - Yeah, thank you so much, Vera, for that point. Now I'd like to be a little bit provocative and actually push back on some of this. It happens to be that I did my dissertation on multi-stakeholder governance models, looking at the Internet Governance Forum and its role in shaping policy.

Now, it's a non-binding forum; it's not actually supposed to influence policy. And I did my work in the East African Community: Burundi, Rwanda, Uganda, Tanzania, Kenya. And in that research I found that while all are equal, some at the table are more equal than others, and there are definitely players setting the precedent. The reason why we have the term foundation models in legislation now, in the EU AI Act, is because our friends over at Stanford coined that term. And there was pushback on whether or not that term was appropriate for characterizing that type of model. Also, the private sector: even the EU AI Act, it does put in place requirements, but there is so much room for those companies to demonstrate what compliance looks like, which means they can essentially, I hate to say this, but they can write the exam by which they are graded.

Now, of course, there are transparency reports that they'll have to issue, and I'm hopeful that the EU will evaluate those, and wag their finger at them if they don't think that they've done an appropriate job. So we talk a lot about stakeholder engagement and multi-stakeholder processes. Now I'm a cynic, because I've done my research on it, and I don't wanna be; I want this process to work. How can we make it work? How can we actually ensure that multi-stakeholder convenings are bringing people together on equal footing, where everybody can play in the sandbox collaboratively and work together? And on top of that, I'd like to get at how we actually meaningfully engage those people who are directly affected. - So if I may, I think I share your cynicism, Brandie, and also your hope, that, you know.

- I did not sound very hopeful, but yeah, we'll try. - We do want these multi-stakeholder engagements to work, and at a fundamental level, we have no option but to engage in multi-stakeholder engagements because that is the only right way forward, right? As a government, as academics, we want to bring in all the voices in the world so that we can make the most informed decisions. However, like you said, some parties are more equal.

- Yeah, can I interject on that? Because I also think, I'm so sorry. I'm gonna be a big cynic today. I think often sometimes the multi-stakeholder approach is used as a cover for back channeling and having certain stakeholders like industry go into these multi-stakeholder convenings, pitch their idea of how, oh, Sam Altman, okay, let's take Sam Altman, testifying before Congress on ChatGPT.

Essentially, he wasn't doing it out of the goodness of his heart saying that I've built this powerful tool that's gonna cause harm to society. No, he went before them to say, look, I'm best positioned to govern and oversee this technology. There's no need for you to intervene. I can do it best. We're gonna keep it closed, keep the model closed, because if we keep it open, it will cause harm.

People can identify the model weights, and be able to reverse engineer it, and make these really bad things happen. So he did two things. First, he's trying to get Congress to not regulate him. And then number two, by making it closed, it's proprietary, it's his. - Yeah, so on that note as well, right? Like we've heard of greenwashing; there is a lot of multi-stakeholder washing now, where companies will host, you know, stakeholders from around the world, say that they did it, and then go and push their agenda anyways.

The reason we have frontier models now is because OpenAI and their group of friends got together and said, foundation models are dangerous, because it suits us the best. And so we need to make sure that congresspeople, policymakers, have some sort of word they can use for this terrible technology to regulate it. That's not what stakeholders said; stakeholders didn't even know what the dangers were. But OpenAI had a mission, you know, these people had an agenda, and they pushed it, and they said, all of our stakeholders agree. That's really what we need, right? What I think we can do about it is not just sort of set up these multi-stakeholder engagements on our side, but really push out the training, the culture that's needed for people at these companies to honestly and sincerely engage in these multi-stakeholder engagements to begin with. I don't think you can change the leadership, but you can change the people who work in these companies for the right reasons.

And if you give them the tools necessary to build these engagements on their own, and actually act upon those outcomes in a totally objective and sort of formulaic manner, I think that the leadership has no choice but to follow, because there's a standardization that we've instilled into the field itself. Yeah, but like you said, lots of cynicism in multi-stakeholder engagements. - And I'll get to Betsy and then Mia, but just a quick pitch. I had mentioned before that Ritwik is one of our AI Policy Hub fellows.

And this is actually what we try to do in the AI Policy Hub. We recruit a multidisciplinary cohort of students. So we have students from social science, engineering, statistics, from across campus, and we bring them together.

They work on very technical AI research projects, but we train them to understand and value social science, the other disciplines, and teach them how to translate their research into policy deliverables. That's a program that we would love to scale. We have the curriculum, if anybody's interested in learning more about it, please come see me after. So let's get to Betsy and then Mia. - I should also mention Ritwik is a Human Rights Center fellow.

- Told you, Ritwik is everywhere on campus. - And for good reason. So it's not perfect, but I thought I'd talk a little bit about what we're doing on stakeholder engagement in our project, because I think it's at least a step in the right direction. So I mentioned we are doing global stakeholder engagement, and we have also included a number of those stakeholders on our advisory board. So they are also helping to shape the decision making and the work that we're doing, which I think is an important element of it too.

And once we draft our white paper, we will also have it reviewed by all of them. So we're trying to integrate those stakeholders, not only by talking to them, and hearing and listening to them, but also by including them in the decision making and the final outputs of what we're doing. - What would you say, Mia?

- I think I just, I feel like I need to respond to your cynicism and. - [Brandie] Please do, bring us hope. - Maybe give some hope. But also, I think, from kind of where I'm sitting and what I'm seeing, I wanted to just mention the whole multi-stakeholder washing, let's call it that. I think that was exactly the reason why we wanted to take it from being just a buzzword to showing in praxis what it actually means.

When we launched this Tech for Democracy project a few years back, we did that, one, by developing the Copenhagen Pledge, which shows a vision for what digital democracy, or what a digital future based on democratic values and human rights, looks like. That process included 170 actors that were part of developing it. And we think by starting there, because we do need to first agree, and I think this applies absolutely for AI too: do we even have a shared, like, risk perception, or a shared perception of what the challenges are? And then we can start working on the solutions. So the second thing we did was we launched what we called action coalitions. We launched one together with the USG on gender-based online violence.

It has a long name. We're still leading one on information integrity, but the basis for those action coalitions was that they have to include, they don't have to be enormous, but they have to include at their core one from each stakeholder group, to sit together, define the problem, and then find the solution. And that was really to showcase what we can do, and how we can do things better. I think I honestly have to say, as a government person, I don't know how people do their job without speaking to other stakeholders; especially in this field, we would be doing very dumb things.

If I wasn't consulting civil society, academia, also companies all the time. But I do see your point, and I think there are two things to the asymmetry there. One thing is how difficult it still is for civil society to not just get a seat at the table.

They can do that now. I think they can actually quite easily get a seat at the table. The question is whether we listen to them or not, and whether we actually, you know, take what they say, put it in there, and actually act on it. And then the other thing is really, I think, the power relationship between governments and companies. That is one of the key reasons why we have something called techplomacy, which is what we think we're doing: we need to understand the relationship between governments and companies today.

And the asymmetry of power and expertise is something we have to factor in. I don't think that means we step back; we have to try and figure out how we can use that to our advantage, or at least engage in a meaningful way. And then there's the whole internal process: can we change something within the companies? But just to say, I think the multi-stakeholder model is still the best we've got. It's not perfect, but I think we can work our way there and really forge those relationships to become more trustful.

- All right, thank you so much. And I guess like the Grinch, my heart has grown two sizes from hearing you, Mia. Yeah, yes, I'm getting more hopeful as this discussion moves on. I was gonna ask Vera if you wanted to make a comment right as you take a sip of water.

- No, Mia, you just also reminded me of another thing that we did coming out of the Summit for Democracy. We actually launched it at the Internet Governance Forum, ironically in Kyoto. And that is, along with our Canadian colleagues, the donor principles for human rights in the digital age. And those donor principles were negotiated, if you will, as non-binding principles through the Freedom Online Coalition, signed by 38 partner governments that are part of the FOC, and very intentionally and heavily consulted on with so many members of civil society.

And again, to me, I could kind of go back to that intentionality, you know, that yes, we need to make sure that all the different stakeholders, including the private sector, are at the table. But on the private sector, let's not just talk about big tech. We're talking about AI here, and it's just so important to bring in the emerging technology companies that are actually intentionally developing those rights-respecting solutions, if you will, or ones that are actually thinking about risk. Not just ones that are based in the United States, including in the great state of California, but also the ones that are popping up all over the world, in various parts of the world, including ones that are part of the global majority.

And I think you know why that's really important, as part of the multi-stakeholder model, kind of going back to those localized solutions: because context matters. These are the organizations, whether we're talking about civil society or emerging technology or private sector actors on AI and other emerging tech, that really understand the culture, really understand the language models, and really understand that cultural nuance.

And it's just a matter of actually being on the ground. Counselor, I will quote you on one thing that I know we've talked about a lot, and we have a lot of our leaders from around the world here. And that is: it's just so important to be on the ground. And it's so important to show up and be part of those intimate conversations, particularly when we're dealing with thorny issues like AI.

Because we can sit here in this unbelievably gorgeous and prestigious place, the University of California, Berkeley, with the incredible research that's done across all the different centers that this university represents. But being able to bring those on-the-ground experiences matters, particularly as we're thinking about this, and particularly when we're thinking about the multi-stakeholder coalitions. So then when we, as governments, are sitting together with the private sector and with civil society, whether it's the Summits for Democracy or the Internet Governance Forum or another Freedom Online Coalition convening, or the UK AI Safety Summit, whatever it might be, we can bring those lived experiences to the table. - Thank you, Vera.

I wanna pick up on one point that you expressed, this responsibility around investment. What do we invest in? Now, there is a stakeholder that we often don't bring up but is very important, and that's venture capitalists. And venture capitalists control the companies that grow. And there have been studies showing that a lot of these startup companies, you know, they put AI in their title, but there was really no machine learning there. So Ritwik, can I ask you, how do we effectively evaluate whether or not there's something real there, A, and it's not vaporware?

B, like you've said, nobody's looking at LLMs anymore; it's LMMs. So now that you've heard all of that, you're probably gonna start seeing it, right, in the news: LMMs. So how do we know what to invest in? This space is moving so quickly; how do we evaluate them, and where should we be investing? - Yes, I'm glad you asked that question.

So outside of Berkeley, I serve as the Technical Director for Autonomy at the US Department of Defense's Defense Innovation Unit, which is based out here in Mountain View. And our job at DIU is to bring the latest and greatest commercial technologies into the DOD in this model we call "fast follower": industry invents, and DOD follows as fast as possible. And my job as technical director is to make sure that we do evaluation on these technologies to make sure they're not vaporware, and that they're solving the real problems that the DOD really cares about. And honestly, the process is relatively ad hoc.

The way we do it is companies submit to us their pitch decks, we select from them, you know, we have a whole competitive process; that's what DIU's known for. And then when we bring them on site, we sort of get at them with a group of technical experts, which includes our team. It includes teams from our partners across the US government, and we start asking them very specific questions about their intellectual property.

We could not do that if we didn't have access to the world-leading experts in whatever the technology is, in-house at DIU or across the US government. And when we don't have it, we make sure to bring them in through partnerships with FFRDCs like Carnegie Mellon or MIT or, you know, the national labs. And so really the only way to make sure that things aren't vaporware, especially when people are saying all the right buzzwords and bringing all the flashy demos, is to meet them head on with equivalent or even better experts that we're able to pull from across our agency and partners. And so like you mentioned, you know, this multi-stakeholder model, we try to bring in all the different stakeholders, all the experts, all the people who are gonna be impacted by that technology.

So in our case, the war fighter who is actually gonna be using these technologies is in the room with us as people are pitching, and we get them to say, like, hey, gut check: you're a combat controller, you're in the field, you're gonna be calling in airstrikes. Would you use this? Is this right? And equivalently: hey, you're an AI researcher at Berkeley. Does this tech actually work? Is this object detection properly trained? And together we think that we usually make the right decisions. I wish there was a more formal process, but that's the best we have. - Yeah, I think that's really interesting.

Not only engaging the multi-stakeholders, but also, it's throwing your procurement weight around, right? Because those companies, they need you a lot more than you need them, oftentimes. They're gonna come to you and say, oh, you have this challenge. We're gonna build this tool for you. It's a silver bullet, it's gonna solve everything. You have a lot of power to put pressure on them to be transparent about what they're actually building. And here at the University of California, we're 10 campuses, we serve millions of people.

We are a big procurer, and we are pushing that weight onto those entities to demonstrate to us that they're doing their due diligence and risk assessment and risk mitigation. - If I could add to that. That's a great point. You know, DIU's been around for about six years now.

I've been with them for about 4 1/2. And one of the things that I'd always see at the start, when we were a small organization, is companies would come and pitch to us in the same style they'd pitch the rest of the DOD, right? These super fancy pitch decks, over-bloated promises, no technical details, reading off of a script, with these dudes in very nice suits. (audience laughing) And it would be nothing of interest.

And so when we kept repeatedly saying, you're not getting a contract, sorry, you're not moving on to phase two, let alone phase three, they eventually got the message that we mean business. You better come with your technical experts and not your salespeople, you know, the dudes in button-downs and not the dudes in the suits, and really give us. - I mean, Patagonia. - That's right, now we got Patagonia, Patagucci, as we call it.

(audience laughing) And really get down to the nitty gritty rather than just giving us an overall message. So even now, even in USAID, even in the Department of State, et cetera, we can do the same thing. There's no reason why we have to accept the status quo of how things are pitched to us. Clinton's got a question. - Oh, actually, wait, let me get the, yeah, let me, we are at the Q&A portion, so if you have any final remarks, the three of you? I'm gonna actually help facilitate the Q&A and run around in the audience with this mic. So Clinton, did you have something you wanted to say? - [Clinton] No, I was just
