How are emerging technologies helping to shape democracy? | Global Stage | GZERO Media

- Bonjour, hello, and welcome to a special GZERO livestream coming to you from inside the famous Palais Brongniart, site of the 2023 Paris Peace Forum. I'm Julien Pain, a journalist for the French Public Service and my field of expertise is fake news and disinformation. Today's program is part of GZERO's Global Stage series produced in partnership with Microsoft. The theme of our discussion is embracing technology to protect democracy. And first of all, I wanted to share with you a figure I found, personally, especially eye-opening. According to the Integrity Institute, there will be as many as 83 elections in 2024 across 78 countries.

That includes critical votes in the US, in the European Union, Taiwan, India, just to name a few. And these elections are happening in a very difficult context: wars in Ukraine and the Middle East, the increasing tension between the US and China, and clearly a decline in trust in both governments and the media. The Peace Forum launches a call to find common ground at this time of great tensions. And so we find it especially important to examine how that can happen also in cyberspace, how can we collaborate more and fight less? I'm joined today by a panel of leaders from politics, private sector and journalism. Ian Bremmer is the president and founder of Eurasia Group and GZERO Media.

Maria Ressa is co-founder and CEO of Rappler and recipient of the 2021 Nobel Peace Prize for her work protecting freedom of expression in the Philippines. Brad Smith is vice chair and president of Microsoft and Éléonore Caroit is vice president of the French Parliament's Foreign Affairs Committee. So welcome to you all. I'm very happy to be with you today. And I want to start with a question that the GZERO team asked to its followers on social media.

- And that question is: is democracy getting stronger or weaker globally? And that's how the audience voted. Getting stronger, 8%, getting weaker, 77%, no significant change, 10%, unsure, 5%. I see you taking a picture of the poll, because obviously that's shocking, right? How would you have voted yourself? - Definitely on getting weaker. We've been seeing this trend over the last, I'd say, 17 years. And, you know, hand in hand with democracy getting weaker, you're also seeing increased attacks against journalists. You know, it's not just harassment, intimidation, it's killings, it's jail, it's censorship, disinformation actually attacking.

- You would all be that pessimistic? - I think it'd be interesting to ask if it's getting weaker or it's getting much weaker. - Yeah. - So it's a question of momentum, I mean, it's so interesting, because you look at the backdrop of the Paris Peace Forum, and you talk so much about the US-China relationship, and actually from a power balance perspective, what everyone's now asking is, have we hit peak China? Is China ever going to be the largest economy in the world? They've got demographic challenges, all these things.

But if you look at the issue of political viability and legitimacy and how systems are doing, it is democracy. It is the United States in particular, which had been, you know, the world's leading exporter of democracy. And people around the world, starting in my own United States and looking in from other countries, are saying, we don't know that this system as it stands today is fit for purpose. - But you see everything from, I guess, an American perspective obviously at first.

And the China-US competition is like a huge problem. But if you take the world globally, is it that simple, I mean? - Go ahead. - It's not that simple, but first I thought Ian was going to spark a contest over who can be the most pessimistic of all. - He won. - Yeah, he won. - He got it.

- But, no, I think it's such an interesting and important question. And I was in Finland on Monday. If you look at countries in traditional military terms, which country in the world is the most capable at defending its borders against a traditional military threat? Most people would say, oh, the United States is stronger than Finland. But if you ask a different question, and it's fundamentally the question you're asking here, what countries are the strongest in defending, say, their political system, their elections, the democratic process? I think Finland is probably stronger than the United States.

I think Europe is probably stronger than the United States. And then it really forces you to think about where are the threats coming from? They're coming from the outside in part. And I would say Russia is a bigger threat right now than China as we look to 2024 and threats to, say, elections from foreign-based propaganda, disinformation. But you also have to look at other factors. Where is polarization making it more difficult to sustain a country? Where is there less resilience in the form of having an educated public that has learned how to evaluate information more critically? What is the state of journalism? Not just first, as Maria points out, to withstand attacks on journalists and their freedom, but also where is journalism itself less healthy economically? - Yes. - So you have a different picture.

And I think we have to put that picture together better than we traditionally have. - Well, I think it's pretty similar to what we are experiencing in Europe, unfortunately, you know. - Yeah, no, I would've been interested in a follow-up question: how do you feel about it? Democracy is getting weaker, but what does that provoke in you? Do you feel protected in an undemocratic regime? Because what worries me is not only the fact that democracy is actually getting weaker, but that more and more people seem not to care, or actually seem to desire, you know, giving up some fundamental rights in exchange for more security, in exchange for a prediction of what will happen. And that, both the strength of it and all the problems it poses, is what strikes me more than the fact itself. - The problem is that nobody cares. - Yeah. - And that's even more frightening than the results of this poll. - Well, and people are made insecure by this issue. - Exactly. - I mean, in an environment of disinformation, authoritarian states tell their citizens what to believe. They know what to believe.

The space is made very clear. There are penalties for not believing those things. In democracies, you increasingly don't know what to believe.

What to believe has become tribalized and makes you insecure. So the very foundation of democracies becomes a vulnerability in an environment of maximal disinformation. - Yeah. - Yeah. I'm sure you want to react on that.

- Well, that's the core. It's the core, right? The core now is that if you corrupt the information ecosystem, I don't know if it's the same way. I became a journalist, because information is power, right? And in the past, we were liable, we protected the public sphere, except now technology, Brad, you should jump in here. But technology has taken over as the gatekeepers to the public sphere. And they've largely, actually, they have abdicated responsibility when lies spread six times faster than facts. That's from 2018.

And with Elon Musk taking over Twitter, now called X, you know? - Much worse. - And we're seeing this play out with war happening in Ukraine, in Gaza, right? So we're being insidiously manipulated. What used to be advertising and marketing platforms have been exploited by geopolitical power. Russia, I mean, let's talk about Russia's impact on the 2016 elections- - We'll talk about it.

We'll have to talk about it. - Right, right. But, I mean, I'm just saying that insidious manipulation when the tech is a behavior modification system, when we do not have agency, journalists are under attack. Our standards and ethics prevent us from fighting back in a way that the platforms demand.

And let me bring it to TikTok, because then we can go to generative AI, which is the next step for election integrity. - We'll talk about AI later on in the- - Okay. All right, all right. I know, I know. Sorry, sorry. - She's taking over- - She's like- - Sorry. It's just everything. - First let me show you an interview that was recorded by GZERO Media. They talked to the Director-General of the Paris Peace Forum, Justin Vaisse, about the key themes and issues of this gathering.

And here is what he had to say. - The idea is that beyond the North-South gap, which is another big element structuring the international system, not just East-West, but also North-South, and the Paris Peace Forum has always been devoted to, you know, being a place where you can mend or where you can improve North-South relations. We're trying to get all the different actors, East, West, North, South to work on the same issues and to make progress where they have common interests. Because that's the paradox of it all is that we focus on competition and, you know, geopolitical rivalry, while we forget the iceberg coming our way when we fight on the deck of the Titanic. - There are parts of the world that have clearly moved beyond rivalry and into all out war. And I'm just sort of curious, as you begin a peace forum, how that's coloring conversations and the way that you're framing this.

- It definitely gives, unfortunately, a very bleak context. So from the start, the forum was always focused on the long-term questions, on, you know, crafting peace for the future, rather than intervening on hot crises like the one in Gaza today or Ukraine a year ago. That said, we don't ignore these crises, and we do what we can, and what we can do, we have to admit, is limited. It's limited because we can mostly do humanitarian work, and so on Thursday, November 9, we have this humanitarian conference on Gaza that President Macron has convened in the context of the Paris Peace Forum. And then the next day, when the forum opens, we'll also have a discussion between civil society from Israel and from Palestine that we hope will at least spark a few, I would say, a few glimmers of hope of getting somewhere in terms of peace. Because we know how much the two-state solution has been receding on the horizon in the last 10, 20 years, and how much we still need to have some kind of a horizon to address the Palestinian question.

- Beyond kinetic warfare, the world is obviously at an apex in terms of cyber. And I'm curious, cyber has always been a big focus for this forum. How do you view the relationship between cybersecurity and peace globally? - You're right, from the start, which is for us 2018, which was the centenary of the end of World War I, when we commemorated the armistice. We launched very early on the Paris Call for Trust and Security in Cyberspace, which is a huge platform of various actors: a bit more than 80 governments, more than 700 companies, 300 civil society members, local governments, and others, working on norms for cybersecurity. So, you know, obviously we've not solved the issue.

We still have cybersecurity issues, but I think it contributes to advancing on rules of war or rules of aggression, because we know even in wars, there are rules that apply to the cyber domain. What the Paris Call did was to have working groups develop a number of principles. And so this year, for example, we're focusing on cyber mercenaries, trying to get, probably in the winter or in the spring, to a common text or a common position on the employment or the non-employment, ideally, of cyber mercenaries that we know are key to the cyber wars. And these cyber wars, in turn, are part and parcel of current conflicts.

You were mentioning kinetic operations. Well, cyber has become part of these kinetic operations. We saw it in Ukraine, we saw it at the very beginning of the terrorist acts of Hamas on October 7. And so regulating this, trying to inject norms and rules of the road in the cyber domain, also contributes to the larger purpose of peace. - So we're on the deck of the Titanic. I mean, we are not the most optimistic here, right? Éléonore, he talked about the Paris Call in 2018.

Can you explain what it is and whether it has had results, what it has achieved? - Actually Justin just did in the video, so I'm not going to repeat what he said, but it's a very ambitious call that was launched pretty much with the forum in 2018. And the idea was to have an agreement, a sort of common ground between, not only states, but also the different stakeholders, multinationals, civil society, et cetera, to say, well, we don't only have to protect citizens in real life. We have to protect them in cyberspace.

We need to address these issues, because what happens online can actually turn into real attacks. And we've seen that. Unfortunately, we've seen it in France with a terrorist attack that occurred a couple of weeks ago against a professor, which we know grew out of an online threat.

And we've seen, since the October 7th attack, thousands of what we call Pharos signalements, which is when you actually report hateful content online that can then have an impact in real life. So we see this. We see that there's a connection between what happens in cyberspace and what happens in real life. And I think this call, as with all these international instruments, will not have the results, you know, the tangible results we would hope for, but at least it sets a common ground.

And we have seen in the past years legislation that has been passed in France, including two laws this year. And I'm really happy about it. And also in the EU. So things are happening, and I think, you know, we are in a sort of leading position in terms of regulating the internet, which is a massive task.

And there again, the Titanic metaphor comes into play, but I think we have to have that ambition. - Yeah, and because we're on the Titanic, let's talk about foreign interference, you know, foreign powers trying to intervene in elections now. We know it happened in the US election, and people tend to forget that it happened in France as well, and in Canada. Should we be afraid of that? Will they manage at some point to turn an election? - I think the biggest danger to the United States is internal, not external. Bill Burns, the director of the CIA, incredibly well respected, has testified repeatedly on this issue.

The Russians, the Chinese, the Iranians, these are all countries where Brad and his team have spent a lot of time looking at what their capabilities are, what they've done historically, what we can expect them to do in 2024. But the amount of money and resources that they will put towards this will be oriented towards the massive divisions that already exist in the United States. It's the lack of resilience, it's the lack of trust. It's the lack of legitimacy. It's the fact that our base expectation for 2024, a lot of elections happening in 2024, by far the most important is in the United States.

It's the most powerful country. It's a big democracy. And no matter who wins, the losers are not going to respect the outcome.

This is far less of a problem in France. Not because you don't have polarization- - So far. - No. Not because you don't have polarization in France, but because you have the European Union that actually does constrain just how much independent sovereignty is going to matter. In the United States, you get it wrong at the federal level, there ain't anyone on top of that, right? So that's what we're looking at. - I agree and disagree with Ian.

I mean, I disagree in the sense that the reason why the United States is in the place that it is at is that we have had years of intense exploitation of the vulnerabilities of the tech platforms. And that has literally, I mean, you see it with Black Lives Matter, you see it with the data that was released by the Senate Intelligence Committee in 2018, you know, that was meant to pound open the fracture lines of any society, including France, right? I mean, in France, in 2017, Facebook took down 30,000 to 50,000 fake accounts. They didn't do that in the Philippines. But now what we're dealing with is the impact of years and years and years of this. Yuri Andropov, who was the former KGB chairman, he said, "Dezinformatsia."

He said, this is his quote, and it's playing out the way it's supposed to. He said, "Dezinformatsia is like cocaine. You take it once or twice, you're okay. But if you take it all the time, you're a changed person."

We are all changed people. And that is part of the reason democracy is so vulnerable. - We're all on the Titanic sniffing cocaine. - Yeah. - Yeah, I know. - What could possibly go wrong? - What could go wrong? Brad, what could we do on the technological side? Because I guess you are getting better at spotting this kind of disinformation, but the bad guys are also getting better, right? - I think we have to start by putting this in context. If you talk about the United States for a moment, since we are, you know, it's a country where people have so much in common, but spend all their time talking about what divides them.

It's a country where there is this constant sense, I think many times fed by Russian propaganda from the outside, that is about undermining trust, undermine trust in government, undermine trust in all centers of authority, undermine trust in the news, in all information. All of that, I think, has created a society that, as Ian said, doesn't have resilience the way we would hope. So now we have an election and I think we should absolutely assume that the Russian strategy will be to concentrate resources and feed all of that even further. Now interestingly enough, we do have a greater capability technologically to identify, say, when there is this disinformation coming from the outside, what we lack is a consensus about how to use that information.

Do we want companies to report it? Do we want, if there is, say, a deepfake of a political candidate, do we want tech platforms to take that down? That would be censorship. Would we prefer instead that they relabel it, so that people would know that what they're seeing is in effect a forgery? And that's the biggest concern I have going into 2024. Not that we won't be able to spot it, but that we don't have an agreement about what we should do about it.

- That's perfect, because that's where we are, right? The Paris Peace Forum, I think, is about finding common ground, and that's what we are looking for here. And now let's talk about artificial intelligence, again keeping in mind that many nations will hold elections next year. And these will be the first elections held with this amazing technology available. So the GZERO team asked another question to its followers.

And that question was, what impact will AI have on future elections? Let's see how the audience voted. Positive impact, 6%, negative impact, 42%, both positive and negative, 47%, neutral impact, 5%. Do you want to react to that? - Yeah, I- - You were expecting that? - I wasn't expecting it, but I'm definitely in the camp of both positive and negative. And I'll give you a great example of both, negative first and then positive.

I think we have a good glimpse into the negative use of AI. We saw the Russians do this recently in Canada. And the best use of AI technology, if you want to spread disinformation, is to create so-called deepfakes, which are really fake audio or fake video. They wanted to discredit a man who lives in Canada who is Ukrainian and who has been a voice for Ukraine. So what they did is they created a fake audio of him saying something he never said.

They then took a legitimate real broadcast of the CBC, the Canadian Broadcasting Corporation, and they spliced into that this deepfake audio. So that tells us something. This is the recipe that will probably be followed, you know, take legitimate things and insert forgery within them. - I'm very worried about audio, because we talk a lot about deepfake video, but I think you're right.

Audio is the simplest way to turn an election, right? - I think it'll be both, but audio, you're right, that is the easiest way. Now let's look at AI as a shield and as a tool. The interesting thing is we're able to use AI now to identify these kinds of fakes.

We're able to use AI to identify patterns. AI is an extraordinarily powerful tool for identifying patterns within data. So for example, after the fire in Lahaina, we detected the Chinese using an influence network of more than a hundred influencers, all saying the same thing at the same time in more than 30 different languages. Their message was false. It was that the United States had purportedly set the fire using a meteorological weapon. We were able to detect that very quickly. So AI can be used to create false information.
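To make the pattern detection Brad describes concrete, here is a minimal, hypothetical sketch: cluster posts whose text is near-identical and whose timestamps fall within a short window, then flag clusters that span many distinct accounts. The sample posts, the 0.9 similarity threshold, the 60-minute window, and the three-account cutoff are all illustrative assumptions, not Microsoft's actual tooling.

```python
from difflib import SequenceMatcher

# Hypothetical feed sample: (account, minutes since first post, text).
posts = [
    ("acct_a", 0, "The fire was set by a secret meteorological weapon"),
    ("acct_b", 2, "the fire was set by a secret meteorological weapon!"),
    ("acct_c", 3, "The fire was set by a secret meteorological weapon."),
    ("acct_d", 500, "Wishing everyone in Lahaina a safe recovery"),
]

def similar(a: str, b: str) -> bool:
    # Near-identical text after case-folding; 0.9 is an illustrative threshold.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() > 0.9

clusters = []  # each cluster is a list of posts sharing one talking point
for post in posts:
    for cluster in clusters:
        anchor = cluster[0]
        # Same message, posted within an (assumed) 60-minute window.
        if similar(post[2], anchor[2]) and abs(post[1] - anchor[1]) <= 60:
            cluster.append(post)
            break
    else:
        clusters.append([post])

for cluster in clusters:
    accounts = {account for account, _, _ in cluster}
    if len(accounts) >= 3:  # many distinct accounts, one message, one moment
        print("Possible coordinated cluster:", sorted(accounts))
```

Real detection systems work at vastly larger scale and across languages, but the core signal is the same one Brad points to: many accounts, one message, one moment.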

It can be used to target, and it's probably very good at helping people target, translate, but it's good at detection too. - Then the question is, do you think that could impact an election? Are we just not, like for now, and will, in 20 years- - I absolutely believe it could impact this election in 2024. - How? - Because, well, it gets to what Brad just said.

AI can be used as a tool, a very powerful tool, but the principal agents of an election need to be in agreement on using that tool constructively. And that is the problem in the US, when the people who are actually running the social media platforms are themselves political actors who are very vested in promoting certain types of polarization and disinformation. - I have someone in mind. And maybe not just one person, maybe not one, right? - Okay. - No, no, no. - Yeah, they all use it. - They all use it.

I think that's a serious problem. And I think also when the political officials are very vested in what some of those outcomes would be, or some of the people that are running to become political officials. So AI as a technology absolutely can be used for national security purposes to help defend the integrity of elections, but the government actors must be aligned and the private sector actors must be aligned to support democracy. And right now we do not have that latitude- - So what could we do? - No, and to go back to a point that I found very interesting that Brad was mentioning: where do you put the limit? What's the red line? And that goes to how you interpret freedom of speech. And in the US, you have the First Amendment, which actually is so important that anything else could be seen as censorship.

Whereas in France, I think we have a higher tolerance to some sort of regulation, which is not going to be seen as censorship as it would in the US. So it's also a very cultural aspect of things. And when you have, well, someone like, you know, the owner of X that actually has a very, I would say First Amendment view of speech, then you have a problem, I think, because you can have anything said and you have no way of filtering the information.

Now to the point of whether AI is good or bad. I do think that it's a very powerful tool, but it really depends on how you use it. And it can actually be even more complex. I'll give you an example. In 2017, two of candidate Zemmour's videos were taken down from his YouTube channel because they were made- - Zemmour is a far-right politician in France, for those who don't know who he is.

- 'Cause they were made with AI and they were considered not accurate. Had another video on his channel been taken down, the channel could have been shut down by law. I mean, that would have been just an application of the French law. But imagine the scandal it would have been if a candidate for the presidency had had his YouTube channel shut down. So it's not only how you regulate, but how you actually tell people that you're regulating, and that it makes sense and that it is acceptable.

And that way you steer clear of the sort of fake-news scandal, the huge scandal, that you could create just by applying the law. - So, Maria, what do you think? - Oh my god. So many different things- - That's a problem of- - And I'm staying quiet on purpose. - Yeah, let's go back to AI. Do you think that could have an impact? And do you think it would be okay to censor AI? Because people are going to say, "Well, that's my creativity, I do what I want."

- No, no, no. It's already had an impact, right? You cannot have integrity of elections if you don't have integrity of facts, if you don't have a shared reality. All of that has already happened. And that's part of the reason you're seeing the movement globally towards authoritarian rule. I mean, V-Dem says, as of this year, 72% of the world is now under authoritarian rule, right? We live through this, so this is quite personal, but like just three points from what everyone has said, I think the first part is AI as a defense tool will always be behind the eight ball because it is reacting, you need to feed it.

Unless, of course, generative AI becomes much, much smarter, much quicker, right? So you have to feed it. So it's reacting to data that has already happened. I know this because when we were attacked, it took the platforms years to come back and fix it. With AI, yes, they did, but it's not fixed. The second one is free speech. I mean, the United States will say this, and many of the tech companies will want that, because it continues a business model that has exploited all of our data. It isn't a free speech issue.

It's a design issue of the companies that are now connecting all of us. I think the third thing in terms of elections itself, it's brilliant, because the messages change the way we vote, because it manipulates our fear, anger, hate, and then changes the way we look at the world. And that changes the way we vote, right? That's social media in general. But I guess the last part of this, which is far more dangerous, is we no longer have a shared reality. We're in a chaotic environment.

Let's look at what's happening right now. And, you know, please tell me much more optimistic things. But in Israel and Palestine, if you look at TikTok, let's take it out of, you know, I first blamed the American social media companies because they made tremendous amounts of money out of this. But now let's look at TikTok, and let's look at the first month of the conflict, of the war in Gaza.

What is happening now, Israel, Hamas, right? And if you look at that data, it shows you that almost 97%, 96.5%, is #FreePalestine, targeting our kids, 18 to 24 years old. 3.5% is #IStandWithIsrael. So what are you saying? This was actually a guy who used to work with Palantir who said, "So either #FreePalestine is more popular than Kim Jong Un in North Korea, or China is weighing in, which one?" - I know we're on the Titanic. That's all right. But can we find something, you know, to get hold of, you know? - Yes. - Yeah, please help me look. - Look, I think there are four concrete steps we can take.

Step number one, which tech companies can and should take: create technology tools that enable, let's stick with elections for a moment, candidates, for example, to watermark their content so that it can't be split apart and have fakes inserted into it without that being easy to detect. We launched something like that yesterday at Microsoft, content credentials for- - Do you think everybody would do that? I'm sure Microsoft would do that. - Yeah, I think- - Do you think in China or in other countries, they would accept that? Just a question. I don't know. - I don't know, but let's focus first on protecting the electoral processes where there will be elections next year.

- You're right. - Yeah. Step number two, let's make it unlawful to deliberately defraud the public by creating content that is knowingly false. In effect, that's what a bipartisan bill in the United States Senate would do. We endorsed it yesterday.

Step number three, let's reach an agreement quickly on what we want platforms to do when they detect this kind of fraudulent content. I would say it's either remove it or relabel it. And either one is readily workable. If you want to avoid the censorship debate, let's relabel it, so it's like this has been altered. And then step number four, then let's have recourse so that the users of platforms both have transparency, they know what the platforms are doing, and have the ability to say, wait, you got it wrong.

So then there is a check on the platforms in that way too. We would go into 2024 with at least a game plan. Today we don't yet have a game plan societally.

That's a problem. - I think that these are very important steps that need to be pushed through and endorsed by all of the major tech companies. Do I believe that that's going to happen across the board? I am skeptical because the business models are not all aligned.
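To make Brad's step one concrete, here is a minimal sketch of the watermarking idea: bind a signature to the exact bytes of a piece of content, so that any splice or edit breaks verification. This is illustrative only; real provenance systems such as Microsoft's Content Credentials follow the C2PA standard and use public-key certificates and embedded manifests, not the shared-secret HMAC assumed here.

```python
import hashlib
import hmac

# Hypothetical signing key held by the candidate's campaign. A real system
# would use a public-key certificate chain (C2PA), not a shared secret.
SECRET = b"campaign-signing-key"

def sign(content: bytes) -> str:
    """Signature over the SHA-256 digest of the exact content bytes."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """True only if the content is byte-for-byte what was signed."""
    return hmac.compare_digest(sign(content), signature)

original = b"<bytes of the candidate's video statement>"
tag = sign(original)

assert verify(original, tag)                        # authentic copy checks out
assert not verify(original + b"spliced fake", tag)  # tampering breaks the check
```

The design point is that verification fails closed: a clip that cannot be verified against the candidate's published signature can be flagged or relabeled, which is exactly the remove-or-relabel choice in step three.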

I think that the ability of a government, look, if we have a conversation, and it is about blowing something up. We are responsible for that conversation. No one else is. But if we have that conversation on a phone and the phone company, they tape that conversation and they promote it to everyone out there that might be interested in blowing stuff up, the telephone company is responsible for that. We are in an environment right now where we are algorithmically promoting information that is causing damage. It is disinformation, it is spreading hate.

We've got the supreme leader of Iran calling for genocide of Jews, like the end of Zionism, and it's being algorithmically promoted in my feed and Brad's feed, and this is insane, right? And I'm not, by the way, I'm not suggesting that what we want is censorship. - Yeah, I agree. - In the United States, historically, we don't respond with censorship.

We respond with lawyers. In other words- - Laws, right? - In other words- - Gate-keeping. - There has to be a level, if you're making the money from the model, there has to be a level of accountability when you're promoting things that are causing damage. And I'll tell you, the business models that aren't aligned when they realize they can get sued for stuff- - It goes too fast. - Very, very different. - Yeah. - Very, very different-

- But I'm going to play the devil's advocate: it's going so fast. So many messages every day. How can you just say, oh, we are going to use lawyers? I mean, you'd need thousands of lawyers. - No.

You have to put the law in place. So the Digital Services Act, for example, is the best. I always say the EU is winning the race of the turtles. But we need to move faster. - We're getting there. - Right, so, yes, bravo, your move, but it's still not fast enough.

So yes, I think we need to first make clear to everybody, and this is where journalists can also help, what the danger is. We're still stuck in old ways. You talked about, is it the journalists? Yes, to a degree, news organizations are not as strong as they were.

Our business model's dead, right? And we're under attack by the very same forces we're talking about. That's necessary, and actually it isn't censorship. You cannot yell fire in a crowded theater. There are laws that govern society. And not having those in place is what has turned the world upside down and what has given dictators-to-be greater power. - Just quickly about the DSA, because obviously, I guess, you want to react on that.

Can you explain to our audience what it is we are trying in Europe and whether you think that could work? - Well, no, there's this new piece of regulation, and there's actually been an achievement, because it was approved on first reading in June 2023, so a few months ago. And the idea is to set common grounds and to regulate the digital space. And one of those common grounds is also working with artificial intelligence, but always ensuring that there's a human behind it. I think those principles, as broad as they can be, are a common ground that we can all work with.

And I would go back to what Brad said about relabeling. Relabeling is great, but you have to tell people what it means. And I think relabeling could be enough if it's sufficiently visible, if it's sufficiently received, because we have to stop thinking that people are just stupid, they're not. If you educate them properly to these technologies, to the impact they can have, you'd be amazed by how intelligent they are.

And I'm always amazed by my children, as any mother would be, but they understand these things way quicker than I did 10 years ago, 20 years ago. So I do think that without censoring the whole system, by applying some regulations such as the ones we have with influencers, so the bills I was telling you about before that we adopted in France, there's a regulation of influencer content that we adopted at the National Assembly earlier this year. Then there's also this "régulation de l'espace numérique" bill, which aims essentially at fighting child pornography online. But it actually helps, you know, clean up the internet quickly when you think that there's an abuse, so that could be seen as censorship, but it actually isn't. And then also against terrorism. So I think everyone could agree on those principles, including in the US, including the most fervent supporters of the First Amendment.

And that would set a common ground that would help companies work on the model. And I agree, you always need lawyers. - Well, I think we're getting better now. We are turtles on the deck of the Titanic.

I like the idea. And I think it's important now to talk about trust. I'd like to share with you a Gallup survey, which is quite worrying, I think. It found that trust in the media in the US has never been so low. And I'm afraid that's not very different in France.

So GZERO asked another question of its followers: do you trust media's coverage of elections in your country? Yes, somewhat, not much, not at all. And let's see how the people voted. - Hmm. - Well, still yes. - So- - If you know why that is. - Yeah. - Yeah, yeah, tell us how you-

- It's because they trust the media that they are watching. - Yes. - And they're watching the piece of media that is aligned with them, so this is not actually an enormously positive thing. The media landscape is seriously fragmented and increasingly broken, which is part of the reason why technology needs to be part of the solution, because by itself, I mean, all of this was coming before technology. - Yes.

- I mean, talk radio, cable news. This was pre-technology; you were dividing people into just following what they already agree with, and technology turbocharges that, but at least it gives us tools that can also potentially constrain it. Like we've been talking about.

I mean, you know, in other words, tools that actually show you this is a bad actor. It used to be that you did advertising and you had no idea if the advertising was successful or not. You just threw it out there.

Now you can do micro-targeting. AI is very good at identifying patterns around micro-targeting that can be used for ill, that can be used to undermine democracies, that can be used to destroy. Israel, Palestine, I mean, we just talked about the Middle East. You mentioned TikTok. - Yeah. - This is the first time in a major conflict that disinformation fundamentally changed the trajectory of the conflict right at the beginning. - Yes.

- Biden was going over to meet with the Israelis and also with Jordan, Egypt and the Palestinians. - Yeah, the hospital. - Yeah. - And disinformation spread by Hamas about the hospital was then picked up by the Wall Street Journal and the New York Times and the AP, and Reuters, by the way, left and right, because they were all getting their information off of social media, but it was all wrong information.

And, you know, as they say, a lie goes around the world before the truth gets its pants on. You already had cancellation of the summit by the Middle Eastern countries before the real information came out. So the summit was canceled and the United States was forced on one side of that, couldn't have the other meetings. That's meaningful realtime impact of disinformation.

- Yeah, that's concrete. It's important to give people concrete examples of how that impacts them. It's not abstract, it's something that's happening right now. - Maria, you won't disagree with that.

- Absolutely, I mean, look, in the medium term, look at what's happened to the Philippines, right? There's no way that I should be facing a century in jail. There was nothing there, but this is disinformation. You pound the lie a million times. For me, it was "journalist is a criminal," and then President Duterte, our president at the time, said the same thing a year later.

This is the seeding part. And then we got our first subpoena, right? This is the way we lose democracy. But the other part of this is that, if we do not act, this is like a post-Hiroshima moment. This is one of those moments. And if we do not, as a whole, globally, step up and do what they did, how did the Universal Declaration of Human Rights come together, right? And with generative AI coming up, right? How do we do this? I thought one of the most interesting things, and, of course, Brad will answer that, but one of the most interesting things about the EU is that it went far, but every case that it had has now kind of been blown out of the water by generative AI.

And it didn't go as far as naming the tech companies as publishers, which is actually the key point. Are they or are they not? I'm sure you want to- - Go, Brad. - Respond to that. - Well, I actually think the approach of the DSA is probably, you know, something like the right approach.

I think if you just treated a tech platform like a traditional news publisher and imposed an obligation to basically fact-check everything before it was published, it would be very difficult to publish everything that the world actually wants to see, because people do want to see user-generated content. On the other hand, what we have in the United States just protects publishers from, in effect, having to do anything if they're in the technology business, if they're a social media platform. So Europe is trying to chart a new way.

Maybe they're faster than a turtle, Maria, to use your phrase. And what I think will be interesting and important is to learn from the experience and then adapt further. - Okay, so everybody seems to be okay with the DSA. I wasn't expecting that, so- - No, no, well- - Good job. - That's great news, first of all.

I'm glad, but I was going to go back to the example you gave about how disinformation affects geopolitics on the spot. And how traditional media jumped into the news, and you had Reuters and AFP and AP headlines saying something about this hospital without any sort of fact check. And that to me actually raises the question of journalistic ethics in the age of social media. If you're a journalist and you tweet, or you publish content that is not official, do you not have to check your sources? Isn't there something that, because you are a journalist, of course, I trust a journalist saying something more than I trust just a random, you know, troll. - But let me defend the journalists on that one, right? - Please do. - Hamas said it, so technically they had a quote from someone who did.

But the problem here is that, you know, you said it- - We're skeptical of Hamas here, just to let you know. - Yeah, no, I understand, exactly. We're getting some looks here- - No, no, no, but that's the point, right? - I hear what you're saying. - In the old world, as long as you attributed your source, that was enough as a journalist. Now it demands more, right? But the problem is that it isn't the old world. And I've lived in countries where, in times of war, in times of conflict, literally we, the news organizations, were called by the government to slow it down. - Okay. - Right? Why does the pace of information have to move at the tech-dictated pace? - But can we go against it? Can we go against- - I think we can. - Do you think that possibly- - The design of the internet isn't the best.

We know this. So why will we not? We work with tech partners. Rappler's one of the 10 groups that's working with OpenAI to look at how large language models can be used to help democracy, right? It is in their power right now, it's in the tech companies' hands, because governments, the EU, take too long. So why not? Why do we want to? I mean, the only thing that benefits from more- - Isn't it too much of a burden for the tech companies to tell them, you know, take care of everything, slow down- - No, no, no. I'm not saying take care of everything- - The base I'm debating, it's not going to work like that.

Brad, what do you think? Well, I don't know. - Well, no, I think there's a couple of factors here that are worth thinking about. First, what Maria is positing is that technology is moving information so fast, the truth can't keep up.

- Yeah. - Yeah. - That was actually said by the New York Times in the 1850s about the telegraph. - And television. - And they weren't wrong, by the way, because the night that Abraham Lincoln won the presidential election in 1860, disinformation was literally spreading across the United States. So I think let's start by recognizing this problem has been around for a long time. What do we do? Well, first you do need, I think, journalists to course correct and you need governments to help.

And by the way, I would say there was a pretty fast course correction about the attack on that hospital. You know, the truth not only emerged, but it emerged within some hours. - A day. Yeah. - Yeah, so- - But Biden didn't go, though. - I understand that. - Yeah. The real world impact. - Yeah, and it's not to say that falsehoods don't have real impact, but what we should always look for is: is there an ability to course correct? And this is where traditional journalism, I think, remains vital.

I do think that reputable news organizations still get the calls. - Yes. - They get the requests. - Yes. - Be thoughtful. You know, and I do think that we also live in a world where the New York Times is sharing data that's coming from satellites that wouldn't have been available even 10 years ago, because the only satellites in the world were controlled by governments.

War reporting's always been difficult. - Fog of war. - Yeah, so yeah, we have all of that. I think what we have to go back to though, to some degree, is there is a flood of disinformation, so there's more out there with which to grapple.

- Yes. - There is less resilience societally. There is no real mechanism, no consensus about what to do when we do detect these falsehoods. And we're not having the conversations or even just the fundamental education in our school systems in some countries as to how to look and detect things that just don't sound right.

Lemme go back for a moment to that story about the falsehood spread in the wake of the Hawaiian fire. Just think about the story. What somebody said was, did you know that the fire was set by the United States using a meteorological weapon? Well, just think for a moment, does such a weapon plausibly exist? If it does exist, would a government set a fire on itself or would it more likely test it somewhere else? Unfortunately what we're seeing here is what we saw in 2015 by the Russians as they got ready for 2016.

Look for gullible people. Look for people who are more likely to believe something that just isn't true or even plausible. Do that and build a list of those followers. Now you know who to target when it matters most. The week before an election, the month before an election.

And I think that is a huge, huge problem. And we're going to have to figure out how to do a better job of addressing it. - And I think it doesn't impact only people who are weak or who- - No, not at all. - Know anything

about information, that would be too simple, right? - Yeah. - Because I think it creates like some kind of environment where nobody knows what's wrong, what's true, and it kind of destroy trust in the political system, in democracy, in journalism and everything. - By design. - I worry a great deal.

We've got to name names on this. You look at X as a platform right now and the active throttling of journalistic institutions and their links to real coverage, compared to the promotion of verified citizen journalists who do not have the resources, and that is actively undermining the ability to ensure that people get good, quality information. If it turns out that Twitter/X is a meaningful part of the social town square, of the public square, then the United States government has a very strong interest in ensuring that people are going to have access to trusted sources there. Part of Microsoft's effort is saying, we're going to have elections going forward. We're going to promote information that we know about campaigns, not just in the US, all over the world, that comes from trusted and verified sources. So that if you're in a hybrid democracy or a democracy that might fall apart, because the local leader's prepared to use it against them, we're going to be part of the place that says, no, this is what's really going on.

Voice of America used to do that, right? But in this environment, there ain't no Voice of America for social media. So that's why there's a big hole that needs to be filled. - It goes back to, I think, this risk that people may be gullible, not because they're not smart, but because they're not informed and they're not getting a broader set of information. And especially when you have a social media platform that dismantles its safety architecture and its safety teams. - Exactly. - Yeah.

- You create a dynamic where those risks rise considerably. - That's all of them right now, right? Because everyone, like Elon Musk, is at such a low bar, and the things that both Meta and Google were going to do, they're not doing anymore. So we're walking far more vulnerable into these elections. But I would say just the last part of this is, it isn't that someone is weak or that they don't have information.

It's that we are now operating in a place, our public square is about emotions. It's about triggering the worst emotions. And we expect people to think, right? This is the problem that we have.

And that's part of the reason for the insidious manipulation. You know, in the Nobel Lecture in 2021, I said we should reform or revoke Section 230 of the 1996 Communications Decency Act. That's the one that gives the tech companies cover to let the lies, to let the manipulation go, right? The social media companies. That's not going to happen.

Ian will tell me why. But that's it. It's not even like random.

It's not even people who aren't smart. It is all of us. It's our biology. - It's a problem for all of us.

And I think it's important not to blame certain people and say, well, we're not concerned by that. I think we've made it pretty clear that technology has a great impact on democracy and it's important to fight misinformation and foreign interference in elections. It's also crucial to regulate generative AI because it could be a dangerous technology.

So the Peace Forum begins tomorrow in Paris. What do we hope can happen? If you had to pick one thing, what would it be? What should we do all together? Because it is only together that we can achieve anything. Éléonore, maybe first. - I would go back to education. And to your point just previously, about the fact that we can all actually be manipulated by this, because we look at the likes, we look at the hearts, we look at the smileys and the emojis at the bottom of every single piece of information. And we're not actually looking at who's producing the information, where it comes from, whether it's reliable, what the source is.

We put everything at the same level. And what we look at is the scale, the massive scale of people commenting on information. And the bigger the lie, the more it will be commented on.

The bad buzz is better than good buzz. So I think we all have a responsibility here, because we know this, we've known this for ages. We have the tools to fight it and yet we're still all tweeting and, or whatever you say now. And we're still commenting and looking for our own information on those platforms where there's no differentiation on the type of information. So I think actually leading by example would be something that I would recommend we do as leaders and teaching and telling kids, but not only kids, everyone, you know, there's plenty out there and plenty is rotten, but there's also good information.

There's things that we wouldn't have access to a few years ago and we shouldn't be afraid of technology. We should just embrace it and ensure that everyone has the tools to actually use it in a good way. - So education would be key.

What would be the thing we should do? - Well, fundamentally, I think about how the Paris Peace Forum was founded five years ago, so this is the fifth anniversary, by President Macron, on the premise that the preservation of peace was going to require a new commitment to multilateralism. That really means, in the 21st century, multi-stakeholderism. I think we should renew and redouble our efforts by focusing on the problems we're describing here. I think we've talked about a number of concrete steps.

They all require that the government, the private sector, NGOs, journalists, civil society, that we all come together. And we should put some stakes in the ground and focus on the next year and see if we can build some more momentum to get some more things done. We won't get everything done, but there's no reason to reach the first of the year where we are right now. And we could take some practical steps. - Maria, how do we jump off this Titanic? - There are three things that we're announcing at the Paris Peace Forum.

I chair or co-chair all three of these. I think the first one is the charter on AI and journalism that we're doing with Reporters Without Borders. That will be announced tomorrow.

The second one is the International Fund for Public Interest Media. Right, the tech has collapsed our business model in media. We will not survive. And so in the interim period, we've gone to governments. Last year we raised about $50 million to give to news organizations so that we can survive this time period as we look for solutions.

And then the third thing is, President Macron has put together, it's much nicer in French. And since I don't speak French, they always have to translate for me. It's the EGI.

It's a landmark group that actually will try to redefine what the world, what France is going to do. In trying to do that, we will try to find definitions. We have to look at the world like it's been demolished. This is the destruction part and we must create it moving forward. We'll be announcing the Innovation Lab, which is going to be at the Institute of Global Politics.

But more than that, what can all of us, if you're watching, what can you do? What are you willing to sacrifice at this moment in time? - Ian, final thoughts? - Well, you know, five years and I got to say, you know, it's not like we have more peace over that time, but that's not a failure of the Paris Peace Forum. That's actually why the Paris Peace Forum was created, is because, you know, Macron and others very much understood that we were heading into a position that was geopolitically unsustainable and that we needed to redouble our efforts to try to change the trajectory of that. Now I do think that for all the challenges we're talking about here, the multi-stakeholder approach is mostly populated by adults that want stability.

They don't want to break things. They don't all like each other. They don't all have the same business models. They certainly don't all trust each other.

They're frequently competitive, but they don't want everything to break. We look right now at a war that has just erupted in the Middle East, and the Americans and Chinese and the Europeans and all the major companies want that war over, want that war over as fast as possible, because this, if it gets worse, is a Middle Eastern war that becomes a global recession, and will have huge knock-on implications for the stability of Europe, migrants, all this sort of thing. The Russia-Ukraine war, I mean, it has become more of a partisan issue in the US, but still, you don't want the Ukrainians to lose. You want territorial integrity to win. You want Ukrainian democracy to win.

You want the Ukrainians to be a part of the European Union. They're getting candidate status. That is something that adults want.

And when I look at AI, you also have something that's very fast-moving. We just had the Bletchley Declaration. And the Americans and the Europeans and the Chinese all stood up with the private sector.

They all agreed. How do they agree when they have no trust? They agreed because they understand that this is so fast-moving and so proliferating that you need to ensure that you have some guardrails so the whole thing doesn't break. And we're in an environment right now where multi-stakeholderism, first and foremost, means guardrails. It doesn't mean fixing everything, but it also doesn't mean the Titanic. It does mean that we see that things, if they are without guardrails, will start to threaten our fundamental interests and equities.

That's what the Paris Peace Forum needs to be about. That part of the conversation. - The problem is so huge, it's like the climate. At some point we need to react. Let's hope, I mean, you're so passionate. You convinced me, so I hope we convinced everybody.

It's been a fascinating conversation. I hope you enjoyed it. You can see all of GZERO's coverage from the Paris Peace Forum, including a panel discussion focused on combating cyber crime by heading to gzeromedia.com/globalstage. I'm Julien Pain. I want to thank you for watching us and for, especially for putting up with my French accent for so long. I know it's hard. Have a great weekend.
