Technological solutions—potential and limitations
Good morning, everyone, and welcome to "Technological solutions: potential and limitations." My name is Dorian Corrosion, and I am an associate editor at the DFRLab. I'm joined today by four experts who have been at the forefront of designing and deploying emerging technologies within broader efforts to create better-functioning information ecosystems. They are going to help us critically assess some of the tech-centered approaches, figure out where the hype is, and figure out where the potential lies in empowering people on the front lines of disinformation campaigns.

Evanna Hu is partner and CEO of Omelas, a technology company that uses data and analytics to map how state and state-adjacent actors manipulate the web to achieve geopolitical goals. Sam Gregory is program director at WITNESS, an international nonprofit that helps people use video and participatory tech to defend human rights. Allan Cheboi is senior investigations manager at Code for Africa, Africa's largest network of civic tech and open data labs, building democratic solutions. And J.D. Maddox is an adjunct professor in the Department of Information Sciences and Technology at George Mason University and chief technology adviser to the U.S. State Department's Global Engagement Center, where he works on building the United States' technological defenses against disinformation campaigns.

Evanna, I want to start off with you as a founder of a tech company in this space.
Disinformation is a fundamentally human phenomenon that has been around for as long as people have been able to communicate with one another. But over the past few years, an industry has emerged of companies proposing tech-centered solutions to detect and counter disinformation as it proliferates. What is driving this market interest, and what problems are these companies trying to address?

It's a great question; thanks for having me on. Ultimately, the marketplace is really driven by the fact that, with the advent of very cheap AI and the massive amount of data out there, computational propaganda has made the disinformation space a lot more critical in terms of national security, but also in terms of our human rights and civic education. And so there are a lot of companies out there trying to tackle this problem. Really, there's no silver bullet, and there's no full-stack solution that can do everything and tackle the entire problem. So you have a lot of companies that each specialize in one niche aspect of the issue, which basically means that now we have an integration problem. If I am the GEC and I'm trying to find a couple of solutions that can help me solve an ultimate problem, how do I make sure that the tech stacks can actually talk to each other? And how do I also make sure that it is done in a way that is efficient and actually helps protect civil liberties and data privacy consistently? That's sort of the future challenge of the marketplace.
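The integration problem described here, where niche tools cannot talk to each other, can be sketched in miniature: define one shared schema and write a small converter per vendor. Everything below is hypothetical and for illustration only; the tool names, field layouts, and confidence scales are invented, not any real product's API.

```python
from dataclasses import dataclass

# Shared schema that downstream analysts consume, regardless of which
# vendor tool produced the finding. (All tool names and field layouts
# here are hypothetical.)
@dataclass
class Finding:
    source_tool: str
    platform: str
    claim: str
    confidence: float  # normalized to 0.0-1.0

def from_tool_a(record: dict) -> Finding:
    # Hypothetical "Tool A" reports confidence as a percentage.
    return Finding(
        source_tool="tool_a",
        platform=record["network"],
        claim=record["narrative"],
        confidence=record["score_pct"] / 100.0,
    )

def from_tool_b(record: dict) -> Finding:
    # Hypothetical "Tool B" reports a 5-point likelihood scale.
    return Finding(
        source_tool="tool_b",
        platform=record["site"],
        claim=record["message"],
        confidence=record["likelihood"] / 5.0,
    )

findings = [
    from_tool_a({"network": "twitter", "narrative": "vaccine hoax", "score_pct": 80}),
    from_tool_b({"site": "facebook", "message": "vaccine hoax", "likelihood": 4}),
]
```

Once both tools' outputs land in one schema with one confidence scale, they can be deduplicated, compared, and audited together, which is the practical meaning of "making the tech stacks talk to each other."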
It's an interesting point about the integration issue, and J.D., I want to turn to you for your thoughts on that from your work with the GEC.

Yeah. First of all, thank you for having me as a representative of the GEC. This is a great group, a great combination of perspectives on the issue, and I really appreciate it. From the GEC's perspective, we are in a global competition with foreign adversaries who are engaged in disinformation and propaganda in a multimodal way. We're affected by disinformation and propaganda through multiple technologies and platforms, and so, as a result, we need to be quick to understand all of those modes and to find the solutions that help us get ahead of their adversarial use of disinformation and propaganda, in as close to real time as possible. To do that, we've set up a number of programs. Leah Bray, the deputy coordinator of the GEC, gave a good overview of our tech challenges. Another program that I think is very interesting to everybody watching would be our test bed, which is the opportunity for our partners to test out new technologies in this space, to find rapid solutions and, again, get ahead of the adversary before we've lost the propaganda and disinformation battle. So we're looking at a broad array of technologies. For the last five years, people in this community have been focused, of course, on understanding where the problem is: how do we use data analytics and social listening to understand where the adversary is active? That's absolutely essential, and it will continue. But there's also a broader array of technologies out there that are potentially very powerful for countering disinformation and propaganda, whether it's gamification, which is a form of media literacy; content validation, such as blockchain-based content validation; or crowd-sourced assessment of journalistic standards.
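As a toy illustration of the content-validation idea mentioned here: a publisher registers a cryptographic fingerprint of a piece of media at publication time, and anyone can later check whether the bytes they received still match. This sketch keeps the registry in a plain in-memory dict; a blockchain-based system would anchor the same hashes in a tamper-evident ledger instead.

```python
import hashlib

# Toy content-validation registry: the publisher registers a SHA-256
# fingerprint of each item at publication; consumers later check whether
# the bytes they received still match. A real deployment would anchor
# these hashes in a tamper-evident (e.g. blockchain) ledger rather than
# an in-memory dict.
registry = {}

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def register(item_id: str, content: bytes) -> None:
    registry[item_id] = fingerprint(content)

def verify(item_id: str, content: bytes) -> bool:
    return registry.get(item_id) == fingerprint(content)

register("clip-001", b"original video bytes")
print(verify("clip-001", b"original video bytes"))   # True
print(verify("clip-001", b"doctored video bytes"))   # False
```

Note that this only proves the bytes are unchanged since registration; it says nothing about whether the original content was truthful, which is why validation is one tool among several.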
These are all new technologies that are coming online and are potentially very powerful against the problem at a tactical level. I think we're at 26 different tests of technologies at this point, and we've implemented about a third of those, so we're in a great place. Critical to doing this is integrating civil society, academia, and, of course, industry at every point along the way. We are dependent on our industry partners, really, to lead the development of these new technologies and to get us to a place where we're actively defending against foreign propaganda and disinformation. So we're pleased with that, and we would be happy to work with everybody who's on the line to push that forward. Thanks.

One of the problems that tech solutions try to address is the problem of scale. There is just so much information online, coming from all sources, that it can be hard for human analysts to detect trends and synthesize patterns quickly, at the scale at which information travels in 2021. Allan, I wanted to turn to you as a practitioner in this space who leads a team of researchers. How do you integrate technologies and tools into your workflows? What have you found most useful, and what have you found most lacking?
I'm glad to be on this amazing panel, and thank you for the invitation. In terms of the tools and what we find most useful: of course, the tool that you use is tailored to the project that you're doing, but there are two main categories of tools that we mainly use and find most useful. One is tools that deal with social media monitoring and analysis. The other is media monitoring, because we've seen a number of disinformation actors move from actually spreading the disinformation on social media itself to redirecting users to other websites and blogs, which is basically captured through media monitoring tools.
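A large part of social media monitoring of this kind reduces to spotting sudden spikes in activity around a hashtag or narrative. A minimal sketch of that idea follows; the hashtags and counts are invented for illustration, and a real monitor would pull daily volumes from a platform API rather than hard-code them.

```python
from statistics import mean

# Toy burst detector: flag a hashtag when today's volume exceeds its
# recent daily baseline by a chosen multiple. The max(..., 1.0) floor
# keeps brand-new, near-zero-baseline tags from dividing by nothing.
def spiking(history: list, today: int, factor: float = 3.0) -> bool:
    baseline = mean(history) if history else 0.0
    return today > factor * max(baseline, 1.0)

# (recent daily counts, today's count) -- invented data
daily_counts = {
    "#election2021": ([120, 135, 110, 128], 131),   # steady chatter
    "#secretballots": ([4, 2, 5, 3], 240),          # sudden surge
}

alerts = [tag for tag, (history, today) in daily_counts.items()
          if spiking(history, today)]
print(alerts)  # ['#secretballots']
```

A researcher would then review only the flagged tags rather than watching the stream continuously, which is the workflow gain such alerting provides.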
The thing that I would like to note is that most of the tools we try to use to investigate disinformation are actually not tailored for disinformation, but rather for PR and marketing. So you'll find that a social media analysis tool which was actually built to support marketing efforts is the one we actually use for investigations, and that means that, at the end of the day, the tools themselves don't have some of the features that we as researchers and investigators would like them to have. The other thing that I've found in the last five years that has been really helpful is that these tools have now started employing machine learning technologies, like NLP and sentiment analysis, which give us a couple of insights when we're doing our investigations and our open-source research. Most of the PR and marketing tools that I'm talking about are, of course, commercial, so you get paid access to Twitter and to Facebook, and that way you're able to have visibility on all the indicators.
Another thing I would like to highlight in terms of limitations of the solutions that we've seen: one is that they try to eliminate the human element, which, at the end of the day, is really important when it comes to research and investigations. The other thing, as Evanna has said, is that they try to be the all-in-one stop shop: a single provider wants to be the only source of a tech solution countering the entire disinformation sphere, which would involve a lot of effort, a lot of research, and, of course, a lot of budget to create such a solution. The other thing is, of course, customization of the tools themselves to meet the demands and use cases of researchers across the globe. One thing that we found, particularly in Africa, is that some of the tools that are developed don't have sources tailored to Africa. For example, when it comes to media monitoring, they don't have African sources, so we are missing some of the data sets that would be really critical to making an informed decision from the data we are reviewing. The other thing is language capabilities. In Africa we have around 2,000 languages, and that means that the NLP and sentiment analysis will probably understand only English, French, and a couple more of the languages spoken in Africa. Even in the recent Ethiopian election, most of the content we were monitoring was actually in Amharic, which posed a very big challenge, because the tools we are using are customized to understand only English, French, and the other major languages.

Thanks. You've highlighted several challenges that researchers actually face when they use those tools. One of the big ones you mentioned is that a lot of the tools disinformation researchers use aren't actually meant for their use case. Some of the most robust
social media monitoring tools are meant for digital marketing, so researchers have to get creative — and you highlighted that, as well as the limited language capabilities of some of these tools. A lot of the focus in tool design is often on text, on synthesizing patterns in text-based information. But there is also space for development focused on visual media, which is a significant part of many disinformation campaigns. I want to turn to Sam Gregory now. Sam, could you give us a summary of the evolution of emerging technologies for synthesizing patterns in visual media and detecting manipulation?

Thank you, yes, and delighted to be here to join this conversation. So, on visual misinformation: as you note, there has been a heavy focus on text-based misinformation, and visual information provides both much richer information to analyze, if we're trying to approach it from an AI or machine learning perspective, and also a range of challenges. As I look back at the last five years of how we track misinformation at WITNESS, we obviously have an interest in both.
How do you gather truthful information — there's been a lot of work in the human rights sector to develop tools that help people assert the validity of their information in a climate of mis- and disinformation — and there's also been this development of tools to track visual mis- and disinformation. It strikes me that the tools people have used most, in a practical sense, have actually been very simple tools, developed often with human rights use and civil society in mind. So I think a lot about very basic tools for reverse image search, like you find in something like the InVID tool or, in fact, something that Amnesty International developed five or six years ago, the YouTube DataViewer. Again, what they're trying to do is something similar to what Allan has been describing: give us the opportunity to track the source of videos and place them in the context of that source, the history of a source image. A lot of my work in the last three years has come from a WITNESS perspective, thinking about new forms of visual misinformation and disinformation, such as deepfakes, that are emerging and progressing very fast technically. And I think we're seeing there, again: how do we develop tools that are effective for the types of scenarios that we see in a diverse set of more localized contexts? That's often my perspective when I look at these tools: will this be valuable to someone taking a community-level approach to mis- and disinformation, trying to counteract things in near real time? In those contexts, the most accessible tools right now are things like reverse image search — those very simple contextualization tools. The tools that are inaccessible — and this is also for the technical reasons that we were talking about earlier — are the types of image analysis tools for finding the needle in the haystack, doing that triage work.

And as we look at the gaps in the space right now, we're particularly centered on how you build better community-level tools that are really centered on the needs of community leaders and human rights defenders. I think, and hope to talk more about this later, that the deepfake space provides a really great case study of how we understand threats and solutions, and how we align those, as we start to look at an emerging area of visual mis- and disinformation.

That's a really interesting point about accessibility. With the proliferation of for-profit solutions, there are technical tools that rely on machine learning and AI to synthesize and extract patterns from large corpora of data.
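The "very simple contextualization tools" described above, like reverse image search, typically rest on perceptual hashing: near-duplicate images map to nearby fingerprints even after recompression. A minimal sketch of one such scheme, the average hash, follows; the 2x2 "images" are invented grayscale grids standing in for real decoded image thumbnails.

```python
# Toy "average hash": each pixel becomes one bit, set when the pixel is
# brighter than the image's mean. Near-duplicates (e.g. a recompressed
# copy) land at a small Hamming distance; unrelated images usually do
# not. Real systems hash larger thumbnails of decoded image files.
def average_hash(pixels):
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return "".join("1" if p > avg else "0" for p in flat)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

original     = [[10, 200], [220, 30]]
recompressed = [[12, 198], [225, 28]]   # same scene, slightly altered
unrelated    = [[200, 10], [30, 220]]

print(hamming(average_hash(original), average_hash(recompressed)))  # 0
print(hamming(average_hash(original), average_hash(unrelated)))     # 4
```

Because the comparison is a cheap bitstring distance rather than a trained model, tools built on it stay usable at the community level, which is part of why these simple approaches have traveled furthest in practice.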
One of the problems is that there are barriers to entry. First of all, there's often a cost barrier: these tools are expensive, because the companies have to secure API access from various platforms and make a profit on top of that, so they're not really accessible to the people they are meant to help. Evanna, I wanted to turn to you to talk a little bit about this: what is the role of for-profit companies in this space in designing these solutions?

It's a great question. I would actually have to disagree with something that you just said, which is the price. I don't think pricing is ultimately the problem; I think it's access to the social media APIs.
How many people actually have access to the Twitter firehose? Five or six companies, right? And they might be selling their product for a lot of money, but for a lot of other people, it's not the pricing. The reason there are for-profit companies is that there is a marketplace for this — otherwise no one would actually buy. That's just capitalism. And what I like to think is that, as a for-profit company, we can do a lot of the R&D work in computational propaganda that otherwise would not get done by civil society organizations, or by organizations that are supported by donor agencies like USAID. I can guarantee you that if you were to write a proposal to USAID saying, "I would like to use your money for R&D," they'd say no — I don't think you can even find a budget line for that. So I think we do take on a lot of the risk, and our investors take on a lot of the risk in the very beginning. And as soon as there's a marketplace and there's demand, there's a place for for-profit companies. We at Omelas have also considered actually spinning out a nonprofit arm, and actually open-sourcing a lot of our data sets. We do work with think tanks, and we do work with a lot of other nonprofits, and we give it to them either for free or at a 90 percent discount, and that is only to cover the cost of AWS GovCloud and a lot of the storage that we have.
That's really interesting, the point that you have been considering open-sourcing some of your data sets. One of the things that can make it difficult to assess the rigor of various tools is the fact that for-profit solutions will often use proprietary approaches, and this is a field whose heart is really in transparency and in the fact that researchers can show their work. So, Evanna, what would you say is the best way to assess the quality of various tools, given that we may not necessarily have access to how they collect data and how they synthesize it?

So, collecting raw data: I think for most people that's proprietary, but that's only because we're terrified that we're going to get cut off by the big social media companies, right? When it comes to data analytics, though, working with our U.S. government customers, we have to be transparent, because there's this notion of AI as a black box, and the end users actually won't buy or adopt what you have built until they feel comfortable enough. So we have to actually be really transparent, and I think that's something most people don't think about: ultimately we're still catering to a set of end users who — no offense to J.D., because he's definitely an exception — are for the most part not tech savvy. And so they're like, "Well, how can I trust this if I don't know how it works?" And my question is always: well, you use Uber, right? But you don't need to know how they do their route optimization.
But even though we push back on that, we still have to be really transparent: this is how it works, these are the data sources we're taking in, this is our method of getting to the analysis. And a lot of times we compare that to the ground truth, or as close to the ground truth as possible, to make sure that it is actually accurate. So transparency is actually built into the analysis portion.

Thank you. Sam, I wanted to turn to you to talk a little bit more about the case study of deepfake detection and how that technology has evolved.

Thank you, yes. So, for a little bit of context — and I think this goes to how we think about the design of these technologies and how we set expectations — for the last three years at WITNESS we've been trying to follow what we call our "prepare, don't panic" approach to deepfakes. I want to emphasize: I don't think deepfakes are the most prevalent form of visual mis- and disinformation now; where they're used, it's primarily in gender-based violence. But as we look at the tech trends in deepfake synthesis, the ability to create deepfakes, we see the ways in which these will plug into visual mis- and disinformation trends. So over the last three years, we ran a process of consultation with people with lived and professional experience of current trends in mis- and disinformation in sub-Saharan Africa, Brazil, Southeast Asia, the US, and Europe, trying to really ask people what they wanted, how they understood threats, and what they understood as solutions. One of the things we heard consistently about solutions is: we want detection solutions that are cheap and accessible, not a luxury good, and that are also explainable in the circumstances we work in. I think they meant both explainability in the AI sense — how do we understand the algorithm — but also explainability so it's not this black box of "yes, I just told you this is not true" when it looks real. We do hear a lot of people saying, look, we don't trust proprietary solutions, we don't trust the data behind them, and I think that's legitimate, given what we've seen of some of these deepfake detection solutions: they're not accurate, for example, on certain skin tones, because of the dataset and algorithm biases we know very well. I wanted to share a case study that I think really pulls together some of the threads around the challenges in these tools and accessibility.
I just wrote about this in an op-ed in WIRED today. It was a case study from Myanmar that took place in March. A video came out that appeared to show a confession by a senior politician about corruption involving another senior politician. What was interesting about this case is that, generally, I wouldn't trust a video that emerges from a military coup government with someone making this kind of claim. But what happened was that a lot of people said, "This must be a deepfake," and they plugged it into the types of detectors that people have available for this now, which are primarily online detectors: you just stick in the video and it gives you a result. In this case, the result that came back said that this video of the politician was 90 percent likely a deepfake, and people were suspicious because the face looked glitchy, and all these things. Now — and I spent some time looking into the case with other researchers, to try and understand whether this really was a deepfake or, much more likely, the type of forced confession you see under military governments — the problem was that deepfake detection tools are glitchy: you have to know how to use them; they work only on specific methods, so they're not designed for one type of creation tool when you apply them to another; and they don't work well on compressed video, which is what most of us experience most of the time — video that's been laundered through social media. The panic created by people using a tool without knowing how it worked was quite significant: people saying, "They're deepfaking these confessions — how do we understand it?" And so it crystallizes one of the problems, which is that tools that people don't know how to use, or that are not well designed for particular contexts or well explained, can be extremely damaging. I think one of the challenges we have is making sure that we pair up tools with skills, because one of the other challenges we saw in this case was that journalists and civil society in Myanmar didn't have the skills in media forensics. One of the gaps we've been seeing from a lot of our work over the last years is actually a gap around media forensics: how do people really understand a piece of media and apply tools to it? So we saw in this case in Myanmar — and we still don't know for certain whether it is a deepfake or not, though the balance of probabilities suggests not — how the combination of tools that were not well designed for a real-world scenario, that gave results that were confusing to a general public that doesn't understand how to use them, and that weren't available to the key journalists to use with the right training, sets us up in a really difficult situation. So I think it's really incumbent on us now to think very much about that sort of complementarity: consultation to discover what people need, co-design, and then really thinking, as we build the tools, about how to make them available in open-source variants, with the right capacities around them and the right abilities to escalate things. Because one thing we heard as well in a lot of our consultations is that not everyone is going to be a media forensics expert in visual mis- and disinformation, so how do we create much better mechanisms to escalate to people who have better technical competencies?

That's a good point about what happens when these tools are either adopted by bad actors or misinterpreted by people who do not necessarily know how to interpret them. J.D., I wanted to ask you: how do you explain how to interpret the results of some of these tools to people who are not necessarily as tech savvy — for example, in government, or really across sectors?

Yeah. As Evanna pointed out, it can be pretty tricky to get the
message across about the utility of some tools. Like I said early on, people in this community are fairly familiar with the idea of data analytics and with being able to find where the adversary is active in the information environment. But, again, there's a whole new set of tools out there that are coming online, or are already online, that are very useful to us and that we need to put to use, and it can be very difficult to explain these to the end user. Sometimes people don't really know what question to ask. And so we're in this position, as the Global Engagement Center, where we're really pushing the envelope, really at the bleeding edge of technology testing and technology implementation, and one of the challenges, yes, is to get that message back into the U.S. government, but also to work with our industry partners to essentially socialize that information very broadly within their networks. So what we have set up is essentially a feedback mechanism: every time we conduct a test of a technology, we very actively push that information back into the U.S. government, but also much more broadly, and we attempt to get third-party analysis of the results of those tests. I think a great example of this would be our release this year of Harmony Square, which is an online game intended to counter the effects of disinformation among its users. We released it back in November, in English, and now I believe it's in at least six languages, maybe more.
And the user base is growing and growing. We invited Cambridge University to come in, take a look at it, and make sure that it was doing what it is intended to do. Indeed, they did assess that it quantifiably helps the user of the game defend themselves against the effects of disinformation and propaganda. It takes the user through the process of developing a disinformation campaign to disrupt the harmony in a town square. It's all very tongue in cheek, but it works very well. And that process — working with industry to develop the game, working with academia to test and verify the utility of the game, and then pushing that back into the community and into the media — has been essential to the success of that game. So I think that's probably a good example.

Right. Allan, what sorts of creative solutions has your team devised and actually built, because you thought they were missing from the current market?

Yes. For my team — and I just want to highlight what J.D. has just mentioned — what we realized is that human rights and civil society organizations like Code for Africa doing disinformation tracking may need additional capabilities. That is where we can get people like data analysis technologists to review the technology that we have and tell us, "Okay, this technology will be able to give you this kind of result." But at the end of the day, that is never enough, because you're using tools that have been customized for a particular use case, and so there are some tools that we've had to create ourselves. I would say hashtag analysis on Twitter is one of the prime areas, and one of the things that we've struggled with the most is getting alerts on, for example, hashtags that are starting to crop up within different countries, within different jurisdictions. Only recently have alert features been implemented in most of the social media monitoring tools, and we found that to be quite a challenge. So what we did, of course, is we went to Python and started playing around with the Twitter API to figure out: whenever a hashtag is trending, on a daily basis, how do you get that notification? What we built is essentially a scraper that looks at Twitter data and all the trends that have been happening throughout the day, and then, because the researcher will not be sitting there watching Twitter hashtags every single moment, they are able to just review those hashtags and look at those that are suspicious. That is one of the ways we've had to do it. The other thing is media monitoring, which has been a key part of our research process at Code for Africa. We realized, of course, the lack of data from African sources, so we started building a different version of Media Cloud — a media monitoring tool — focused only on African data: ingesting media articles and media sources from all the African countries. Because we are actually on the ground, and our researchers are spread across 22 African countries, it's easier for us to pinpoint within those countries all the media sources we should be ingesting, and that way we are able, at the end of the day, to target only the sources that we are really particularly interested in for all the counter-disinformation efforts that we do.

One of the hardest parts of relying on tech-centered approaches is that certain tools sometimes don't acknowledge their limitations.
Media that reports on, uh their results can sometimes misinterpret, uh some of the method of logical limitations, for example. Uh, you see headlines sometimes, like, uh, 60% of Accounts that are tweeting about. I don't know the pandemic or but relying on, uh, but detection technologies.
Um, I want to again. Go back to Sam. Uh, we see the what do we see with regard to, uh Reporting, uh, deepfake detection and how well uh, does the public and media uh, understand current capabilities? Yeah, And I think I'm glad we've picked up this point around. You know how we translate the results of technology because I think it's really important and and the Myanmar case study I was sharing earlier was a really good example there of How you need to have technical literacy on the bottom of both ocean type investigators in reporting teams, but also to more generally in reporting teams, because in that case, many of the reporters really picked up quite uncritically.
This results. You know, there was sort of a red box around the politicians face in this video that was shared that said 90% probability. It's a deep fake and underlying that you know you have to layer in and say, Wait a second. That's probably an erroneous result. Because You need to understand that deepfakes detection technologies and generally not General Izabal across different formats. They don't work well on compressed videos all of these questions, and I think we also have to understand how we report on technology Also plays interferes. So I think you know deep fakes are a classic example of kind of this idea of Either. We don't know how to detect them, which is one sort of descriptive
trope around them, as if all our technologies fail and we're surrounded by fakes, which is completely untrue and a reflection of weak technical literacy in reporting; that feeds the hype side. Or, when we genuinely do have cases, we don't have enough literacy
to report and say: look, we know from the recent challenge that Facebook led that the leading deepfake detection algorithm had about a 65% detection rate. That's not great. We should be using it as a data point, and then showing our work in the reporting to explain it. I think this is really important with all technologies.
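Sam's point about showing your work can be illustrated with a quick base-rate calculation. This sketch (all numbers are assumptions for illustration, not measurements of any real detector) applies Bayes' rule to show why a single "flagged as fake" result from a roughly 65%-sensitive detector says little on its own when deepfakes are rare:

```python
# Hedged sketch: why a detector's "flagged as fake" verdict needs base-rate
# context before reporting. Numbers are illustrative assumptions only.

def posterior_fake(sensitivity, false_positive_rate, prevalence):
    """Bayes' rule: P(video is a deepfake | detector flags it)."""
    p_flag = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    return sensitivity * prevalence / p_flag

# Assume a detector that catches 65% of deepfakes (roughly the leading score
# in the Facebook Deepfake Detection Challenge), wrongly flags 10% of real
# videos, and is applied where only 1 in 1,000 videos is a deepfake.
p = posterior_fake(sensitivity=0.65, false_positive_rate=0.10, prevalence=0.001)
print(f"P(actually a deepfake | flagged) = {p:.3f}")
```

Under these assumptions a flagged video is still very unlikely to be a deepfake, which is exactly why reporting a raw detector score without this reasoning can mislead readers.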
And in this community we know this well: you need to show your work on how you complement the technological results with human judgment, explaining why, rationally and logically, we should interpret a result in a particular way. So deepfakes are a classic example of both sides: the hype cycle around technology that leads people to be skeptical of things they shouldn't be, and, second, the need for this literacy, both at the OSINT-journalist level and, frankly, increasingly at the generalist level in journalism, so that we can report accurately on these types of tools. Thanks. Yes, it's an important point: we need to think not only about designing tools but also about how to communicate their use and results to the public. We have a few questions from the audience.
This one is for Ivana: "I'm curious about the arms race we are already in, with actors adapting to avoid ever-better detection. How do we get out of this cycle?" So the question is about the arms race in detection technology, in which counter-disinformation technologists are constantly developing tools to detect campaigns while the content becomes more sophisticated and better able to avoid detection.
I think there's a two-part answer to that. The first part is that we need to start working with other players in this space to make sure we're providing the most comprehensive data. For example, my company is really focused on overt activity, and we often work with other companies that are looking at the dark web, at some of the more covert means, so that we can package it together. So I do think there needs
to be a lot more collaboration. The second part is getting out of the arms race by being more focused on how we actually respond. So many people are doing detection, but very few companies and NGOs are actually working on the strategic communication campaigns to respond, or saying: hey, the reason people are falling for disinformation is gaps in civic education and media literacy, so why don't we focus on the root of the problem as well? That's another way of distinguishing yourself in this oversaturated field of detection tech. It's a good point. And Sam, I wanted to get your thoughts as well
on that, because I know you've done some work on it with regard to deepfakes and visual media. Yeah, so definitely, the arms race metaphor is used a lot in the deepfakes world for the contest between developing algorithms to create deepfakes and the ability to detect them. I think it's valid to describe it that way, and valid to recognize it as a technical reality. Underlying that, we should also understand that where we place our bets in any type of mis- and disinformation work should reflect the underlying technical realities. And in the case of deepfakes, the trend is toward better and better deepfakes,
and detection will get more complicated over time. So I think we have to recognize those dynamics. With that said, in the visual mis- and disinformation space we're also seeing a lot of investment in, for example, provenance architecture: ways you can show how media has moved over time. Examples of that are the Content Authenticity Initiative and the Coalition for Content Provenance and Authenticity, which are big industry initiatives. I think that brings us back to another point when we think about technology, and it particularly applies to civil society actors globally. You have a big set of infrastructure and standards being built for what's known as authenticity and provenance, which means tracking where a piece of media comes from and how it's been manipulated over time. In the long run, that's probably one of our strongest measures for dealing with visual mis-
and disinformation: being able to understand better how media has been manipulated over time. The real key question is whose voices are represented in that type of infrastructure building, and I actually think civil society often comes to this discussion way too late, right? Human rights groups are consulted once the product is in market, once the underlying infrastructure has been built and the standards have been set. Have very practical considerations of usage globally been integrated? Have freedom of expression and privacy been central to it? And
so this has been one area where WITNESS has been very involved, trying to be deeply engaged in the standards building around this authenticity infrastructure, to ensure that things we know could be problematic for global civil society and human rights are addressed early. For example, not insisting on identity being attached to a piece of media when you track its provenance, because we know the risks to citizen journalists. But the big point is that civil society needs to be at the table for the infrastructure and standards side before we even get to the tools, the algorithms, and the implementation. Can I add something onto what Sam said? Definitely. Okay, great. I 100% agree with that. I think one of the biggest
frustrations in this space is that we all have very different definitions of what influence operations are, what disinformation is, what malinformation is, and what misinformation is, right? So once we actually have standards, not only on things like algorithms and regulations around what is considered personally identifiable information, it's going to help us. But even more, from a meta perspective, if we can just agree on what each term actually means, I think that would go a long way in helping everyone in this space, not just for-profit companies but also civil society organizations. That's a great point, and it leads me to my next question, which I'm going to ask J.D.: how do we agree on definitions before we start designing technology? How do we agree on these definitions across sectors? Yeah, I think this is a discussion that's been going on for pretty much a decade now: how do we define
disinformation, misinformation, malinformation. I think you're not going to be able to define it substantively, right? You're never going to come to grips with the idea of what is politically accurate information. Instead, you can make a decision about what is technically authentic information, and that's the direction most of the social media platforms have gone when they have attempted to conduct content moderation or take other actions against disinformation online.
I think it's a very complex problem. And the U.S. government certainly is not in the business of judging truth from fiction online; instead, again, we're looking for inauthenticity and making sure that our adversaries are not using those tactics against us. And what happens when adversaries use some of these tools against us? Thinking about foreign disinformation campaigns in particular, are there cases, J.D., where we know that a state adversary has used some sort of social media monitoring tool for launching disinformation campaigns rather than countering them?
Off the top of my head, I'm not thinking of an individual case where social media monitoring itself was used as a technological tool for disinformation. But in terms of responses, over time the Global Engagement Center has shifted its focus from that sort of tit-for-tat counter-disinformation messaging effort to a whole-of-society effort, where we're attempting to integrate, yes, those narrative solutions, but also to bring in industry where it's appropriate, and the social media platforms where it's appropriate,
to do what they are legally and ethically capable of doing. We're also bringing in the broad array of government capabilities, whether it's the FBI conducting arrests, Treasury conducting sanctions, or DOD doing a show of force, things like that. So we're really focusing on responses in a different way. And I think Ivana is right that we can't simply sit back and monitor; we have to position ourselves to actively push the message, making sure that we have control over the narrative and are not in constant response mode. Yeah, I think that's it.
Thank you. Alan, we have a question from the audience that I would really like to ask you: do you have any tips for researchers on how to evaluate the various OSINT tools that you use in your day-to-day investigations? Yes. I think one of the things that you of course need to test
out is the accessibility of the data in that particular tool. And one of the things I think many researchers and investigators fail to do is to actually audit the features of that particular tool. You might push out a report attributing a particular piece of disinformation content to a certain institution, organization, or individual based purely on what the tool is telling you, without verifying it yourself. That's one of the things researchers fail to do. So my tips on how to do an evaluation: number one, look at what data sources the tool has. How much does it pull, and is the information complete? You can test that, or corroborate the information you're getting from one tool against another tool with the same capabilities; that way you're able to see whether the results hold up. For commercial tools, that can be impossible, because you need proprietary access granted by the vendor. But I think for
open-source tools it is possible, and we have very many tools with the same functionality. So doing that corroboration across two or three different tools, comparing the results you're getting, can be really important. Thank you for that. We have about a minute and thirty seconds left, and I wanted to ask a last question of each of you: if you could solve one design problem in tools development in this space, what would it be? Ivana, let's start with you.
Interoperability. I want all the tools to have an API, a RESTful API, that can actually be used. We mentioned that at the beginning. And J.D.? I think my answer is somewhat similar: we have an array of tools that are now available, and unfortunately policymakers and users are only thinking about them singly. We really need to be able to bring these tools
together and use as many of them as possible to face the multi-modality of the disinformation problem. Thank you. Sam, over to you. Investment in skills around tools: much more diverse investment in skills to support a greater range of people in using these tools is the important one. And Alan? As a user, I don't think I would have that design perspective, but I think investment into how we look at encrypted messaging platforms would be a really interesting space to invest in in the near future.
A great point to end on. Well, thank you so much for joining us for this panel, Ivana, Sam, J.D., and Alan, and thank you to the audience for listening. We really have a lot of work ahead of us in terms of tools development, but all the points you raised are really important. I'm going to turn it over now to my colleague Rose to introduce the next panel.