Welcome to Agents of Tech, the podcast where we explore the groundbreaking innovations and ideas shaping our future. I'm Autria Godfrey here in Washington DC, and today we delve into the critical intersection of AI and cybersecurity. As artificial intelligence evolves at a rapid pace, it's not only revolutionizing industries but also redefining the very landscape of cybersecurity. From combating sophisticated cyber threats to ensuring AI systems themselves remain secure, this topic sits at the heart of some of the most pressing challenges of our time.
Today, we are asking: how do we protect vulnerable technology, infrastructure, and digital systems in the age of AI? We are joined by Dr. Siwei Lyu, Director of the University at Buffalo Media Forensics Lab, to discuss his work to empower individuals to recognize and respond to AI-related threats like deepfakes. And also with us today is Dr. Damon McCoy, Co-director of the NYU Center for Cybersecurity, who discusses his work spanning the economics of cybercrime to leading the Cybersecurity for Democracy project, a nonpartisan initiative that exposes online threats and develops strategies to counter misinformation.
If you are enjoying our show, please don't forget to subscribe to Agents of Tech. This helps develop the show and shine a light on cutting edge research. We thank you. Joining me now is Laila Rizvi, the Director of Strategy and Research here at WebsEdge, the company behind this podcast. Laila, pleasure to have you with us from London today.
It's great to be here again. Really interesting conversation on the horizon. I was going to bring up the topic of deepfakes. I mean, let me just ask you: you're scrolling on social media.
Have you come across a video that's kind of made you stop and say, is that really that person? Did they really say that? I think the first deepfake I encountered, maybe a year ago or so, was those Tom Cruise deepfakes, which were incredibly realistic. I'm not sure if you saw them, but that was the moment in my head where I thought, okay, this is going to be a problem. Do we need to lock down videos and images of us online, because they could potentially be used for committing fraud, or for pretending to be us to get money out of family members in fake emergencies?
We live in a digital world, and I think that's what's so worrying, because when it comes to cybersecurity, I personally don't treat it in the same way as physical security. You know, I always check my door is locked, I have the alarm on. But when it comes to passwords, I think, gosh, it's a nightmare to remember seven different passwords. And I wonder if we all need a technical level-up, because perhaps it's just that we don't really get it. I mean, I definitely think it is maybe just a technical misunderstanding of what is actually at stake, and you kind of just take it for granted. You mentioned whether you should be protecting your image or any of your videos that are online; the way that AI works, obviously, the more images and videos of you that are out there, the easier it is for these deepfakes to be created.
The more that your voice is out there, the easier it is for your voice to be replicated. You know, and I don't know how things are in the UK, but here in the United States, for example, deepfakes are contributing to billions of dollars in fraud losses, in particular, having to do with Elon Musk. Have you seen this? Yeah, I did come across it. And like you said, that makes sense because he's plastered everywhere.
His face, his images, his videos. So, you know, it's easier to make more realistic deepfakes to commit fraud. Billions is crazy given how early we are in the technology. The consulting firm Deloitte is estimating that fraud losses could reach $40 billion in the United States by 2027, which, you know, is not that far away. And it's interesting because the AI firm Sensity found that Elon Musk was the most common celebrity used in those deepfake scams.
But you think, if Elon Musk is asking you for some money and he's one of the wealthiest people, you probably shouldn't give it to him, right? But we are going to get into all of that with some of our experts today, and they'll weigh in on how we can parse out what's real from what's a deepfake, which is maybe a little bit harder to do than it might seem. And joining us now to do a deeper dive on these deepfakes and how we can combat them is Dr. Siwei Lyu, Director of the University at Buffalo Media Forensics Lab. Dr. Lyu, thank you for your time today. Thank you very much for having me. Absolutely. Okay.
This is such a fascinating point of conversation - the issue of deepfakes and how we can counteract them. You've developed the DeepFake-o-Meter to try to help people detect manipulated media. How does this tool work, and how can individuals incorporate it into their daily lives to protect themselves and to verify the authenticity of the media they encounter and consume? Well, just to be more precise, DeepFake-o-Meter is not a single tool; it's actually a platform, a basket of multiple detection methods for different kinds of deepfake media, including images, audio and videos. So DeepFake-o-Meter provides a place where users can have access to the most cutting-edge deepfake detection capabilities developed in recent research publications. And we shorten that distance from the user to the researchers.
So all of those capabilities will be available to the users through this platform. In doing so, we have actually incorporated 30 different types of detection modules, so you have a choice; you don't rely on a single module. And this came from our observation that, although there are quite a few commercially available deepfake detection services, most of them require you to pay for the service, and they are closed source.
And the methods are usually not transparent, so you don't know what methods are being used to analyze the media you uploaded. So we provide this 100% open source, free, and transparent platform to connect the users with researchers. And to answer the second part of your question, about everyday usage: DeepFake-o-Meter now has a web-based portal. We also have a mobile version of the website, and a mobile app is being developed, and everything will be free.
The practice is to identify a piece of media, upload it to the service, then select your detection modules and get your results. So it should be fairly easy to use. Yeah. That's awesome. I love that it's free and open source. So, Dr. Lyu, I want to come on to the challenges that you might encounter with the advancement of deepfake technology.
Obviously, we've seen OpenAI's Sora model. There's more noise than ever before. So what would you say are the biggest challenges when it comes to developing user-friendly deepfake detection tools especially? I know you mentioned that it's open source and transparent. Does that make it harder? Are technologies which are open source more susceptible to being outmaneuvered, let's say? Ever since we launched the service, in late January, early February, we have had what I will say is misuse, possibly misuse, at least 4 or 5 times.
That's when we observe an abnormally large number of continuous accesses to the system. And we also noticed that the files being uploaded to the system all had similar stamps in their names, with slight variations. So there was always a risk of being misused in that way, because we put ourselves open and free to the general public. But that being said, I think that's necessarily a part of this whole development cycle, because we believe sunlight is the best disinfectant, basically. We put the detection algorithms open source
to make sure they will be used by the largest number of people, including people with malicious intent, and that actually helps us gain experience to improve the detection methods. So I don't see that necessarily as a bad thing; it's a challenge for sure. There are two other big challenges. Yeah.
One is essentially a lack of resources. We want to make this free for the public, but it has mostly been the effort of me and my students, volunteering our time to do this. For the long-term sustainability of the project, this may not run as long as we wish, but we will try. And the other thing is, the technology for generating deepfakes just grows so rapidly, and we struggle to keep up, and by "we" I don't mean my group only, I mean the research community of deepfake detection researchers. So that is, I will say, the single most serious challenge to this ongoing work.
Definitely. Yeah. The rate of advancement is just unprecedented, and it's about ensuring that we have the resources for each part of it: the safety, the cybersecurity, the detection. Right. I just want to move on now to deepfake detection and fairness in terms of data. Do you think deepfake detection has a fairness issue when it comes to various demographics? Does the detection of deepfakes from certain demographics vary? And what do you think of that? It actually does. We had work published at a conference early this year where we looked at the fairness issue and built-in biases of deepfake detectors. Detection accuracy should not be, but was actually, affected by some of the contextual information: the background, and the ethnicity, gender, and age of the subjects involved. For that study, we focused on deepfakes involving human faces.
And we found that certain groups of subjects have a higher error rate in comparison with others. And again, deepfake detection itself, even though it's a beneficial use of AI, is not free of the problems of generative AI. It can still have all the fairness, transparency, and accountability problems. Yes. What is the approximate accuracy rate of your DeepFake-o-Meter, or your models, at the moment on average? It depends on the modality.
I think we have the best performance on videos right now, and that's specifically for lip-syncing videos, where you take somebody's video and lip-sync an AI-generated voice onto that person's face, or face-swapping videos, where, again, the face was generated by AI and then spliced into the original video. I do not mean algorithms developed only by us, because the platform includes methods developed by third-party researchers from around the world. I think the average accuracy up to this point is, I would say, upper 80s, even lower 90s.
To be clear, that combines both false positives, meaning a real video mistakenly tagged as fake, and the other way around, false negatives. So videos are somehow, surprisingly, easier to detect. Audio is slightly lower; audio is in the mid 80s. And then images are where we have the biggest trouble, because that's the area where we're seeing a lot of development, with new models coming out almost on a monthly basis.
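For listeners who want to pin down those terms, here is a small, illustrative Python sketch (using invented records, not DeepFake-o-Meter output) showing how false positive and false negative rates are computed for a real/fake detector, including the per-demographic breakdown that the fairness study mentioned above is concerned with.

```python
# Illustrative only: error rates for a binary real/fake detector,
# overall and broken down by demographic group. Records are invented.
from collections import defaultdict

def error_rates(records):
    """records: iterable of (true_label, predicted_label), labels in {"real", "fake"}."""
    real_total = fake_total = fp = fn = correct = 0
    for truth, pred in records:
        correct += (truth == pred)
        if truth == "real":
            real_total += 1
            fp += (pred == "fake")     # real media wrongly flagged as fake
        else:
            fake_total += 1
            fn += (pred == "real")     # fake media missed by the detector
    total = real_total + fake_total
    return {
        "accuracy": correct / total,
        "false_positive_rate": fp / real_total if real_total else 0.0,
        "false_negative_rate": fn / fake_total if fake_total else 0.0,
    }

def per_group_rates(records_with_group):
    """records_with_group: iterable of (group, true_label, predicted_label)."""
    groups = defaultdict(list)
    for group, truth, pred in records_with_group:
        groups[group].append((truth, pred))
    # A fairness audit compares these rates across groups and looks for gaps.
    return {group: error_rates(recs) for group, recs in groups.items()}
```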
I want to ask you, Dr. Lyu: Laila and I were just talking earlier about how problematic deepfakes are and the amount of fraud losses we're seeing in the United States because of people being duped, essentially, by Elon Musk videos in particular. In your opinion, how big of a problem are these deepfakes? And for someone like me, who's just your regular average citizen consuming media in a normal amount via Instagram and TikTok and all the social media sites, what percentage of what I see on a daily basis would you say are deepfakes? I do not have accurate numbers for this.
I don't really think anybody has. A qualitative answer to that is it's not as big as we thought. Okay. It's not as serious as some doomsayers will say. Making a good deepfake is still pretty hard.
Even though we have very powerful software and algorithms, it's not completely foolproof yet. Somebody still needs a little bit of understanding, needs to know the trade and have some experience, to make a decent deepfake. But that's only for today. Maybe down the road it becomes easier. On the other hand, there are definitely more than what we are seeing reported in the media. So we are seeing the tip of the iceberg of all the deepfakes on social media.
But the whole iceberg is not as big as the ocean. So in this ongoing battle, then, between the deepfake creators and the detectors and those in the prevention arena, how can we empower folks like me and Laila with the tools and the knowledge that we need to not only respond to those types of threats, but also be on the lookout, I guess, for future techniques? I started working on the technologies for detecting manipulated media, and then I started to realize that this is not a purely technical problem, because in the very end the users are part of it; they are victims, but also part of the problem.
We need to have a better awareness campaign for the general public about the situation. As I mentioned, a lot of the time deepfakes sneak into our perception only because we didn't set up the correct alarm in our brain. It's the same thing as, if you remember, twenty-some years ago when email was a new thing and spam email misled a lot of people. That's because we were so fascinated with the new technology that we didn't realize it could be misused.
So the same thing is happening now with generative AI. That's why I started to devote a lot of my work and my time to the effort of educating, or informing, the general public about generative AI. We made educational videos, some innocuous deepfake videos, images, various kinds of media, to increase the general awareness of the public, and I think with some more critical thinking in our brains, we can protect ourselves better. Combined with that is the technical development of detection technology, and also proactive techniques like watermarking, content authentication and whatnot. Yeah, yeah.
No, I think that's such a good point you make about the spam email, Dr. Lyu, because I've always kind of argued that we've had media forever, and there has been misinformation and disinformation across the different mediums. I mean, print media can also carry disinformation and misinformation, right? Because you're obviously in this field, what are the personal techniques that you use to critically think and identify deepfakes or misinformation produced by this new technology? My first work, and probably the world's first work on detecting deepfake face-swapping videos, was based on observing the lack of eye blinking.
That was in 2018, when we saw the first examples of deepfakes. They were mostly pornographic videos, where AI-generated faces were planted into other videos. There were no realistic eye-blinking motions in those videos, and that is because the training data for those models was grabbed from images.
And when somebody uploads an image, it is usually with the subject's eyes open. That work got reported by the media, and people started to learn about this. And then a side effect occurred: when people see a video that has eye blinking, they say this is not a deepfake, because the eyes blink. The problem is that the producers, the makers, learn from this detection method; they augment their training data and fix that problem. With that caution in mind, there are things that current generative AI models, although they are very powerful, still cannot reproduce perfectly. If I'm only allowed to look at a single piece of media, like an image or an audio or a video, without fact-checking on the internet, I usually look for things that require a dedicated understanding of the physical world or the human body. I think up to this point, the hardest parts for AI models are hands and feet. But that being said, again, without running into the danger of over-relying on these attributes, I'll say this may be fixed in later-generation models.
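For a sense of what a cue like the 2018 eye-blinking observation looks like as code, here is a minimal, illustrative sketch, not Dr. Lyu's actual method, that flags videos with implausibly few blinks using the standard eye-aspect-ratio heuristic; `get_eye_landmarks` is a hypothetical helper you would back with a facial-landmark library such as dlib or MediaPipe.

```python
# Illustrative sketch only: flag videos with implausibly few eye blinks.
# get_eye_landmarks(frame) is a hypothetical helper that returns a (6, 2)
# numpy array of eye landmark coordinates for the subject in the frame.
import numpy as np

EAR_THRESHOLD = 0.21    # eye considered "closed" below this aspect ratio
MIN_BLINKS_PER_MIN = 2  # real people typically blink far more often than this

def eye_aspect_ratio(p):
    """p: numpy array of six (x, y) eye landmarks, ordered around the eye."""
    vertical = np.linalg.norm(p[1] - p[5]) + np.linalg.norm(p[2] - p[4])
    horizontal = np.linalg.norm(p[0] - p[3])
    return vertical / (2.0 * horizontal)

def count_blinks(frames, get_eye_landmarks):
    blinks, eye_was_open = 0, True
    for frame in frames:
        ear = eye_aspect_ratio(get_eye_landmarks(frame))
        if ear < EAR_THRESHOLD and eye_was_open:
            blinks += 1           # open -> closed transition counts as one blink
            eye_was_open = False
        elif ear >= EAR_THRESHOLD:
            eye_was_open = True
    return blinks

def looks_suspicious(frames, fps, get_eye_landmarks):
    minutes = len(frames) / (fps * 60.0)
    blinks_per_min = count_blinks(frames, get_eye_landmarks) / max(minutes, 1e-6)
    return blinks_per_min < MIN_BLINKS_PER_MIN
```

As the interview notes, deepfake makers quickly learned to fix this artifact, so a heuristic like this is only one weak signal among many.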
So I think the most effective protection is still: be aware that AI can be used for creating images, voices, faces. I want to ask you about the balance of AI in cybersecurity. Obviously, AI can enhance cybersecurity defenses but also, at the same time, be weaponized for sophisticated attacks. So how do you perceive the balance between continuing to advance these AI capabilities for good, but also preventing their misuse? We have seen a lot of issues in the past in science and technology running into the same situation; take, for example, nuclear technology, nuclear science. On one hand, we can use nuclear power to generate electricity, powering our society; on the other hand, it can be used as a weapon of mass destruction. AI is similar in the sense that it is a dual-use technology.
There are a lot of beneficial sides to it, and there is a misuse, and possibly abuse, side of it. So I think the balance is basically the path we have taken in the past for nuclear technology: promote its positive uses, promote the development of that side of the technology, and then put regulations in place and be very aware of its potential misuses. Dr. Lyu, I also just wanted to get into the secure training of models, in terms of what the most effective methods are that you have used to help protect against the exploitation of models. An adversarial attack basically means somebody creates a deepfake and tweaks it just a little bit, so it looks visually the same as the original deepfake media. Yeah.
But when you present this to the detector, the detector will make a wrong decision and say this is real. That happens because the machine learning model we use for deepfake detection is itself a complicated deep neural network, and we do not have a lot of understanding of how and why it makes certain decisions. That gives adversarial attackers the space and opportunity to manipulate their fake media to make it, quote unquote, look real with regard to the model. As a countermeasure, we incorporate something called adversarial training. This is an established machine learning technique where we preemptively put protection measures into the model. With the models we build in my research group, we always include this adversarial training procedure, so there are roadblocks, to a certain extent, to these kinds of manipulations.
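As a rough illustration of the general idea, and not of the lab's specific pipeline, here is a minimal PyTorch-style sketch of adversarial training using FGSM perturbations; the detector model, optimizer, and batches of images and labels are assumed to be supplied by the caller.

```python
# Minimal adversarial-training sketch (FGSM), assuming a binary real/fake
# detector `model`, an `optimizer`, and image batches normalized to [0, 1].
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    # 1) Craft adversarial versions of the inputs with the fast gradient sign method.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)   # labels: 0 = real, 1 = fake
    loss.backward()
    adv_images = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

    # 2) Train on a mix of clean and adversarial examples so small
    #    perturbations no longer flip the detector's decision.
    optimizer.zero_grad()
    clean_loss = F.cross_entropy(model(images.detach()), labels)
    adv_loss = F.cross_entropy(model(adv_images), labels)
    total = 0.5 * (clean_loss + adv_loss)
    total.backward()
    optimizer.step()
    return total.item()
```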
I know we touched upon it in conversation earlier, about tagging or watermarking AI-generated content. Do you think, as more AI-generated content starts to proliferate, it would be feasible for us firstly to collaborate with AI developers to have those watermarks, or do you think we're going to have to start tagging the stuff which is actually real? Well, I think we need both. No single measure will be able to work on its own. I actually think the creators of generative AI, of synthetic media, are at least open to having their media be traceable to a certain extent. We have focused mostly on the negative impacts of generative AI.
On the other side, it has a huge beneficial effect, and especially it becomes a new means of creativity and expression. I have observed primary schoolers use generative AI to try to visualize their stories or their ideas, and it actually opens up the creativity and expression in their minds. We can treat generative AI almost like the way artists use a particular paint brush or canvas to make a painting. So tracing to a specific model is almost like saying this painting is made with this type of paint brush, right? Yeah. I mean, this is independent of, or I would say not much correlated with, the content I'm expressing there.
So I think a watermark just saying this comes from, say, a Stable Diffusion model or a Flux AI model does not impact in any way First Amendment free speech rights, because this is about the medium, not about the content. Right. And secondly, I think, with more generative AI being used as a creativity tool, those authors have the right to say this is my work, right? And having a watermark might actually help them claim that right. The purely negative use, I mean the use of deepfakes with malicious intent, is a whole different issue. That probably needs tagging and detecting to identify.
But with that method, are you not concerned that we might unintentionally infringe upon free speech rights? Or maybe that might lead to some unintended censorship? I think it comes down to the technical level of what we label, how we label, and what kind of information we put into the watermark that can be used to trace back to the authors. And I think the key word here is choice. The author can choose: the minimal level is to say this is synthetic, without telling anybody that it is coming from this model, or coming from me, or from this computer or this browser. But they can also have more detailed information embedded into those watermarks.
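To make that choice of disclosure level concrete, here is a small, hypothetical sketch that writes a creator-chosen provenance label into a PNG's metadata with Pillow; real provenance systems rely on robust invisible watermarks or C2PA-style manifests rather than easily stripped metadata, and the key names below are purely illustrative.

```python
# Hypothetical sketch: embed a creator-chosen provenance label in PNG metadata.
# Real systems use robust invisible watermarks or C2PA-style manifests;
# the metadata keys below are illustrative, not a standard.
from PIL import Image, PngImagePlugin

def label_synthetic_image(in_path, out_path, disclosure="minimal", model_name=None):
    meta = PngImagePlugin.PngInfo()
    meta.add_text("synthetic", "true")             # minimal level: "this is synthetic"
    if disclosure == "detailed" and model_name:
        meta.add_text("generator", model_name)     # optional: trace back to a model
    img = Image.open(in_path)
    img.save(out_path, pnginfo=meta)

# Example: the author chooses only the minimal disclosure level.
# label_synthetic_image("art.png", "art_labeled.png", disclosure="minimal")
```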
Right. Well, Dr. Lyu, thank you so much for your time today and for all of your information about the DeepFake-o-Meter, and for laying out for us how big of a problem deepfakes are, or maybe are not. We certainly appreciate all of your insights today.
And thank you for your time. Thank you very much for having me today. It's a great pleasure. So, Laila, I thought that that was some really encouraging news from Dr. Lyu there. He doesn't seem to be, you know, sounding the alarm about how prevalent of a problem these deepfakes are, I think in the grand scheme of things.
What do you think? Yeah, I think it was reassuring, firstly, that deepfakes aren't actually as prevalent as everyone seems to be scaremongering about, and also that, a lot of the time, even when they are prevalent, it's not in a malicious way. You know, we just wrapped up the presidential election here in the United States. The conversation around deepfakes, I think, reached a fever pitch during the election period, because, at least for those of us who consume the majority of our news online, there was kind of this warning to be on constant lookout for deepfakes when it comes to the political candidates. So I feel like it was very top of mind for that specific time period.
But, yeah, kind of like you said. It makes me feel better. Reassuring is a very good word. The point that we kind of alluded to in the discussion is that misinformation has been around for centuries, literally, from the advent of the printing press.
Anyone could write down whatever they wanted, whether it was true or false, and, you know, print it en masse. Right. So I think we as humans are pretty malleable when it comes to the information we're consuming. I do think, obviously with AI, the reason why it strikes a little bit more fear is because social media, that's just humans talking to humans, right? Whereas AI might be integrated in some of our most critical infrastructure.
So I don't think we need to fear misinformation and disinformation so much, as long as culturally we are constantly pressing the fact that there is misinformation out there, there is disinformation: be aware of it. I think it comes down to empowering the user; everyone has a brain to make those decisions and to evaluate the things that come in front of them.
And I think it's about educating the user, educating everyone to know that, okay, yeah, you might be susceptible to fraud. Yeah. Public awareness is a huge, huge component, and hopefully we're helping with spreading that public awareness. It's our pleasure now to welcome Dr. Damon McCoy, Professor of Computer Science and Engineering at NYU Tandon School of Engineering and Co-director of the NYU Center for Cybersecurity, to weigh in a little bit more. Dr. McCoy, thank you for your time today.
Thank you very much for having me today. Well, it's such a fascinating topic. We’ve really done a deep dive on it here on Agents of Tech. Can you kind of explain why critical infrastructures are more at risk now with the advancement of AI technologies? And what do you think makes these systems so vulnerable? Sure.
So the concern with AI is that it allows lower-cost attacks to happen. Before, it was really only major nation-states that could probably penetrate critical infrastructure, but AI may put that within reach of many more actors. So, Dr. McCoy, I wanted to get into
data poisoning, and how data poisoning poses a significant threat to AI systems. Firstly, could you explain a little bit more how that might happen, but also how we can safeguard against it, especially now when data sources are exceptionally diverse but also somewhat decentralized? So when we're looking at advanced systems like ChatGPT, they are basically scraping the web and collecting tons of written text from every nook and cranny of the internet. And then they're feeding that in, and that's what gives the intelligence to those types of systems, to be able to answer a huge range of questions.
And so they're getting all this information from these untrusted sources, and that's obviously a danger. Just digging a little bit more deeply into that: how can we design architectures which are more resilient to that data poisoning? Yeah, it's difficult, because there's a big trade-off: again, the more data we have, the smarter it seems these systems become. But if we restrict where we get the data from to, say, more vetted sources, then we're losing a big source of information and training data for these AI systems.
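As a toy illustration of the trade-off Dr. McCoy describes, here is a small sketch that filters scraped training documents against an allowlist of vetted domains; the domain list and document format are hypothetical, and the stricter the list, the more training data you give up.

```python
# Toy sketch: keep only scraped documents whose source domain is on a vetted
# allowlist, one blunt (and lossy) defense against training-data poisoning.
# The allowlist and document structure are hypothetical.
from urllib.parse import urlparse

VETTED_DOMAINS = {"wikipedia.org", "arxiv.org", "gutenberg.org"}  # example only

def is_vetted(url):
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in VETTED_DOMAINS)

def filter_corpus(documents):
    """documents: iterable of dicts like {"url": ..., "text": ...}."""
    kept = [doc for doc in documents if is_vetted(doc["url"])]
    # The trade-off: the stricter the allowlist, the less (and less diverse)
    # data remains for training.
    return kept
```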
So, you know, the obvious answer is to restrict where they're getting their information from. But that would become problematic in terms of making them more and more intelligent. So can we talk about some real-world examples of AI being used in cyber attacks? Because I think for the general population, we think of this as something that's not going to actually have an impact on our day-to-day lives, but it could be something as simple as a cyber attack on the automotive industry. Yeah. So those are certainly possible.
But I would say that people in their day-to-day lives are going to start being impacted by AI. So, you know, those annoying calls that you get trying to scam you out of money: right now those are normally done in a call center somewhere, and they have to pay people. But more and more, those kinds of attacks are going to be done by AI systems.
You know, if we have a banking app or some sort of financial app on our phones, that's a situation where we could find ourselves the victim of a cyber attack there as well. I would say that it's easier to trick people into giving you the money than for the cyber attacker to directly try to take it without duping you. And we've been seeing that these AI systems are incredibly persuasive.
They're very good. You know, they're very manipulative and very good at achieving their goals. So they might be very effective at these scamming attacks. They're very good at not taking no for an answer. Yes. What would you say are the key steps that larger organizations like government and industry can take to protect against AI-enhanced cyber threats? Yeah, I would say one of the huge threats to companies is insiders.
So I've been hearing of more and more people taking sensitive internal documents and feeding them into these systems to summarize them. And if the system isn't set up right, that data ends up becoming fodder for training and then potentially leaks out. Well, and how are they safeguarding against that? Because obviously, I think one of the biggest challenges is that as a company, you want to be able to leverage these tools because you don't want to be left behind, but simultaneously, you don't want to compromise your security.
So what is the solution going forward in that respect? How do you safeguard while still being able to leverage the advantages of the tools? Yes, I think a lot of this is just setting up paid subscriptions and working out contracts with these AI companies so that, when the data is sensitive, it doesn't end up being used for training data and doesn't get retained in their systems.

So when you're talking about companies beefing up their cybersecurity investments, what is your advice? Is that something that they should leave to these AI companies to handle, or should they invest in their own cybersecurity? I mean, completely outsourcing anything is probably a little bit dangerous, because normally cybersecurity solutions aren't one size fits all. So you should at least have some expertise in-house to make sure that you're getting cybersecurity solutions that fit the needs of your particular business, especially if you're in a more regulated business like the finance or health industry.

In terms of the policy and regulation front, what would you say are the most effective measures for protecting our essential systems, and where might there be key gaps which aren't being addressed as of now? Yeah, it's definitely the Wild West out there in terms of legislation and regulation. And it's hard to know exactly where it's going to land and whether it will even be effective, because these AI systems are being built out so fast that any regulation or legislation might end up lagging far behind.
So I guess one thing is to look at liability schemes, to see if we can work out a proper liability scheme for AI systems. Yeah. Because I guess you don't want to stifle technological innovation either.
Like in the EU, the regulation policies are pretty aggressive, to the extent that I don't think the EU or the UK is getting Sora anytime soon. So, you know, I guess it's just balancing that as well. Yeah, it's going to be a learning experience and very tricky at the beginning. Well, kind of on that front, the democratization of AI tools means that even low-skilled attackers can launch high-impact attacks.
So what measures do you think could be taken to prevent the misuse of AI technologies without hindering accessibility? I mean, a lot of the AI models are still controlled by big companies. And at the really high end of AI, these companies can and should place safeguards on their AI systems. But again, it's very difficult for them to get this right.
But as time goes on, this is going to be more and more democratized, and there will be open models where those safeguards can't be put on them. Is it a situation where you can't put the toothpaste back in the tube, or are we already past that point? Yeah, it's going to be very hard to contain and control these systems.
And so we're going to need to figure out how to do this, but it's going to be very difficult given the rate of innovation that we're seeing. And I guess if the genie is out of the bottle, we have to build a culture that emphasizes cybersecurity awareness across all of our communities and society. How do you think we should go about building more of a culture of cybersecurity awareness? Because I was chatting to Autria earlier today about how, when it comes to physical security, we're all super aware that you've got to lock your doors and put your alarms on, or no one would leave the house. But when it comes to passwords, honestly, so many people have said they cannot be bothered to think of seven different, really strong passwords because they can't remember them. We're just so much lazier when it comes to it.
So how do we fix that? Is it a technical level-up that everyone needs, or is it just awareness? I think it's a combination of multiple things. I mean, when you look at passwords, that was a poor design; humans weren't meant to remember lots of different random strings. So I think, yeah, there's an education piece, but there's also the fact that the people designing this need to think realistically about the usability component: how can we get usable security into these systems? Yeah. Because I think your point on usability is interesting, because I guess what got everyone's attention with ChatGPT was obviously not only the technology but the user interface of the technology, which, I mean, is what got me hooked; the UX was just great.
Like it was easy, and that's essentially what set it off. So I guess we kind of have to mirror that in cybersecurity. But yeah. And I think, going back to regulation, establishing international norms for cybersecurity and AI is going to be super challenging because there are just, you know, global tensions.
How do you think we can foster a kind of global collaboration when it comes to cyber security norms, given the disparities in legal frameworks and regulations, or is it not possible? It's really a struggle. I mean, we've been struggling for so long to try and establish cybersecurity norms even before the explosion in AI and so that's going to continue to be a hard sore point. I mean, I guess we can look at other big threats and, you know, look at deterrence theories and things like that. And that seemed like it was somewhat effective in the past when we had very dangerous weapons.
How does it vary globally? Even before the explosion in AI, what did you see as the global variation in cybersecurity norms across countries? You know, obviously there are all these operations where everyone wanted to spy on everyone else. And I think how we dealt with it is, when it got too high, we started using sanctions and monetary devices, again as that deterrence lever: there are going to be consequences if you use these things in what we consider to be escalatory ways. Don't you think it's going to take a global international attack that honestly brings technical infrastructure to its knees in multiple countries simultaneously before people will get on board with having regulations that are unified around the world? This is one of the sad trends in cybersecurity: oftentimes, when things get really bad is when people step in and things start to improve. And this is kind of human nature again, right, where we want to ignore the problems until they become too big to ignore.
So I agree with you that real change is probably going to happen when multiple countries agree that things have gotten to a level where the problem needs to be addressed. Yeah, and the solution is going to be reactive rather than proactive. That's unfortunately the trend that we see in cybersecurity; it's hard to be proactive. And again, sometimes it's hard to proactively invest in cybersecurity, because until you know that there's a really dangerous threat, it's hard to motivate the investment. Human behavior is often the weakest link in cybersecurity.
So how can insights from psychology or maybe sociology inform the development of AI systems and security protocols that are a little bit more user-centric but also secure? Speaking as the researcher in me, we definitely need more collaborative work, because I think one of the big failures is that passwords and other cybersecurity measures were designed by technologists without consideration of other disciplines, like psychology, that could have told them it was not a good idea to have people memorize hundreds of random strings. So I definitely agree; ideally, cybersecurity is more multidisciplinary now, and we have different perspectives that can help guide the usability of these systems. So this is something where I hope we have learned our lesson from the past, like passwords, and that we will do a better job in the future.
What is something, whether it's a concept, a methodology, or just something in your area of expertise, that you wish more people knew about when it comes to cybersecurity and protecting themselves? Yeah. A lot of my work looks at incentives and economics and that sort of thing. So I wish that people understood that oftentimes cybersecurity isn't just a technical problem; it's a problem of misaligned incentives, sometimes economic market failures and things like that, and a lot of the gains in cybersecurity could come from addressing those problems. The technical problems are also there.
But sometimes there are other things that stymie progress besides technical limitations. And just a follow-up from that: what would you say is a historical example of economic drivers of failures in cybersecurity, as opposed to technical ones? I guess an interesting example, tying this into regulation as well, compares the UK and the US. There's an interesting example in liability for debit card fraud.
And so the US basically put that liability on the banks, so the banks have to make the customers whole. Whereas in the UK, the customers are largely liable and the banks likely won't make you whole. And so this spurred investment by US banks in anti-fraud systems and drove down fraud levels, whereas in the UK, since the liability wasn't there, fraud levels remained elevated. All right.
Well Damon, thank you so much for your time today. We really appreciate it and appreciate all of your insights. Very fascinating conversation. Thank you. It was great chatting with you both. Thank you. Have a good one. Thanks.
So Autria, what did you think of that? I mean, I think it's interesting how, on a lot of our questions about how do we fix this, or where are the checks and balances, or what about global collaboration, he's an expert on this topic obviously, and so well versed in it, and even he is like, yeah, it's an issue, it's a problem. And it sounds like,
you know, even when he said, when we were talking about global collaboration and everybody kind of getting on the same page, he kind of admitted, I think it's going to take a serious international attack where multiple countries are affected all at once before everybody gets on board and says, oh yeah, we need to regulate this a little bit better. Yeah. I think that was actually really interesting because it was similar to what Dr. Eddy said in a previous episode: that there might need to be some bloodshed for AI safety to really dig its heels in, in the same way that a plane collision was required to develop a collision avoidance system. So I think the fact that that is being repeated from different angles shows that's just human nature and just how things evolve.
Unfortunately, you can't predict something that you don't know; you can't predict every single possible negative outcome. Our thanks to both of our guests today. And that is it for today's show. If you enjoyed our discussion, don't forget to like and subscribe. It really helps us improve the show. Until next time, thank you so much for your time today, and goodbye.
Goodbye. Agents of Tech is brought to you by WebsEdge. We want to thank our studio engineers, Adam Dean and Sam Saris; editor Abbie Harries; graphics designer Matthew Coleman; and our producer, Cath Sheehan.