Deep Fakes are About to Change Everything
- I wanna show you two video clips. One of them is real and one of them is fake. Can you tell which one's real? Okay. This one was kind of easy. Obviously this is the fake one. Like you can see the color on the face isn't quite right, and the hair is obviously like a wig. But this video was like from a few years ago.
Let's look at a more recent example. Look at these two clips. One of them is real, and one of them is fake. Can you tell which one it is? Okay, what about these? Or these? Which one of these is fake, and which one of these is real? Maybe you're some wizard and you could tell, but I couldn't. And my guess is that most people can't.
That's because things have changed. - I've never seen it quite like this. - [Woman] This technology is spreading rapidly. - It's really extreme. - Mind blowing. - Deepfakes. - Deepfakes.
- [Man] Deep Tom Cruise was a tipping point for deepfakes. - It's all real. - In the last few years, we've crossed a threshold, into a world where moving images are manipulated in ways that make them indistinguishable from reality. This is our new world. It is the world of high quality deepfakes.
- The catastrophic potential to public trust, and to markets, that could come from deepfake attacks. - This is happening quicker than any of us could have imagined. So I wanna get you up to speed on what's happening with deepfakes, how they're revolutionizing entertainment. - It's all my fault.
(speaks foreign language) - [Johnny] And creating new challenges and fears for the people who make our laws, and the people who enforce those laws. - They can make it look like anyone is saying anything at any point in time. - Ultimately, I wanna show you how much of a threat deepfakes actually pose, and answer the biggest question of all.
What is even real? Seeing is not believing anymore. Okay, it's actually me. This is the real Johnny. I promise this is not a deepfake. I promise! I'm not like gonna do like some little bait and switch.
I need to tell you about the sponsor of today's video, which is a thing I'm really excited about. I'm genuinely excited! Not just because they're supporting the channel, but because I'm actually excited! For years now, I have been in a war with marketing people. Secretive, shadowy marketing people who collect all of my data, put it on lists, lots of lists, and then sell it to corporations who then flood my inbox and my phone and my life, with spam.
Lots of spam. This ends up creating a lot of noise in my life, a lot of noise that I don't want. And it has been impossible for me to actually get myself off these lists, until Incogni came around. What many of us don't know is that we actually have legal protections against our data being bought and sold, and these predatory marketing practices coming to bear on us all of the time. But it is a very complicated and time intensive process to actually implement those rights.
With Incogni, you can get your name off of these lists. The creepiest part about this whole industry is that there are people search websites now, where random people can just look you up and learn a bunch of stuff about you. Incogni targets those as well. It's so satisfying, because they show you a dashboard of all of like, the marketing lists that they're working on, to take your name off of. So you get to see your name being taken off of all of these lists that you didn't even know you were on. So, I'm signed up for the yearly plan, which means Incogni is constantly monitoring month after month, to keep me off these lists.
And I'm already seeing the results. I'm very very happy that Incogni exists, which is why I'm so giddy right now. There's a link in my description. It's incogni.com/johnnyharris. Clicking that link helps support the channel. It also gets you in on this amazing service.
You can sign up and have 30 days risk free, where you can get all your money back if you feel like it's not useful to you. 30 days. Use the link, go sign up.
Thank you Incogni, for sponsoring today's video. Let's dive back into the wild, weird world of deepfakes. Okay, let's just be clear.
We've been faking the moving image for at least 100 years. - [Man] America goes to war! - Thomas Edison wanted to spice up his news reporting from the front lines of the Spanish American War back in like, the end of the 1800s. So to do this, he shot some fake battle scenes in New Jersey, and then cut them together with shots from Cuba, making it look like it was all happening in Cuba, when it wasn't.
And for the next 100 years, motion picture manipulation remained as crude and simplistic as Edison's sneaky editing. But then suddenly, in 2014, with the invention of a new type of AI, everything started to change. - [Man] Incredibly realistic so-called deepfakes. - How similar it is.
Is there a way to know when this is fake, and to tell when it's fake? - We now have the ability to make people look like other people. Like, I could be Johnny, or I could be Nick, the studio manager. Look, I'm Nick the studio manager right now. I could also be Tom, the music composer. And what's crazy is even if you go back and watch what I just showed you frame by frame, you likely won't be able to tell. Let me explain how on earth we did this.
We can do all of this because of this clever computing process. It's called Generative Adversarial Networks, or GANs. This is where two AIs work against each other, to get the best fake image possible. So you have these two AIs.
One of them is a forger, and the other is a detective. The forger creates an image based on what you ask it to make, and then it shows that image to the other AI, the detective. The detective goes over the image, and points out all of the reasons it's fake. It knows what to look for, because it's been trained on hundreds, sometimes thousands of images of exactly what the finished products are supposed to look like.
The forger is like, okay, cool. Let me try again. And it goes away and it makes another image, fixing the pixels that the detective AI pointed out.
Then it shows it to the detective. The detective AI once again points out all of the weaknesses in the fake image, and the forger goes back and makes a new one. And over and over and over again. And so the longer you give one of these AI models to train on, it basically goes through this creation and improvement cycle, over and over, until you get the best deepfake possible. There are many ways to train a deepfake, but this is the most common, and it's actually how we got the deepfake that we're running on this clip right now.
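The forger-and-detective loop described above can be sketched in code. This is a toy illustration only, not a real neural network: here the "image" is just four numbers, and the detective cheats by comparing against a known real image, whereas a real GAN's discriminator has to learn what "real" looks like from training data. All names and numbers are made up for illustration.

```python
import random

# Stand-in for real training data: a tiny 4-"pixel" image.
REAL_IMAGE = [0.2, 0.9, 0.4, 0.7]

def detective(fake):
    # Points out which pixels give the fake away
    # (anything too far from the real image).
    return [i for i, (f, r) in enumerate(zip(fake, REAL_IMAGE))
            if abs(f - r) > 0.05]

def forger_fix(fake, flaws):
    # The forger nudges each flagged pixel toward realism.
    for i in flaws:
        fake[i] += 0.5 * (REAL_IMAGE[i] - fake[i])
    return fake

fake = [random.random() for _ in REAL_IMAGE]  # first, clumsy attempt
rounds = 0
while (flaws := detective(fake)):   # loop until nothing is flagged
    fake = forger_fix(fake, flaws)
    rounds += 1

print(rounds, fake)  # converges once no pixel is flagged
```

The point of the sketch is the cycle itself: critique, revise, critique again, until the detective can no longer tell the fake from the real thing.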
My face is slowly morphing into something else, and it's basically pixel perfect. We had to run a training model on this face for two weeks. But it got really good. Look. It's like, amazing.
I'm not me. I mean I am me. But I'm not me to you. And that's kind of nuts.
- We're about to enter a brave new world. - It scares me, it really does. It scares me. - An important point here is that, we used to have to train these models on huge amounts of data. You need a lot of footage from someone's face to train these models. Which is why it's mostly worked on like, actors, who have a lot of high quality face data available.
It's how you get deepfakes like this. - Do you like what you're seeing? Just wait till what's coming next. - [Johnny] Or how you get Jim Carrey imposed onto Jack Nicholson's face in The Shining. - I'm not gonna hurt you. - [Johnny] Or Willem Dafoe in Silence of the Lambs instead of Anthony Hopkins. - You look like a rube.
- These last two clips were done by a super talented deepfake artist named Ctrl Shift Face. And he's actually the one who's doing the deepfakes for this video too. He's the artist behind that video that went super viral a few years ago, of Bill Hader doing an Arnold Schwarzenegger impression, and turning into Arnold Schwarzenegger. - Get out of there! There's a bomb in there! Get out! - And he's now working at a deepfake production company called Deep Voodoo, where he worked on that Kendrick Lamar video. Yes, that Kendrick Lamar video. - For me personally, it's exciting because I can do these goofy, funny movie remixes that I can put on YouTube.
I really enjoy that. - And yes, when I was talking to him in our interview, he was using a live deepfake mask and changing between different characters, and it kind of blew my mind. Oh my god, I'm sorry.
He just changed, you just changed to Mark Zuckerberg. - I've got lots of faces up my sleeves here. - Oh my god! Wow, I did not, I was not ready for that. Deepfaking has already started to shape the entertainment industry.
(upbeat music) Filmmakers are using deepfake technology to translate films into other languages. - And it's all my fault! (speaks foreign language) - They can de-age celebrities. - I (indistinct) went anywhere. - Yeah, and we're back! - There's a six part TV show in the UK about a bunch of rowdy neighbors, and guess what? All of these characters are all deepfaked celebrities. - I did move some of Kim's things. I'm sorry about that.
- Two years ago, if someone told me something like this would exist, I don't think I would believe him. And what will be next in like, another two, three, five years? - It's getting to the point where deepfakes are nearly impossible to decipher as computer generated, which is super exciting, but also kind of scary. - [Man] The FBI tells NBC news they're following the rapidly developing technology closely. - I believe that this is the next wave of attacks against America. - It's a real concern.
It's a real concern. (dark electronic music) - And this is where we talk about the doom and gloom part. And for that, yes, I have a lot of paper. (dramatic music) Lawmakers and law enforcement are getting worried about this technology. Here's a letter from Congress, to the director of National Intelligence, raising the alarm that, "Hyper-realistic digital forgeries, popularly referred to as deepfakes, use sophisticated machine learning techniques to produce convincing depictions of individuals doing or saying things that they never did." I kind of love when Congress talks about technology.
And this is the most important line. "By blurring the line between fact and fiction, deepfake technology could undermine public trust in recorded images and video as objective depictions of reality." Oh, and by the way, this letter was back in 2018. Basically the Stone Age for AI image generation. More recently, I saw a Democratic senator giving his opening remarks to his colleagues in Congress via a deepfaked voice that sounded perfectly like him.
- [Man] We have seen what happens when technology outpaces regulation. - But it's not just American lawmakers. (dark music) Europol, which is the European International Police Agency, says that experts estimate that as much as 90% of online content may be synthetically generated by 2026. 90%. Meaning AI will be making most of the stuff that we watch on the internet. Their big fear in all of this is that deepfakes will, quote, "lead to a situation where citizens no longer have a shared reality," causing what they call (deep voice) an information apocalypse.
(normal voice) Here is a 43 page report from the US Department of Homeland Security, and look at this title page! I mean, look at that graphic design! Especially this moment, where it goes from a classic sans serif, to a serif. I mean, it's not bad for the government. DHS says that deepfakes and the misuse of synthetic content pose a clear, present, and evolving threat to the public, across national security, law enforcement, financial, and societal domains. The Pentagon is using its big research wing, the one that helped invent, I don't know, the GPS, the COVID vaccine, and the literal internet. That one.
To look into deepfakes, and how to combat them. Like, they're taking this very seriously. But my big problem with all of these reports is, they're couched in such vague language! What can deepfakes actually be used for, outside of making really cool Kendrick Lamar music videos and TikTok Tom Cruises? - I think there's bubble gum inside this. (electronic whirring) - Well, to get a taste, look no further than Ukraine. - The Russian president says a military operation is now underway in eastern Ukraine.
- [Woman] A column of Russian armor crossing into Ukrainian territory from the north. (speaks foreign language) - This is not Ukrainian president Volodymyr Zelenskyy. This is a deepfaked video that appeared on a Russian language Ukrainian news site just four weeks after Russia invaded Ukraine.
Russian troops were trying to take over Kyiv, which would've meant a massive victory for Putin. Information on the ground was scant and foggy, and this video pops up, of Volodymyr Zelenskyy urging his troops to surrender to Russian forces. Now listen, this is kind of a crappy version of a deepfake. I mean, misshapen head, weird accent and voice.
Wasn't super pulled off. But guess what? It doesn't matter, because what this video did do is it made people question every other video coming out of Ukraine. And this is what Congress meant when they were freaking out in 2018 about the idea of deepfakes blurring the line between fact and fiction, undermining trust in recorded images and videos as objective depictions of reality.
And this is actually, so far in my reporting on this, the biggest takeaway. I don't know if we realize what a big shift this is! As deepfake technology gets better, yes, it allows people to create compelling fake evidence, and that is worrisome. But it also allows people to dismiss real evidence, real footage, as, "Oh, that's just a deepfake. We can do that now. We can't trust anything."
- It's not just what they could create. It's the doubt that is cast on authentic video and audio. - Yeah. - Okay, let's get to another concrete threat here, to get us away from all the vague language of government agencies talking about how scary this all is.
Think about the legal system. (electronic whirring) Deepfakes are becoming a nightmare for evidence in court. In 2020, there was a child custody case, where the mother presented, as evidence, an audio recording of the child's father saying violent things over the phone. It was submitted as legit evidence, as proof to be like, "This dad is unfit to have the kid."
But after some digital forensics, it became clear that the mother had used an online tutorial and some cheap software to doctor the audio file, to add in fake violent words. Most judges and juries are not ready for this! Most wouldn't think to question evidence like this that sounds like a smoking gun, and yet was totally fake. This sort of technology is getting so accessible and easy to use. In fact, this VO that you're listening to right now is actually fake.
I made it using a cheap and easy software widely available to anyone. Now imagine that you're on a jury, and you're presiding over a criminal case. And even though there may not be any manipulated evidence in front of you, there's now a nagging voice in the back of your mind, that's like, "Remember that time that Johnny tricked me in that YouTube video, and he made his face change, and now he looks like someone else, and it wasn't really him, and it was really convincing? Who's to say all of this visual evidence that I'm seeing in court isn't also fake?" Visual evidence is no longer rock solid. Surveillance footage, body cam footage, heck, even audio taken in a bus from a presidential candidate.
This was all solid at one point, and now it can all be called into question. (electronic whirring) And then of course, deepfakes are being used for good old fashioned cyber crime. Man, cyber crime! It just sounds so quaint. Cyber crime. Elon Musk was recently faked to help shill a new crypto scam. - (indistinct), a site that will help ordinary people to gain financial independence.
- And yeah, this one's obviously fake and poorly done. My point is they're getting better, and some scams are already way more persuasive. Like this group of fraudsters who were able to clone the voice of a major bank director, and then use it to steal $35 million in cold hard cash, $35 million! - [Man] That's a lot of money. - Just by deepfaking this guy's voice, and like, using it to make a phone call to transfer a bunch of money. And it worked! Okay, but in reality, the risk of eroding trust in the public or weakening our legal system or giving cyber criminals a new weapon, all of these are actually the rarest uses of deepfakes.
The main victims of this new technology, at least at the moment, are women. - [Man] These sexually suggestive ads are popping up online. - Sexual misuse of these deepfakes.
- She's been deepfaked. - [Woman] That's not me. - By one estimate, 96% of all deepfake videos online are porn.
Almost all of which are using women who have not consented to this. Just two years ago, Telegram got in trouble, because it was found that they had private groups on their service that were using deepfake technology to remove the clothes of more than a hundred thousand women. It used to just be the faces of celebrities, because they're the ones who have all this high quality data out in the world that you can train a model on. But no longer. The tech has developed such that an ordinary person with a few images on social media could suddenly have their face deepfaked onto the body of a porn star. - [Man] Martin has found dozens of doctored images and videos of herself.
She has no idea who's responsible for them. - Okay, but what's being done about this? What can be done? Let's first remember that when a new technology comes along, it typically evolves rapidly, way faster than the lawmakers who need to understand it to regulate it do. So, we're in the kind of wild west phase, where the lawmakers are kind of just trying to get their head around this stuff.
- Let's hope that Congress can catch up. - China is actually the first country to have regulated what they call deep synthesis technologies, by requiring all deepfake content to be clearly marked as having been modified. Luckily, it's a lot easier to regulate this kind of stuff when you're a surveillance state.
But the EU is trying. - [Woman] In an ambitious bill called the AI Act. - Boy do I love going through European Commission PowerPoint slides. Like China, the EU is trying to make it so that you have to label deepfakes. But of course, in a Western democracy, you have to put this line, "unless necessary for the exercise of a fundamental right or freedom, or for reasons of public interest."
Oh man, those pesky rights always getting in the way of being able to regulate stuff. The UK is also trying to figure this out. They are targeting the porn situation, the one that makes up the vast majority of deepfake uses. Making deepfake porn without consent comes with a penalty of prison time, though it's super hard to enforce this stuff.
That's what's so tricky. But they're trying. - Will the government take urgent action and repair this mess? - And then there's tech companies, like here on YouTube.
They say that they will pull down any video that, quote, "Includes deceptive uses of manipulated media, i.e. deepfakes, which may pose serious risk of harm." Who decides what serious risk of harm is? Meh. But one promising solution here would be to fight software with software. If we could train software to detect the quirks of deepfakes, like the unnatural blinking, or some of the other weird subtle things that the human eye can't pick up on, we would have a strong way of verifying what is real footage, and what is distorted. So deepfakes are here, and they're seemingly here to stay.
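The blink quirk mentioned above gives a feel for how this kind of detection software can work. The sketch below is a toy illustration under made-up assumptions: it presumes some upstream face-tracking step has already produced a per-frame "eye openness" score (0 = closed, 1 = open), and the thresholds, frame rate, and blink-rate cutoff are invented for the example, not taken from any real detector.

```python
# Heuristic: early deepfakes blinked far less often than real people,
# so an unusually low blink rate is one red flag among many.

def count_blinks(eye_openness, closed_threshold=0.2):
    blinks, eyes_closed = 0, False
    for score in eye_openness:
        if score < closed_threshold and not eyes_closed:
            blinks += 1          # eye just closed: start of a blink
            eyes_closed = True
        elif score >= closed_threshold:
            eyes_closed = False  # eye reopened
    return blinks

def looks_suspicious(eye_openness, fps=30, min_blinks_per_minute=4):
    minutes = len(eye_openness) / fps / 60
    return count_blinks(eye_openness) / minutes < min_blinks_per_minute

# A real clip blinks every few seconds; many early fakes barely blink.
real = ([1.0] * 80 + [0.1] * 5) * 20  # ~21 blinks/min at 30 fps
fake = [1.0] * 1700                   # no blinks at all
print(looks_suspicious(real), looks_suspicious(fake))  # → False True
```

Real detection systems combine many signals like this, and it's an arms race: as soon as a tell becomes known, the next generation of deepfake models learns to avoid it.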
We are entering a new chapter of how humans consume and relate to information. It's a chapter as significant in my mind as some of the other big chapters, like when we learned to capture light and record it. It changed the world! Photographs could be used in court as proof. They could be used in science to capture and understand the world around us. In journalism, it allowed us to show, not just tell.
And when we made those images move, we could capture more truth, more proof, more evidence. And then we could connect, and spread those images far and wide. Truth and evidence coursed through the world's internet. This evolution of the moving image and its uses in the connected world is taking an unexpected turn, away from being the bedrock of evidence, and towards the foggy territory of deception and confusion.
We're gonna have to navigate this chapter carefully. We're gonna have to push even harder to know where our images came from, who made them, and for what purpose. And we're gonna have to resist the urge to believe everything we see, no matter how real it looks. (solemn music)