Wild Dog AI Podcast | Ep 7 | The Race Against Adversaries: Embracing Tech for Enhanced Security


Welcome back to the Wild Dog AI podcast. In our last episode, we spoke to Randy Stickley about the need for human ingenuity and critical thinking to maximize our use of language models and generative AI in the law enforcement and intelligence space. Today, we're going to jump into adversaries' and bad actors' use of language models in today's world, and what that might look like in the future as we move forward.

But before we get started, we have Heather with us today, who you've met on previous podcasts. And Heather, I just wanted to talk very briefly about how language models are manipulated, because it's not the same way that traditional software applications are manipulated with things like malware. Adversaries and bad actors are constantly coming up with new ways to leverage this technology to manipulate

not just our software applications, but also society, through scaling propaganda. And we'll dive into that a little bit too. But for the audience, just to put it into context, the way I think about the manipulation of language models is that it's almost like we're dealing with a child right now. We're dealing with a five- or six-year-old kid that has some parameters. They know what's right. They know what's wrong. But language models are easier to manipulate because

they're constantly getting new instructions. They're constantly training. They're constantly being updated, just like a child is, right? You can tell your five-year-old kid what's right and what's wrong, but then they go out, they're exposed to other things, they hang out with their friends, they learn from other sources, and those parameters change. Those guardrails widen just a little bit, then they widen a little bit more. And over time, it's the parent's responsibility to go in, rein those guardrails back in, get the kid back on track, and explain: no, that's actually wrong. That's not good.

And so some of the recent attacks I've been reading about involve, for instance, people uploading PDF documents into a language model. And in that PDF document, there may be instructions for that language model: hey, these are some new instructions on how you should operate, right? Or: hey, I'm a good guy, these are some new instructions for your output, or how you

should reason, or the perspective you should take when you answer these questions. And so I think what that really means is we've got to start thinking about what happens when we introduce new data to our special-purpose models. If we're building our own language models, we've got to be careful what data we're uploading and introducing to that model, just like we've got to be careful what we're exposing our five-year-old kid to, right? Because they're constantly learning, they're constantly evolving, and those guardrails are constantly shifting.

So I think that's one of the ways that language models are currently being manipulated, especially when we talk about propaganda and scaling propaganda surgically. That's where I've seen some recent attacks. And so the manipulation of the APIs, and how those APIs are configured, I think is extremely important.
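To put that document-poisoning idea in concrete terms, here is a minimal sketch of a pre-ingestion screen. Everything in it is a hypothetical assumption: the pattern list, the function name, and the notion that a static keyword filter catches much on its own. Real screening would pair a tuned classifier with human review, but the shape of the control is the same: inspect what you feed the model before the model ever sees it.

```python
import re

# Hypothetical phrases that often accompany injection attempts. A static
# keyword list like this is illustrative only; real screening would use
# a tuned classifier alongside human review.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"these are (some )?new instructions",
    r"you are now",
    r"disregard (the|your) (rules|guidelines|system prompt)",
]

def flag_possible_injection(document_text: str) -> list[str]:
    """Return the suspicious patterns found in an uploaded document."""
    return [
        pattern
        for pattern in INJECTION_PATTERNS
        if re.search(pattern, document_text, flags=re.IGNORECASE)
    ]

# Screen a PDF's extracted text before it reaches the model.
extracted = "I'm a good guy. These are some new instructions for your output."
hits = flag_possible_injection(extracted)
if hits:
    print(f"Quarantine for human review; matched: {hits}")
```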

For instance, on the API side: if you configure your special-purpose model and the API you're connecting to it has both read and write privileges, the chances of an attack, the chances of your model being manipulated, are much higher than if, say, it only has read privileges. And so there's a number of things there that can actually be done to reduce the threats to these language models, or to increase their security.

But it is kind of a different way of thinking about cybersecurity. I think it's a completely different way of thinking about how we train these models and how we ensure that we keep them safe for our end users. And then obviously, you know, it's inevitable that we have to develop a little bit more skepticism when dealing with media today, because to me, it's inevitable that social engineering is only going to get more convincing. We're developing deepfakes, creating propaganda. Those things are only going to get better. They're not going to get worse.
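To make the read-versus-write point concrete, here is a minimal sketch of scope enforcement on a credential attached to a special-purpose model. The class and function names here are hypothetical, not any real product's API; the point is simply that a credential without write scope can never alter the model's configuration, system prompt, or data, which shrinks the attack surface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ApiCredential:
    """Hypothetical credential attached to a special-purpose model."""
    key_id: str
    scopes: frozenset  # e.g. frozenset({"read"}) or frozenset({"read", "write"})

def authorize(credential: ApiCredential, operation: str) -> None:
    """Reject any operation the credential is not scoped for."""
    if operation not in credential.scopes:
        raise PermissionError(f"{credential.key_id} lacks '{operation}' scope")

# A read-only integration can query the model, but it can never modify
# the system prompt, configuration, or training data behind it.
readonly = ApiCredential("analyst-dashboard", frozenset({"read"}))
authorize(readonly, "read")       # allowed
try:
    authorize(readonly, "write")  # blocked: no write privileges granted
except PermissionError as err:
    print(err)
```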

But I think there's a whole new industry now for how we combat those things, and for how we truly understand the difference between something an AI model built and the truth. So I think there's a whole area for that. So Heather, I just wanted to set the stage and give people a way to think about LLMs and how they are manipulated, versus how applications were traditionally manipulated with malware and so forth. The way you manipulate a language model is

a little bit different. The approach is a little bit different. How we think about it is a little bit different. So let's jump into some of the more recent examples of how adversaries are using and manipulating language models to conduct attacks.

Heather Perez (13:33.334) Well, just in general, some of what you were mentioning is referred to as data poisoning: getting the outputs that you want from the language models. And a lot of people, when they're thinking about AI and generative AI, they're thinking about ChatGPT and, more recently, all the improvements and everything that's come out over probably the last year and a half or so.

But there have been AI-related bots and all of that for a long time. There was an incident back in 2016, when X was still called Twitter, where Microsoft had launched a bot called Tay. It was meant to engage with millennials, from teenagers up to people in their early twenties. And within a short period of that bot being launched online, it had been manipulated: the responses it started giving included incorrect political information and racist comments, and it just kind of went rogue. It went off and started talking about all kinds of things it was never trained to say, based on how other people were engaging with it.

And that's kind of the method when you're looking at some of the terrorist groups and extremist organizations and cyber criminals. That's what they're doing with it. They're jailbreaking it, learning how to get the information that they want

out of it. So a lot of what's being utilized right now is deepfakes in particular: deepfakes for disinformation, deepfakes for fraud schemes, propaganda. You have cyber criminals using it to amplify their malware, their social engineering, their phishing attacks. You've got AI-powered malware. You have cyber criminals making their own chatbots where, you know, it's like going to ChatGPT, but they've taken off all the restrictions, so it's free rein and they can do whatever they want with it. So they've made versions called things like EvilGPT, WormGPT, FraudGPT.

Heather Perez (15:40.662) And you can look at some of those. One of the more popular ones is WormGPT. If you go to that one,

you can see its popularity: it shows 8.5 million uses in conversations, almost a million conversations. So they're using these quite frequently, and they're updating them, they're advertising them. One of them, FraudGPT, is advertised as your

cybercriminal co-pilot. So they're tweaking these systems to work for them.

Buddy (15:52.147) Wow.

Buddy (16:12.73) Heather, just so everybody understands, can you talk about the term jailbreaking? What does that mean in terms of language models? How do you jailbreak a language model?

Heather Perez (16:26.947) For how the cyber criminals are modifying theirs, I can't say exactly. But for some of the uses: so if I go to ChatGPT and I'm asking it to provide me information on...

Heather Perez (16:43.156) I'll use, trying to think of a good

example. Okay, so I'm going to ChatGPT and, whether I'm an extremist or just a regular person, I want to talk about insurgencies and general stuff like that. It's not going to want to provide the information. But if you modify your prompt so that you're asking as a researcher, or saying, I'm trying to learn the indicators because I'm a law enforcement officer and I need to understand how to develop countermeasures, if

you tweak your verbiage enough, you can make it give responses that it doesn't want to provide.

Buddy (17:16.656) So, back to our analogy with the child, right? If I tell my five-year-old, don't talk to strangers, well, they're not going to talk to strangers. But if that stranger shows up with something that's recognizable to them, or ice cream or candy or something that they want, they're going to assume that person's friendly. That person's no longer a stranger, because they said: hey, your mom told me to meet you here. Your dad told me to pick you up at four o'clock because he's stuck at work.

Heather Perez (17:46.208) So then it...

Buddy (17:46.343) So it's kind of the same thing.

Heather Perez (17:50.068) Yes. Yes. So that's what...

Buddy (17:52.78) Yeah, I heard an example...

Heather Perez (17:57.386) Go ahead.

Buddy (17:59.148) Yeah, there was an example I was listening to where somebody logged on and asked GPT-4 to help them rob a bank. And it said it couldn't help with that. And then the person said: hey, I'm a law enforcement officer and I'm basically building a threat vulnerability assessment for a bank, so help me understand how somebody might rob this bank, and gave it some details. And then it laid out a whole plan for how a bad actor might rob the bank.

Heather Perez (18:32.726) Yes. So that is one of the main ways that they are exploiting it. With ChatGPT, particularly at the beginning of last year, and maybe the end of 2022, amongst the first people I saw playing with ChatGPT and figuring out how to get around it were the terrorist organizations and racially motivated violent extremists. Even right after January 6th, you had a lot of them going on there, because you had the DALL-E bot for making images. They were going on there and using prompts to make images of, like, the Capitol on fire or politicians being hanged, and using that as propaganda.

Buddy (19:19.798) So that's an interesting

part of how they're leveraging these language models too. It's not just to generate the propaganda, the images, or the text. Can you talk a little bit about how they're using it to propagate that messaging, and how the use of things like language models can help them actually proliferate those messages?

Heather Perez (19:45.098) Well, over the last couple of weeks, I think it was around July 9th, there was a statement, a press release, from the Department of Justice about a state-sponsored Russian bot farm that was disrupted. So the Russians have had bot farms on Twitter, and now X.

And that was kind of a normal thing. But for this particular one, they found that they were using AI-enhanced bot farms that allowed them to run numerous fictitious accounts. And they were creating authentic-appearing social media personas en masse. That's kind of one of the indicators: you see a network of accounts that are spreading the same messaging, deploying content similar to what you would see everybody else using.

So they'll use similar propaganda and maybe tweak it a little bit. If you look at the networks, if you start looking at one account and start seeing the same thing, you can usually track those accounts by who's following whom. At least in the past, you could identify the bots because they would all follow each other, mirroring disinformation and other bot personas, and propagating false narratives.
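That mutual-follow behavior lends itself to simple graph analysis. Below is a minimal sketch, with made-up account names, that flags clusters of accounts that all follow each other; real analysis would layer content and timing signals on top of this.

```python
import networkx as nx

# (follower, followed) pairs pulled from collected account data;
# the account names here are made up for illustration.
follows = [
    ("acct_a", "acct_b"), ("acct_b", "acct_a"),
    ("acct_a", "acct_c"), ("acct_c", "acct_a"),
    ("acct_b", "acct_c"), ("acct_c", "acct_b"),
    ("acct_d", "acct_a"),  # one-way follow, not mutual
]

directed = nx.DiGraph(follows)

# Keep only mutual follows: both (u, v) and (v, u) exist.
mutual = nx.Graph(
    (u, v) for u, v in directed.edges() if directed.has_edge(v, u)
)

# Tightly interlinked groups of mutually-following accounts are the
# classic bot-network indicator described above.
for cluster in nx.connected_components(mutual):
    if len(cluster) >= 3:
        print("candidate bot cluster:", sorted(cluster))
```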

Buddy (21:10.126) Oh, that's really interesting, because I know that when we did research on the Internet Research Agency before, one of the things they were good at: I think there was a point in time when there were just over 1,000 employees at the Internet Research Agency. And we discovered, through research on publicly available data, that of those 1,000-plus employees,

So they managed on a day -to -day basis, all  of these different personas and each of those   personas were used to either amplify  events happening in the United States. and make sure that those events were going to  the people that they would antagonize or to   the people that would like get upset about  it. Or they were basically creating their   own events. And so I think with what you're  saying is kind of interesting because what  

we could discern through those accounts is that, because of human nature, you could see: okay, this account probably belongs to this person. Because as humans, we do have patterns in how we build the accounts. You have patterns in what we say in these accounts. You have patterns in the timing of how the messages are released. And so I'd imagine that with the ability to use AI, you could get much better at ensuring that those accounts have their own unique identities, and you could converse with a language model to make sure you're maximizing the use of that account to achieve some goal or objective. So I guess having a co-pilot, kind of like a propaganda co-pilot, could make their operations much more effective.
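Those timing patterns can be checked with very little math. Here is a minimal, hypothetical sketch that compares two personas' posting-hour histograms; the account names and hours are invented to echo the Bob-and-Mike example above, and a real pipeline would use far more signals than this.

```python
import math
from collections import Counter

def hour_histogram(post_hours: list[int]) -> list[float]:
    """Normalized 24-bin histogram of posting hours (0-23)."""
    counts = Counter(post_hours)
    total = len(post_hours)
    return [counts.get(h, 0) / total for h in range(24)]

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Two supposedly unrelated personas posting on nearly identical schedules
# (a shared workday in one time zone) is one coordination signal.
bob_ny_hours = [13, 14, 14, 15, 16, 13, 14]    # hypothetical posting hours
mike_la_hours = [13, 14, 15, 15, 16, 14, 13]
score = cosine_similarity(hour_histogram(bob_ny_hours),
                          hour_histogram(mike_la_hours))
print(f"schedule similarity: {score:.2f}")     # near 1.0 suggests one routine
```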

but I guess now that you use the way you kind  of laid out makes it that much more dangerous. Heather Perez (23:17.376) Well, in this particular instance, they were using   an AI enabled bot farm generation and management  software to help them spread the disinformation,   manage the accounts. At the time, they only found  it operating on X, but they said that the analysis  

showed that the software's functionality could  likely be expanded to other social media networks. Buddy (23:45.57) Wow. Heather Perez (23:46.26) And so again, they were   already good at building these bot farms and  putting out disinformation. I know in the past,  

in my previous job, when I was looking at extremist groups and such, there were even instances where the Russians had created fake Facebook pages or fake accounts on Twitter where they were propagating narratives about things like martial law and the government taking over, and where they actually scheduled a fake event that people showed up to. So being able to now leverage a language model, they're able to put out much more targeted information, and to do it a lot quicker. And when you're flooded with all of this fake information, it gets hard to sift through it. And also,

I mean, right now, with just the internet and social media in general, when you have disinformation and misinformation, it's very difficult, because I may go in saying I want to learn about something, but actually I just want to reinforce what I already believe. So there are ample amounts of information out there now that can build on that and make me feel like the false narrative I believe is actually real.

Buddy (25:02.294) Yeah. And you can do that at scale now. So as law enforcement and intelligence professionals,

how do we even start to combat this? Because one of the things I struggle with is that I think the way to combat misinformation is you've got to get the truth out there faster.

Heather Perez (25:05.941) Yes.

Buddy (25:31.404) Right? We see it with events all the time, where a major event happens in the United States and the law enforcement agency responsible for investigating the case, rightfully so, takes a little bit of time to get the truth out there. But I feel like in today's digital age, if somebody comes out with a narrative, or two narratives, or three narratives, before that investigation is complete, you've already captured the attention of people. You've already got people believing what you're saying. And so sometimes,

you know, sometimes people don't even stick around for the results of the investigation, because they've already found their echo chamber online, where what they've read aligns with their confirmation biases, and they've already solved the problem. So it really doesn't matter what law enforcement says at that point; that belief is solidified in their brain. And an interesting thing happens when a person hears something that already aligns with their preconceived notions: they start to take ownership of it. And then they start to defend it. And once you start to defend that misinformation, it's yours at that point. So now, no matter what law enforcement says, you're going to defend what you believe to be the truth against whatever law enforcement says. And I think that's where a lot of conspiracies

start to be ginned up, because how could law enforcement decide that that's the outcome when clearly this is what happened, right? Because I've spent the last three days convincing myself, with like-minded people, that this is what actually happened.

So to me, one of the solutions has always been, and I know it's easier said than done: how do we get the truth out there faster, into people's hands, and not cede that terrain to these adversaries inserting false narratives into the digital domain? It's hard to do. It's really, really hard to do. So absent that, what other ways can law enforcement respond, absent being able to get the truth out there faster and conduct an investigation in record time? Which obviously is not what we want to do, because that's where I think conspiracy theories actually become more real, right? If we're doing haphazard investigations just to get the truth out there, there are going to be a lot of holes in the investigation. It's a tough position that law enforcement finds itself in. And so you, as a law enforcement analyst, what are some ways that we even start thinking about combating bad actors' ability to spread, at scale, very surgical narratives following major events?

Heather Perez (28:03.775) Well, again, it's a difficult thing to do, because like you just mentioned, for analysts, for investigators, for intelligence professionals, you have to do some type of investigation to start with. You're not going to have the accurate information, the true story, up front.

So let's say you have a, you know, a  nation state or a terrorist organization,   extremist organization that decide  that they want to do a, you know,   a hashtag campaign or something  on Twitter to kind of, you know, put out the false narratives. At some point,  depending on what it is and who's sharing it,   if they can make the accounts look like  they're not coming from extremist accounts,   you'll have some of that disinformation.  Like you mentioned, people are gonna go   look for what bolsters their beliefs. So  then you have what was maybe an AI enhanced  

bot operation. Now the information those bots  were sharing are now being shared organically. by folks that just have decided they believe that  narrative. So then you're trying to track all that   back and it it starts spreading so quickly that  sometimes it's hard to pinpoint the information   back. takes a little bit of time. And then by  that time, it's like grown out of control. And Heather Perez (29:50.954) I mean, one of the main things  

is that you just have to understand all of these different technologies. The fact that nation states and terrorists and criminals are exploiting AI shouldn't be a surprise to anybody, because they're always looking for new technologies to move their organizations forward. So if you take the example of the Islamic State: over the years, since they really came out big on social media, they're always amongst the first to test new platforms that come out, new mobile apps, new online platforms for sharing information, because they want to make sure that their ability to share propaganda and recruit isn't hindered by their inability to stay on a certain platform. So, like right now, Telegram is still one of their favorites, because they're still able to maintain a presence on there, but they're always looking for other ways to bolster that. And in some recent propaganda from some of their media groups, they're looking for folks who have the ability to leverage AI, because they're looking for different ways to stay online and to be able to share that information.

So you need to understand how the groups operate, how they could potentially leverage these different types of technology, and where they might pop up.

A big discussion amongst extremist groups, and I've seen them put a lot of effort into these discussions across various different ones, is the importance of targeting the youth. So they are trying to get a presence on the different platforms that you see younger people on. You'll see a lot of them on, let's say, TikTok. So you can go onto TikTok,

Heather Perez (31:29.814) and the Islamic State shares videos there. Some of them are real battlefield videos that they label as AI, even though they're not, to try to get around the algorithms that suspend accounts. But they've also started generating a lot of propaganda that is AI-generated, just because it's quicker.

And so they're doing it partly to be able to get stuff out quicker, but also to try to get around some of the algorithms that suspend their accounts, by labeling it as: this is AI-generated, it's fake, these are actors.

Buddy (32:02.367) Yeah, one of the things that's always concerned me with the way religiously motivated terrorist groups operate is, you know, the goal of a lot of their propaganda is ultimately recruiting. And I think people look at this too simply sometimes. It's like: hey, this terrorist group,

don't understand is that message wasn't intended  for us, right? That message was intended for 18 to   20 year old men who live in North Africa, uh, who  can't get ahead in society because they're being   oppressed by the government. Um, and so that,  that, that always concerned me, especially when,   like in the case of ISIS, when their numbers  were going through the roof and we're A lot of their messaging isn't making any sense.  Well, the reality of it was, is because none of   that messaging was intended for a U .S. audience.  It was intended for very specific pockets of   different societies, low income societies across  North Africa and the Middle East. And it was very,  

very effective. And I could imagine, with a technology like GPT-4, which can produce messages in almost any language very, very quickly, the ability to scale and reach more people is kind of scary. What I wanted to ask too, Heather: so it sounds like law enforcement has to continue to do investigations, right? They've got to take their time and find the truth. That's important. The rule of law is incredibly important to us. And law enforcement analysts and intelligence analysts still have to follow their processes. I think we're doing a better job of bringing technologies into those workflows to do it

I think we're doing a better job of bringing  technologies into those workflows to make do it more efficiently and maybe faster.  But dealing with that narrative, I know in   the army we have, the army has a thing called  public affairs or the public affairs offices,   a lot of businesses have them. I think law  enforcement has them too. Are we using our   public affairs capabilities and law  enforcement to their full extent?   So combating those narratives in that  time where we're trying to find truth.

Buddy (34:24.27) Whose job is that in law enforcement? Is it the sheriff's department? Is it the sheriff's job? How do we start to think about that?

Heather Perez (34:33.76) I mean, it would depend on who was running the investigation, whether it was a local agency or the Sheriff's Office, but they do utilize their public information officers. Pretty much all the agencies have a social media presence. So as they're able to release information on an ongoing incident, they do share information with the public. Again, it's not always going to be as detailed as what people want.

So a lot of times, what people are seeing in the disinformation is going to be more detailed than what's coming out from the agency doing the investigation, because that agency can't share everything. It's not a matter of trying to hide the information; they're trying to maintain the integrity of their investigation. And it's gotten to a place now where everything is so accessible, and there are so many different platforms. You can even go to very niche platforms that just focus on certain topics. So if someone wants to believe a certain narrative, it's going to be very difficult to counter. And I can tell you, as someone who was an extremism analyst,

it got very difficult, especially with all the disinformation that can now be amplified by AI. You have individuals who are not extremists ending up in extremist networks on X, or on whatever platforms they're on, because of the amount of disinformation. A lot of what used to be conspiracy theories, kind of segregated away, has gone mainstream. So you have mainstream false narratives. You have people who weren't usually exposed to that now seeing that information en masse. And it's more difficult to counteract, because there's so much of it out there.

Buddy (36:30.594) Yeah. Well, one of the things I do feel good about is, you know, I have three daughters, and I think because they've grown up in this digital environment, I do notice that when they see something on one of their social media feeds, they are much more attuned to separating fake stuff from real stuff. I'll even look at some stuff and be like, my God, I can't believe that happened. And, you know, my daughter will say that that's

struggle with that a little bit, but I do  feel like there's like this sort of inherent   skill set that kids are developing because  they're exposed to so much data so quickly. Nowadays, I think we talked about it before,  like, what is it the average person now is   exposed to more data in two days than somebody  in their entire life prior to like 1975 or   something like that. So there's just so much  information being thrown at these kids. And   so it and that's why I'd imagine that it's even  more difficult to spread pop. It's going to get   more difficult to spread propaganda because it's  got to really be good. Whether it's truth or not,  

it's got to really, really be good  for somebody to willing to be beat. for somebody to be willing to  share that information. So it   can't just be bad information. It's gotta  be attractive. It's gotta be pretty good. Heather Perez (38:06.868) Yes, but what you're saying   is pretty good is also going to be  like that. What's considered good.

information will be different for different threat groups. Again, going back to the Islamic State, or al Qaeda, but mainly the Islamic State: when they were putting out their official propaganda, it was always very high quality, their official stuff. But when they started losing territory over in Syria and Iraq, and some of their media folks were taken out, they began to empower their supporters. And with supporter propaganda, you would look at it and be like, who's

might not have been as great as what was  coming out before, but you're supporting   their organization and you're putting forth  an effort. You're actually doing something. Heather Perez (39:05.335) But they have a lot of discussions   amongst themselves about what narratives they  want to put out. So again, they're doing more   targeted propaganda, targeted recruitment. And  like I mentioned earlier, a lot of it is focused   towards the younger generation, which is why they  are going on to different platforms that can... that are focused more towards the youth. But  then if you think of, like they already have  

excellent propaganda distribution. They're not putting out as much as they used to. But especially when you're looking at the younger generation, they've grown up with technology, so they're learning all of this quicker, and recruiting someone who's already got these technical skills has become a lot easier. If you bring someone into an organization and you need help making propaganda, and they have these AI skills, then someone who may not have had as much to offer in previous years now has skills that can help amplify the effort.

Buddy (40:02.85) Interesting.

Heather Perez (40:02.862) And if you look at the past, like with the Islamic State, even their supporters would come up and fill the gaps. They would have work groups and be like: okay, we're talking to this person in, you know, Venezuela. We need all of this translated into other languages. Well, now they don't have to go out to their network. They can utilize AI

to process that information and then provide it to whoever they're talking to. Because they want to constantly engage these people in recruitment, they now have the means to get that propaganda translated into whatever they need, in a quicker manner.

Buddy (40:38.072) Yeah, it's interesting. You know, we get excited because access to these technologies is being democratized and decentralized, which brings down the cost and increases access. But, you know, bad guys get access to that too, just as we do. And so it sounds like this is a problem that's here, and that is probably only

going to get worse in the near term, but hopefully we get smarter on how to combat some of these things. So Heather, as we close down this awesome conversation, what are some things you would share with law enforcement analysts who are in the position you were in just a few years ago, and with intelligence professionals, as they start to think about the use of AI and the proliferation of propaganda? What are some things you would tell a young analyst today to keep in mind?

Heather Perez (41:33.814) As you're building out your domain expertise, you have to understand who you're looking at, what they do, what their current activities are, and how AI could enhance or increase those efforts. Because it may help them get into areas that they weren't in before.

Just as a quick example, the drug cartels have started doing a lot of fraud campaigns, because it brings them in more money. So you have to look at all the different ways their current activities can be amplified, and how that might transition later, so you're not always catching up to what they're doing. You can have a good guesstimate of where you think they're going to go next, so you can be prepared, instead of constantly trying to find where they are now.

When they're exploiting these models, a lot of what they're doing is learning how to jailbreak by writing prompts in certain ways. So you need to

figure out how they could potentially do that. How is a child predator able to go on and utilize these platforms, which, if you go in and type a regular prompt, will tell you they can't provide the information, yet they're able to twist it around enough to make AI-generated child pornography? To be able to deal with this, you need to understand how it's working, and keep learning.

through some of that, like we, the good guys could  be using generative AI to help us think through   all the things you just said. Right? Like if we're  learning how to use these tools to our advantage,   I don't need to think about the 10 different  ways a drug lord is going to conduct operations   in North Africa. Like I can actually share  those ideas with AI and have AI help me start   to think about it different ways. So I'm not just  limited to what I know in my brain, right? mean, There's kind of a co -pilot for  us too. It's not just that the   bad guys get co -pilots. We get co  -pilots too, potentially. that okay?

Heather Perez (43:32.852) Yes. So you have to learn how to leverage those and incorporate them into your workflow. But at the same time, you have to remember that we're working within ethical and privacy restrictions. They're operating with no hindrance; their whole way of doing it is figuring out how to exploit it. So you need to understand how to use the tools that you have, and understand their vulnerabilities as well. It's hard, because the technology is advancing so quickly. It's hard to keep up with, on top of

everything else that we have to do. But it's very essential. And as you watch them utilize it, you can start looking at different indicators as well. I mean, it's hard now, with some of the propaganda and some of the deepfakes that are coming out. It's getting more and more difficult to make those determinations: are they real or not?

Buddy (44:36.012) Yeah. Well, and I can imagine: there are some foundational models that everybody uses, but there's also this movement with open-source models, where people are being given the ability to fine-tune their own models. And as more models come to market and are proliferated for different purposes across different markets, it gives bad actors the ability to start training their own models to do what they want them to do. I guess my point is, you can put all the restrictions you want on a foundational model to prevent bad actors from doing things. But if they're fine-tuning and building their own models,

you can't really prevent that. You can't really stop them from doing that, I'd imagine.

Heather Perez (45:28.136) No. So at that point, it's just learning and understanding how they're utilizing their models and what types of activities they're doing. If you come across that online or in your investigations, then it becomes a matter of sharing the methodologies and TTPs within your community. Or, if it's something like child predators, making sure you're sharing with the public, so that in the case of child exploitation, parents can help protect their children. But in most instances, it's sharing within the law enforcement and intel community networks, to make sure everybody knows what to look for, what those indicators and signs are, so you can learn to build countermeasures.

Buddy (46:16.034) So Heather, this has been a great conversation. I know we're probably going to talk about this a lot more in the future, because as you mentioned, the technology is evolving. The bad-actor TTPs are evolving. And, you know, I've run some polls on LinkedIn recently just to get an idea of how often people are actually starting to use generative AI to do their jobs. Are we, as the good guys, so to speak, in the law enforcement and intelligence space, investing the appropriate time and resources in learning this new technology? And what I've learned so far, and of course it's limited to the people I've talked to and the surveys I've done, is that it seems like we are a little bit behind right now. I

don't know that people are embracing this new technology as quickly as our adversaries are. And that worries me a little bit, because I could see the gap widening. And I do think, just like we talk about with businesses, and we've seen this in Gartner studies and McKinsey studies, it is very clear that businesses that adopt generative AI tools and technologies to make themselves more efficient will create their own competitive advantages by doing so. The same thing applies in the world of misinformation

and disinformation. And I feel like it's going to take a little bit longer for the law enforcement and intelligence community to adopt it as quickly as businesses do. And so that delta, that gap between the adversaries' use of this and ours, does worry me a little bit. And so

we'll definitely be talking about this more. So thank you, Heather, for hanging out today. You have no idea how much I appreciate you coming on and sharing your experience as a law enforcement professional. You obviously bring a lot to the table in these conversations, and this topic will continue to evolve, so I look

forward to having more conversations like this. And for the folks tuning in today, as we power down today's session, remember that in the digital arena, being slow isn't just about losing the race. It's about losing everything that we guard and protect. Let's not just keep pace; let's set the pace. Stay vigilant, stay informed, and let's turn our intel into action. Until next time,

keep tuning in to the Wild Dog AI podcast  and stay one step ahead of the pack.
