The State of AI & API Security 2025 - Live Webinar

The State of AI & API Security 2025 - Live Webinar

Show Video

All right, and we're live. Okay, welcome everybody. Good morning or afternoon wherever you are located and welcome to the state of AI and API security webinar updated as for 2025. Um, we're just

going to talk a little bit about what we're going to be covering today. So, first we're going to go over the importance of AI and API security, which is obviously extremely vital in today's ecosystem. some key developments from the past few months and 2025 in general, notable breaches and incidents we've been seeing across the landscape, which there are quite a few. Um, some API

breach data that we've compiled with our breach tracker, AI risks, and then finally some effective strategies with some time for Q&A at the end. So, our two presenters right here, we've got Pimmo and he is the VP of product at Firetale. He's got a background in theoretical physics, but he's been doing AI and API security for a while now. He's very focused on some of the current challenges in software development around APIs and he's also multilingual and lives in Finland. Then we've also

got Jeremy, our CEO. You guys have probably seen him before if you've come to any of our other talks and events. He has 13 years of experience in cyber and IT. He's got a BA in linguistics, but he also has ex a lot of experience in cyber security. And he founded Firetale with his co-founder Riley in 2022.

So just to get us started here, Jeremy, could you tell us a little bit about why AI and API security in 2025? Yeah, happy to. Lena, thanks so much for kicking off today's webinar. Excited to be here and talk through this today. So this question, why AI and API security? How are the two things connected? I think the thing that is most important to understand is that the two things are very closely related. And what I mean by related is I mean architecturally and technologically related. And the reason

that I say this is that when you look at the way that AI and in particular large language models, LLMs and you know Gen AI, whatever buzzword you want to use right now, when you look at the way that these platforms work and how you as a customer or as a user of those platforms interact with them, all of the integration touch points, all of the interactions are happening over APIs. So whether you're doing something like let's say sending data to an LLM model to augment it with some of your particular um organizational information for training purposes or whatnot that data is sent over APIs. when you're invoking an LLM to answer a particular prompt or question that goes over an API. And this is true whether you're

doing this programmatically, meaning like you've written a piece of software that connects to an LLM provider, whether internal or external, or whether you're just chatting in a browser with one of these LLMs, asking it questions that you're looking for the answers to. What you don't realize is that that question inside the browser just goes to a front end, typically a JavaScript kind of uh front end inside the browser that then packages up the request, sends it to the back end via an API, and the response coming back is kind of the API response payload. So there is no AI without APIs is a phrase that you'll hear thrown around, but it really is true, right? And so one of the things that we talk about in at Firetale is that AI and API security are similarly interconnected. So just like there is no AI without APIs, there's no AI security without API security. And part of the reason that we say that is again you look at those integration points, you look at how you understand how your organization is using AI. the simplest and most effective touch point to connect to is that API layer. And we're going to talk

through that a little bit more as we go through today's webinar. And the second thing that I want to just mention here quickly is that there's a big difference between kind of AI safety versus AI security. And if you've been, let's say, looking at some Firetale webinars or podcasts over the last several months, you might have heard a recent episode with Sumil Yu. I think it might have

been our webinar from back in January where we talked about kind of what we can expect from AI security in 2025 and Sunnil really made the reference to the fact that AI safety and AI security are two very different things. AI safety is kind of let's call it like safe and ethical uses of AI and and looking at things like what content is coming back. whereas AI security looks for things around the security of your integrations with AI platforms and whether you're accidentally leaking data for instance and things like that. So that's going to be a big part of our focus at Firetale when we talk about um AI security. And

along those lines, there's actually a couple different areas to think about in terms of AI security approaches. everything from kind of let's say like red teaming and external pen testing of AI systems proxy or runtime controls where you say like I don't really trust our organization to use an AI engine directly I want to send everything through a kind of a middleman layer so to speak or whether it's things like AI security posture management authorized data access etc there's a lot of different approaches out there to how you might think about securing AI at your own organization Here at Firetale, a lot of our focus is on AI security posture management, bringing visibility into how you're using AI and time permitting, maybe when we get to the Q&A section, we can get into some of that today, but I know a lot of our focus today is really around kind of the state of things and and again, you know, this section right now, what we're really talking about is trying to help people understand why AI and APIs are so um so kind of intricately intertwined. And I I think you know Forester the analyst firm really said it best. Your AI strategy will crumble if you neglect API security. So as you go down this path with your organization in terms of adopting AI, you really have to focus on API security, making sure that you're getting some of those core basics right as you go there. Yeah. No, that's um very insightful and

I know that's something that a lot of people are still kind of learning and coming to grasp with as we still are learning more and more about AI in the landscape. Um, so kind of piggybacking off that, Pimo, could you tell us about some of like the key developments we've seen with technology around uh AI and API since last year? Absolutely, Lena. Thank you. Um yeah so let's start off with uh with some interesting news and sort of things happening around the sort of regulatory space slash standardization space which we are sort of very interested in following uh obviously because we're producing a tool that uh wants to give the uh companies ability to follow standards and to ensure that they're working uh with the correct threat models uh And so the first thing that I want to mention here is the CIS API security guide. And this is sort of uh our uh personal baby here at Firetale. Uh we sort of realized uh after working on API security already for a while that there wasn't really a good standard out there uh giving you sort of uh similar to what an ISO standard is for cyber security as a as a whole. for example,

uh giving you prescriptive advice for how to deal with API security. Um there there are some books written, but nothing in the form of a guideline or or or a path to follow. Uh and so we uh contacted CIS and and started working on this and it's now published. Um and the real uh real benefit here is this is not just a threat model like the well-known OAS top 10 for APIs. This is actually

giving you prescriptive controls uh to make API security sort of a thing that you can do. Um, and more specifically, this is designed in a way that it is very broad and can be used as a tool to start your security strategy planning for API security as a company and all the way down to actually including actionable items for doing implementation of API security. So, it's supposed to be a really broad tool uh to address this whole area of uh API security. We're very proud of it. I encourage everybody to go over there and take a look at it and uh collaborate with us. Um the next thing I want to mention is the track phone consent decree case. So this was a case where the FTC FCC um took enforcement actions against Crackphone Wireless um a division of Verizon. So there were some monetary uh

fines levied against them. But more importantly uh a large part of the consent decree has actual prescriptive actions around cyber security that they need to implement. And so although this is like an individual company uh that is having this consent decree um acted against them. This is something that that is going to be more and more on the mind of regulators. This is going to be uh should be on the mind of everybody who's working in cyber security at a large company. Um compliance violations

will be met uh not just in Europe but also in the US. Uh and this is something that uh is is you can't really rest on your laurels anymore and just uh say that you're doing some uh you know medium amount of effort towards security. you really need to press hard and actually figure out where you're standing uh with regards to API security. Um yeah, if I could just chime in there for a second, Timo, on that last point, um one one of the things that we've also talked about previously with Sunnil and actually with Anthony Johnson who formerly worked at a large uh publicly traded financial service company and it's you know for security leaders who work within regul regulated spaces, one of the things they keep an eye on is what is happening in other regulated spaces because typically it is not a question of if but when. So when you see

regulation coming for a particular space in this ca in this case telecommunications you can expect that it then comes for all the other regulated spaces financial services healthcare government insurance etc. So this is the you know in many ways you can think of this as kind of the first domino down the regulatory compliance path for API security. Absolutely. And you can take a look at

the EU's VA and DORA regulations again to your point uh about the financial service sector uh and their sort of like actually containing prescriptive guidance for API security in particular, right? Yeah, that that's a very good point. Um yeah, so the next thing is a new new sort of uh element from OAST top 10 regarding AI models. Um and so this is an interesting addition to their suite of uh sort of top 10 listings. Uh this basically lists the sort of uh it's again similar to other OAS top 10's threat model based analysis of the space. So giving you the most sort of uh

dominant attack vectors, the dominant risks that you're facing if you're deploying an LLM. So there's stuff about chronection attacks, there's stuff around uh data poisoning, excessive agency, uh insecure architecture designs, etc. So uh this is a very good thing to read and understand and be aware of uh just as a sort of uh groundwork for understanding what you might be facing if you're using LLMs in your product. But again, this is mostly

a a list of threats. Albby real. Um, but it does lack sort of enforcable action or sort of like actionable items. Uh, descriptive. Uh, it's descriptive, not prescriptive. Um, and so there's there's something um where again this is uh is a is a way where your good security posture on APIs can help you. fact that uh most LLMs are being used via APIs um and so you do have a leg up if you if you start there. Um

however, contrary to the case for APIs, there is already a light on the horizon for prescriptive uh standardization. That's the last sort of big item I want to touch upon. And this is a new ISO framework uh for standardizing AI governance. So this is exactly um the sort of thing that uh that was lacking in the API space is now uh becoming a reality in the AI space. So this is a standard that established structured guidelines for uh risk assessment, mitigation, compliance policies, um impact assessments, life cycle management, sort of all the good stuff that you that you can rely on for building out uh and sort of enacting a secure AI strategy. Um so this is I think a really good sign um and and sort of something that um yeah again you should definitely be aware of and get to know if you're working in the space and if you're sort of responsible for adding LLM features to your product or uh or sort of are in the position of deciding on LLM strategies for your business.

Yeah. Um, I think that's great, especially because I know there is such a huge gap right now and a huge need for stuff like this. Uh, especially because we're seeing so many risks and incidents across the landscape. Uh, Jeremy, do you think you could talk to us a little bit about some of the notable uh, incidents we have been seeing and breaches and things like that? Yeah, happy to, Lena. And what we did is we just cherrypicked a few. There's too many from the last year to go through each one individually on this podcast on this webinar rather.

But um what we did is we picked a couple of interesting ones. Uh I think we've lost the slides. So let me just try to pull that back up real quick. I'm not sure what happened there. Yeah, sorry about that. So sorry for anybody who

lost the slides there for a second. We've got them back on the screen. Um, but what we did was we picked a couple that are indicative of kind of some of the key different vulnerabilities and factors that we see time and again. We've picked four different ones to highlight today because they highlight four of these key aspects each individually. So the first one is the Irish government's co 19 vaccination portal breach. And what happened here is

that basically there's a vaccination portal. Uh, like a lot of modern web applications, it's a decoupled front end and back end. the front end makes API calls to the back end and it has request parameters that do things like highlight let's say an individual's um account ID in this case it might have been been an identification number or a national identification number I'm not exactly sure actually what parameter it was but it was the kind of thing where you could see in the URL of your browser you could see an ID number that clearly indicated that it was your record well it was very easy to just then go change that number to another number and you would get then somebody else's vaccination record. So this is an example of what's called broken object level authorization. It is long been the number one on the OASP API top 10. I think it's been for the last

two editions of it and rightfully so. it is a contributing factor in I think the something like 50 60% of all of the breaches uh that happen around uh APIs and the the real challenge here is that you know it's very easy to overlook the need to enforce serverside authorization with each record request and that needs to happen on the back end of the API and it needs to be around the combination of the user the piece of data and the action that they are trying to perform with that. And so this is something that again like I said it's super easy to miss um when you're building an API. And so it is uh uh the kind of thing that often gets overlooked. Uh common factor like I said highlighting that one as the first one. Second compromised robot vacuum cleaners. And what's interesting

here is that you know when you think about IoT and actually if we think about a lot of modern software you don't build software from scratch. meaning that you write every single part of the application stack that you're working on. You grab open source libraries. Those libraries are things that might have vulnerabilities in them as is the case here. And in the case of IoT devices, it's actually a pretty limited and well-known set of libraries that are going to be added for these devices. So here we have a a uh device software stack. In that stack is a piece of

thirdparty uh code. that code has a CVE, a common vulnerability um and exposure and that is centered around an API and the API in this particular library had weak authentication. Um so it really allowed anybody who had access to uh the robots on the network to then send them an instruction set with a crafted URL and a crafted uh payload. And so with that, um, these I mean, it sounds kind of funny, but if this had been a more serious device, this would have been a lot worse. But what ended up happening is that you could tell the robot vacuum cleaner to kind of yell profanities at the user. And so these, you had these vacuum cleaners kind of going around houses and I just had this image in my head of a vacuum cleaner chasing me in a room, you know, not like scary chasing, but a vacuum cleaner following me and then yelling bad words at me, right? And so like like I said, it's kind of funny in a way, but it also really highlights the the risk of vulnerabilities in software stacks and the importance of doing a lot of basics like keep your software stack up to date. It is

challenging in IoT to push out firmware updates and and so on. Um but I think it's very important then to you know as you're incorporating thirdparty libraries test them for these vulnerabilities. Right. Right. Third, OpenAI's chat GPT had an API vulnerability that could lead to DDoS attacks. And what this is is that, you

know, Open API as it trains its models, its LLMs on content off the web, what it does is it scrapes web pages for content, right? So, and and when I say content, I'm literally meaning text content, right? And so um there is an indexing function that prompts the browser they had or sorry their web crawler to go fetch all the content from a website. Well what it was was here you had a completely unauthenticated API endpoint. And this is again one of these common problems that we see is that you know you might build authentication into your API but do you have it on every single endpoint and as you add new endpoints are you making sure that they're also meeting the same security specs that are required of the rest of the API and in this case what it looks like is there was one unauthenticated endpoint that you could send a request to and tell it please go index a particular web page and then there was a secondary factor here where there was kind of a lack lack of input validation. So let's say I sent a particular uh URL, meaning a particular website, but I sent that same website, let's say, 10,000 times. Well, there's also a design flaw in the crawler where sending it 10,000 times would initiate 10,000 web crawls of that site and it would parallelize that. And I think it got up to the rate

of 5,000 requests per second if I remember right. And what that effectively did for some companies websites is it took them offline. And so, you know, telling OpenAI's webcrawwler to go index a particular website could also be telling it to please go DDoS that website, distributed denial of service and take that website offline. So, unauthentication on the API endpoint is kind of the root cause here, coupled with a couple of uh design flaws or design kind of let's say like things that didn't get thought through properly. And then last but not least, uh, Metal Lama framework vulnerability.

Um, so security flaw in the Metal Lama, which is one of the most popular open-source uh, LLMs out there. I I've talked to a number of organizations that are actually running their own version of it where they're taking, you know, this open- source LLM and then they're training it with company proprietary information and they feel comfortable doing that because it's within their own control, their own environment, right? But when you're introducing open source software into your environment, you're bringing in everything that comes with it, including in this case a CVE that allowed arbitrary code execution on servers running this inference stack. Now, this may or may not be a problem for your organization if you've got that let's say the servers or the cluster where you're running Metal Lama well protected with things like network security, firewalls, etc. But it does highlight the danger of kind of insecure uh software that you might be introducing. And in particular, the thing I wanted to kind of mention here is that the um the particular vulnerability focused on the serialization and deserialization of inputs and outputs on the API calls going into Metal Lama. So that's kind of the fourth thing that I wanted to mention. If you think about

it, we've gone through we've went through um an authorization example. We went through a thirdparty CVE example. We went through an unauthenticated endpoint example. And last but not least, we're pointing at a kind of input output um you know, string, text, parameter, whatever you want to think about it, but input and output handling on the uh on the API there. So hopefully

that gives some sense as to the flavor of the types of vulnerabilities that we see on APIs again and again and and you know, just like I said, picking a few select incidents from the past year to highlight and focus on some of those problems. Yeah. Yeah, that's great. Um, and I know that at Firetale, we've been tracking breaches and incidents across uh for quite some time now and kind of seeing where we see the most vulnerabilities, risks, etc. And that's kind of how we inform our data for the state of AI and API report. So, do you want to tell us a little bit about how we compile the data for our breach tracker and how we show all that and everything? Yeah, happy to. So, uh, for those who

don't know, you can find our API breach tracker. It's always linked from the footer of our website. I think it is just literally firetail.ai

API-breach-tracker if you want to try to find it directly. But like I said, link from the footer of our website. And what we do is um people can report breaches into us, but we also track a number of different uh news feeds and sources across the web to look for breaches that have happened around APIs. And and the

whole point of it is to try to understand how thread actors are exploiting APIs in the wild. There's kind of a general rule of thumb that for every publicly disclosed breach, there's probably another nine that go undisclosed. Kind of a a 10% get disclosed kind of rule of thumb. Um, and so we've been tracking, I think since 2017, uh, over a billion and a half records have been exposed in that time period. There's one thing I do want to highlight is that there are things like incidents versus breaches. breaches. In our case,

we tend to be a little bit of a stickler in terms of only classifying things as breaches when we know that there have been records exposed and let's say like data sold on the dark web or something like that. Um, so we track both incidents and breaches. Incidents could also include reports that could also include things like responsible disclosures from security researchers.

So when we look over the last several years, what you can certainly see if you certainly if you take those two items together, but even if you just focus on the incident side of things, um is a pretty sharp rise in the last three years. And honestly, we attribute a lot of that to greater thread actor awareness and understanding of how to use APIs as well as the ability of things like AI code authors that allow uh thread actors to really like very quickly um you know write API requests against APIs and and do that you know in a scripted way with a lot of automation around it. And so um from that data one one of the things that we do and we've done for the last several years is we try to then classify around kind of primary and secondary attack vectors. So how do these APIs get breached? And one thing from uh the comparison between our 2023 and our 2024 report um authentication and authorization remain number one and number two. Authorization is is a little bit ahead of authentication at this point. Um, that's not super surprising. When we analyzed previous years, what we always found was that authorization was always responsible for the greatest number of records breached. This

analysis right here that you're seeing on the screen right now is in terms of number of incidents. Um, so that's that. This is also detailed a lot more in our state of AP AI and API security report. So for anybody who's interested, the the a fuller breakdown is in there. And then one of the other things that we use in preparing the report is we actually take data anonymized data um off of the Firetale platform. And there's a couple of things that I thought were pretty interesting from this year because this has changed from previous years. Um one

of the things from previous years is that we saw I think um 401's unauthenticated as our number one uh traffic or number one error from previous years. And what we're seeing right now is 429 too many requests which is very often um rate limiting in response to things like bots. Um so bot traffic is way up. We again attribute that to thread actor automation. A lot

of web scanning goes online and and you know I've said this before but when you put something online even an API at a random IP address it starts getting traffic in less than 5 minutes. And that traffic is getting increasingly intelligent. It's not just like ping, is there something there? It's ping, let me try to figure out what's there. Let me

now use my automation and a little bit of AI to to figure out what is there and and try to send things to it. One of the other things that I think is kind of interesting is that when you look at the aggregate of all of this, if you look at your 200 classes, which are successful requests, the total is only around 54%. Meaning only 54% of API requests are successful. And so what that actually means is that like 46% of requests are failures. Great. That's very obvious. But from that it really means that we're paying a lot of extra compute costs to run our APIs. And so

actually crafting our APIs in in smarter ways, making sure that we have a little bit more security around them. Maybe pushing some uh controls to things like uh network areas can actually reduce our application server loads. So that's something that I think was also kind of interesting to uh think about.

I think Jeremy, just to that point, if you go back one slide real quick, a really interesting thing to take into account is that one of the most common findings on our platform is the lack of a 429 um response being set on the API. Meaning that uh to your point, a lot of those 200 responses are probably being uh given to bots. uh or or sort of reflect a lot of this overprovisioned um capacity for calculation, right? If you think that the if nothing else then the 429 uh percentage here should be even higher. Yeah. Yeah. It's a great point. Yeah. It's it's kind of crazy to see

though. Um just to just to digress for a second on on your point because I I do want to take a second to talk about it. One of the other really interesting things that we've seen coming off of our platform, aside from the number of findings, um we're seeing a lot more attempts at things like code injection, at things like planting malware via APIs. Um that is something that is

relatively new over the last kind of year and a half. Um and and we do touch on that a little bit in the report, but just know that if you have an API that has um some kind of vulnerability in it, thread actors, the likelihood of thread actors trying to drop a particular payload that is specific for that vulnerability is increasing. O these are some crazy statistics. Um, and getting on to the the risks around APIs, let's segue into talking about some of the AI risks that we're now seeing in 2025. Um, Jeremy, do you want to continue kind of going on about some of what we've been seeing in the landscape? Yeah, and you know, I'll start with just kind of a um a citation from a third party. This is not Firetale Research. We

do attribute it in our report and and cite it as such. Uh this is from a group called the uh responsible AI collaborative which is a kind of a working committee within the responsible uh sorry within the waking up foundation and that's an organization that's been tracking the development of AI going all the way back to the 1980s. So we took some of their statistics and we kind of analyzed year-over-year to try to understand and unsurprisingly since 2021 the uptick is just massive. And um if you look at kind of a year-over-year in the in the last 3 years, we're looking at more than 150% growth. Um so the the

it's not really surprising when you kind of frame that in the context of like, well, what is AI adoption like nowadays? Well, AI adoption has grown at record rates over the last few years. Why? Because it's super accessible nowadays. It used to be that organizations needed a lot of specialized expertise and maybe hardware and and software and and teams to kind of run those environments to um to take it on AI projects. Well, nowadays, anybody with a credit card and access to AWS can get an AI system online in like 5 minutes. And so, sure enough, in the rush to adopt AI, organizations make mistakes. We saw this

with cloud. We've seen this with really every new technology uh that that kind of becomes available at scale. as these technologies bring a lot of b benefits and value to organizations they rush to adopt they make mistakes so obviously the highlighted incident there is pretty high what's interesting also to understand is that when you take that and you break it down um there is a lot of unknown here and so when we we look at the AI attack vectors and we look at all the incidents that have been uh reported unknown is the largest percentage and it goes to something teemo said earlier we've got our first view of the risk model around AI. Right? Now, the first view is rarely the correct or the final view. Right? So, we've identified 10 top risks. It does not mean that they are the only risks and it doesn't mean that there are the 10 risks that are going to remain the top 10 for the next years. And so, like

there's a lot of lessons being learned very fast and then sometimes the hard way right now going on. Um, but we do already see things like prompt injection being a very very high up. it it's actually an area that really kind of ties directly to things like SQL injection on the API side. They're very

carefully uh sorry, very closely related types of problems. Um so that's something that we see and we highlight this in our our report as well. Yeah, absolutely. Um so with all these

risks and vulnerabilities, what are some things that we can actually do uh in our own AI and API security strategies? Uh Teemo, do you want to take this one? Uh yeah, absolutely. Let me just sort of start off here by um going through like this model for API security that we've been using for quite some time and now are discovering that this I mean obviously this also works for other areas and specifically it does work very well for AI security as well. So this is the way that we look at uh your sort of windows into your own API and AI security. Um, and it starts off by discovery and visibility. This is like the the very foundation of security is the fact that if you don't know about it, you can't secure it, right? And this was true for APIs, but it's even more true for AIS. Um, as Jeremy went over like in the almost the first slide there, uh, deploying LLMs is a massively more complex task than deploying an AI API, which is already quite a complicated thing um to to produce a a sort of scalable and and well functioning API. uh LLMs have so

much more infrastructure in them, have so much more cloud presence to them, um that it really behooves a a company to invest significantly in discovery and visibility if they want to keep track of what all of their development teams are doing uh with AI. The next step then is assessment and enforcement. is also a key factor on the Firetale platform and similarly as to API security with our security posture management around that we're now building and have built the AI equivalents for that uh getting getting your sort of models assessed getting uh getting findings around how uh how your cloud deployment works for the AI deployment and then enforcement having sensible workflows uh to be built in there alerting and and policy violations that get triggered and go to the right people in the right channels, right? Um and then obviously all of this ties together into like called observability and and auditing. Um making it so that you can have uh reporting and reports making it so that you can actually get to the sort of big G the governance side of it uh and making sure that everything uh stays uh stays the way it should be. And so that's that's the way that we attack it. Maybe Jeremy can go over some

of the details of the individual ones here. Um, but this is this is sort of the high level picture how we usually do things. Yeah. And I think that high level is super important because like you know all the things that you said, but if you don't have those basics in place, it's really hard to get a handle on, you know, how well or poorly are we using AI today and what are the security risks to us as an organization. But just to dive in on a couple of things that you mentioned there. Um unttracked AI equals unprotected AI is really just another way of saying, you know, if you can't see it, you don't know about it.

You have no idea how good or bad it might be. And one of the other things is that um just like we've seen with lots and lots of new technologies. There's a ton of experimentation going on right now. And so one of the things that we've seen let's say in the past in terms of let's say the adoption of cloud platforms like AWS Azure GCP from my time in the mid2010s helping organizations solve cloud security problems we very often saw this problem of like oh well I'm going to stand up something to test it and see if it's what I need it to be. So you stand it up you test it you find out either yes or no but that was just an experiment. Then you go away and you build the final thing in production. Well, did you tear

down that experiment or did you leave it around there? And so this kind of like zombie stuff that gets created, that gets built, whether it's in code, whether it's in cloud, whether it's a piece of infrastructure that you've deployed, that brings a real risk because when somebody finds it later on and starts to use it, they may go into using something that is kind of fundamentally insecure. So we see a lot of kind of challenges around that. And then one of the last things around um supply chain risk that I want to talk about. This is something that is a little bit new around AI that a lot of organizations are trying to figure out because you've got a combination of different supply chains coming together. you've got, let's say, like the technology supply chain, which is typically the LLM, the provider. And one of the things we've talked about and we we've kind of looked at is are all LLMs created equally from the security perspective, or are there certain LLMs that have certain risks that you might need to know about whether that is on things like content safety or let's say ethics or or um reasoning engine things or whether that is on things like disclosure of data. um both of those can

be kind of thirdparty risks around the technology side, but then you also have risks within your own data sets that you're feeding into it. So you've got these two supply chains coming together in ways that I think are not very well understood just yet. So all of this kind of you know comes back to you need to understand what's going on. So that discovery step is really really super critical that builds you an inventory. From that inventory, you can start to do some classification and some risk assessment. Internal use case for searching our internet, probably not high risk. External use case for

recommending a financial services product to a customer based on their demographics, potentially high risk, right? Like lots of that stuff you need to kind of have visibility in in terms of understanding. And you know, again, I just wanted to kind of follow on to the last couple points that Teimmo raised around assessment and compliance. Um, we already see ISO 4201. There's not a ton of companies out there that are adopting it just yet, but there are also standards coming out of places like the EU in terms of, let's say, safe, ethical, fair AI adoption, um, that organizations are going to have to start thinking about in the very near-term future. And again, if you don't have

those first steps around visibility, let's say building an audit trail, whatnot, it's going to be very hard for you to go down the second set of steps, which is around assessment, compliance, and whatnot. So, make your security assessment a standard practice. Doing things like consultations that are point in time are great for point in time, but we've talked a couple times already about how there's a ton of experimentation going on in AI right now. point in time is probably good for what like one month, two months before a new set of apps has been stood up, more data has been integrated, etc. So if you kind of think about doing all of this stuff programmatically, using automated platforms, that's probably a better approach and then using some kind of lens to assess things very quickly is a super powerful way to to kind of help your organization manage the risk around it. So I think that's all I'll say on

that. I want to kind of keep the conversation going on. Teimmo, if you can kind of summarize like what's the bottom line here? Yeah, I mean the bottom line is that as we already said many times like APIs are the the sort of foundation of AIS. Um and and many people who are figuring out attacks against AIS are using APIs obviously. So, so when you think about

this sort of way that a new technology comes along and maybe some people don't understand it quite as much and you sort of don't know your threat model exactly, what you should be thinking about is going back to the basics, going back to the fundamentals like we discussed. Um and so there's stuff that you can do with uh cloud infrastructure about visibility and inventory as we said but also going back to like how is a how our AI is being used and that is by APIs. So a fundament a good fundamental um API security stack will let you sort of fix a lot of sort of early fires uh that you might be otherwise uh experiencing. And that's sort of like I think the the sort of main bottom line here um around around what we are trying to say uh with uh with regards to AI security and and that's basically also what we are sort of trying to uh uh sell in our tool right that's what we are trying to make uh firetail AI security solutions for.

Um so there are things around uh uh you know continuous code to cloud AI discovery and risk assessment. So uh identifying AI usage um in in sort of environments and and evaluating your security posture there. Uh there's there's AI threat detection. And so scanning the models, scanning your code bases, seeing how the models are being used, what are the prompts, what parameters are getting injected into the prompts, what are the what are the sort of like GDPR, PII implications there. Uh there's real-time attack protection. So

uh all of the sort of logging infrastructure uh that goes goes towards API security is immediately applicable to AI security as well with detection and tagging and and sort of the normalization that we do on the platform. Um there's uh AI specific risk scoring uh available meaning that that we sort of uh take into account relevant standards like the LLM top 10. Um, MITER actually has uh their own AI related framework already out as well. Uh, and we're using that to that to sort of uh score and classify findings that we get uh against the sort of LLMs that are out there. Um, and then obviously a great focus of our tool is always to be very businessfriendly in the sense that we don't want to disrupt anybody's workflow. We don't want to be just

another tool. uh we want to be able to integrate with the sort of ticketing systems with the notification channels with the sort of messaging uh pathways that you have at your at your company and sort of intelligently get back in there and don't provide a lot of noise but just provide some really good quality signals or stuff that you need to address. Yeah, that's basically that. Awesome. Thanks so much for that,

Teimmo. Uh we're coming to the end of our discussion, so I am going to take some questions that we've got from the audience, but I'm also going to leave up this QR code. Uh anybody who's here can just scan this for access to the report. If you've registered for this, you can also get a free report in your email anyway. But if you're watching this live or not live, sorry, if you're watching this at a later time on demand, then you can scan here and receive a copy of the report. And we'd also like to invite you

to check out our website, grab a demo. Uh we've got a free tier as well for people to try out the AI security capabilities. But now let's get over to the questions. Um our first question that I have right here is ah sorry one second. There it is. So um this one's for Jeremy. Where do you see the biggest emerging threats to AI systems in 2025 and beyond? I think the biggest thing is the unknown unknowns. It's you know that that

organizations don't know what's going on and they don't know the risks around what's going on. There's a stat I saw recently that 90% of AI usage today is classified as so-called shadow AI meaning that it's happening outside the purview of the um security organization and that's that leads me to think that you know just a lot of organizations don't know what's happening that's actually the biggest risk because what can go on when you don't know how something is being used is bigger than let's say a fundamental flaw within one of the platforms so that that's where I would say is the biggest risk and I think that's going to be the case for like the next one to two Yeah, and I mean that's a huge percentage like 90. Oh my gosh. Um, so yeah, that makes perfect sense. Uh, Teemo, we've got a question for you here. What is your view on securing AI plugins and thirdparty model APIs like chat GPT integrations? I mean that's a that's a very good question. So uh I I would view the the

sort of whole uh point of architecting something in the similar way as you would do with any other third party service is just make sure that you've got security controls in place. Right? So in this case like ensure that you've got logging and monitoring in there. Um, for plugins, be aware like, uh, stuff in the LLM top 10 around excessive agency, for example, making sure that if you're using plugins that the plug-in has a very sort of limited scope and is is being watched by guard rails, etc. So, uh, just making sure that that you're like it's an exciting and new toy, but uh, make sure that it stays in its lane, I would say, is important.

Yeah. Yeah, absolutely. I think that's a really important point. Um, you've just got two more questions that I think we've got time for. Uh, this one real quick for Jeremy. Uh, what role should red teaming or LLM specific testing play in any AI security program? I think it's an important role, but I actually think it's like a second step.

I think the first step is to figure out what LLM you're going to use. And that takes a lot of testing because the different LLMs, even in our own experience and things that we've done, we found that different LLMs excel at different tasks, right? And so some are better at codew writing, some are better at let's say spreadsheet analysis, whatever. So you figure out your use case. Based on your use case, you might figure out your LLM. You do a little bit of let's say the integration work, the coding, building your agent, building your prompts, whatever the case may be. To Timo's last point, make sure that you don't allow for excessive agency. Make

sure your prompts are are pretty clear as far as what they're limiting the responses to potentially being, etc., etc. At that point, you then have the risk of what the LLM is vulnerable to.

And that's the point where you bring red teaming into the equation and you start thinking about like, okay, if something were to go wrong, somebody were able to access this LLM through some unauthenticated or unauthorized method, what are the types of things that they could do? Jailbreak techniques, uh, injections, etc., etc., but that's where I see it. I see it as being important, but kind of being secondary to kind of that use case. Gotcha. Okay, that makes sense. Um, and so the last question, this one can go to either of you or both of you, but what are the biggest gaps that you guys are seeing right now between AI development and security and compliance? The big one. Um, well, maybe I can go first. Like I I

think there's just a very big gap in understanding, right? Um it it's it's like we we've been following and working with AIS uh non-stop uh for for since since they sort of emerged uh like a year ago. um the the sort of the fact that uh that you can be working on an LLM product and not really understand at a fundamental level how LLMs work means that you cannot prepare yourself or the application for the threat model that you're facing, right? Um so we've got things like that are very exciting like MCP which is a way to have APIs be automatically be available for LLMs. uh and now there's a whole bevy of vulnerabilities where people can put uh malicious instructions into their MCP protocol that then attack an entirely different MCP service just because they're both sort of in this listing directory. Um, and this is this is sort

of like the it's really it's really um a lack of understanding that I think is right now the biggest gap in security is people doing things without fully comprehending the the consequences. Yeah. And I fully second what Timo said there. I mean it's something that I kind of said earlier. It's the unknown

unknowns, right? There's so much happening so quickly. If I had to say what the biggest risk is right now, I think it's the risk of like the common mistakes that an organization and the people within the organization will make as they just like rush down this path to Teemo's point without understanding how these things fundamentally work and what some of the risks are around them. And you know, not having tooling that addresses the risk model around it is also a challenge, but there's just like too much to try to understand too quickly. So for a lot of organizations I think the thing to focus on is what you can do right now in the near term that is going to enable adoption because like no security team wants to be the team of no that is holding the organization back. So figuring out like what is addressable for you and I think like those six pillars that we tal about like just get visibility as a first step know what's going on then you can start to make decisions around it and then as you learn about additional threats you have the visibility that allows you to check oh okay it's this type of threat well do we see across our let's say like our AI landscape that threat posing a risk to us anywhere you have to start with like trying to reduce the number of unknown known unknowns and that can help you at least move forward. That's that's kind of how I see it. Yeah. Yeah. Absolutely.

I think that's a great place to wrap up and uh thank you everybody for attending, everybody who attended live. And once again, if you did not attend live, you know, feel free to get copy of the report right at this QR code. And if you enjoyed this discussion, you know, we'd also like you to check out our podcast and we've got some other webinars. They're all up on the Firetale website and YouTube channel for you to check out. We've got modern cyber and

right now we're also doing a special breach series where we ask people who have been breached about what happened, how it happened and the lasting impacts on it. So anyway, thank you guys so much for coming today and we are going to end it right here. Thanks so much. Thank you.

2025-04-27 21:33

Show Video

Other news

Inside Railway Maintenance: Practices, Strategies & Future Technologies 2025-05-14 04:24
The rise of Cursor: The $300M ARR AI tool that engineers can’t stop using | Michael Truell 2025-05-10 04:13
Why Refresh and Replace Outdated Technology with Juniper Networks | CDW 2025-05-08 00:45