Nine Cloud Security Reasons for Resting Breach Face


- [Maya] Hello, everybody. Hope you're having a great morning. My name is Maya Levine. I'm a product manager at Sysdig.

And today we're gonna talk through "Nine Cloud Security Reasons for Resting Breach Face." We're gonna go through actual attacks that have happened in the cloud in the past few years and focus on what we can take away and learn from them. And as a quick disclaimer, if any of your companies are mentioned here as one of the breaches, I'm sorry in advance for the PTSD that I'm sure I will be bringing on for you.

But on a really serious note, I do think that as an industry we need to have an attitude of openness around breaches, both in that we shouldn't be shaming the victims of breaches, because they're happening more and more often, and in that, if you are the victim of a breach, you should be as open and transparent as possible about how and why the attackers were successful, so that everybody in the community can grow and learn from that. This is the RSA disclaimer. Now, we can't talk about cloud attacks without talking about the migration to the cloud. And cloud migration has been a trend for over 20 years now.

But until fairly recently we hadn't seen many cloud-specific attacks. And we think that attackers didn't find cloud environments worth attacking because of the heavy investment in skills that was required to learn how to attack them; it just wasn't as lucrative. But now cloud is the mainstream, if not the default, way of acquiring infrastructure, and because of that, it becomes much more lucrative for attackers.

So if you are in a state where you're not prioritizing your cloud misconfigurations, let me be, I'm sure, not the first to tell you: it's too late. Attackers care about the cloud; they are even specializing in cloud attacks. And so we can no longer ignore it, because attackers are prioritizing it. This is actually backed up by a Verizon 2021 report, which found that external cloud assets were targeted more than on-premise assets for incidents and breaches. And the truth is that certain types of attacks are actually easier to execute in the cloud. If we think of a supply chain compromise, for example, the way we build modern applications involves a lot of libraries and dependencies, and it's harder to tell what is using what.

And there are a lot of things offering automated processes that attackers can take advantage of. When it comes to crypto mining, the cloud is the perfect breeding ground for crypto mining attacks because of the nearly infinite elastic compute that most users have access to. So let's jump into the first attack, which focuses on ransomware. And unlike in "Austin Powers," it's never a laughing matter.

This attack targeted ONUS, which is one of the largest cryptocurrency platforms in Vietnam. The attackers exploited a Log4j vulnerability in a Cyclos server of ONUS, and they left back doors behind, naming them kworker to disguise them as the Linux operating system's kworker service. This was used as a tunnel to connect to the command and control server via SSH, which is a good way of avoiding detection. The attackers then started a remote shell and discovered a configuration file which held AWS credentials.

Now these credentials had full access permissions. And with that, the attackers were actually able to access S3 buckets and begin an extortion scheme. ONUS discovered that customer data had been deleted from S3 buckets.

At this point, of course, they deactivated all of the relevant access keys. But at this stage it was too late. They received a ransom request for 5 million US dollars via Telegram from the attackers. They chose to decline this request. They had to disclose the breach to all their customers, and then go through all of the Cyclos nodes to find and patch all of those back doors.

And the reason why attackers are still doing ransomware attacks, since it's been around for years now, is that they know that companies hate seeing operations grind to a halt. That financial incentive is still there years later for ransomware attacks. And in this particular instance, there were multiple failures that allowed this attack to happen. The first is that the system was not patched for Log4j.

That's what allowed the initial access. But actually this wasn't really ONUS's fault because they patched all of their Cyclos servers as soon as a patch was made available, but they were already compromised by that point. Log4j went from announcement to being weaponized faster than vendors could release patches.

So I think what's more important to focus on is the fact that they had really overly permissive access keys. This is what enabled the attackers to overwrite and delete the data from the S3 buckets. And the impact here is that the personal information of over 2 million ONUS users was leaked, which included government IDs and password hashes. And if we think of GDPR, for example, this can result in massive fines. This chart shows the amount of GDPR fines given out over the past few years.

We can see that this is trending upwards. So as if it wasn't enough to worry about the attacks themselves, we now also have to worry about secondary fines that could be levied against us if we didn't protect customer information. So what can we learn from this attack? You wanna patch as soon as possible.

In this case, again, this was done, but because Log4j went from announcement to being weaponized so quickly, it wasn't enough. So if you're waiting for a patch, consider other mitigating controls. Something like a web application firewall could have actually filtered out the Log4j exploit in this case. And something I certainly wanna drive home here is that overly permissive access, if you have that in your environment, is like a gift that you're handing attackers on a silver platter. If they make it into your environment and they can do whatever they want, they can really wreak havoc and escalate their attack very quickly.
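
To make "least permissive" concrete, here's a minimal sketch of what scoping an application's credentials down to a single bucket might look like with boto3. The user name, policy name, and bucket are hypothetical, not from the ONUS incident; the point is simply that the key should only be able to do what the application actually needs.

```python
import json

import boto3  # AWS SDK for Python

iam = boto3.client("iam")

# Instead of attaching AmazonS3FullAccess (or worse, AdministratorAccess),
# grant the application user only the actions it needs on the one bucket it uses.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::example-app-bucket/*",  # hypothetical bucket
        }
    ],
}

iam.put_user_policy(
    UserName="example-app-user",             # hypothetical user
    PolicyName="example-app-s3-least-priv",  # hypothetical policy name
    PolicyDocument=json.dumps(least_privilege_policy),
)
```

With a key scoped like this, a leaked credential can still hurt, but it can't delete every object in the account or spin up new infrastructure.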

So least permissive should be a major goal. For the second attack that I'm gonna talk about, the attackers were actually able to pivot from the cloud to on-premise. This attack started with a Russian-speaking cybercriminal group known as Circus Spider. Back in 2019, they created the NetWalker ransomware variant, and they sold it to criminals in an as-a-service model, meaning a criminal could rent this technology for a fee or for a percentage of the profits.

And what this does, this as-a-service model for malware, is it enables attackers who are not very technically savvy to still execute really intense attacks. So it kind of like increases the scope of who can attack you in a meaningful way. This ransomware variant, NetWalker, encrypts all of the files on the local system. It maps network shares and enumerates the network for additional shares to target. So it attempts to kind of expand its reach by accessing everything with the security tokens of the victims that are already logged into the system. It's the type of ransomware that just spreads and spreads and spreads.

And Equinix actually suffered a breach that utilized this ransomware variant. Officially, they had a configuration management deviation. What this most likely means is that somebody, like a developer, spun up a cloud environment that was outside of the scope of the normal security practices. And that allowed the attacker to get in via RDP. Now this cloud environment had an instance that had access to an on-premise environment. So the attacker was able to pivot from that cloud environment to on-premise.

And this is where they actually encrypted the data with the NetWalker ransomware variant. Once they made it onto this on-premise environment, Equinix's defense systems picked up on their presence, and they were able to contain this attack and get all of the data back from their own backups. So as far as ransomware attacks go, this is kind of the best-case scenario for the resolution: both being able to contain the attack and getting all your encrypted data back from your backups. And the moral of this story, as I see it, is you can't protect what you can't see. The cloud makes it very easy for people to spin up new instances and resources, and that's part of why we love it. But it adds this challenge for security, because what happens in the cloud is often invisible to the people responsible for protecting it.

And this is made worse by the fact that in many organizations, DevOps are the ones who manage the cloud, and SecOps and DevOps may not be communicating in the healthiest, most productive way. The particular instance that the attacker used in this attack was unknown to security, so it wasn't monitored or configured to the normal security standards.

And the impact was that approximately 1,800 systems in Equinix ended up having ransomware installed on them. But remember, they had good backups, so they didn't lose any data, and no customer data was leaked. But the takeaway here is that this incident came down to a lack of visibility, specifically security visibility in the cloud. And for this, you need to keep an inventory of your cloud assets and apply security policies to all your systems.
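
Keeping that inventory can start very simply. Here's a minimal sketch, assuming AWS and boto3, that walks every region, lists the EC2 instances it finds, and flags anything without an owner tag as a candidate shadow system. The tag name is hypothetical; use whatever convention your organization already has.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

inventory = []
for region in regions:
    regional = boto3.client("ec2", region_name=region)
    for page in regional.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                inventory.append({
                    "region": region,
                    "id": instance["InstanceId"],
                    "state": instance["State"]["Name"],
                    "owner": tags.get("Owner", "UNKNOWN"),  # hypothetical tag convention
                })

# Anything nobody claims ownership of is exactly the kind of system that
# ends up outside the scope of your normal security practices.
for item in inventory:
    if item["owner"] == "UNKNOWN":
        print(f"Untagged instance {item['id']} in {item['region']} ({item['state']})")
```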

Of course, that's easier said than done. These unmanaged or shadow systems are really common causes of security breaches. It's also worth noting that, even if you are in the cloud intentionally, your on-premise assets should still be in scope for the discussion of how to protect yourself.

On-premise and cloud are not separate, isolated use cases. They are almost certainly connected and therefore can be abused. And this one's almost too obvious to say, but I'll say it anyway, back up so you don't pay up. Another way of saying this is prepare for the worst-case scenario, right? Living in California, I've been told probably since age five to keep some cans and water in a backpack somewhere in case the big earthquake hits. That's preparing for the worst-case scenario. And that's the same kind of mentality that you should have in terms of your environments.

You should expect that the worst will happen, so that if it does, you are prepared. Now, for the third attack, I want everyone to consider what happens when the security controls that you're supposed to have in place don't work quite as intended. When it comes to customer data, we have this expectation that security controls are in place so that only the right people can access it. Now, Pfizer was found to have multiple exposed files in a misconfigured Google Cloud Storage bucket. And this bucket contained transcripts between users of various Pfizer drugs and the company's customer support.

In total, there were hundreds of transcripts from people all across the United States, and each transcript contained a lot of personally identifiable information, or PII: home addresses, emails, medical history, the chats between customer support and these users. And when the researchers who found these exposed files reached out to Pfizer, they went through and secured this bucket. But the potential impact here is really immense, because if malicious or criminal hackers were able to access the data that was stored in this bucket, they could have very easily exploited it to target these users with fraud campaigns.

Think of how easy it would be to make a convincing phishing email if I knew exactly what drug you're using, exactly what problems you're having with it, the previous communications with customer support, and all of the other data that was in these transcripts. All I would have to do is get the template of what such an email looks like, and it would fool most people. So again, if you don't keep your customer data secure, you're making yourself vulnerable to legal issues, not just GDPR. California also has laws to try to protect data privacy. Make sure that you never leave a system that doesn't require authentication open to the internet.
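
One way to catch that kind of exposure before researchers, or attackers, do is to regularly scan your own storage for public access. Here's a minimal sketch using the google-cloud-storage client, assuming you have credentials configured for your project; any binding that grants a role to allUsers or allAuthenticatedUsers means the bucket is open to the internet.

```python
from google.cloud import storage  # pip install google-cloud-storage

PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}

client = storage.Client()  # uses your default GCP credentials and project

for bucket in client.list_buckets():
    policy = bucket.get_iam_policy(requested_policy_version=3)
    for binding in policy.bindings:
        exposed = PUBLIC_MEMBERS & set(binding["members"])
        if exposed:
            # Any hit here means the whole internet holds this role on the bucket.
            print(f"{bucket.name}: {binding['role']} granted to {', '.join(sorted(exposed))}")
```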

And this GCP bucket had incorrect permissions that allowed the data to be publicly readable. And maybe consider: is all data worth storing? If it's sensitive customer data, you are liable if something happens to it. So do you really need to log all of it? If you do need to log it, at the very least make sure that it's encrypted. For attack number four, I really like the example of a poisoned well to relate to supply chain compromises that utilize malicious image distributions.

So just like the unsuspecting villager who goes to drink from the well without realizing it's been poisoned, unsuspecting developers are utilizing images that are malicious without realizing it, because attackers are planting them as traps in public repositories. And this exact thing happened in 2020. An attacker took a malicious Amazon Machine Image, or AMI, which is basically just a pre-packaged EC2 instance. In this case, it was a Windows Server 2008 instance.

And they took this AMI and they planted it into the AWS Marketplace, which is a very public well. When this instance gets started, a script placed in there will run a crypto miner in the background. And by embedding a crypto miner in an AMI, the attacker is basically earning passive income. If anybody happens to use this, then they'll be mining and generating money for them. Now let's take a step back and really quick, just in case anyone isn't familiar, talk through what is crypto mining and how attackers actually make money off of it.

So a typical transaction that uses cryptocurrency goes as follows. Lily wants to buy a product from Eric using Bitcoin. She'll use her private key to sign a message containing Eric's address and the Bitcoin amount. And then that transaction is bundled with other ones into what's known as a block. This block then gets broadcasted to all of the mining nodes in the Bitcoin network.

And this network is going to validate the transaction using algorithms, in a process that we call mining. Now, the first miner to validate the new block for the blockchain receives a Bitcoin reward (the block reward plus transaction fees). And once that happens, the transaction is considered complete, the new block gets added to the blockchain, and Eric gets his money.
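
To show why mining eats so much compute, here's a toy proof-of-work loop: keep hashing the block contents with an incrementing nonce until the digest starts with enough zeros. Real Bitcoin mining is this same idea at an astronomically higher difficulty, which is exactly why attackers want someone else's CPUs doing it.

```python
import hashlib

def mine(block_data: str, difficulty: int = 5) -> tuple[int, str]:
    """Find a nonce such that sha256(block_data + nonce) starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

# Toy "block": in reality this would be a block header referencing real transactions.
nonce, digest = mine("Lily pays Eric 0.1 BTC | prev_hash=000000abc")
print(f"nonce={nonce}, hash={digest}")
```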

Attackers come in at this stage here. The money that they're making from crypto mining attacks is just that reward they get for validating a new block. It's not actually a lot of money. What you really need for an attack to be considered successful is scale. You could also make money from this, right? You could run a PC under your desk, try to mine, and collect those rewards.

But with the current rates of electricity that PG&E is offering in California, you're probably gonna lose money rather than make money. However, if you can take somebody else's PC or somebody else's cloud infrastructure and do mining there, then you're getting that reward for free without incurring any of the costs. And Sysdig has estimated that for every dollar that attackers make, the victims are charged $53. To put that into perspective, for an attacker to not even make 10 grand, the victim will be billed over $400,000.

So for attackers, crypto mining is this low-risk, high-reward way to make a profit. And again, scale is what matters to them. The more they mine, the more money they make. However, the impact on the victims can be really huge financial losses. And the victim in this attack would unknowingly be running these crypto miners, possibly multiple if they are running more than one instance at a time. As your CPU goes up, so does that AWS bill.

So for this attack, I think that one of the main takeaways is you should only be using trusted sources for your images. And this is not specific to Amazon and AMIs. Sysdig analyzed many, many images and found multiple malicious categories. Crypto mining is the most popular, but there are many different categories that attackers utilize, and these traps are planted in many different repositories.
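
A lightweight check along those lines, assuming AWS and boto3: before launching an AMI, look up who actually owns it and whether it comes from a publisher you trust. The AMI ID and the allowlist below are hypothetical; in practice the allowlist would be your own account IDs plus vendors you have vetted.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

TRUSTED_OWNER_IDS = {"111122223333"}  # hypothetical: your own accounts / vetted vendors
TRUSTED_ALIASES = {"amazon"}          # images published by AWS itself

def vet_ami(ami_id: str) -> bool:
    image = ec2.describe_images(ImageIds=[ami_id])["Images"][0]
    trusted = (
        image["OwnerId"] in TRUSTED_OWNER_IDS
        or image.get("ImageOwnerAlias") in TRUSTED_ALIASES
    )
    print(f"{ami_id}: name={image.get('Name')} owner={image['OwnerId']} "
          f"alias={image.get('ImageOwnerAlias')} trusted={trusted}")
    return trusted

# Hypothetical AMI ID; refuse to launch anything that fails the check.
if not vet_ami("ami-0123456789abcdef0"):
    raise SystemExit("AMI is not from a trusted publisher -- do not launch it.")
```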

Even if you are using trusted sources, you should still have static and runtime security tools installed on the instances to ensure that no malicious activity is occurring. For attack number five, unfortunately, it didn't take me very long to find a real-life example of compromised credentials. Because it turns out some people think that it's a good idea to post their credit card information on social media and include the security code. Now, I am not saying that developers would do this. However, there are many ways that credentials can become compromised, and there have been known cases where developers have posted credentials in public code, and attackers found them and used them for their attacks.

This attack all started with compromised credentials. It was a root user whose access key was leaked. It was never determined how.

But as far as compromised credentials go, root access is about as bad as it gets. The attackers used this to generate EC2 SSH keys and then spin up EC2 instances with the RunInstances API call. And they created security groups that allowed inbound access from the internet.

These EC2 instances then began communicating with the cryptocurrency mining server MoneroHash. So one leaked access key allowed the attacker to set up SSH keys in AWS, which allowed them to install the crypto miner remotely. Now the impact of this attack is very similar to the other crypto mining attack that we talked about. However, it is worth noting that abusing a leaked access key, especially one with that level of privileges, can allow attackers to generate a very large number of instances in a very short amount of time, generating the type of AWS bill that is the stuff of nightmares for probably everybody in this room. So secrets management is a critical part of operating in the cloud. And alongside it, you do need real-time monitoring of your environments to understand if those secrets are being abused for malicious activity.
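
As a minimal sketch of that kind of monitoring, assuming AWS and boto3: pull the last hour of CloudTrail events for CreateKeyPair and RunInstances and flag any identity you don't expect to be doing either. The expected principals here are hypothetical, and a real deployment would stream these events continuously rather than poll.

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Identities we expect to create key pairs or launch instances (hypothetical names).
EXPECTED_PRINCIPALS = {"ci-deployer", "infra-admin"}

start = datetime.now(timezone.utc) - timedelta(hours=1)

for event_name in ("CreateKeyPair", "RunInstances"):
    paginator = cloudtrail.get_paginator("lookup_events")
    pages = paginator.paginate(
        LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": event_name}],
        StartTime=start,
    )
    for page in pages:
        for event in page["Events"]:
            user = event.get("Username", "unknown")
            if user not in EXPECTED_PRINCIPALS:
                print(f"[ALERT] {event_name} by {user} at {event['EventTime']}")
```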

And the real-time part is actually very important because it doesn't take those attackers very long to generate huge bills if they have the right access. In this specific attack, real-time monitoring could have picked up on two things, the fact that suspicious SSH key pairs were being generated and the fact that the instances were communicating with known cryptocurrency servers. For attack number six, the focus is on access without authentication. And Leslie here found a really great way to circumvent the usual ways that we authenticate into somebody else's home. I don't know, let's say knocking on the door, ringing the doorbell, using a spare key. In the cloud, access without authentication can be a huge problem.

Peloton was discovered by security researchers to have web-based API endpoints that allowed access without any authentication. This allowed unauthenticated users to view information on Peloton users, including users who had actually set their profiles to private. Now, this was reported to Peloton. And the first fix that they did basically added an authentication requirement, but it still let any legitimate user pull the data. This was not a real fix, because you can become a legitimate user for free, and as of today, there are over 3 million authenticated users.

So the researchers had to go back and ask them to actually fix the vulnerability. And really, this just seems to be the result of poor security architecture. Having your API endpoints not require any authentication, or allowing them to be accessed by such a large group, is not a good security practice. And the impact was that a data leak could have occurred. But this was found by researchers who properly reported it, so it doesn't seem like there were any public leaks.

However, that doesn't mean attackers didn't gather this data and won't use it for further fraud campaigns or phishing attacks. Like I mentioned previously, this kind of personally identifiable information is exactly what attackers use for successful phishing campaigns. So consider having secure coding practices and reviews actually built into your development process. This type of vulnerability could have been discovered easily by an application penetration test done by a third party.
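
This kind of gap is also cheap to test for yourself. Here's a minimal sketch using the requests library: hit your own API endpoints with no credentials at all and flag anything that answers with data instead of a 401 or 403. The base URL and paths are hypothetical, and of course you should only run this against systems you're authorized to test.

```python
import requests

BASE_URL = "https://api.example.com"                      # hypothetical API you own
ENDPOINTS = ["/api/user/profile", "/api/user/workouts"]   # hypothetical paths

for path in ENDPOINTS:
    # Deliberately send no auth header, token, or cookie.
    response = requests.get(BASE_URL + path, timeout=10)
    if response.status_code in (401, 403):
        print(f"OK    {path}: rejected unauthenticated request ({response.status_code})")
    else:
        print(f"ALERT {path}: returned {response.status_code} without authentication")
```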

Also, API security tools can be used to detect an attacker abusing the calls or pulling down really large amounts of data through them. For attack number seven, the attackers actually abused free trial, free tier accounts. And this is what I imagine their face was when they realized they could make money off of it. This was an attack that was discovered by Sysdig's Threat Research Team. And it's really an extensive and sophisticated active crypto mining operation. We've called it PURPLEURCHIN.

In this attack, threat actors are targeting some of the largest CI/CD service providers, including GitHub, Heroku and Buddy.works, to run and scale a massive cloud operation. The activity that we observed we're calling freejacking. This is when attackers abuse compute that's allocated for free trial accounts on these CI/CD platforms. And at the time of publication, Sysdig's Threat Research Team had uncovered more than 30 GitHub accounts, 2,000 Heroku accounts and 900 Buddy accounts. And those numbers have gone up since then.

Now, the effort that PURPLEURCHIN invested here is pretty abnormal, with an extensive list of service providers and open source tools beyond what we've just shared here. So this is not that low-effort, high-reward attack that I was talking about before. The amount of effort initially invested to get everything automated and working with basically no human intervention was pretty unusual for attackers. They're usually the ones going after the low-hanging fruit.

What's the minimum amount of work I can do in order to make money? And that was not this attack. This all started with Sysdig's container analysis engine capturing suspicious behavior associated with a Docker image. We decided to dig in a little bit, and it turns out that this container actually acts as the command and control server as well as the Stratum relay server, which is what receives the connections from all of the active mining agents. So this acted as the central hub of the whole operation. And how did these attackers actually automate creating so many free trial accounts? The GitHub repositories that were created were used within a day or two, and each repository had a GitHub Action to run Docker images. A script was responsible for creating the GitHub Actions YAML workflow in each of the threat actors' repositories.

And they tried to hide these actions by naming them with random strings. In order to push the workflow file to each repository, the script added SSH keys for the GitHub CLI. So it created the GitHub repository, and then it pushed that previously created GitHub workflow to the master branch of the new repository. And the result of this automated workflow was the creation of a GitHub account and repository and the successful execution of many GitHub Actions to run mining operations. That last part was automated with a different script. This script went through the list of all of the previously created GitHub accounts, and then it used curl to pass a pre-made Docker command to each repository's action.

It included the IP address of that Stratum relay server so that it could report back to it. Now on the GitHub side, they just receive a Docker command and run it and then start the mining container. The fact that the attackers used their own Stratum mining protocol relay really helped them to avoid the network-based detections that are looking for outbound connections to known mining pools. It also has the additional benefit of obscuring the crypto wallet addresses.

So each miner is just reporting back to that relay and asking for work. And the relay is what's keeping track of the wallets and the payments upstream, which means that information is hidden from incident response. Now, to pull off an automated operation of this scale, they used quite a few techniques to bypass all of the protections that were supposed to be in place to prevent exactly this. OpenVPN was used to make sure that the source IP address was different for every account.

They used programmatic mouse and keyboard inputs and speech recognition of audio files to bypass CAPTCHAs, as well as containers with IMAP and Postfix servers to handle the emails that are required for registration and account verification. And some of you are probably wondering, "I don't work for a CI/CD service provider. Why do I care about this? How does this affect me?" Basically, it's ruining a good thing for everybody, because who doesn't like free stuff, right? And we can't expect these providers to absorb all of the costs without it trickling down to their end users. We estimated that for every GitHub account that PURPLEURCHIN created, the cost to GitHub was about $15 a month. At these rates, it would cost the providers more than $100,000 for the attacker to mine a single Monero coin.

And they aren't even mining Monero. At this stage, they are mining cryptocurrencies with really low profit margins. So one theory is that this attack is really just a low-risk, low-reward test before they move on to higher-valued coins like Bitcoin or Monero. Another, slightly more alarming, theory is that PURPLEURCHIN is actually preparing to attack the underlying blockchains themselves, because proof-of-work algorithms are vulnerable to a 51% attack: an attacker who controls 51% of the network's hash rate effectively controls the entire network, with some caveats. So as I mentioned, this kind of activity could really put the free trials that we know and love at risk. There might not be free trials in the future for personal use, and enterprise and business account costs could go up.

And a takeaway that's more relevant for everybody here is that you can't just rely on malicious IP detection, because, again, the use of something like a Stratum mining protocol relay lets attackers avoid the network-based detections that are looking for outbound connections to known mining pools. For attack number eight, I want everybody to consider that sometimes attacks are much more malicious and go much deeper than they appear on the surface. This attack was also discovered by the Sysdig Threat Research Team, and we've dubbed it SCARLETEEL.

What can I say? We have a thing for marine animals I guess. Now the SCARLETEEL attack began with hackers exploiting a vulnerable public-facing service in a self-managed Kubernetes cluster that was hosted in AWS. Once the attacker gained access to the pod, the malware performed two initial actions.

First, it downloaded and launched a crypto miner. And second, while that was happening, we also saw a script running to enumerate and extract additional information from the environment, things like credentials. Once the attackers gained that initial access to the environment, they gathered more information about what resources were deployed, focusing specifically on Lambda and S3 services. In the next step, the attackers used the credentials they got in the previous step to move laterally and contact the AWS API. And at this step, they were successful in doing three things. The first was that they disabled the CloudTrail logs to evade detection.

The second was that they stole proprietary software; they successfully did data exfiltration. And the third was that they found the credentials of an IAM user related to a different AWS account by discovering Terraform state files in an S3 bucket. The attackers then used the credentials they found in that state file to move laterally again and repeat this whole process, the same kill chain. But fortunately, in this case, all of their AWS API requests failed due to a lack of permissions. Going back to what I said in, I think, the first example: permissions. If you have an account where you're allowed to do everything, the attackers are gonna escalate and move laterally and wreak more chaos.

And if you have an account where the permissions are granular, scoped only to what people actually need, you're more likely to stop attacks in their tracks. And I think it's interesting to think about what the purpose of this attack was. Did these attackers go in just hoping to crypto mine, and then they found all of these misconfigurations and decided, "What the heck? I'm gonna go even further"? Or did they use crypto mining almost as a decoy to evade the detection of data exfiltration? There's a theory that attackers might use crypto miners as almost a canary-in-the-coal-mine kind of warning: if the miners are being shut off, it means that somebody has detected their presence and they should stop whatever activity they're doing. Now, cloud security might be relatively young, but attackers are really learning about the cloud and beginning to experiment with what is possible.

The attacker here had really great knowledge of cloud-native tools. They were able to adeptly move, navigate and escalate this attack. So my personal concern is that we will get to a stage where the attackers' knowledge of cloud-native tools is greater than that of the average person using those tools to deploy their infrastructure or applications. And the result of that is that seemingly benign, unexpected artifacts will get people in trouble. For example, if you did not know that Terraform can leave a state file in an S3 bucket with credentials in it, then you wouldn't know to handle that with care, and attackers can use that to their advantage. Also, I think it's worth saying that if you see crypto mining in your environment, don't dismiss it as normal just because it's become so widespread, because the attack might not have stopped there.

And if this victim hadn't continued to do incident response, they would never have discovered that their proprietary software was stolen. So make sure that you restrict the ability to disable or delete security logs to as few users as possible. The fact that the attackers disabled the CloudTrail logs here made the whole incident response process a lot harder.
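
Beyond restricting who can touch logging at all, it's worth continuously verifying that your trails are still recording. A minimal sketch with boto3 that flags any trail whose logging has been stopped, which is exactly what an attacker's StopLogging call looks like after the fact:

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

for trail in cloudtrail.describe_trails()["trailList"]:
    status = cloudtrail.get_trail_status(Name=trail["TrailARN"])
    if not status["IsLogging"]:
        # A trail that was silently stopped shows up exactly like this.
        print(f"[ALERT] Trail {trail['Name']} is NOT logging -- investigate immediately.")
    else:
        print(f"Trail {trail['Name']} is logging.")
```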

And Terraform is a really great tool, but it needs to be handled with care. Terraform access should only be granted to those who actually need it, and Terraform state files should be stored somewhere secure, not in an S3 bucket that's easily accessible. Also, you can't assume that just because attackers can't alter anything with read-only access, there is no impact to your organization. With read-only access, you can still see credentials and Lambda code, and that is still useful information for escalating attacks. Now, for the very last attack that I will touch on, I want you all to think about what happens when you build something on really faulty foundations. In the case of homes, this is probably the worst-case scenario.

But when thinking of modern applications, there are a lot of layers of dependencies and foundations that go into building them. Now, PyTorch, which is a very popular and widely used Python-based machine learning library, had an attack occur in December of 2022. They fell victim to a supply chain attack that used open source software.

Now, it turns out that PyTorch pulls some of its dependencies from the PyPI index, assuming that the package it resolves there is the one it should include. So the attacker uploaded a poisoned PyPI dependency under the real name torchtriton. This Trojan version of torchtriton behaved exactly the same, except it also had extra code to exfiltrate sensitive information to a command and control server. This issue persisted for five days, but thankfully it never made it into the stable version of PyTorch. And this really happened because of blind trust in the PyPI repository index for a dependency. And just like this kid learned what seems to be the hard way, blind trust is not something that those of us in the cybersecurity world should ever be doing.
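
One common way to avoid trusting an index blindly, and not necessarily what PyTorch does today, is to pin every dependency to a hash you recorded when you vetted it, and verify each downloaded artifact before it gets installed. A minimal sketch; the file name, hash value, and download directory are all hypothetical.

```python
import hashlib
from pathlib import Path

# Hashes recorded when each artifact was first vetted (values here are made up).
PINNED_SHA256 = {
    "torchtriton-2.0.0-cp310-cp310-linux_x86_64.whl":
        "9f2c1e4b6a8d0c3e5f7a9b1d3e5f7a9b1d3e5f7a9b1d3e5f7a9b1d3e5f7a9b1d",
}

def verify_wheel(path: Path) -> bool:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = PINNED_SHA256.get(path.name)
    if expected is None:
        print(f"UNKNOWN  {path.name}: no pinned hash, refuse to install")
        return False
    if digest != expected:
        print(f"MISMATCH {path.name}: got {digest[:16]}..., expected {expected[:16]}...")
        return False
    print(f"OK       {path.name}")
    return True

for wheel in Path("downloads").glob("*.whl"):  # hypothetical download directory
    verify_wheel(wheel)
```

pip can enforce the same idea natively with hash-pinned requirements files and its --require-hashes mode, so a substituted package with the right name but the wrong contents fails the install.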

There are so many layers of dependencies in these modern applications, especially the ones that utilize open source software, that there end up being many vulnerable points. And a clever attacker can exploit this trust relationship, and they did, to get their own code into PyTorch. Now, the malicious version was downloaded over 2,000 times in this five-day period. That is not an insignificant amount. We haven't yet seen the full implications of this attack, and subsequent attacks could still occur, because some of that exfiltrated data included credentials and keys. And as I've mentioned, those are great starting points for new attacks.

So keep in mind that any code that isn't totally under your project's control runs some risk. And this actually isn't just limited to open source. Commercial, closed-source software is also vulnerable.

SolarWinds is probably the most obvious example. So you should operate under a trust-but-verify mentality, meaning you can trust a package enough to use it only if you can verify that it's behaving as it should. And for this, you need security testing. You can use static analysis to try and find unwanted code, but this can be fooled.

What's harder to trick is dynamic or runtime analysis that looks at the actual behavior of an application. In the case of PyTorch, runtime analysis could have detected the connection to the command and control server. And the best place to do verification is as close to the developer push as possible, because the earlier you find out, the easier it is to deal with. So looking at all of these attacks and kind of zooming out, what do we expect to see as high-level trends for cloud attacks moving forward? It's safe to say that crypto mining is only gonna get more popular, which is a little counterintuitive given the fact that lots of cryptocurrencies are currently crashing in value. But it doesn't change the fact that these attacks are low risk and potentially high reward.

It's not a lot of work for attackers to do. What they do care about is scale, right? If the value is going down, they need to mine more to make the same amount or more. And this is not only true for crypto mining attacks. We're seeing that cybercriminal groups are starting to behave almost like startups, where they're just ramping up their operations.

And this means that all attack types will begin to be operated at a larger and larger scale. What makes all of this worse is that attackers are selling the malware they're creating on the dark web, making these advanced attacks accessible to people who maybe are not that technically savvy. And this one is a bit obvious, but supply chain compromises can really have devastating effects.

Most people, when I talk about supply chain compromises, think of zero-day exploits, the ones where the attackers know about a vulnerability but we don't, so there's potential risk there. But Log4j really showed us that even if a vulnerability is announced and we know about it, attackers can still weaponize it faster than we can release patches for it. So, I'm not gonna be all doom and gloom. How do we actually cope with this? One thing I've touched on a lot in this presentation is the concept of real-time visibility. This is super important because, no matter how an attacker gets into your environment, and there are many different ways in which they can get in, usually once they're inside, they're doing very similar types of actions.

So if you are searching for those actions, then you can at least be notified when a breach has happened. Another thing that's difficult is that you have way more things to fix than the manpower, time or ability to fix them all. So one of the major challenges is: how do you prioritize all of your work? If we use container image vulnerabilities as an example, Sysdig found that 87% of container images have high or critical vulnerabilities. So if you're using that alone to try to prioritize, it's not very helpful; that's almost 90% of all of them. You need to apply additional filters to prioritize your vulnerabilities.
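
As a toy sketch of that kind of filtering, here's what combining the filters that come up next (a fix is available, the package is actually in use at runtime, a known exploit exists) might look like. The Finding structure is made up for illustration; in practice these fields would come from your scanner and runtime tooling.

```python
from dataclasses import dataclass

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Finding:
    cve: str
    severity: str
    fix_available: bool       # is there a patched version we could move to?
    in_use_at_runtime: bool   # is the vulnerable package actually loaded and executed?
    exploit_known: bool       # is there a public exploit attackers can use?

def prioritize(findings: list[Finding]) -> list[Finding]:
    urgent = [
        f for f in findings
        if f.severity in ("high", "critical")
        and f.fix_available
        and f.in_use_at_runtime
        and f.exploit_known
    ]
    return sorted(urgent, key=lambda f: SEVERITY_RANK.get(f.severity, 99))

findings = [
    Finding("CVE-2021-44228", "critical", True, True, True),  # e.g. Log4Shell
    Finding("CVE-2023-00000", "high", False, False, False),   # hypothetical noise
]
print([f.cve for f in prioritize(findings)])
```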

Basically, you want to go after the ones that the attackers are gonna go after, the low-hanging fruit. So some filters could be: okay, is a fix available but hasn't been applied yet? Is this actually in use at runtime, right? Am I focusing on code that is actually being executed, not just code that could be? And is it exploitable? Is there a known exploit that attackers can use here? You apply all of these filters, start with those findings, and then work your way up. And another thing I've mentioned quite a bit in this presentation is the concept of least permissive. Now, Sysdig found that 90% of granted permissions are not used.

This is a staggering number. And I think, and I hope you all agree, that we can do better collectively. Again, if attackers make it into your environment somehow, and I wish we could say we could prevent all attackers from ever getting in, but the worst-case scenario is gonna happen sometimes, and everything is very, very permissive, then in essence you've been focusing on locking the windows and locking the doors while leaving the safe cracked wide open. Least permissive should be a major goal for your organizations. A lot of people I talk to say it isn't, right? There are other things that they want to focus on first.

But I urge you to reconsider, because, again, when you have really overly permissive access, that's like giving attackers a gift to do whatever they want within your environment. And that's all for me today. Thank you all so much. (audience applauds) I think we have a few minutes for questions, but I'll also be standing outside the room afterwards if anybody wants to come up and chat.
