Hello everyone, this is Tom Uren and I'm here with the Grugq for another Between Two Nerds discussion. G'day Grugq, how are you? G'day, Tom, fine, and yourself? I'm well.
This week's episode is brought to you by Nucleus Security, who make a top-notch vulnerability management platform. It's good stuff. So, Grugq, more than once you've mentioned this book to me by Matthew Monte, M-O-N-T-E, Network Attacks and Exploitation: A Framework.
You've mentioned it in more than one podcast and I've always edited it out because it never quite fit into what we were talking about. There's a particular page you've sent me, which is on first principles, and it mentions three things: humanity, access and economy. So today we'll expand on those, looking at a couple of recent incidents. I guess you'd call them case studies, in a way.
So do you want to explain your understanding of what those three things are? Monte describes them as first principles that are immutable and fundamental. Right, so basically first principles of hacking is how I look at them.
And the way he puts them forward is basically: you have access, which to me is the foundation principle, which is that if there's a piece of data that can be accessed legitimately, a hacker can steal it. Because at the end of the day, the only thing they absolutely have to do is impersonate that legitimate access, or coerce the person who has legitimate access, or basically replicate what they can do. So because someone can access it, it can be stolen. That's sort of the foundation. The way the paragraph puts it is that there is always someone with legitimate access and a means to use it, which is kind of self-evident in a way, because there's no point having data that no one can access, is there? Anyway, go on.
Yeah, exactly. And then the reason that you can access it is humanity: humans make human errors, humans make things easy for other humans to do, humans are lazy. Humans will fundamentally make something insecure in some way.
Right, yep. So we'll pick up on that one in particular as we go through the examples, I think. Yep. And then possibly the more interesting one is the principle of economy. So here you've got access, you can gain access to something.
Humans are involved, so there will be a way to get to it by exploiting a human. But economy means that you have infinite requirements and finite resources. So you have to figure out priorities. At some point the juice is just not worth the squeeze.
It ceases to be worth investing heavily in something when the reward is just not going to be there. The way Monte phrases it is, ambitions always exceed available resources. And he says this applies to everything.
This is true for both computer offense and defense. There is a priority, cost and benefit to every action and every outcome. So when I worked in ASD, you often encountered this almost nihilistic attitude: you're trying to do something that you want to keep secret, and people would say things like, well, probably our best team could get access to that, therefore the adversary's best team could get access to that.
And it leaves you in this position of, well, what do we do then? Nothing? But do they really care that much about the carpool? Well, exactly. So that's the trick: adversary intelligence agencies can seem all-powerful, but they've actually got an infinite number of things that they want to do. How much do they really care about whatever it is that you're doing? If it's super important, then obviously you'd want to put in the best effort, but just because something is not perfectly defended doesn't mean it's not worth doing. Yeah, and just because it's a secret thing doesn't mean it's an important secret to them, right? So, yeah.
But anyway, let's move on to the recent examples. Yeah, so for me these came up because there are a few things that have been in the news recently, and I keep coming back to how he got these things so right. He hit the nail on the head with these principles; they always apply. There's the Signal thing and the device code thing.
Okay, so the Signal thing. Dan Black at Google's Threat Intelligence Group has just published a kind of wrap-up of several different ways that Russian threat actors in particular are targeting Signal and trying to get access to it. And it all boils down to basically phishing. Mm-hmm.
They lie to people and make them do things. But it's sophisticated in how they go about doing it, the trappings that they use. Yeah, so that's the top level, that's the example.
We sort of spoke about this peripherally a few weeks ago when we talked about Paragon, the spyware that allegedly, according to a random tweet we discussed, had a method of secretly cloning a device without the original owner knowing, so that any messages sent to it would go to a separate attacker-controlled device. Right, according to that anonymous tweet about the technical details, it basically stole
the authentication token via an exploit that didn't require user interaction. So there was an exploit angle to it, but the end result was that you had a device linked to the account that could get all the messages. That was the goal. What we're talking about here is different, in that it's convincing people to link devices without knowing.
And the reason I think this is interesting and relates to the three principles is that it's often using QR codes. Because of the way Signal typically lives on a phone, typing links is not very practical, and QR codes are a very practical way to transmit links, share links, right? Those can be extremely painful to type in on your phone. I mean, they're painful to type in anywhere, because they're usually so long and so error prone, particularly because in order to make the link secure, you need to have enough bits.
And if there are enough bits, you either have to make it case sensitive, in which case you type a lowercase L instead of a 1, or you hit Shift at the wrong time and the whole thing fails and you have to start over, or it has to be super, super long, in which case you've got the exact same problem. You're trusting humans to do something humans are particularly bad at, which is copying random strings perfectly. Yeah, so in terms of our three principles, we've hit on humanity there. It's just not practical for humans to enter those kinds of URLs, and so the affordance, because we're human, is that we use QR codes instead.
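To put rough numbers on the "enough bits" point, here is a back-of-the-envelope sketch in Python. The 128-bit strength target and the base62 alphabet are assumptions for illustration, not anything Signal specifies.

```python
import math
import secrets
import string

# Assumption for illustration: a link token should carry ~128 bits of entropy.
SECURITY_BITS = 128

# Case-sensitive letters plus digits (base62), the kind of alphabet where
# "lowercase L versus 1" and mistimed Shift presses ruin manual entry.
ALPHABET = string.ascii_letters + string.digits
bits_per_char = math.log2(len(ALPHABET))            # about 5.95 bits per character
chars_needed = math.ceil(SECURITY_BITS / bits_per_char)

token = "".join(secrets.choice(ALPHABET) for _ in range(chars_needed))
print(f"{chars_needed} base62 characters for {SECURITY_BITS} bits, e.g. {token}")
# Roughly 22 characters a human must copy perfectly, which is why the
# practical affordance ends up being a QR code rather than typing.
```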
Now, there are also all the aspects that come with phishing, which is that you're fooling someone, you're presenting a reason to do this thing. Yeah, I know I'm dismissive when I say it's just lying to people.
But I honestly think the tradecraft that goes into lying effectively is very, very interesting. So much of HUMINT is about creating scenarios where things that are unusual become plausible and normal. If someone just says to you, scan this QR code with Signal, you're going to say no. Right, yes.
It's just not a thing you would do. So you craft a scenario where that seems not only plausible but natural: that's obviously what you should do in order to continue this thing, this narrative that you're now involved in. I think there's a lot of work that goes into making those good.
And while I make fun of it a little bit, I seriously respect the amount of effort that goes into making effective phishing campaigns. Right. So like it says here: in remote phishing operations observed today, malicious QR codes have frequently been masked as legitimate Signal resources, such as group invites, security alerts, or legitimate device pairing instructions from the Signal website. So it's taking advantage of Signal features and just wrapping them up in a deceptive package. Right, you're spoofing it in a way. You're abusing the inherent legitimacy of someone else's process. Yeah, so.
It's a legitimate Signal feature you're taking advantage of as well. So there's access. I guess in this case it's fairly straightforward: if you've got access and you're phished, you're inadvertently giving it away to someone else.
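To make the mechanics concrete, the QR code is really just a link that the person scanning it cannot read. A minimal sketch using the Python qrcode library, with a made-up pairing-style URL standing in for whatever a real lure would carry:

```python
# pip install "qrcode[pil]"
import qrcode

# Hypothetical device-pairing style link, a stand-in rather than Signal's real
# format. The point is that the payload is opaque to whoever scans it.
pairing_link = "https://example.invalid/link-device?token=3kT9vQx7mZ2pL0aWb4Yc8RnE"

img = qrcode.make(pairing_link)   # encode the link as a QR image
img.save("pairing.png")           # wrapped in "group invite" branding, this becomes the lure
print("QR code written to pairing.png for", pairing_link)
```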
Now, how does economy apply in this case? So here's the thing I think is interesting: Signal has obviously been on the radar for a long time, but it became a very high priority in 2022, three years ago. For the Russians, sorry. Yeah, so this has been a top priority for three years, and all they can do until now is these phishing attacks. And these are a fairly recent development as well.
So it seems to me like they've invested a lot, and in the end they fall back on just nicely asking someone to do it for them and hoping it works. Yep.
And there are costs involved in that, but it's obviously a lot cheaper than building out the vulndev process, developing an exploit and finding that stuff, which I think we could probably safely say they have not been able to do if this is what they're deploying. Yeah, I was wondering, can we say that? Because in a way it seems like it. No, it doesn't mean that. It means that when they're going after Joe Schmo, the second lieutenant who just got promoted, they're not going to be using their magic.
Right, yep. I think that's a safer thing to say. So it means you and I are more likely to get phished than magically hacked. It's actually a good thing if they target you with this. It means that you're so low priority that they're giving you the El Cheapo, the store-brand version of the attack. Right, yeah. Now, to be fair, it's not all just phishing.
Sometimes, if I'm reading this right, they modify group invites, so altered legitimate group invite pages for delivery in phishing campaigns. There's a legitimate page and it's been modified, so I guess it's a variation. It's not not phishing, but still. Phishing is such a bad term because it has so much baggage and it gets used in so many different ways. On the one hand, when we talk about phishing, we mean specifically an email that gets you to enter your credentials into a website, where they then get stolen.
But I think about phishing more as the process of social engineering, of manipulating someone into doing something. I don't think it's a good term and I don't like that we don't have another one, but for the purposes of this discussion I'm going to define my terms. When I'm speaking about phishing, I'm speaking about the process of manipulating someone into doing something. You could say it's lying, but I don't just mean getting someone to log into something so you can steal their credentials. I mean anything in which you pretext and arrange for someone to do something that's beneficial to you, without them necessarily knowing about it.
Through online communications of some sort, I feel that's important. Basically, phishing doesn't go away. As I tweeted once: give a man an 0day and he'll have access for a day; teach a man to phish and he'll have access for life. Phishing, I think, adheres closer to those principles of humanity and access
than vulndev does, than exploits do. And because of that, while exploits are ephemeral in a way, there will always be some technical means of achieving this thing, but the human means of doing it is always going to exist no matter what, as long as there are people involved.
And so I think that while we could come away from this discussion saying Signal shouldn't have done this or whatever, there's no way around it. Well, I don't even think that's possible, right? Because what's the alternative? In Signal, if you go to, I think it's Settings, Linked Devices, it'll show your linked devices. So these types of hacks, I guess, would show up as a linked device.
And so maybe they could make linked devices a feature of the top-level UI, you know, the number of linked devices or something, I don't know. But people want to link devices. That's the thing.
Right, and you have to make it easy, otherwise they won't do it, and then they won't use your software, and that's even worse, right? But I think the other thing is, if you make it super prominent, it's going to become visual noise and people will stop seeing it. And if you make it an alert that shows up, then either you make it very sensitive, like Apple has done, where if you have a laptop that's been turned off for a few months and then you reconnect it, you suddenly get an alert saying a new account has been added to your thing. And if that happens enough, you don't pay attention to them anymore because you just get fatigued by it.
It's just an alert. So yeah, I don't know what the solution is, because there are humans involved there's always going to be this problem. Yeah, you can make an application perfectly safe and perfectly unusable at the same time. That's the solution. There are other things they're doing, like just getting access to the device and stealing the database behind it or whatever, which I think in the context of this discussion is a bit boring.
I guess it goes to the fact that they've got priorities, and they're pursuing all different sorts of avenues to get access. Signal is clearly a high priority for them. The way devices are supposed to be secured when taking them to the battlefield is that you use a PIN to unlock the device, and then every app is individually locked by a biometric. That's because if you lock every individual app with a PIN, it'll be too frustrating to use, whereas if you use the biometric to lock the device, it can be unlocked
if they have access to your body, whether as a prisoner or through some other means. So the compromise is that you need knowledge to unlock the screen, and then you use the easiest route possible to unlock each individual secure app. That way, the apps should be encrypted on disk. But there are best practices for what you should do, and then there's what everyone actually does. Yeah, so at the end of Google's post, one of their recommendations, which I think speaks nicely to the usability versus security
trade-off conundrum, mm-hmm, is to enable screen lock on all mobile devices using a long, complex password with a mix of uppercase and lowercase letters, numbers and symbols. See, I like to use the year that Saint Dominic was canonized, which was 1234.
Yeah, so this is advice that makes it harder to get into a phone to use it, legitimately or illegitimately. Yeah, I mean, there's at most the one time it's going to be used illegitimately, right, versus the thousands of times that you're going to have to use it. So that trade-off of efficiency versus security is not going to happen.
It's just impossible. So I think this advice runs into that first principle of humanity, right? There are very few people who do that. Yeah, memorize a 512-digit number and then use that.
Yeah, there's another piece of advice, which is to exercise caution when interacting with QR codes and web resources purporting to be software updates, group invites, or other notifications that appear legitimate and urgent. How are you supposed to do that? What, that QR code looks a bit sus? That's right.
Now my understanding is that there are apps that will show you what the QR code actually points to, right? But I think the problem is that then you're just looking at a link rather than a QR code. Yeah, and if you're trusting people to notice that a URL is phishy, then, yeah. They have a list of IOCs which includes things like signalgroup.site, for example, and it's like, yeah, that seems fair enough. It looks legit
enough. Yeah, I think all you're doing at that point, honestly, is translating a QR code phishing attack into a regular phishing attack. The attacker loses nothing in fidelity, they lose no capability, and you gain nothing. Yeah, and I think the fact that we're laughing at these recommendations points out that it's a difficult thing, and the reason it's difficult is because of people.
There are affordances that must exist because there are people using Signal, and you just can't get rid of them. It's kind of not practical to get rid of them. And so, yeah, that's where we're stuck.
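On the "apps that show you what the QR code points to" idea, here is a minimal inspection sketch, assuming the pyzbar and Pillow libraries are available. As discussed, all it really does is turn a QR phishing problem back into a URL phishing problem:

```python
# pip install pyzbar pillow   (pyzbar also needs the system zbar library)
from pyzbar.pyzbar import decode
from PIL import Image

def inspect_qr(path: str) -> list[str]:
    """Return the raw payloads encoded in a QR image, without acting on them."""
    return [result.data.decode("utf-8", "replace") for result in decode(Image.open(path))]

for payload in inspect_qr("pairing.png"):   # hypothetical file from the earlier sketch
    # A human still has to judge whether something like signalgroup.site "looks legit".
    print("QR payload:", payload)
```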
Now, let's move on to the second example, which is device code authentication. Patrick and Adam spoke about it last week, and I wrote about it. The potted summary is that there's a legitimate OAuth method to authenticate what's called an input-constrained device, something like a printer or a smart TV, which doesn't necessarily have a keyboard or a good input method. And it basically links a device to an account.
The device will pop up a code, and by entering that code you link that device to your account. And so the Russians have also been using this as a way to get access to Microsoft accounts. They have a device, they generate a code, they give that code to someone they want to phish, and that person then takes the code and authenticates. It's different from traditional phishing; the target thinks, they've given me this code, what implications could it have for my account? And apparently that's been super effective.
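For context, the legitimate mechanism being co-opted here is the OAuth 2.0 device authorization grant (RFC 8628). A minimal sketch of that flow against Microsoft's publicly documented endpoints might look like the following, with a placeholder client_id; the sequence is the same whether the "device" is a printer or a laptop someone else controls, which is exactly the problem.

```python
# pip install requests
import time
import requests

TENANT = "common"
CLIENT_ID = "00000000-0000-0000-0000-000000000000"  # placeholder, not a real app registration
BASE = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0"

# Step 1: the "device" asks for a user code and a device code.
dev = requests.post(f"{BASE}/devicecode",
                    data={"client_id": CLIENT_ID,
                          "scope": "openid profile offline_access"}).json()
# Microsoft returns a human-readable message with a verification URL and the user code.
print(dev["message"])

# Step 2: whoever enters that user code in a browser and signs in binds their
# account to this device. In the phishing scenario the attacker runs this and
# hands the code to the victim with a plausible pretext, e.g. a meeting invite.

# Step 3: the device polls the token endpoint until the code is redeemed.
while True:
    time.sleep(dev.get("interval", 5))
    tok = requests.post(f"{BASE}/token",
                        data={"grant_type": "urn:ietf:params:oauth:grant-type:device_code",
                              "client_id": CLIENT_ID,
                              "device_code": dev["device_code"]}).json()
    if "access_token" in tok:
        print("Token issued for whoever entered the code.")
        break
    if tok.get("error") not in ("authorization_pending", "slow_down"):
        raise SystemExit(tok.get("error_description", tok))
```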
But again, we'll step through the different principles, I think. So what leaps out to you about this one? All three, really. Look, access: someone has an account, you want access to that account.
All you need to do is impersonate that someone. It's pretty straightforward. And you do that by linking a device to their account, and there's a super easy way to do that: you just get them to enter a six-digit,
you know, alphanumeric code. And all you need to do is find a way to convince them that this is a thing they're doing for their own benefit. Right. The examples I've seen are, you construct a reason to have a meeting: here's a code for that meeting, which is super plausible. And if you're buying the reason for the meeting, yeah.
Not only is it super plausible, I'm pretty sure that actually happens. Yeah, I can't remember, but... Oh, that's it, there are Teams codes.
Right, so not only is it plausible, it mimics how Microsoft Teams works: you enter a meeting by entering a meeting number or a meeting code, and it's a screen that does come up, so it's in line with what people expect. And apparently this
has been more effective than years of other spearphishing campaigns. Again... go on. If you don't attend meetings, you're not vulnerable. That's the solution right there.
Well, again, that falls foul of the humanity thing: humans are destined to attend meetings. It's like taxes and meetings. Right. Now, if we think of a printer, with most printers it does seem plausible, plausible but not practical, that you could enter a very long password just by pressing up and down; you'd only need two or three buttons. It's just not practical to enter a very long password.
So this kind of authentication flow exists because we're human. And I think economy plays a role here as well, because a lot of effort and money and resources went into building out these ways of letting you connect a printer or a smart TV to your account, and then the exploitation of those things is a lot cheaper to accomplish.
If you have to do actual phishing, as it was saying here, regular phishing is not very successful compared to this more targeted type of phishing vector. And I think that plays into it: it's probably a lot cheaper to run this as well. Once you've got it set up, you can scale it very rapidly, and this becomes your number one priority because it works so much better.
I mean, economy plays into this in many different ways. One of the reasons those printers don't have good interfaces is because it's cheaper not to have a good interface. That's why they're like that. There's been a trend for a long time of just having touch inputs.
That's because it's a lot cheaper. I spoke to a manufacturer long ago, and they were stripping buttons off everything because it saved them however many cents per unit, and it just adds up so quickly.
Yeah, so terrible. Your printer bursts into flame and traps you inside. And this drive from the manufacturers for economy has these kinds of second-order security implications that, for most people most of the time, are meaningless, because most people are not going to be phished by the Russians. Yeah.
Yeah, but... It's logical and understandable, but at the same time... Yeah, if you're a target for the GRU, you're not most people.
And that's the trade-off. If you're a target for the GRU, then yeah, you need to buy the printer that has the keyboard that lets you type in the long password. Because for the GRU, it doesn't matter what printer you have.
It matters what printer the GRU is pretending to be. So you can still buy a cheap printer, that's the good news. That's the takeaway.
So we've looked at two case studies, which I think are very similar in that there are these affordances that exist because people are people, and they have security implications that get picked up on because they're very effective as phishing or targeted vectors to get access to particular people's accounts. What's your gut feeling: are these kinds of attacks going to be more or less common in the future?
So, more, absolutely. What I think is fascinating about this is that exploits will come and go, vulndev will rise and fall, we will add whatever defenses and security mechanisms and all that. But at the end of the day, you're still going to have access and you're going to have humanity, and it doesn't matter: there will always be these fundamental problems that you're facing as a defender and that you're exploiting as an attacker.
Access and humanity, and your constraint is economy, but that's on both sides. So the way I see it, there are going to be more and more devices that we have to link to our accounts, because that's the world we live in now. You have to have your iCloud account or your Google account or your Facebook or your Microsoft account, one of these fiefdoms that controls everything. We're only three years away from having to log into our cars to get them to go.
Right. I'm surprised that Tesla doesn't have a thing like that already, to be honest. Yeah, so everything's going that way.
You need to log in and authenticate with one of your real accounts for literally any device. This brings me to a tangent, which is that of the OPSEC stuff I like to focus on, cover, compartmentation and concealment, I think compartmentation now becomes much more important for the everyman. You need to start thinking about having the account that you use for authenticating to all your stuff versus the account that has access to your bank, that interfaces with things that are actually sensitive and important to you, versus the things that you use to log into Notion or Evernote or whatever other service or application you're using. That shouldn't be your main account. I strongly feel that it's exposing too much; it's placing too high a risk on that thing being secure and the processes of accessing it being secure.
So if someone links a printer to the account that I use when I'm logging into my personal Gmail, that's a huge problem, which is why I don't use my personal email for things. I have another account that I use for all of these various interactions. And my recommendation to everyone is to start considering doing that, like compartmentation. It's the foundation of security. It's the way of the future.
Everyone will have 15 minutes of fame and they will be James Bond. Well, I think what you're suggesting is that everyone will have 15 minutes of fame and 15,000 different accounts. Hahaha. Thanks a lot, Tom.