The Guard and the Thief Carpool
A reflection on how cybersecurity became optimized for detection and response instead of prevention, and why application control might be the shift that changes it.

Peter Girnus @gothburz wrote a piece last week that I haven't been able to stop thinking about. If you don't follow him, he does this thing where he writes satire about the security industry that's so structurally accurate it stops being funny about halfway through, and you realize he's just describing your job. This one is worth reading in full.
He writes from the perspective of a Senior Director of Incident Response. He walks through how IR firms bill $1,200 an hour to show up after the breach, how their revenue grows in lockstep with breach volume, and how their threat reports feed their sales pipeline, which feeds the next threat report. He calls it the flywheel. The last line is the one that stuck with me:
"I have never been able to tell where the guard ends and the thief begins. But we carpool."
I spent roughly ten years building threat research at Splunk. LOLDrivers, LOLRMM, cataloging the techniques attackers use with the legitimate tools already on the system. And I believed in it, really. I thought if we just gave defenders better visibility, better detections, better analytics, we'd start winning.
We didn't start winning. We got better at watching ourselves lose.
And honestly, nowhere is that more obvious than in what's happened to network infrastructure over the last few years. Palo Alto, Cisco, Fortinet, Ivanti: these are the products that are supposed to be the perimeter, right? And they have been absolutely obliterated by state actors. Hundreds of CVEs across those platforms, mass exploitation of VPN appliances before patches are even available, Chinese and Iranian and Russian operators living in perimeter devices for months before anyone notices.
We're really good at detecting and patching those now. Really good. And it hasn't mattered, because the next one comes out and the cycle starts over. You can't prevent your way out of a zero-day in firmware you don't control. That part I've made peace with. But the endpoint is different. That's our territory. That's where we actually have options.
Here's the thing Peter's piece exposes without quite saying it directly. The entire cybersecurity economy, not just IR but the detection side too, is structurally dependent on breaches continuing to happen. EDR vendors need threats to execute so their sensors have something to observe. SIEM vendors need logs to collect. Threat intel vendors need novel TTPs to publish. MSSPs need alerts to triage at 3am. IR firms need the breach to bill against. And the threat reports that come out of all this, those become the marketing material that sells more of the same stack to the next company that will also get breached.
I was part of this for years and I don't think most people in it are cynical about it. They genuinely want to help. But the incentive structure doesn't reward the thing that would actually reduce breach volume, which is prevention. It rewards observation. It rewards response. It rewards the after.
Look at what just happened with Stryker.
Handala, also tracked as Void Manticore, is an Iranian-aligned group that runs hands-on intrusions rather than automated malware campaigns. Operators in the environment, moving manually, using legitimate tools and admin protocols. Their thing is operational disruption combined with psychological impact, publishing screenshots, defacing systems, exaggerating the damage numbers for effect. They claimed 12 petabytes exfiltrated from Stryker. That number is almost certainly not real. But the wipe? The wipe was real.
From what's been reported and from what researchers have reconstructed, this followed a pretty recognizable pattern. Credential compromise first, probably phishing or credential reuse, possibly through VPN infrastructure, which Iranian groups have used as an entry point repeatedly. Once you're in with valid credentials, you don't trigger most of the traditional controls. Then privilege escalation through account manipulation in Entra ID: assigning roles, modifying group memberships. Then the lateral movement phase: RDP between systems, tunneling tools to reach internal hosts, manual enumeration of accounts and network resources. This is where the LSASS dumps happen: comsvcs.dll via rundll32, registry hive exports, ADRecon scripts running against Active Directory. Then bulk data collection through PowerShell on systems that don't normally run PowerShell. Staging. Exfiltration.
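For context on what "living off the land" actually means at the keyboard: the LSASS dump and hive export steps above are just built-in Windows binaries doing what they were designed to do. Something like the following, which is illustrative of the long-public technique, not commands recovered from this incident; the process ID and output paths are placeholders:

```powershell
# Dump LSASS memory via the MiniDump export in the built-in comsvcs.dll
# (requires admin/debug privileges; <lsass-pid> is the LSASS process ID)
rundll32.exe C:\Windows\System32\comsvcs.dll, MiniDump <lsass-pid> C:\Windows\Temp\d.dmp full

# Export credential-bearing registry hives for offline cracking
reg save HKLM\SAM    C:\Windows\Temp\sam.hive
reg save HKLM\SYSTEM C:\Windows\Temp\system.hive
```

No malware, no dropped payload, nothing for a signature to match. That's the whole point.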
And then, once they had what they wanted, they walked into Intune and wiped the fleet.
The SEC filing said it explicitly: no indication of ransomware or malware. That's not a statement of good news. That's an admission that every EDR deployed across the entire enterprise had nothing to detect, because there was nothing malicious running. The attacker authenticated with valid credentials and used the legitimate admin tool to do exactly what the admin tool was designed to do. Across something like 200,000 devices in 79 countries. The Handala logo showing up on login screens was the first thing most employees knew about any of it.
Every security vendor responding to Stryker right now is selling the same playbook. Better monitoring of admin actions. Tune your SIEM for mass wipe events. Conditional access policies. Which is, again, detection. Watching it happen faster. Maybe fast enough to stop some of it if you have someone awake at 12:30am when the wipe commands start firing.
The question nobody is asking publicly is why we don't just define what's allowed to run and deny everything else.
Application control has been around forever. WDAC, Windows Defender Application Control, has been built into Windows since Windows 10. It's free. It's already on the machine. The reason it hasn't been widely adopted isn't technical, it's economic. Prevention doesn't generate a content marketing engine. There's no recurring alert stream to justify a SOC. There's no incident to feed the threat report. There's no threat report to feed the sales pipeline.
From what I've seen building MagicSword, the resistance to application control isn't usually "it doesn't work." It's "it's too hard to manage" or "we tried it and it broke things." Fair criticisms, honestly, of how it's been implemented historically. WDAC policy management has been genuinely painful. But that's a product problem, not a conceptual one. The concept itself, only letting known-good software execute and denying everything else by default, is the posture that changes a lot of these conversations. It wouldn't have stopped the Intune credential compromise. But in a properly controlled environment, the blast radius of what one compromised admin account can actually execute is fundamentally smaller.
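To make "it's already on the machine" concrete, here's roughly what standing up a WDAC base policy looks like with the ConfigCI cmdlets that ship with Windows. A minimal sketch, assuming an elevated session on a known-good reference machine; the file paths are placeholders:

```powershell
# Scan a clean reference system and generate an allow-list policy at the
# Publisher level (trust code signed by the vendors seen on this machine),
# falling back to file hashes for unsigned binaries.
New-CIPolicy -Level Publisher -Fallback Hash -UserPEs -ScanPath 'C:\' -FilePath .\BasePolicy.xml

# New policies include rule option 3 (audit mode). Run in audit until the
# code-integrity event log is clean, then delete the option to enforce.
Set-RuleOption -FilePath .\BasePolicy.xml -Option 3 -Delete

# Compile the XML into the binary form Windows actually consumes,
# then deploy via Intune, GPO, or the CodeIntegrity folder.
ConvertFrom-CIPolicy -XmlFilePath .\BasePolicy.xml -BinaryPath .\SIPolicy.p7b
```

The pain people remember is in the lifecycle after this point: updates, exceptions, new software. That's the product problem worth solving, not the concept.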
I keep coming back to something I noticed over years of writing detection content. The same techniques show up in breach after breach. Certutil downloading payloads. Mshta executing HTA files. Regsvr32 loading DLLs. Rundll32 doing things rundll32 should not be doing. We'd write the detection, publish the analytic, present it at a conference, and six months later we'd see the exact same technique in the next breach at the next company. The detection existed. The company had it deployed. The alert fired. Someone didn't see it, or saw it and didn't act fast enough, or acted but the attacker had already moved laterally and staged the data.
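The detection side of that cycle is often just a query against endpoint telemetry. Something like this hunt for the comsvcs.dll LSASS dump, sketched here assuming Sysmon is installed and logging process creation (Event ID 1):

```powershell
# Hunt Sysmon process-creation events for rundll32 invoking the
# MiniDump export in comsvcs.dll (the classic LSASS dump pattern)
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-Sysmon/Operational'; Id = 1 } |
  Where-Object { $_.Message -match 'comsvcs\.dll' -and $_.Message -match 'MiniDump' }
```

This query, or its equivalent in any SIEM, has been public for years. It still fires in breach after breach, which is exactly the point: the gap isn't the detection content.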
At some point I started asking myself why we were building a better telescope when what the customer needed was a lock on the door.
That's not to say detection is useless. You need visibility. You need to know what's happening. But the industry has over-indexed on detection as the primary security control because detection is the control that sustains the business model. Prevention, done well, is quiet. It just works. And quiet doesn't sell at RSA.
Peter wrote that the cybersecurity industry's success metric and the problem it solves are the same number moving in the same direction. That's the structural critique that matters. And it applies just as much to the detection vendor selling you a better EDR as it does to the IR firm billing $1,200 an hour to tell you what the EDR should have caught.
I'm not going to pretend I have this figured out. MagicSword is still early and we're learning from the customers we're deploying with. But the thesis is pretty simple: if you control what's allowed to execute, you collapse a huge chunk of the attack surface that the entire rest of the stack is built to observe. And that collapse makes the industry uncomfortable, because it doesn't just reduce risk for the customer. It reduces revenue for everyone else in the chain.
The guard and the thief carpool because they both need the breach to exist. I'd rather be the one who locks the door and puts them both out of a job.
Want to see what application control actually looks like when it's not painful to manage? Book a demo and we'll show you how MagicSword lets you define what's allowed to run and quietly denies everything else, before the alert ever fires.
Keep up with how modern attacks actually work and how to prevent them. Subscribe to the MagicSword newsletter for practical research, real-world attack tradecraft, and prevention-focused intelligence.

Written by
Jose Hernandez
Threat Researcher
Jose Enrique Hernandez built and led the Threat Research team at Splunk as its Director. Jose is known for creating several security projects, including the Splunk Attack Range, Splunk Security Content, Git-Wild-Hunt, Melting-Cobalt, lolrmm.io, and loldrivers.io. He is also a maintainer of security-industry-critical repositories such as Atomic Red Team and lolbas-project.github.io.


