July 21, 2025
Research Fellow:
- Bobby Boughton, MacroPraxis Research Institute Fellow
Research Analysis Graphs:
Figure 1 — MITRE ATT&CK paths: why scalable, path-centric deception trips earlier
This diagram maps an attacker’s progress across MITRE ATT&CK (left→right) and shows where different deception strategies typically trigger. The green bar reflects automated, path-centric deception: scalable breadcrumbs on endpoints, identity honeytokens, and cloud-path lures that make early detections likely during discovery, credential access, and lateral movement. The red bar reflects DIY/destination-heavy approaches: while a DIY program may plant some breadcrumbs, they’re usually few, hard to maintain at scale, and more easily identified, so they seldom trip early and tend to alert later near collection/exfiltration/impact. The takeaway: instrument the paths broadly and automatically—not just a handful of destinations—if you want time to contain.
Figure 2 — TCO comparison (linear and log scales).
Side-by-side charts show annual costs for DIY deception (orange), enterprise deception (blue), and resulting annual savings (green) across small, medium, and large enterprises. On the linear plot (left), DIY costs ramp sharply—driven by manual design/rotation labor—while the enterprise total grows more moderately. The log plot (right) highlights the same trend across orders of magnitude: the cost gap and savings widen with scale, indicating that automation/rotation efficiency dominates as environments grow.
ABSTRACT
Organizations under budget pressure often begin their deception journey with free tools or small, roll‑your‑own experiments. These approaches can be useful in training labs or limited pilots, yet our analysis shows that they rarely scale to enterprise environments without imposing significant hidden costs and material coverage gaps along real attacker paths. We examine where and why destination‑centric traps (stand‑alone decoys and simple tokens) tend to alert late; we analyze operational fragility in environments with endpoint churn and cloud drift; we discuss governance risks introduced by public‑model–assisted decoy naming; and we quantify total cost of ownership across three enterprise size bands. Taken together, the evidence suggests that “free” deception frequently costs more than expected while delivering a lower probability of early, reliable detection (especially with the advent of AI-assisted adversaries). We conclude with a practical evaluation rubric and a data‑driven decision framework for leaders.
Research Summary
Enterprises often “check the box” on deception with free or DIY tools—sprinkling a few honeypots or tokens at destinations—only to discover those controls alert late and decay quickly. Our research explains why: attackers move along paths (endpoints → identity → cloud), so destination-only traps tend to fire after meaningful progress has been made. By contrast, path-centric deception—breadcrumbs on endpoints, identity honeytokens, and cloud-path lures—produces earlier, higher-fidelity signals while giving the SOC time to contain. We also examine governance risks introduced by public-model-assisted decoy naming, which can make decoys more predictable and requires careful data-handling practices.
We quantify the economics across small, medium, and large environments. When the labor to design, place, and rotate DIY lures is counted, annual costs scale steeply—into millions at the high end—while automated, enterprise approaches grow far more moderately. In our model, a medium enterprise spends ~$1.32M annually on DIY labor/ops versus ~$439K with an automated platform; at large scale the gap widens to ~$6.31M versus ~$1.57M, before any reduction in expected breach losses from earlier detection is considered. The takeaway is simple: “free” isn’t free at enterprise scale, and the TCO delta grows with size.
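The scaling dynamic can be illustrated with a simple labor-driven cost sketch. Every parameter below (lures per asset, touch hours, rotation cadence, hourly rate, subscription price) is an invented assumption for illustration—these are not the paper’s model inputs—but the structure shows why DIY cost grows with asset count while platform cost stays comparatively flat:

```python
# Hypothetical TCO sketch. All parameters are illustrative assumptions,
# not figures from the research model.

def annual_tco_diy(assets, lures_per_asset=3, touch_hours=0.5,
                   rotations_per_year=4, hourly_rate=85.0):
    """DIY cost is labor-dominated: every lure is designed, placed,
    and rotated by hand, so cost scales with asset count."""
    touches = assets * lures_per_asset * rotations_per_year
    return touches * touch_hours * hourly_rate

def annual_tco_platform(assets, subscription_per_asset=30.0,
                        oversight_hours_per_year=500, hourly_rate=85.0):
    """Automated platform: per-asset subscription plus light human
    oversight; automation absorbs the per-lure touch labor."""
    return assets * subscription_per_asset + oversight_hours_per_year * hourly_rate

for assets in (1_000, 10_000, 50_000):
    diy = annual_tco_diy(assets)
    plat = annual_tco_platform(assets)
    print(f"{assets:>6} assets: DIY ${diy:,.0f} vs platform ${plat:,.0f}")
```

Under these toy assumptions the DIY/platform gap widens with scale, mirroring the trend in Figure 2: manual touch labor compounds linearly with lure count while automation amortizes it.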
The paper closes with a pragmatic blueprint: automate deployment and rotation (ideally via EDR/XDR), instrument attacker paths rather than just destinations, insist on clear data-handling controls, and measure outcomes—not artifacts—using coverage, time-to-trip, alert precision, rotation health, and indistinguishability. Organizations that operationalize deception this way move detection left, reduce dwell time, and convert would-be breaches into fast, contained events—turning deception from a clever trick into a durable advantage.
Key Takeaways
- Coverage on attacker paths beats destinations. Early, reliable detection comes from breadcrumbs and honeytokens spread across endpoints, identities, and cloud paths—not from a handful of destination traps.
- DIY looks cheap but scales poorly. Manual design and rotation create operational debt; at enterprise scale, labor alone can exceed a platform subscription while delivering later alerts.
- Automation is the unlock. EDR/XDR‑assisted deploy and refresh compress per‑asset touch time, enable consistent rotation, and reduce stale‑lure risk.
- Public‑model naming adds risk, not certainty. LLM‑styled labels can become class‑predictable and raise governance questions; name realism is not a substitute for path coverage and rotation.
- Measure outcomes, not artifacts. Track time‑to‑trip, alert precision, coverage, rotation health, and indistinguishability; report quarterly and scale what empirically improves these metrics.
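The outcome metrics above can be rolled up programmatically. A minimal sketch follows—the `Lure` record fields, the 90-day rotation window, and the sample data are assumptions for illustration, not a vendor schema:

```python
# Illustrative outcome-metric rollup for a deception program.
# Field names, thresholds, and sample data are assumptions for this sketch.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Lure:
    asset: str              # endpoint/identity/cloud object carrying the lure
    last_rotated: datetime  # when the lure was last refreshed

def coverage(lures, total_assets):
    """Fraction of assets carrying at least one lure (path coverage proxy)."""
    return len({l.asset for l in lures}) / total_assets

def rotation_health(lures, max_age_days=90, now=None):
    """Fraction of lures refreshed within the rotation window (stale-lure risk)."""
    now = now or datetime.now()
    window = timedelta(days=max_age_days)
    fresh = sum(1 for l in lures if now - l.last_rotated <= window)
    return fresh / len(lures)

def alert_precision(true_trips, false_trips):
    """Share of deception alerts tied to real attacker activity."""
    return true_trips / (true_trips + false_trips)

# Hypothetical quarterly snapshot
lures = [
    Lure("host-1", datetime(2025, 6, 1)),
    Lure("host-2", datetime(2025, 1, 1)),   # stale: rotated >90 days ago
    Lure("host-1", datetime(2025, 6, 15)),
]
now = datetime(2025, 7, 1)
print(f"coverage={coverage(lures, 4):.0%}, "
      f"rotation_health={rotation_health(lures, now=now):.0%}, "
      f"precision={alert_precision(9, 1):.0%}")
```

Time-to-trip and indistinguishability would need richer telemetry (alert timestamps relative to initial access; red-team discrimination tests), but tracking even these three quarterly makes the “outcomes, not artifacts” discipline concrete.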