Web Stat
NCSC: Wrong SOC Metrics Make Real Attacks Look Like False Positives

By News Room · May 13, 2026 (Updated: May 14, 2026) · 6 min read

Imagine you’re running a top-tier security team, the kind that guards against sneaky cyber attackers. You’ve got smart people, fancy gadgets, and tons of digital evidence collected, yet somehow, real attackers are still slipping through the cracks. This isn’t just a hypothetical nightmare; it’s a real problem identified by the UK’s National Cyber Security Centre (NCSC). They’ve noticed that even well-funded security operations centers (SOCs) can become blind to actual threats, not because their people or tech are bad, but because their leaders are measuring the wrong things.

Picture this: an analyst, sharp and dedicated, staring at a screen filled with alerts. Ninety-nine percent of these alerts are what we call “noise” – false alarms that don’t indicate a real threat. But unfortunately, this analyst is being judged on how many of these “tickets” they close per hour. Every click, every closed ticket, makes their numbers look good. The heartbreaking part? Sometimes, amidst all that noise, a real, live intrusion is happening, and our analyst, under pressure to close tickets fast, might accidentally close it along with the false positives. This creates a deeply counterproductive environment where the very metrics meant to show productivity actually make the team worse at detecting real threats. It’s like grading a detective based on how many case files they close, even if most of them are about lost cats, leading them to quickly dismiss a serious kidnapping case to boost their numbers.

The NCSC points out that this problem often stems from adopting metrics designed for other parts of a company, like customer support or IT helpdesks. For those teams, measuring things like “tickets processed per shift” or “time-to-close” makes perfect sense and looks great on an executive dashboard. But for a SOC, where the stakes are incredibly high and the signals are often subtle, these metrics are disastrous. They incentivize quick closure over careful investigation. Another trap is “volume metrics” – counting how many detection rules an analyst writes or the sheer volume of logs collected. On the surface, more rules and more logs seem like better coverage. But if those rules are too broad (like writing a rule for every single suspicious IP address you see, which generates tons of false positives), or if nobody’s actually analyzing those logs (the NCSC once found a SOC that had been collecting only the first 30 characters of its biggest log source for years without realizing it!), then all that volume is just an illusion of security. It’s like bragging about owning a massive library but never actually reading any of the books; you have the potential for knowledge, but you’re not gaining any.
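The truncated-log failure mode described above is the kind of thing a simple integrity check can catch long before an incident does. A minimal sketch, assuming newline-delimited log records and an illustrative 40-character minimum (the threshold and function are hypothetical, not NCSC guidance):

```python
# Minimal sketch: detect suspiciously truncated log records.
# Assumes newline-delimited logs; the 40-char threshold is illustrative.

def truncation_report(records, expected_min_len=40):
    """Flag a log source whose records cluster below a minimum length,
    a sign of silent truncation at ingest."""
    lengths = [len(r) for r in records]
    if not lengths:
        return {"healthy": False, "reason": "no records collected"}
    max_len = max(lengths)
    short = sum(1 for n in lengths if n < expected_min_len)
    return {
        "healthy": max_len >= expected_min_len,
        "max_record_length": max_len,
        "short_record_fraction": short / len(lengths),
    }

# A source silently capped at 30 characters is caught immediately:
capped = ["2026-05-13T10:00:01Z host=web01 proc=nginx"[:30]] * 1000
print(truncation_report(capped))
```

Run periodically against each log source, a check like this turns “we collect a lot of logs” into a verifiable claim rather than an illusion of coverage.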

So, what should be measured? The NCSC boils it down to two crucial metrics: “Time-to-Detect” (TTD) and “Time-to-Respond” (TTR). These measure how quickly a SOC can spot an attack and then deal with it. They are the only outward-facing metrics that truly prove a security team is doing its job. The challenge with TTD, however, is that in a healthy, well-defended organization, real attacks should be rare. You can’t just wait for a real breach to happen to test your TTD. This is where clever testing comes in. The NCSC strongly advocates for “red teaming” and “purple teaming.” Red teaming is like hiring ethical hackers to secretly try and break into your system, mimicking real attackers. This tests the SOC’s ability to detect something truly covert. Purple teaming is a more collaborative approach: your red team works with your SOC, showing them exactly where their attacks succeeded or failed to trigger an alert. This feedback loop is invaluable for refining detection capabilities. They also recommend using “MITRE ATT&CK-decomposed test cases,” which break down attacker techniques into tiny, isolated steps. This allows a SOC to measure its TTD for each specific attack method, giving a much more granular and actionable picture of their readiness.
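Measuring per-technique TTD from purple-team exercises can be as simple as recording when each test step was injected and when (or whether) the SOC alerted on it. A minimal sketch; the technique IDs, timings, and scoring are illustrative assumptions, not the NCSC's published methodology:

```python
# Minimal sketch: per-technique Time-to-Detect from ATT&CK-decomposed
# purple-team test cases. All data below is illustrative.
from datetime import datetime, timedelta

def time_to_detect(injected_at, detected_at):
    """TTD for one test step; None means the step was missed entirely."""
    if detected_at is None:
        return None
    return detected_at - injected_at

t0 = datetime(2026, 5, 13, 9, 0)
test_cases = {
    "T1059.001 (PowerShell)":     (t0, t0 + timedelta(minutes=12)),
    "T1003 (Credential Dumping)": (t0, t0 + timedelta(hours=3)),
    "T1071 (C2 over HTTPS)":      (t0, None),  # never alerted: a coverage gap
}

for technique, (injected, detected) in test_cases.items():
    ttd = time_to_detect(injected, detected)
    print(f"{technique}: {'MISSED' if ttd is None else ttd}")
```

The misses are the most valuable output: each one marks a specific attacker technique where detection coverage needs work, which is exactly the granular picture the decomposed test cases are meant to give.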

The NCSC’s most insightful (and perhaps uncomfortable) observation is that any metric you report externally or even internally will change people’s behavior. If analysts know their rule count is being watched, they’ll write more rules, even if they’re not effective. If they know ticket counts matter, they’ll close tickets fast, regardless of the content. Their recommendation is blunt: stop reporting these problematic metrics altogether. Don’t even show them on internal dashboards. By making TTD/TTR the only metric that goes up to the board, you remove the incentive to play games with the upstream counters. This frees up analysts to focus on what truly matters: finding and stopping attackers, not hitting arbitrary ticket quotas. It’s about shifting the mindset from “how many tasks did I complete?” to “how well did I protect us?”

To rebuild a SOC based on this philosophy, the NCSC suggests a step-by-step approach, starting with a fundamental shift in culture. First, give your analysts the time and the authority to truly investigate. Don’t rush them to close tickets. Trust their judgment. Once this cultural shift and metric reset are in place, then you can validate your detection coverage with adversary simulations. A key practice is “hypothesis-led threat hunting.” Instead of just waiting for alerts, analysts actively hunt for threats. They might form a hypothesis – “What if an attacker tries to exploit this specific vulnerability?” – and then proactively search through logs for evidence. Most hunts won’t find a live attack, but they’ll often lead to new detection rules or ideas for hardening systems. This type of deep, investigative work is precisely what ticket-throughput metrics stifle.
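A hypothesis-led hunt of the kind described above can be sketched in a few lines. Here the hypothesis is "an attacker is abusing scheduled tasks for persistence"; the event records, field names, and in-memory log list are hypothetical stand-ins for a real SIEM query:

```python
# Minimal sketch of a hypothesis-led hunt: "an attacker is creating
# scheduled tasks for persistence." Events are illustrative; a real
# hunt would query a SIEM, not an in-memory list.

logs = [
    {"host": "web01", "user": "svc_backup",
     "cmd": "schtasks /create /tn upd /tr c:\\tmp\\a.exe"},
    {"host": "hr02", "user": "jsmith",
     "cmd": "notepad.exe report.txt"},
]

def hunt_scheduled_task_persistence(events):
    """Return events matching the hypothesis for analyst review."""
    suspicious = []
    for e in events:
        cmd = e["cmd"].lower()
        if "schtasks" in cmd and "/create" in cmd:
            suspicious.append(e)
    return suspicious

hits = hunt_scheduled_task_persistence(logs)
# Most hunts return nothing; a hit becomes either an incident
# or the seed of a new detection rule.
```

Even a hunt that finds no live attacker pays off: the query itself often graduates into a permanent detection rule, which is the feedback loop throughput metrics tend to stifle.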

Another critical step is to enforce strict false-positive thresholds on detection rules. Don’t just accept noisy alerts. The NCSC gives an example of flagging PowerShell execution by anyone outside an IT role. Initially, this might generate a flood of false positives. But instead of ignoring it, the SOC refines the rule, whittling down the false alarms until any new PowerShell execution is either a genuine attack or a thoroughly documented, legitimate exception. This requires ongoing work and regular reviews but ensures that when an alert does fire, it’s meaningful.

Finally, forget about throughput metrics for analysts. Instead, track “threat awareness,” “tool expertise,” and “organizational fluency.” How well do they understand different attack techniques? How proficient are they with their tools? And crucially, how well do they understand the business itself and what normal operations look like? An analyst who doesn’t know “normal” can’t spot “abnormal.” These are the metrics for internal use, while the board still only sees that single, critical TTD number.

The ultimate validation comes from “purple teaming” exercises that focus on techniques most relevant to your organization. The NCSC saw a SOC that had been struggling with 99% false positives completely transform itself. After rebuilding its operations around detection-coverage metrics, the very same analyst who was once overwhelmed by noise was able to detect simulated adversaries within hours, not days, on every high-priority attack step. This remarkable turnaround demonstrates the power of shifting away from misleading metrics and focusing on what truly protects an organization from cyber threats.
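Enforcing a false-positive threshold per rule is straightforward to automate once alerts are labeled during triage. A minimal sketch; the 5% budget, rule name, and alert records are illustrative assumptions:

```python
# Minimal sketch: enforce a false-positive budget per detection rule.
# The 5% threshold and the alert records are illustrative.

def rules_over_budget(alerts, max_fp_rate=0.05):
    """Group triaged alerts by rule; return rules whose false-positive
    rate exceeds the budget, i.e. candidates for refinement."""
    by_rule = {}
    for a in alerts:
        total, fps = by_rule.get(a["rule"], (0, 0))
        by_rule[a["rule"]] = (total + 1, fps + (1 if a["false_positive"] else 0))
    return {
        rule: fps / total
        for rule, (total, fps) in by_rule.items()
        if fps / total > max_fp_rate
    }

# A hypothetical "PowerShell outside IT" rule firing 98 false alarms
# out of 100 alerts is flagged for tuning rather than silently tolerated:
alerts = (
    [{"rule": "powershell-non-it", "false_positive": True}] * 98
    + [{"rule": "powershell-non-it", "false_positive": False}] * 2
)
print(rules_over_budget(alerts))
```

Reviewing this report regularly is one concrete way to operationalize the refine-until-meaningful loop the NCSC describes: a rule either gets under budget or gets rewritten.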

Copyright © 2026 Web Stat. All Rights Reserved.