June 4, 2025
5 min read

Reality Check: Hype vs What Actually Works in AI for SOC

The AI revolution in security operations is here, but marketing promises far exceed current reality. After three decades building security software, the ground truth is clear: AI's value lies in augmenting SOC analysts, not replacing them. Real success comes from proven use cases. Large language models excel at unplaybooked investigations, where tier-2+ analysts struggle most without existing playbooks, and they remove investigative drudgery like log correlation and data extrapolation, keeping analysts cognitively focused instead of context-switching between mundane tasks. The most problematic messaging focuses on "time to resolve" and "replacing tier-1 analysts." Optimizing purely for speed creates dangerous tunnel vision; risk reduction through thoroughness should be the primary goal, because making the same mistake faster benefits no one. Successful adoption requires slotting AI into existing workflows, not overnight transformations: SOCs won't abandon tens of millions in infrastructure for new automation platforms. By the end of 2025, adoption becomes mainstream; by 2027-2028, AI for SOC will be standard practice. Organizations that understand AI as augmentation, not replacement, will emerge significantly stronger from cybersecurity's biggest transformation since firewalls.

Alfred Huger
Cofounder & CPO

Introduction

The AI revolution in security operations is here, but the gap between marketing promises and ground truth is wider than most vendors would like to admit today. After three decades building security software and working directly with SOC teams implementing AI technologies, I've seen what actually works—and what doesn't.

Today, AI in the Security Operations Center (SOC) is awash with buzzwords. You hear about revolutionizing cybersecurity, autonomous everything, and an end to analyst toil. Some of that is on point; a lot of it is overly optimistic for where we stand today. My focus, and what I see delivering real value on the ground, is how AI genuinely helps us reduce risk. Everything else, while important, is a supporting actor to that primary goal.

Augmentation: The real promise of AI for SOC in 2025

AI's true value for the SOC lies in capability augmentation, not replacement. Most SOC analysts are SMEs in one or two narrow areas—typically endpoint security. AI rapidly expands their expertise across unfamiliar data sources while maintaining the same investigative outcomes. This means you can get everyone off the bench and into the game.

LLMs excel at removing investigative drudgery: log correlation, data extrapolation, and baseline analysis. This keeps analysts cognitively focused instead of context-switching between mundane tasks. AI also transforms report generation, enabling analysts to articulate issues to multiple stakeholders and dialogue with data sources without logging into dozens of different consoles.
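To make that concrete, here is a minimal sketch of the log-correlation pattern described above. The `complete` parameter is a stand-in for whatever LLM completion endpoint a SOC already uses; no specific product API is assumed.

```python
# A minimal sketch of LLM-assisted log correlation across sources.
# `complete` is a stand-in for whatever LLM completion endpoint the SOC
# already uses (hypothetical parameter), so no product API is assumed.
from typing import Callable, Mapping, Sequence

def correlate_events(
    events_by_source: Mapping[str, Sequence[str]],  # e.g. {"edr": [...], "proxy": [...]}
    complete: Callable[[str], str],
) -> str:
    """Ask the model to line events up across sources and flag what needs a human."""
    sections = [
        f"## {source}\n" + "\n".join(events)
        for source, events in events_by_source.items()
    ]
    prompt = (
        "You are assisting a SOC investigation. Correlate the events below across "
        "sources, note overlapping hosts, users and timestamps, and flag anything "
        "that needs a human decision.\n\n" + "\n\n".join(sections)
    )
    return complete(prompt)
```

The point of a function like this isn't the prompt itself; it's that the analyst gets one correlated narrative instead of logging into a separate console per data source.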

Distinguishing marketing vs real impact

Security marketing consistently runs ahead of functionality, creating unrealistic expectations and, eventually, jaded buyers. The most problematic messaging focuses exclusively on "time to resolve" and "replacing tier-1 analysts."

Time-saved metrics are valuable—security products are drowning teams in alerts, and alert fatigue is real. But optimizing purely for speed creates dangerous tunnel vision. When analysts are on the clock, expediency becomes everything, leading to narrow investigative scope and missed threats.

The messaging should emphasize thoroughness and skill development. Risk reduction is the ultimate goal of both SOCs and CISO organizations. Making the same mistake faster benefits no one. We haven't proven that AI judgment surpasses a tier-1 analyst's judgment, but we have proven that the combination is powerful.

What's actually working in 2025

Real adoption success comes from specific, proven use cases:

  • Unplaybooked Investigations: Large language models excel as force multipliers for investigations without existing playbooks. This is where tier-2+ analysts struggle most: getting from A to Z without previous instructions. LLMs enable these investigations with higher fidelity and within sensible timeframes.
  • Reasoning Through Alerts: The most effective approach has LLMs reason through investigations with analysts proving them out afterward. This combines the model's analytical power with human business context and case history knowledge (a minimal sketch of this loop follows the list).
  • Comprehensive Reporting: Every investigation should produce a well-articulated, stakeholder-relevant document. LLMs make this standard practice instead of an exception reserved for full-blown IR incidents.
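As a rough illustration of the reason-then-verify pattern above, the sketch below has the model draft a verdict and an evidence chain, with the analyst making the final call. The alert fields and the `complete` function are assumptions for illustration, not a specific product API.

```python
# A rough sketch of the reason-then-verify loop: the model drafts a verdict and
# its evidence chain; the analyst, who holds the business context, confirms or
# rejects it before anything is acted on. Field names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class DraftFinding:
    verdict: str                 # e.g. "likely credential stuffing"
    evidence: list[str]          # steps the model says support the verdict
    confirmed: bool = False      # flipped only by a human reviewer

def investigate(alert: dict, complete: Callable[[str], str]) -> DraftFinding:
    raw = complete(
        "Investigate this alert step by step. Put your verdict on the first line, "
        "then list each piece of supporting evidence on its own line prefixed "
        f"with '- '.\n\nAlert: {alert}"
    )
    lines = raw.splitlines()
    verdict = lines[0] if lines else "undetermined"
    evidence = [line[2:] for line in lines if line.startswith("- ")]
    return DraftFinding(verdict=verdict, evidence=evidence)

def analyst_review(finding: DraftFinding, approved: bool) -> DraftFinding:
    # The human makes the final call; unapproved findings never trigger action.
    finding.confirmed = approved
    return finding
```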

The state of adoption

Early challenges

Technology quality alone doesn't drive adoption. SOCs are complex environments constrained by existing technology investments, staff training, and budget cycles. Any solution must integrate with tens of millions in existing infrastructure—enterprises won't turf their entire SOC for relatively new automation platforms.

Successful adoption happens when organizations understand they must slot AI into existing workflows. Greenfield SOC rebuilds are rare. Most customers need gradual deployment and implementation projects, not overnight switches.
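One way to picture "slotting AI into existing workflows": an enrichment step the existing SIEM or SOAR pipeline calls at triage time, leaving routing, queues, and infrastructure untouched. The function and field names below are illustrative assumptions, not a prescribed integration.

```python
# A minimal sketch, assuming the existing SIEM/SOAR pipeline can call a function
# (or webhook) at triage time: AI is added as one enrichment step, and routing,
# queues and infrastructure stay exactly as they are. Names are illustrative.
from typing import Callable

def enrich_alert(alert: dict, complete: Callable[[str], str]) -> dict:
    """Attach an LLM-drafted triage summary; hand the alert back otherwise unchanged."""
    summary = complete(
        "Summarize this alert for a tier-1 analyst, note what has already been "
        f"checked, and suggest the next two queries to run:\n{alert}"
    )
    return {**alert, "ai_triage_summary": summary}  # existing routing untouched
```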

The commitment requires measurable, provable outcomes. Teams need retraining on new platforms, and results must be ruthlessly measured throughout the process.

The road to 2026-2027

By the end of 2025, adoption will become mainstream as projects launch and platforms deploy. Real-world contact with production environments will mature these technologies rapidly. Most vendors in this space are relatively new; battle testing will separate effective solutions from enthusiastic marketing.

2026 will see genuine environmental transformation. By 2027-2028, AI for SOC will be standard practice.

The AGI transformation

AGI will enable close to machine-speed response and investigations—critical as both defenders and attackers gain access to the same innovations. The balance typically favors attackers initially.

True environmental defense requires AI embedded natively across all systems: HRMS platforms, email systems, every business application. Agentic technology will coordinate between systems to investigate, solve, remediate, and act upon security events in near real-time.

The SOC becomes an orchestrator, tuner, and arbiter rather than having humans involved in every investigation. This isn't controversial—it's inevitable. What we have now clearly isn't working, and we can't scale human involvement to match threat volume.

Moving beyond time-to-resolution

The industry obsesses over speed metrics while missing the bigger picture. Current SOC practices incentivize analysts to move faster without focusing on quality or scope expansion. We're pushing them toward smaller scopes to run faster, which doesn't deliver optimal results.

AI should relieve analysts from high-speed chases while enabling thorough investigations. The goal isn't just faster responses—it's better outcomes through comprehensive analysis powered by augmented capabilities.
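If thoroughness is the goal, it has to be measured alongside speed. The sketch below computes source coverage and reopen rate next to time-to-resolve; the record fields are assumptions about what a case management export might contain, not a standard schema.

```python
# A minimal sketch of measuring thoroughness next to speed. The record fields
# (minutes_to_resolve, sources_queried, expected_sources, reopened) are
# assumptions about what a case management export might contain, not a schema.
from statistics import mean

def investigation_metrics(cases: list[dict]) -> dict:
    coverage = [
        len(c["sources_queried"]) / max(len(c["expected_sources"]), 1)
        for c in cases
    ]
    return {
        "mean_minutes_to_resolve": mean(c["minutes_to_resolve"] for c in cases),
        "mean_source_coverage": mean(coverage),  # breadth of investigation, not just speed
        "reopen_rate": mean(1.0 if c["reopened"] else 0.0 for c in cases),
    }
```

Tracking coverage and reopen rate alongside time-to-resolve keeps the speed gains from quietly narrowing investigative scope.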

Navigating a historic inflection point

This represents the biggest sea change in cybersecurity history. Network IPS/IDS, modern firewalls, EDR: none of those innovations comes close to the transformation we're witnessing now.

We're entering a wild ride over the next three years. Organizations that understand AI as an augmentation tool rather than a replacement solution—and implement it thoughtfully within existing workflows—will emerge significantly stronger.

The key is moving beyond marketing hype to focus on proven capabilities: investigation augmentation, comprehensive reporting, and skill development. Risk reduction through better outcomes, not just faster ones.

Book a demo to see how Command Zero can incorporate LLMs into your SOC workflows today.

