January 13, 2026
6 min read

The "Tierless" SOC: What Happens When Junior Analysts Disappear?

The cybersecurity industry faces a paradox: AI is successfully automating "tier-1" grunt work, but in doing so, it is destroying the foundational apprenticeship that trains senior analysts. Historically, junior analysts built vital pattern recognition by triaging thousands of routine alerts. Without this "manual" phase, a "missing middle" has emerged—juniors are now expected to handle complex investigations without the environmental context or investigative intuition usually gained through repetition. To bridge this gap, SOCs must shift to an "Apprentice-in-the-Loop" model. By using expert-built, executable Questions, Command Zero codifies senior-level methodology into a guided framework. This allows juniors to "sit shotgun" with expert thinking on real cases from day one. Instead of grinding through false positives, the next generation of analysts will develop through structured, AI-augmented exposure, democratizing high-level expertise and accelerating career growth in a tierless environment.

Eric Hulse
Director of Security Research

Security Operations Centers face a paradox that's quietly reshaping our industry: AI automation is successfully eliminating tier-1 grunt work, but in doing so, it is eroding the very foundational career step that trains future generations of senior analysts.

For decades, the SOC career ladder worked like an apprenticeship system. Junior analysts spent months—sometimes years—grinding through false positives, learning to distinguish normal from suspicious, and building pattern recognition through sheer repetition. That repetitive work wasn't just busywork; it was the foundation that taught analysts how systems actually behave before they encountered sophisticated threats.

Now, AI is automating tier-1 triage at scale, promising a drop in alert fatigue and a reduction in false-positive rates. But we've created an unintended consequence: if junior analysts never touch routine cases, how do they develop the foundational knowledge to become tier-2 and tier-3 investigators?

The Traditional Apprenticeship Model is Breaking

The conventional SOC structure relied on a clear progression path with well-defined analyst roles. New analysts started with basic alert triage, working with SIEM systems where they might execute pre-built queries—if they had the permissions and authority to query at all. Those fortunate enough to build their own queries started simple: basic log searches, single-source correlations. They'd make mistakes, learn from false positives, gradually add complexity. Over months, through trial and error, they'd develop the ability to craft sophisticated multi-source queries. They practiced documenting findings and gradually built the muscle memory that separates skilled investigators from tool operators. Senior analysts handled the complex cases—the lateral movement investigations, the privilege escalation chains, the sophisticated persistence mechanisms.

This worked because juniors could observe outcomes while simultaneously building organizational context. They'd escalate an alert to a senior analyst, watch how it was investigated, see the questions that senior analyst asked, and learn which evidence actually mattered. But they also learned the environmental quirks that don't appear in any training manual: Jake's authentication patterns look suspicious because he travels constantly across time zones. That HR software triggers PowerShell detections every Tuesday morning when it auto-updates. The marketing team's file sharing patterns look like data exfiltration until you understand their campaign launch process. Over time, through hundreds of these interactions, junior analysts internalized both investigative thinking and the organizational reality that context provides.

As I explored in The Evolution of SOC Structure: From Rigid Tiers to Flexible Operations, modern SOCs are already moving away from rigid tier structures toward more flexible, capability-based operations. But the challenge isn't just organizational—it's affecting career development. When automation handles tier-1 triage, we compress the career ladder without replacing the learning mechanism that ladder provided.

What we're witnessing is the creation of a "missing middle" in analyst development. Organizations hire junior analysts, deploy AI to handle routine triage, and expect those analysts to somehow jump directly to complex investigations. The foundational pattern recognition, the system behavior understanding, the investigative intuition that used to develop over thousands of alerts—where does that come from now?

Current Training Approaches Miss the Mark

Organizations recognize the gaps in analyst skills and typically respond with formal training programs. They send analysts to certification courses, deploy capture-the-flag exercises, or create lab environments for practice. These initiatives have value, but they don't replicate the critical learning mechanism that SOC apprenticeship provided: exposure to real organizational context while observing expert decision-making.

Generic training teaches concepts—how lateral movement works in theory, what pass-the-hash attacks look like in sanitized examples. But SOC investigation is deeply contextual. It requires understanding your specific environment: which user behaviors are normal, which service accounts typically make unusual connections, how your particular applications authenticate, where your organization stores sensitive data.

This organizational context can't be taught in a classroom. It has to be learned through exposure to actual cases in your actual environment. The traditional tier-1 role provided that exposure through lower-complexity cases. When AI removes that entry point, junior analysts either jump directly into complex investigations they're not ready for, or they handle only the edge cases that automation couldn't resolve—the weird, atypical scenarios that don't teach foundational patterns.

The fundamental problem is that we've automated the work without preserving the learning mechanism embedded in that work.

The Apprentice-in-the-Loop Model

This is where Command Zero's approach transforms the analyst development problem. Rather than trying to recreate tier-1 grunt work or hoping formal training will bridge the gap, we enable what I call the "apprentice-in-the-loop" model: junior analysts actively participate in senior-level investigations, learning expert thinking in context.

The mechanism is expert-built Questions that work across our investigation platform. These Questions are designed by senior analysts and security experts to facilitate growth, learning, and empowerment for analysts regardless of their role or expertise level. When AI Agents investigate alerts, they leverage these Questions. When human analysts investigate, they have access to the same expert-crafted investigative frameworks.

Here's what makes this transformative: a network security expert investigating an EDR alert can execute Questions built for endpoint investigation. An email analyst encountering her first network-based lateral movement alert can access Questions that guide her through authentication analysis across multiple systems. The Questions show analysts what can be asked, why it matters, and what to analyze in the results. It's like sitting shotgun with a senior analyst—you see their thinking process, understand their methodology, and practice their techniques on real cases.

Consider how this works in practice: A junior analyst receives an alert about suspicious lateral movement. The platform surfaces relevant Questions: check for recent privilege changes across identity systems, analyze authentication patterns for this user across the network, identify unusual file access to sensitive shares. Each Question isn't just a query—it's expert methodology made executable. The analyst sees what evidence senior investigators consider critical, learns why these specific checks matter for lateral movement detection, and understands how to interpret results in context.
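To make the idea concrete, here is a minimal sketch of what a Question could look like as an executable unit: a check bundled with the expert's rationale and data sources, so a junior analyst runs the methodology without writing the query. All names and structures here are illustrative assumptions, not Command Zero's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Question:
    """A hypothetical expert-built Question: methodology made executable."""
    title: str                      # what the analyst is asking
    rationale: str                  # why a senior investigator asks it
    sources: list[str]              # where the evidence lives
    run: Callable[[dict], list]     # executes the check against case data

def check_privilege_changes(case: dict) -> list:
    """Flag recent role grants for the user under investigation."""
    return [e for e in case["identity_events"]
            if e["user"] == case["user"] and e["action"] == "role_granted"]

lateral_movement_questions = [
    Question(
        title="Recent privilege changes for this user?",
        rationale="Attackers often escalate privileges before moving laterally.",
        sources=["identity_provider", "directory_audit"],
        run=check_privilege_changes,
    ),
]

# A junior analyst executes the expert's check and sees the 'why' alongside it:
case = {
    "user": "jsmith",
    "identity_events": [
        {"user": "jsmith", "action": "role_granted", "role": "Domain Admin"},
        {"user": "mlee", "action": "login"},
    ],
}
for q in lateral_movement_questions:
    findings = q.run(case)
    print(f"{q.title} -> {len(findings)} finding(s); {q.rationale}")
```

The point of the sketch is the pairing: the check itself is ordinary query logic, but carrying the rationale and source list with it is what turns a query into teachable methodology.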

But the real transformation happens through repeated exposure. That same junior analyst encounters different alert types—password sprays, data exfiltration attempts, privilege escalations. Each time, expert Questions are available, guiding investigation across whatever sources matter for that specific scenario. The analyst isn't memorizing playbooks—they're internalizing investigative thinking through structured practice with expert frameworks.

This creates a learning loop that traditional tier structures never achieved. Junior analysts learn by doing real investigations, but with expert guidance embedded in their workflow. They're not just executing steps—they're absorbing the decision-making frameworks that define skilled investigation. Another benefit of this approach is that it prepares analysts to excel at running AI-augmented flows or orchestrating autonomous AI tasks, both of which will become the norm for knowledge workers in the AI age.

Democratizing Expertise Through Captured Methodology

What makes this approach fundamentally different from traditional training or playbook automation is that it preserves the contextual, decision-making aspects of investigation while making them accessible.

Consider a typical scenario: A senior analyst has developed sophisticated methods for detecting compromised credentials. They know which authentication patterns indicate password spraying versus legitimate VPN behavior. They understand when to check for suspicious PowerShell activity versus when to focus on lateral movement tools. This knowledge exists as tacit expertise—it's in their head, built over years of investigation.

In traditional SOCs, this expertise transfers slowly, if at all. Juniors might observe the senior working a few cases, but they don't get to practice those methods themselves until they've proven they're ready. And by the time they're ready, the senior may have moved on.

Expert-built Questions capture this expertise in executable form. The senior's investigation methodology becomes a resource that junior analysts can leverage immediately. They see not just what the expert would check, but how those checks connect to create investigative conclusions. When results come back from a Question, the junior analyst is learning what evidence matters and why.

This is democratization in the truest sense: making expert thinking accessible without diluting its sophistication. Junior analysts aren't just clicking through predetermined steps—they're engaging with expert-level investigative frameworks while building their own pattern recognition. A network specialist can investigate identity-based attacks using Questions built by identity experts. A cloud security analyst can investigate on-premises threats using Questions that encode years of traditional infrastructure expertise.

Accelerating Growth Through Structured Exposure

The practical impact on analyst development is substantial. Instead of spending 12-18 months grinding through tier-1 alerts before touching complex cases, junior analysts can engage with sophisticated investigations from day one—with appropriate scaffolding.

A new analyst joins the team. Within their first week, they're working investigations with Questions that check for credential compromise indicators across multiple systems, even though they couldn't build those queries themselves yet. They see results from EDR systems, identity providers, SIEM platforms—all through Questions that show them what expert analysts look for. They learn what suspicious patterns look like in their specific environment, and they start developing pattern recognition.

Three months in, they're not just running those Questions—they're starting to understand why specific checks matter. When authentication logs show unusual patterns, they recognize it because they've seen hundreds of examples through their Question-guided investigations. They understand which anomalies warrant deeper investigation and which are just environmental quirks.

Six months in, they're beginning to modify existing Questions for specific cases and contributing their own expertise back into new Questions. The apprenticeship model is working, but compressed dramatically because they've had structured exposure to expert thinking from the beginning.

This acceleration matters not just for individual analysts, but for team capacity. Organizations can onboard junior analysts faster, reduce burnout from senior analysts who aren't constantly training others, and build deeper benches of investigators. The knowledge retention problem that plagued traditional SOCs—where a senior analyst leaving meant years of expertise walking out the door—becomes manageable because that expertise is encoded in Questions that remain available to the team.

Building Curiosity and Investigation Culture, Not Just Skills

Beyond technical skills, the apprentice-in-the-loop model cultivates something equally critical: a curious, investigative culture. It teaches junior analysts how to think about investigation, not just how to execute queries. That culture only takes hold when analysts are equipped with tools that let them be bold and curious in pursuit of answers.

When junior analysts work with Questions created by senior investigators, they’re exposed to expert judgment about what constitutes thorough investigation. They see which questions get asked even when initial evidence seems clear. They learn the habit of checking related systems, verifying assumptions, and building complete narratives rather than jumping to conclusions.

This cultural transmission was always the hidden value of traditional tier structures. Junior analysts absorbed not just techniques but values—the thoroughness, skepticism, and systematic thinking that characterize effective security investigation. When we automate tier-1 work without replacing this cultural transmission, we risk creating analysts who can operate tools but lack investigative discipline.

Questions preserve this cultural element because they embody expert judgment. Each Question represents decisions about what matters, what to verify, and how to think systematically. Junior analysts practicing with these Questions aren’t just learning to query systems—they’re internalizing investigative culture by seeing how experts approach problems across different domains and data sources.

Rethinking Career Development for SOC Analysts

The tierless SOC that’s emerging isn’t just about organizational structure—it’s about rethinking how we develop analytical capability in an environment where AI handles routine work. Organizations that solve the analyst development problem will have significant strategic advantages.

They’ll onboard new analysts faster, retaining junior talent who see clear growth paths rather than being stuck in AI’s shadow. They’ll build institutional resilience, where analytical capability isn’t concentrated in a few senior analysts but distributed across teams through captured expertise. They’ll adapt faster to new threats because their entire analyst population is practicing with sophisticated investigative methods, not just executing playbooks.

The apprentice-in-the-loop model provides a practical mechanism for this strategic advantage. It acknowledges that AI should automate routine work while ensuring that automation doesn’t eliminate the learning pathway that creates senior analysts. When both AI Agents and human analysts leverage the same expert-built Questions, the platform becomes a continuous training environment—every investigation is an opportunity for junior analysts to sit shotgun with expert thinking.

As SOCs continue evolving from rigid tiers to flexible operations, the organizations that thrive will be those that solve not just the structural question of how to organize analysts, but the developmental question of how to train them. The answer isn’t bringing back tier-1 grunt work—it’s creating new mechanisms for junior analysts to ride along on complex investigations, learning expert thinking in context while building their own investigative capability across diverse security domains.

The next generation of senior analysts will emerge not from grinding through thousands of false positives, but from structured exposure to expert investigation methodology while working real cases in their real environment. That’s the apprenticeship model rebuilt for the age of AI-augmented security operations.

