AI Buzzwords vs. AI Breakthroughs
Analysts read reports every day that hint at adversary behavior but stop short of mapping to MITRE ATT&CK. Valuable context is lost, and blind spots grow. At the same time, vendors promise “AI-powered” features, but too often, what’s under the hood is little more than regex rules, keyword matches, or indexing tricks presented as innovation.
Instead of making the job easier, these claims create more noise, false positives, and wasted time. A true breakthrough in AI for cyber threat intelligence doesn’t just check the “AI box.” It gives analysts reliable, high-fidelity results they can trust in real workflows, surfaces context where it’s missing, closes blind spots, and reclaims time for the work that matters.
Why Generic AI Falls Short
Across the industry, advanced CTI teams are experimenting with small, tightly controlled LLM workflows that summarize reports, search breach data, or build agents for teams to create repeatable outputs. These smaller-scale experiments can be game-changers when paired with a human in the loop who fine-tunes the tools, validates the outputs, and takes responsibility for the results. But most teams can’t be experts in both AI and cyber threat intelligence, and that’s the blind spot other vendors exploit with generic, one-size-fits-all features.
The problem is that generic LLMs aren’t built for CTI. They guess too much, inherit bias from broad training data, and often leave teams blind to new techniques because there’s no clarity on which version of MITRE ATT&CK they were trained on. Independent research backs this up: the CyberSOCEval benchmark of LLMs on Threat Intelligence Reasoning found leading LLMs scored only 43–53% accuracy on CTI reasoning tasks and just 15–28% on malware analysis, concluding that “current LLMs are far from saturating our evaluations” and that there is a “significant hill to climb for AI developers to improve AI cyber defense capabilities.”
Current LLMs aren’t “analysts in a box.” Yet vendors continue to dress them up with razzle-dazzle demos and marketing buzz, offering black-box features that collapse under real-world conditions. That’s why CTI teams should start asking harder questions:
- What problem is this tailored to solve?
- Which models are being used?
- How do we know it works?
- How are they monitored?
- Where does the training data come from?
- How often do they update?
- What standards guide their design?
The ThreatConnect Difference: AI for CTI
MITRE ATT&CK has been central to CTI for years, but most tools only scratch the surface. They tag explicit mentions or keywords when they’re written into a report and leave the harder, more valuable context undiscovered, creating blind spots for analysts.
Analysts need a way to surface ATT&CK techniques even when they’re only implied. That’s where MITRE ATT&CK AI Classification comes in, a purpose-built AI going beyond keywords to reveal context that would otherwise slip through, surfacing techniques that authors never name outright. When a report implies Credential Access: Brute Force without saying it, analysts still get the signal they need to check whether their defenses are ready.
At the same time, nothing explicit is ever missed. Our Alias Extraction feature captures 100% of ATT&CK techniques when they’re listed in text, while AI classification fills in the implicit gaps. Together, they help analysts surface important context to act on.
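Explicit references are the easy half of the problem: technique IDs follow a fixed pattern, so they can be captured deterministically. The sketch below is an illustrative stand-in, not ThreatConnect’s Alias Extraction implementation; it simply matches ATT&CK technique and sub-technique IDs (e.g., T1110, T1110.001) written verbatim in report text.

```python
import re

# ATT&CK technique IDs look like T1110; sub-techniques add a suffix, e.g. T1110.001.
TECHNIQUE_ID = re.compile(r"\bT\d{4}(?:\.\d{3})?\b")

def extract_explicit_techniques(report_text: str) -> list[str]:
    """Return the unique ATT&CK technique IDs written verbatim in a report,
    in order of first appearance."""
    seen: list[str] = []
    for match in TECHNIQUE_ID.findall(report_text):
        if match not in seen:
            seen.append(match)
    return seen

report = (
    "The actor used T1110.001 (password guessing) for initial access "
    "and later relied on T1059 for execution."
)
print(extract_explicit_techniques(report))  # ['T1110.001', 'T1059']
```

A pattern match like this catches everything that is spelled out, but nothing that is merely implied, which is exactly the gap the AI classification layer is meant to fill.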
AI That Works Where Analysts Work
Adversaries never stand still, and static AI models fall behind quickly. Generic models lag behind because they’re trained on broad datasets, biased toward noisy outputs, and often tied to outdated versions of ATT&CK. Keeping pace takes innovation that gives analysts both coverage and precision.
Three advances power MITRE ATT&CK AI Classification:
- Sentence-level classification. We look at the meaning of a report, not just its keywords, surfacing techniques that might never be named outright. For analysts, this means fewer blind spots and more complete context for every incident.
- Synthetic data expansion. By fine-tuning our model with high-quality training examples for techniques not yet widely reported, the model recognizes new adversary behaviors earlier and with greater fidelity.
- Quality-controlled responses. Every potential match goes through multiple layers of validation, with only the strongest candidate surfaced to analysts. This ensures fewer false positives, more reliable context, and results they can act on with confidence.
Every synthetic example is validated through ensemble checks and vetted by analysts, with results continuously monitored in production, in line with the Intelligence Community’s principles for ethical, responsible AI. The result is AI that keeps pace with adversaries. Gartner sees the same shift and predicts that 60% of AI training data will be synthetic this year, rising to 90% by 2030. ThreatConnect knows how important trustworthy data is to CTI workflows, and we’re showcasing what responsible AI features look like: transparent training, human oversight, and models that evolve with the threat landscape.
MITRE ATT&CK AI Classification in OSINT reports
The scale of the difference is clear. In a 90-day period, CAL™ Automated Threat Library (ATL) processed over 7,000 OSINT reports.
- 9 out of 10 reports classified. MITRE ATT&CK AI Classification applied to 6,421 reports (90%) compared to 2,604 reports (37%) with explicit ATT&CK mentions.
- 7.7× more context. 28,993 TTPs surfaced using MITRE ATT&CK AI Classification compared to explicit mentions alone (3,832).
That difference isn’t just statistics; it takes work off analysts’ plates. It means detections and control-gap analyses can be built on a much more complete understanding of adversary behavior.
Why It Matters for Analysts
For CTI teams, the benefits are immediate and practical. Analysts spend less time hand-tagging reports and more time acting on the intelligence inside them. Implicit behaviors that would have slipped through are now surfaced with confidence, giving teams better visibility into adversary tactics. Classifications happen automatically in CAL™ Automated Threat Library (ATL) and are available for workflows in the Doc Analysis Playbook, powering automated enrichment, triage, and reporting with features like ATT&CK Visualizer, ATT&CK Tags, and Graph Visualization, and via ThreatConnect Query Language (TQL).
Most importantly, the system is built for trust. We provide customers with clear documentation showing how every classification is backed by transparent design choices, ongoing monitoring, and human oversight that identifies evolving needs and keeps pace with adversary behavior.
What was once slow, manual, and partial becomes faster, clearer, and comprehensive. That’s the value of purpose-built AI.
Use AI That Delivers in Real Workflows
Every day, analysts are asked to do more with less time and greater pressure. What makes AI valuable isn’t a polished demo; it’s whether the capability holds up inside real workflows, from enrichment to triage to reporting. Purpose-built MITRE ATT&CK AI Classification lets analysts surface the context that’s missing, close the gaps in coverage, and trust the intelligence they rely on.
In a world where adversaries evolve faster than defenses, the measure of AI isn’t how good it looks in a showcase, but how well it works in the hands of analysts. That’s how purpose-built AI turns information into outcomes analysts can trust.