AI Grading Criteria

1. ISO 42001 Readiness (iso)

Based on public certifications, alignment, or stated intent to pursue ISO 42001.

| Grade | Criteria |
|---|---|
| A | Actively certified or publicly pursuing ISO 42001. |
| B | Partially aligned or structurally prepared. |
| C | Announced roadmap or intent to pursue ISO 42001. |
| D | General security certifications (e.g., SOC 2) but no ISO alignment. |
| F | No visible ISO 42001 or ISO 27001 initiative. |

2. Model Context Protocol (mcp)

Measures adoption or planned support for the Model Context Protocol standard.

| Grade | Criteria |
|---|---|
| A | Published MCP interface. |
| B | Implemented but not published MCP interface. |
| C | Announced intent to support MCP. |
| D | Agentic-like architecture, but not MCP compliant. |
| F | No MCP support or alignment. |

3. GenAI Workflows (workflow)

Evaluates how integrated and structured GenAI workflows are within the platform.

| Grade | Criteria |
|---|---|
| A | Full workflow using MCP and performing actions. |
| B | Workflow using MCP, with results routed to an analyst. |
| C | Workflow exists, but uses internal/local standards. |
| D | Atomic AI queries sent statelessly to LLMs. |
| F | No autonomous AI decision support. |

4. Remediation Capability (remediation)

Assesses the level of AI-driven remediation or action capability.

| Grade | Criteria |
|---|---|
| A | AI directly modifies the security posture. |
| B | AI closes or updates tickets. |
| C | AI recommends specific playbooks. |
| D | AI provides general advice or suggestions. |
| F | No AI-powered recommendations. |

5. AI Roadmap Completeness (roadmap)

Judged by public commitment to AI maturity across multiple dimensions.

| Grade | Criteria |
|---|---|
| A | Public roadmap includes all five: ISO 42001, MCP, workflows, actions, and feedback/self-improvement. |
| B | Public roadmap includes four of the above. |
| C | Public roadmap includes three of the above. |
| D | Only one or two elements are publicly committed. |
| F | No clear roadmap or commitment to AI maturity. |

Grading Results

We evaluated leading SIEMs and security platforms against our AI grading criteria. Here’s how they stack up.

It’s important to note that this is a rapidly evolving space. As vendors race to add AI capabilities, announcements often outpace real-world implementations. Our grading therefore focuses not just on what is promised, but on what is operational, public, and structurally aligned with long-term AI maturity. These evaluations are current as of this writing, but we expect the landscape to shift dramatically over the next 12 to 18 months.

| SIEM | ISO 42001 | MCP | GenAI Workflow | AI Remediation | Roadmap | Final Grade |
|---|---|---|---|---|---|---|
| Fluency Security | B | B | B | B | A | B |
| Microsoft Sentinel | B | C | D | C | C | C |
| Securonix EON | F | C | C | C | C | D+ |
| CrowdStrike Falcon SIEM | F | C | D | C | C | D+ |
| Splunk Enterprise Sec. | F | D | F | C | D | D |
| IBM QRadar | F | F | F | C | D | D- |
| Exabeam | D | C | C | D | C | C- |
| Devo | D | D | C | D | D | D+ |
| Hunters | F | D | D | D | D | D |
| LogScale (Humio) | F | F | F | D | D | F |
| Google Chronicle | D | F | D | D | D | D |
| Wiz | D | D | D | F | C | D+ |
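For readers who want to reproduce or extend this kind of scoring, one common approach is to map letter grades onto a 4.0 scale and average the five components. Note this is a minimal illustrative sketch, not the aggregation formula used above (the article does not publish one, and the final grades here include +/- modifiers that a plain average does not produce).

```python
# Illustrative only: a simple 4.0-scale average of component letter grades.
# The article's actual final-grade aggregation method is not published.
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def average_points(grades: list[str]) -> float:
    """Return the mean grade-point value of the component letter grades."""
    return sum(GRADE_POINTS[g] for g in grades) / len(grades)

# Example: Microsoft Sentinel's five component grades from the table above.
print(average_points(["B", "C", "D", "C", "C"]))  # 2.0, i.e. roughly a C
```

Weighting is a natural refinement: a buyer who cares most about remediation could weight that column more heavily before averaging.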