
Apollo Research

Seed · Preliminary

Specialises in detecting whether AI models are being deceptive, specifically testing whether models engage in scheming or strategic deception. The only company focused specifically on deceptive alignment evaluations.

HQ: UK
Est: 2023
Raised: $2M
apolloresearch.ai
Score: 51.4 / 100
Confidence: Preliminary

Strong safety posture with established governance frameworks and active risk management.

Strengths: Governance Maturity, Technical Safety, Risk Assessment, External Engagement
Weaknesses: Regulatory Readiness
Competitive positioning

Unique niche: no other company focuses specifically on AI deception detection. Competes broadly with Patronus AI and METR, but is differentiated by its deception focus.

Key risk

Market for 'deception testing' depends on buyers believing AI deception is a real near-term risk. Many enterprise buyers don't see this as urgent.

Enterprise traction

Partnerships with frontier labs. No enterprise customers confirmed.

Safety area

Evaluations & Benchmarking

Enterprise business needs
Test my AI before deployment

Security Assessment

Security-relevant indicators for vendor evaluation

Security Posture: 59

- TS-01 (dim: 62) Red Teaming & Pre-deployment Testing: Adversarial testing before deployment
- TS-05 (dim: 62) Robustness & Adversarial Resilience: Resistance to adversarial attacks
- RA-01 (dim: 55) Sector-Specific Risk Assessment: Risk analysis for deployment context
- RA-03 (dim: 55) Dual-Use & Misuse Risk: Dangerous capability awareness
- RA-07 (dim: 55) Incident History & Track Record: Past incidents and response quality
- EE-04 (dim: 60) Vulnerability Disclosure Program: Bug bounty or CVE reporting process
Incident History
Apollo Research incident records sourced from AIAAIC Repository and public reporting.
Integration: AIAAIC, OECD AI Incidents Monitor
Third-Party Audits
External audit reports, SOC 2 attestations, and ISO certifications verified where published.
Sources: Company filings, registry lookups
CVE & Disclosures
Known vulnerabilities and security advisories from NVD, GitHub Security Advisories, and vendor pages.
Sources: NVD, GHSA, vendor disclosure pages

Dimension Breakdown

GM · Governance Maturity (preliminary): 55
Published policies, corporate structure, safety mandate, whistleblowing, executive commitment.

TS · Technical Safety (preliminary): 62
Benchmarks, adversarial robustness, fine-tuning safety, watermarking, model cards, research output.

RA · Risk Assessment (preliminary): 55
Dangerous capability evaluations, thresholds, external testing, bug bounty, halt conditions.

RR · Regulatory Readiness (preliminary): 25
ISO 42001, EU AI Act compliance, GPAI obligations, international commitments, incident reporting.

EE · External Engagement (preliminary): 60
Survey participation, research support, transparency, behavior specs, open-source contributions.
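The headline score of 51.4 / 100 matches the unweighted mean of the five dimension scores above. A minimal sketch of that calculation, assuming equal weighting (the page does not confirm the framework's actual weighting scheme):

```python
# Dimension scores from the breakdown above.
dimensions = {
    "GM": 55,  # Governance Maturity
    "TS": 62,  # Technical Safety
    "RA": 55,  # Risk Assessment
    "RR": 25,  # Regulatory Readiness
    "EE": 60,  # External Engagement
}

# Assumption: the headline score is the unweighted mean of the dimensions.
overall = sum(dimensions.values()) / len(dimensions)
print(f"{overall:.1f} / 100")  # → 51.4 / 100
```

If the framework used non-uniform weights, the same structure would apply with a per-dimension weight map; the exact match here simply suggests equal weighting for preliminary scores.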

Social Impact & Safety Profile

Strong

Apollo Research specialises in detecting deceptive alignment and scheming in AI models, one of the most consequential safety challenges. Its evaluations are used by frontier labs to identify potentially deceptive model behaviours before deployment. No other company focuses specifically on AI deception detection.

deceptive alignment · scheming detection · frontier model evaluation

Want Apollo Research scored on the Mappera framework?

Subscribe to get notified when full safety scoring becomes available, or reach out to request a detailed brief.