MIRI

A pioneer of the AI alignment research field, with decades of foundational contributions.

HQ: 🇺🇸 US
Established: 2000
Size: 11-50
EU AI Act: Limited
intelligence.org
Score: 47.0 / 100
Evidence: 2 items
Confidence: low

Developing safety practices - core foundations in place with room for improvement.

Strengths: Governance Maturity, Technical Safety, Risk Assessment, External Engagement
Weaknesses: Regulatory Readiness
Focus Areas
alignment research · agent foundations · mathematical alignment

Strengths

  • Strong social impact assessment

Risks

  • Regulatory score (15) - significant gap
  • Low evidence coverage (2 items)
  • Uneven profile - Regulatory Readiness (15) lags Risk Assessment (60) by 45 points

Security Assessment

Security-relevant indicators for vendor evaluation

Security Posture: 58

  • TS-01 (dim: 55) · Red Teaming & Pre-deployment Testing: adversarial testing before deployment
  • TS-05 (dim: 55) · Robustness & Adversarial Resilience: resistance to adversarial attacks
  • RA-01 (dim: 60) · Sector-Specific Risk Assessment: risk analysis for deployment context
  • RA-03 (dim: 60) · Dual-Use & Misuse Risk: dangerous capability awareness
  • RA-07 (dim: 60) · Incident History & Track Record: past incidents and response quality
  • EE-04 (dim: 55) · Vulnerability Disclosure Program: bug bounty or CVE reporting process
Incident History
MIRI incident records sourced from the AIAAIC Repository and public reporting.
Integration: AIAAIC, OECD AI Incidents Monitor

Third-Party Audits
External audit reports, SOC 2 attestations, and ISO certifications, verified where published.
Sources: Company filings, registry lookups

CVE & Disclosures
Known vulnerabilities and security advisories from the NVD, GitHub Security Advisories, and vendor pages.
Sources: NVD, GHSA, vendor disclosure pages

Dimension Breakdown

GM · Governance Maturity (preliminary): 50
Published policies, corporate structure, safety mandate, whistleblowing, executive commitment.

TS · Technical Safety (preliminary): 55
Benchmarks, adversarial robustness, fine-tuning safety, watermarking, model cards, research output.

RA · Risk Assessment (preliminary): 60
Dangerous capability evaluations, thresholds, external testing, bug bounty, halt conditions.

RR · Regulatory Readiness (preliminary): 15
ISO 42001, EU AI Act compliance, GPAI obligations, international commitments, incident reporting.

EE · External Engagement (preliminary): 55
Survey participation, research support, transparency, behavior specs, open-source contributions.
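The page does not show how the overall 47.0 / 100 is derived from the five dimension scores, but it happens to equal their unweighted mean. A minimal sketch, assuming unweighted averaging (an assumption on our part, not the published v0.1 methodology, which aggregates 40 underlying indicators):

```python
# Dimension scores copied from the breakdown above.
dimension_scores = {
    "GM": 50,  # Governance Maturity
    "TS": 55,  # Technical Safety
    "RA": 60,  # Risk Assessment
    "RR": 15,  # Regulatory Readiness
    "EE": 55,  # External Engagement
}

# Unweighted mean: (50 + 55 + 60 + 15 + 55) / 5 = 47.0, matching the
# published overall score, though the real indicator-level weighting
# is not published and may differ.
overall = sum(dimension_scores.values()) / len(dimension_scores)

# The "uneven profile" risk flag: Regulatory Readiness trails
# Risk Assessment by 45 points.
gap = dimension_scores["RA"] - dimension_scores["RR"]

print(overall, gap)  # 47.0 45
```

The large spread explains why a middling overall score coexists with a "significant gap" risk flag: a single weak dimension (Regulatory Readiness, 15) drags down an otherwise 50-60 profile.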

Social Impact & Safety Profile

Rating: Strong

MIRI (Machine Intelligence Research Institute) pioneered the field of AI alignment research, establishing many of the foundational concepts now used across the safety community. Their work on agent foundations and mathematical alignment theory has been historically influential in shaping how the field thinks about AI risk.

agent foundations · mathematical alignment · foundational research

Peer Comparison

  • Redwood Research: B (55) · Alignment Research
  • Softmax: C (41.3) · Alignment Research
  • Conjecture: C (40) · Alignment Research
  • Fathom: D (25) · Alignment Research

Data Sources & Methodology

Scoring methodology v0.1 · 40 indicators · 6 frameworks

Last assessment: 2026-03-23 · Confidence: low · Evidence: 2 items

NIST AI RMF · EU AI Act · ISO 42001 · FLI AI Safety Index · MLCommons AILuminate · METR

Scores reflect publicly available information. A low score may indicate limited transparency rather than poor safety practices.