
Center for AI Safety

Nonprofit · Preliminary

Reduces societal-scale risks from AI through research and field-building. Created the widely used WMDP benchmark and organized the 2023 open statement on AI risk signed by hundreds of researchers.

HQ: US
Est: 2022
safe.ai
Score: 50.2 / 100
Confidence: Preliminary

Strong safety posture with established governance frameworks and active risk management.

Strengths: Governance Maturity, Risk Assessment, External Engagement
Weaknesses: Technical Safety, Regulatory Readiness
Competitive positioning

Among the most influential AI safety orgs globally. Unique ability to coordinate the field (open letters, benchmarks, community building).

Key risk

Influence-dependent model: policy influence does not generate revenue, leaving the organization reliant on continued donor interest.

Enterprise traction

High policy influence. No commercial customers.

Government
Safety area: Field Building
Enterprise business needs: Train the next generation

Security Assessment

Security-relevant indicators for vendor evaluation

Security Posture: 50

TS-01 (dim: 45) Red Teaming & Pre-deployment Testing: adversarial testing before deployment
TS-05 (dim: 45) Robustness & Adversarial Resilience: resistance to adversarial attacks
RA-01 (dim: 55) Sector-Specific Risk Assessment: risk analysis for deployment context
RA-03 (dim: 55) Dual-Use & Misuse Risk: dangerous capability awareness
RA-07 (dim: 55) Incident History & Track Record: past incidents and response quality
EE-04 (dim: 68) Vulnerability Disclosure Program: bug bounty or CVE reporting process
Incident History
Center for AI Safety incident records sourced from the AIAAIC Repository and public reporting.
Integration: AIAAIC, OECD AI Incidents Monitor
Third-Party Audits
External audit reports, SOC 2 attestations, and ISO certifications verified where published.
Sources: Company filings, registry lookups
CVE & Disclosures
Known vulnerabilities and security advisories from NVD, GitHub Security Advisories, and vendor pages.
Sources: NVD, GHSA, vendor disclosure pages

Dimension Breakdown

GM: Governance Maturity (preliminary), 58
Published policies, corporate structure, safety mandate, whistleblowing, executive commitment.

TS: Technical Safety (preliminary), 45
Benchmarks, adversarial robustness, fine-tuning safety, watermarking, model cards, research output.

RA: Risk Assessment (preliminary), 55
Dangerous capability evaluations, thresholds, external testing, bug bounty, halt conditions.

RR: Regulatory Readiness (preliminary), 25
ISO 42001, EU AI Act compliance, GPAI obligations, international commitments, incident reporting.

EE: External Engagement (preliminary), 68
Survey participation, research support, transparency, behavior specs, open-source contributions.
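The headline score of 50.2 / 100 is consistent with a simple unweighted mean of the five dimension scores listed above. A minimal sketch of that check, assuming equal weighting (the page does not publish Mappera's actual weights):

```python
# Dimension scores from the breakdown above (GM, TS, RA, RR, EE)
scores = {"GM": 58, "TS": 45, "RA": 55, "RR": 25, "EE": 68}

# Unweighted mean -- an assumption; the framework's real weights are not stated here
overall = round(sum(scores.values()) / len(scores), 1)
print(overall)  # 50.2, matching the headline score
```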

Social Impact & Safety Profile

Strong

CAIS reduces societal-scale risks from AI through research and field-building. It created the widely adopted WMDP benchmark and organized the 2023 open statement on AI risk signed by hundreds of researchers. High policy influence and the ability to coordinate the AI safety field give it outsized impact.

field building, benchmark creation, policy coordination, risk communication

Want Center for AI Safety scored on the Mappera framework?

Subscribe to get notified when full safety scoring becomes available, or reach out to request a detailed brief.