
Safe Superintelligence

Series A · Preliminary

Pure safety research lab led by Ilya Sutskever, co-founder of OpenAI. Raised $3B at a $32B valuation with no product, no revenue, and no stated timeline. Betting everything on solving superintelligence alignment before building products.

HQ: US
Est: 2024
Raised: $3.0B
ssi.inc
Score: 29.0 / 100
Confidence: Preliminary

Early-stage safety posture: basic practices exist, but significant gaps remain.

Weaknesses: Governance Maturity, Technical Safety, Risk Assessment, Regulatory Readiness, External Engagement
Competitive positioning

The most heavily funded pure safety research organization in history. No commercial competitor operates at this scale of pure research. The closest comparison is early DeepMind, before the Google acquisition.

Key risk

No path to revenue. The $3B raised is being burned on research with no product timeline. If alignment proves harder than expected, investors face a total loss.

Enterprise traction

No customers. No product. Research only.

Safety area: Alignment Research

Enterprise business need: Make AI fundamentally safer

Security Assessment

Security-relevant indicators for vendor evaluation

Security Posture: 35

TS-01 (dim: 40) - Red Teaming & Pre-deployment Testing: adversarial testing before deployment
TS-05 (dim: 40) - Robustness & Adversarial Resilience: resistance to adversarial attacks
RA-01 (dim: 30) - Sector-Specific Risk Assessment: risk analysis for deployment context
RA-03 (dim: 30) - Dual-Use & Misuse Risk: dangerous capability awareness
RA-07 (dim: 30) - Incident History & Track Record: past incidents and response quality
EE-04 (dim: 25) - Vulnerability Disclosure Program: bug bounty or CVE reporting process
Incident History
Safe Superintelligence incident records sourced from the AIAAIC Repository and public reporting.
Integration: AIAAIC, OECD AI Incidents Monitor
Third-Party Audits
External audit reports, SOC 2 attestations, and ISO certifications verified where published.
Sources: Company filings, registry lookups
CVE & Disclosures
Known vulnerabilities and security advisories from NVD, GitHub Security Advisories, and vendor pages.
Sources: NVD, GHSA, vendor disclosure pages
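
To make the CVE & Disclosures card concrete, the sketch below shows the kind of lookup such an integration performs against NVD's public CVE API v2.0 (the endpoint and the keywordSearch/resultsPerPage parameters are NVD's documented public interface). This is an illustrative assumption, not Mappera's actual pipeline.

import requests

# Illustrative lookup against NVD's public CVE API (v2.0). This is not
# Mappera's actual integration; it shows the kind of query a
# "CVE & Disclosures" card would be built on.
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cve_lookup(vendor_keyword: str, limit: int = 20) -> list[str]:
    resp = requests.get(
        NVD_API,
        params={"keywordSearch": vendor_keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    # The v2.0 schema nests each record under "cve"; collect the IDs.
    return [item["cve"]["id"] for item in resp.json().get("vulnerabilities", [])]

# For a research-only lab with no shipped product, this is expected to
# return an empty list.
print(cve_lookup("Safe Superintelligence"))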

Dimension Breakdown

GM - Governance Maturity (preliminary): 35
Published policies, corporate structure, safety mandate, whistleblowing, executive commitment.

TS - Technical Safety (preliminary): 40
Benchmarks, adversarial robustness, fine-tuning safety, watermarking, model cards, research output.

RA - Risk Assessment (preliminary): 30
Dangerous capability evaluations, thresholds, external testing, bug bounty, halt conditions.

RR - Regulatory Readiness (preliminary): 15
ISO 42001, EU AI Act compliance, GPAI obligations, international commitments, incident reporting.

EE - External Engagement (preliminary): 25
Survey participation, research support, transparency, behavior specs, open-source contributions.
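
The published overall score is consistent with an unweighted mean of the five dimension scores: (35 + 40 + 30 + 15 + 25) / 5 = 29.0. A minimal sketch of that aggregation, assuming a simple mean (Mappera's actual weighting is not published):

# Hypothetical aggregation: unweighted mean of the five dimension scores.
# Mappera's real weighting is unpublished; this merely reproduces the
# displayed 29.0 from the displayed per-dimension scores.
dimension_scores = {
    "GM": 35,  # Governance Maturity
    "TS": 40,  # Technical Safety
    "RA": 30,  # Risk Assessment
    "RR": 15,  # Regulatory Readiness
    "EE": 25,  # External Engagement
}
overall = sum(dimension_scores.values()) / len(dimension_scores)
print(f"{overall:.1f} / 100")  # 29.0 / 100

Note that the same simple mean does not reproduce the Security Posture value above: the six security indicators average 32.5, not 35, so indicator-level aggregation presumably uses different weights.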

Social Impact & Safety Profile

Emerging

Safe Superintelligence, led by Ilya Sutskever (OpenAI co-founder), has raised $3B at a $32B valuation with the sole mission of solving superintelligence alignment. Despite the safety-focused mission, the company has shipped no product and published no safety policies or social impact framework. The bet rests entirely on future research outcomes.


Recent Signals


Grants, funding rounds, policy updates, and market events linked to Safe Superintelligence.

Funding · 4 Sep 2024
Safe Superintelligence raises $1B

Pure safety research lab raises $1B at a $5B valuation. Backed by a16z, Sequoia, DST Global, and others.

alignment · $1.0B
