Safe Superintelligence
Series A · Preliminary
Pure safety research lab led by Ilya Sutskever, co-founder of OpenAI. Raised $3B at a $32B valuation with no product, no revenue, and no timeline. Betting everything on solving superintelligence alignment before building products.
Early-stage safety posture: basic practices exist but significant gaps remain.
Most heavily funded pure safety research org in history. No commercial competitors at this scale of pure research. Closest comparison: early DeepMind, before its acquisition by Google.
No path to revenue. The $3B raised is being spent on research with no product timeline. If alignment proves harder than expected, investors face a total loss.
No customers. No product. Research only.
Alignment Research
Security Assessment
Security-relevant indicators for vendor evaluation
Dimension Breakdown
Social Impact & Safety Profile
Emerging
Safe Superintelligence, led by Ilya Sutskever (OpenAI co-founder), has raised $3B at a $32B valuation with the sole mission of solving superintelligence alignment. Despite the safety-focused mission, no product, safety policies, or social impact framework have been published. The bet rests entirely on future research outcomes.
Recent Signals
Grants, funding rounds, policy updates, and market events linked to Safe Superintelligence.
Pure safety research lab raises $1B at $5B valuation. Backed by a16z, Sequoia, DST Global, and others.