OpenAI
AI research and deployment company. Developer of GPT-4, ChatGPT, and DALL-E.
Governance frameworks are formally established, but the assessment below finds that safety posture and risk management have weakened under commercial pressure.
Security Assessment
Security-relevant indicators for vendor evaluation
Dimension Breakdown
Social Impact & Safety Profile
Limited. OpenAI dissolved its Superalignment team and lost key safety researchers, including Jan Leike and Ilya Sutskever. The nonprofit-to-profit restructuring raised fundamental questions about governance accountability. The Preparedness Framework has been weakened in practice, and commercial pressures increasingly override safety commitments. System cards and usage policies exist but lack independent verification, and transparency has declined significantly.
OpenAI's governance controversies (board crisis, safety team departures) make it the most important case study in AI safety governance. Whether OpenAI's remaining safety structures are sufficient under extreme commercial pressure is the central question.
Civilizational Risk Awareness
The charter references catastrophic risk, but organisational behaviour has diverged significantly from that stated awareness; the gap between rhetoric and action is the widest in the frontier-lab category. The board crisis, safety-team departures, and the for-profit transition collectively demonstrate that risk awareness is not structurally embedded.
Responsible Scaling Policy
Preparedness Framework (2023): defines risk categories and evaluation criteria for frontier model deployment, with four risk levels (Low/Medium/High/Critical) tied to deployment thresholds.
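A minimal sketch of how these thresholds gate decisions, assuming the 2023 framework's published rule that only models scoring Medium or below post-mitigation may be deployed, and only High or below developed further; the category names and function names here are illustrative, not OpenAI's:

```python
from enum import IntEnum

class RiskLevel(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

def overall_risk(scores: dict[str, RiskLevel]) -> RiskLevel:
    # Overall risk is taken here as the maximum across tracked categories.
    return max(scores.values())

def may_deploy(post_mitigation: dict[str, RiskLevel]) -> bool:
    # Published 2023 rule: deployment requires a post-mitigation
    # score of Medium or below.
    return overall_risk(post_mitigation) <= RiskLevel.MEDIUM

def may_continue_development(post_mitigation: dict[str, RiskLevel]) -> bool:
    # Continued development is permitted only at High or below.
    return overall_risk(post_mitigation) <= RiskLevel.HIGH

# Illustrative tracked categories; the framework's actual list has
# changed across versions.
scores = {
    "cybersecurity": RiskLevel.MEDIUM,
    "cbrn": RiskLevel.HIGH,
    "model_autonomy": RiskLevel.LOW,
}
assert not may_deploy(scores)            # one High category blocks deployment
assert may_continue_development(scores)  # development may still proceed
```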
Framework exists on paper, but enforcement credibility has been severely undermined by senior safety-team departures, the dissolution of the Superalignment team, and governance instability. Downgraded from 'published' to 'informal' because a published policy without credible enforcement is functionally informal.
Mission Drift Protection
- ✓ Mission statement in charter (AGI that benefits all of humanity)
- ✓ Safety Advisory Group
- ✓ Preparedness Framework gates
- ○ Capped-profit structure being restructured; mission protection unclear in the new corporate form
- ○ Board crisis demonstrated governance failure under commercial pressure
- ○ No PBC status; the transition to for-profit removes structural mission protection
- ○ Multiple senior safety researchers have departed
- ○ No independent external safety board with binding authority
Vulnerability Disclosure
Public bug bounty programme via Bugcrowd. Covers traditional security vulnerabilities and some AI-specific issues (jailbreaks, safety bypasses).
Bug bounty exists, but its scope for AI-specific safety vulnerabilities is narrower than Anthropic's programme. Downgraded from 'public_bug_bounty' to 'external_programme' because AI-safety-specific vulnerability coverage is limited relative to the breadth of the traditional security bounty.
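Both downgrades in this assessment (here and under Responsible Scaling Policy) follow the same pattern: a nominal indicator level is adjusted downward when credibility evidence is weak. A hypothetical encoding of that rule; the dataclass and indicator identifiers are this sketch's own, and only the quoted level names come from the assessment text:

```python
from dataclasses import dataclass

@dataclass
class Downgrade:
    indicator: str
    nominal: str     # level the written evidence supports
    effective: str   # level after the credibility adjustment
    reason: str

# Hypothetical encoding of the two adjustments made in this assessment;
# the level names mirror the labels quoted in the text.
DOWNGRADES = [
    Downgrade(
        indicator="responsible_scaling_policy",
        nominal="published",
        effective="informal",
        reason="published policy without credible enforcement is functionally informal",
    ),
    Downgrade(
        indicator="vulnerability_disclosure",
        nominal="public_bug_bounty",
        effective="external_programme",
        reason="AI-safety-specific coverage is narrow relative to the traditional bounty",
    ),
]

def effective_level(indicator: str, nominal: str) -> str:
    # Apply any recorded downgrade; otherwise the nominal level stands.
    for d in DOWNGRADES:
        if d.indicator == indicator and d.nominal == nominal:
            return d.effective
    return nominal
```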
Safety Reporting
Reporting is tied to product releases rather than following a regular cadence. Publication frequency of safety research has decreased relative to 2022-2023. System cards are informative but fall short of comprehensive safety assessments, and there is no structured transparency report.
Dual-Use Risk
Dual-use mitigation structures exist but institutional commitment has been questioned. The gap between formal policy and organisational culture is the concern, not the absence of policies.