Stability AI

Open-source generative AI developer with significant governance gaps and financial instability.

HQ: 🇬🇧 GB · Est. 2019 · Size: 51–200 · EU AI Act: GPAI · stability.ai

Score: 22.0 / 100 · Evidence: 3 items · Confidence: low

Early-stage safety posture: basic practices exist, but significant gaps remain.

Weaknesses: Governance Maturity, Technical Safety, Risk Assessment, Regulatory Readiness, External Engagement

Focus Areas: generative AI · image generation · open source AI · Stable Diffusion

Strengths

No notable strengths identified

Risks

  • Governance score (18) - significant gap
  • Risk score (20) - significant gap
  • Regulatory score (20) - significant gap
  • Engagement score (22) - significant gap
  • Technical score (30) - significant gap
Security Assessment

Security-relevant indicators for vendor evaluation

Security Posture: 25

  • TS-01 (dim score 30) · Red Teaming & Pre-deployment Testing: adversarial testing before deployment
  • TS-05 (dim score 30) · Robustness & Adversarial Resilience: resistance to adversarial attacks
  • RA-01 (dim score 20) · Sector-Specific Risk Assessment: risk analysis for deployment context
  • RA-03 (dim score 20) · Dual-Use & Misuse Risk: dangerous capability awareness
  • RA-07 (dim score 20) · Incident History & Track Record: past incidents and response quality
  • EE-04 (dim score 22) · Vulnerability Disclosure Program: bug bounty or CVE reporting process
Incident History
Stability AI incident records sourced from the AIAAIC Repository and public reporting.
Integrations: AIAAIC, OECD AI Incidents Monitor

Third-Party Audits
External audit reports, SOC 2 attestations, and ISO certifications, verified where published.
Sources: Company filings, registry lookups

CVE & Disclosures
Known vulnerabilities and security advisories from NVD, GitHub Security Advisories, and vendor pages.
Sources: NVD, GHSA, vendor disclosure pages

Dimension Breakdown

GM · Governance Maturity (preliminary): 18
Published policies, corporate structure, safety mandate, whistleblowing, executive commitment.

TS · Technical Safety (preliminary): 30
Benchmarks, adversarial robustness, fine-tuning safety, watermarking, model cards, research output.

RA · Risk Assessment (preliminary): 20
Dangerous capability evaluations, thresholds, external testing, bug bounty, halt conditions.

RR · Regulatory Readiness (preliminary): 20
ISO 42001, EU AI Act compliance, GPAI obligations, international commitments, incident reporting.

EE · External Engagement (preliminary): 22
Survey participation, research support, transparency, behavior specs, open-source contributions.

Social Impact & Safety Profile

Limited

Stability AI develops Stable Diffusion and other open-source generative models. Governance has been turbulent — CEO departure, significant layoffs, and financial difficulties have undermined safety investment. CSAM and copyright concerns around training data remain partially addressed. Open-source release model creates dual-use challenges with limited post-release control.

open source AI · image generation safety · CSAM prevention

Peer Comparison

  • Cohere: C- (38) · Foundation Models
  • Mistral AI: C- (35) · Foundation Models
  • AI21 Labs: D+ (33) · Foundation Models
  • xAI: D (25) · Foundation Models

Data Sources & Methodology

Scoring methodology v0.1 · 40 indicators · 6 frameworks

Last assessment: 2026-03-23 · Confidence: low · Evidence: 3 items

NIST AI RMF · EU AI Act · ISO 42001 · FLI AI Safety Index · MLCommons AILuminate · METR

Scores reflect publicly available information. A low score may indicate limited transparency rather than poor safety practices.