
EleutherAI

Nonprofit · Preliminary

Open-source AI research collective that builds interpretability tools, evaluation frameworks, and open models. Created the widely-used Language Model Evaluation Harness.

HQ: US
Est: 2020
Website: eleuther.ai
Score: 38.6 / 100
Confidence: Preliminary

Developing safety practices: core foundations are in place, with room for improvement.

Strengths: External Engagement
Weaknesses: Governance Maturity, Technical Safety, Risk Assessment, Regulatory Readiness
Competitive positioning

The Linux Foundation of AI interpretability tools: a uniquely positioned open-source collective whose tooling is used by labs and researchers globally.

Key risk

Volunteer-driven model creates sustainability challenges. Risk of contributor burnout without institutional support.

Enterprise traction

Tools adopted across the research community; no commercial revenue.

Safety area

Interpretability

Enterprise business needs
Understand what my AI is doing · Train the next generation

Security Assessment

Security-relevant indicators for vendor evaluation

Security Posture: 42

TS-01 (score 48): Red Teaming & Pre-deployment Testing
Adversarial testing before deployment.
TS-05 (score 48): Robustness & Adversarial Resilience
Resistance to adversarial attacks.
RA-01 (score 35): Sector-Specific Risk Assessment
Risk analysis for deployment context.
RA-03 (score 35): Dual-Use & Misuse Risk
Dangerous capability awareness.
RA-07 (score 35): Incident History & Track Record
Past incidents and response quality.
EE-04 (score 60): Vulnerability Disclosure Program
Bug bounty or CVE reporting process.
Incident History
EleutherAI incident records sourced from AIAAIC Repository and public reporting.
Integration: AIAAIC, OECD AI Incidents Monitor
Third-Party Audits
External audit reports, SOC 2 attestations, and ISO certifications verified where published.
Sources: Company filings, registry lookups
CVE & Disclosures
Known vulnerabilities and security advisories from NVD, GitHub Security Advisories, and vendor pages.
Sources: NVD, GHSA, vendor disclosure pages

Dimension Breakdown

GM: Governance Maturity (preliminary), score 35
Published policies, corporate structure, safety mandate, whistleblowing, executive commitment.
TS: Technical Safety (preliminary), score 48
Benchmarks, adversarial robustness, fine-tuning safety, watermarking, model cards, research output.
RA: Risk Assessment (preliminary), score 35
Dangerous capability evaluations, thresholds, external testing, bug bounty, halt conditions.
RR: Regulatory Readiness (preliminary), score 15
ISO 42001, EU AI Act compliance, GPAI obligations, international commitments, incident reporting.
EE: External Engagement (preliminary), score 60
Survey participation, research support, transparency, behavior specs, open-source contributions.

Social Impact & Safety Profile

Moderate

EleutherAI is the leading open-source provider of AI interpretability and evaluation tools. Their tools are widely used in academic research and by safety-focused organisations. As a nonprofit, their work democratises access to safety-critical tooling.

Tags: open source, interpretability tools, evaluation
