
GovAI

Nonprofit · Preliminary

Centre for the Governance of AI - Oxford-based policy research organisation studying how AI should be governed. Its research shapes international AI policy.

HQ: UK
Est. 2018
governance.ai
Score: 47.0 / 100
Confidence: Preliminary

Developing safety practices - core foundations in place with room for improvement.

Strengths: Governance Maturity, Regulatory Readiness, External Engagement
Weaknesses: Technical Safety, Risk Assessment
Competitive positioning

Leading AI governance think tank with unique Oxford institutional backing. Competes with RAND and Brookings on AI policy research, but with a deeper technical focus.

Key risk

Academic pace may not match the speed of AI development, and policy-research influence is indirect and hard to measure.

Enterprise traction

Policy papers cited by governments. No commercial customers.

Sector: Government

Safety area: Governance Tooling

Enterprise business needs: Train the next generation

Security Assessment

Security-relevant indicators for vendor evaluation

Security Posture: 38 / 100

TS-01 (35) Red Teaming & Pre-deployment Testing: adversarial testing before deployment
TS-05 (35) Robustness & Adversarial Resilience: resistance to adversarial attacks
RA-01 (40) Sector-Specific Risk Assessment: risk analysis for deployment context
RA-03 (40) Dual-Use & Misuse Risk: dangerous capability awareness
RA-07 (40) Incident History & Track Record: past incidents and response quality
EE-04 (55) Vulnerability Disclosure Program: bug bounty or CVE reporting process
Incident History
GovAI incident records sourced from the AIAAIC Repository and public reporting.
Integration: AIAAIC, OECD AI Incidents Monitor

Third-Party Audits
External audit reports, SOC 2 attestations, and ISO certifications, verified where published.
Sources: company filings, registry lookups

CVE & Disclosures
Known vulnerabilities and security advisories from NVD, GitHub Security Advisories, and vendor pages.
Sources: NVD, GHSA, vendor disclosure pages

Dimension Breakdown

GM: Governance Maturity (preliminary), score 55
Published policies, corporate structure, safety mandate, whistleblowing, executive commitment.

TS: Technical Safety (preliminary), score 35
Benchmarks, adversarial robustness, fine-tuning safety, watermarking, model cards, research output.

RA: Risk Assessment (preliminary), score 40
Dangerous capability evaluations, thresholds, external testing, bug bounty, halt conditions.

RR: Regulatory Readiness (preliminary), score 50
ISO 42001, EU AI Act compliance, GPAI obligations, international commitments, incident reporting.

EE: External Engagement (preliminary), score 55
Survey participation, research support, transparency, behavior specs, open-source contributions.

Social Impact & Safety Profile

Strong

GovAI's research papers are cited by governments worldwide and directly influence AI policy development. Their work on compute governance, international AI agreements, and frontier model regulation shapes the global governance landscape.

policy research · compute governance · international regulation

Want GovAI scored on the Mappera framework?

Subscribe to get notified when full safety scoring becomes available, or reach out to request a detailed brief.