All About the Data
AI Safety scoring.
AI Security assessment.
AI Signals for investment evaluation.
Every number is source-attributed and independently verifiable.
Independent, evidence-based assessment
Most AI governance assessments rely on company cooperation: questionnaires, self-reported data, voluntary disclosures. This creates two problems. First, it introduces systematic bias - companies control their own narrative. Second, it limits coverage to organizations willing to participate, leaving the most opaque actors unexamined.
Mappera operates differently. Every indicator is scored using publicly available, independently verifiable information. Our evidence base includes published safety policies, GitHub repositories, benchmark results and evaluation scores, regulatory filings, patent applications, job listings, academic publications, and press coverage. The methodology synthesizes requirements from six recognized governance frameworks - NIST AI RMF, EU AI Act, ISO 42001, FLI AI Safety Index, MLCommons AILuminate, and METR - into a single, comparable score. Each of the 40 indicators maps to at least one framework, ensuring that the rubric reflects established, peer-reviewed governance standards rather than arbitrary criteria or vendor-specific benchmarks.
Scoring is calibrated specifically for the AI governance ecosystem. A Series A startup with basic governance practices scores 2-3 on most indicators, not zero. This enables meaningful differentiation across the startup and scale-up landscape, where absolute scoring would compress most companies into the bottom quartile. The assessment covers five dimensions (Governance Maturity, Technical Safety, Risk Assessment, Regulatory Readiness, and External Engagement), and every score comes with a confidence level. High confidence means evidence exists for 80% or more of the assessed indicators; medium covers 50-79%; anything below 50% is low. It is important to note that a low score may reflect limited public transparency rather than poor safety practices. Mappera acknowledges this distinction, since transparency is itself, to an extent, a safety practice.
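The confidence-level rule above is simple enough to state as code. A minimal sketch in Python, using the thresholds given in the methodology (function and parameter names are illustrative, not part of the published spec):

```python
def confidence_level(evidence_found: int, indicators_assessed: int) -> str:
    """Classify score confidence by evidence coverage.

    Thresholds follow the stated rule: >=80% coverage is high,
    50-79% is medium, and anything below 50% is low.
    """
    coverage = evidence_found / indicators_assessed
    if coverage >= 0.80:
        return "high"
    if coverage >= 0.50:
        return "medium"
    return "low"


# Evidence for 34 of 40 assessed indicators is 85% coverage.
print(confidence_level(34, 40))  # → "high"
```

Note that a boundary case like 20 of 40 (exactly 50%) lands in medium, consistent with the 50-79% band being inclusive of its lower edge.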
Five Dimensions, 40 Indicators
Every organization is scored across 40 indicators organized into five dimensions. The same framework applies to AI safety and AI security assessments, ensuring a single, comparable standard across the ecosystem.
Framework Crosswalk
Every indicator traces to at least one recognized standard. Counts reflect how many of the 40 indicators reference each framework.
Totals exceed 40 because individual indicators map to multiple frameworks (e.g. TS-03 maps to NIST AI RMF, EU AI Act, MLCommons AILuminate, and ISO 42001).
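The crosswalk counting described above can be sketched in a few lines. In this illustration, only the TS-03 mapping comes from the methodology text; the other indicator IDs and their framework lists are hypothetical, included just to show why per-framework totals exceed the indicator count:

```python
from collections import Counter

# Indicator -> frameworks it references. TS-03's mapping is stated in the
# methodology; GM-01 and RR-02 are hypothetical examples for illustration.
indicator_frameworks = {
    "TS-03": ["NIST AI RMF", "EU AI Act", "MLCommons AILuminate", "ISO 42001"],
    "GM-01": ["NIST AI RMF", "ISO 42001"],
    "RR-02": ["EU AI Act"],
}

# Count how many indicators reference each framework.
counts = Counter(
    framework
    for frameworks in indicator_frameworks.values()
    for framework in frameworks
)

# Summing per-framework counts exceeds the number of indicators (here 3)
# whenever any indicator maps to more than one framework.
total_references = sum(counts.values())  # 7 references across 3 indicators
```

The same double-counting explains why the published crosswalk totals exceed 40 even though there are only 40 indicators.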
Grading Scale
Scores (0-100) map to letter grades calibrated for AI safety maturity. Even industry leaders score 70-75, so thresholds are set lower than in traditional academic grading.
Mappera Methodology v0.1 - 40 indicators across 5 dimensions, investment attractiveness scoring.
Calibrated for startups, scale-ups, and deployers in high-risk sectors.