Ecosystem/Mindgard

Mindgard

SeedPreliminary

AI security testing platform that finds vulnerabilities in AI models through automated adversarial testing. Think penetration testing, but for AI models instead of networks.

HQUK
Est2022
Raised$8M
mindgard.ai
Score
43.8 / 100
Confidence
Preliminary

Developing safety practices - core foundations in place with room for improvement.

Strengths:Technical Safety
Weaknesses:Governance Maturity, Risk Assessment, Regulatory Readiness, External Engagement
Competitive positioning

UK-based competitor to US-heavy AI security field. Competes with Robust Intelligence (Cisco), CalypsoAI. Differentiates through UK/EU market focus.

Key risk

Cisco's acquisition of Robust Intelligence signals incumbent entry. Mindgard's $8M cannot compete with Cisco's distribution.

Enterprise traction

Early enterprise engagements in UK/EU.

technology
Safety area

Robustness & Adversarial

Enterprise business needs
Protect my AI in productionTest my AI before deployment

Security Assessment

Security-relevant indicators for vendor evaluation

Security Posture
53
TS-01dim: 58
Red Teaming & Pre-deployment Testing
Adversarial testing before deployment
TS-05dim: 58
Robustness & Adversarial Resilience
Resistance to adversarial attacks
RA-01dim: 48
Sector-Specific Risk Assessment
Risk analysis for deployment context
RA-03dim: 48
Dual-Use & Misuse Risk
Dangerous capability awareness
RA-07dim: 48
Incident History & Track Record
Past incidents and response quality
EE-04dim: 35
Vulnerability Disclosure Program
Bug bounty or CVE reporting process
Incident History
Mindgard incident records sourced from AIAAIC Repository and public reporting.
Integration: AIAAIC, OECD AI Incidents Monitor
Third-Party Audits
External audit reports, SOC 2 attestations, and ISO certifications verified where published.
Sources: Company filings, registry lookups
CVE & Disclosures
Known vulnerabilities and security advisories from NVD, GitHub Security Advisories, and vendor pages.
Sources: NVD, GHSA, vendor disclosure pages

Dimension Breakdown

GM
Governance Maturitypreliminary
Published policies, corporate structure, safety mandate, whistleblowing, executive commitment.
40
TS
Technical Safetypreliminary
Benchmarks, adversarial robustness, fine-tuning safety, watermarking, model cards, research output.
58
RA
Risk Assessmentpreliminary
Dangerous capability evaluations, thresholds, external testing, bug bounty, halt conditions.
48
RR
Regulatory Readinesspreliminary
ISO 42001, EU AI Act compliance, GPAI obligations, international commitments, incident reporting.
38
EE
External Engagementpreliminary
Survey participation, research support, transparency, behavior specs, open-source contributions.
35

Social Impact & Safety Profile

Emerging

Mindgard provides automated adversarial testing for AI systems, continuously probing models for vulnerabilities including prompt injection, jailbreaks, and data extraction. Academic spin-out from Lancaster University bringing rigorous ML security research to commercial testing tools.

ai red teamingadversarial testingcontinuous security

Want Mindgard scored on the Mappera framework?

Subscribe to get notified when full safety scoring becomes available, or reach out to request a detailed brief.