Imper.ai
AI agent security platform protecting autonomous AI systems from manipulation and unauthorized actions.
Score: 44.0 / 100
Evidence: 2 items
Confidence: low
Developing safety practices - core foundations in place with room for improvement.
Strengths: Technical Safety, External Engagement
Weaknesses: Governance Maturity, Risk Assessment, Regulatory Readiness
Focus Areas
AI security · agent security · LLM protection · runtime defense
Safety Profile
Strengths
No notable strengths identified
Risks
- Low evidence coverage (2 items)
- Regulatory readiness requires attention
Security Assessment
Security-relevant indicators for vendor evaluation
Security Posture: 46
- TS-01 (score 50) Red Teaming & Pre-deployment Testing: adversarial testing before deployment
- TS-05 (score 50) Robustness & Adversarial Resilience: resistance to adversarial attacks
- RA-01 (score 42) Sector-Specific Risk Assessment: risk analysis for deployment context
- RA-03 (score 42) Dual-Use & Misuse Risk: dangerous capability awareness
- RA-07 (score 42) Incident History & Track Record: past incidents and response quality
- EE-04 (score 50) Vulnerability Disclosure Program: bug bounty or CVE reporting process
Incident History
Imper.ai incident records sourced from AIAAIC Repository and public reporting.
Integration: AIAAIC, OECD AI Incidents Monitor
Third-Party Audits
External audit reports, SOC 2 attestations, and ISO certifications verified where published.
Sources: Company filings, registry lookups
CVE & Disclosures
Known vulnerabilities and security advisories from NVD, GitHub Security Advisories, and vendor pages.
Sources: NVD, GHSA, vendor disclosure pages
Dimension Breakdown
- GM: Governance Maturity (preliminary). Published policies, corporate structure, safety mandate, whistleblowing, executive commitment.
- TS: Technical Safety (preliminary). Benchmarks, adversarial robustness, fine-tuning safety, watermarking, model cards, research output.
- RA: Risk Assessment (preliminary). Dangerous capability evaluations, thresholds, external testing, bug bounty, halt conditions.
- RR: Regulatory Readiness (preliminary). ISO 42001, EU AI Act compliance, GPAI obligations, international commitments, incident reporting.
- EE: External Engagement (preliminary). Survey participation, research support, transparency, behavior specs, open-source contributions.
Peer Comparison
Data Sources & Methodology
Scoring methodology v0.1 · 40 indicators · 6 frameworks
Last assessment: 2026-03-23 · Confidence: low · Evidence: 2 items
NIST AI RMF · EU AI Act · ISO 42001 · FLI AI Safety Index · MLCommons AILuminate · METR
Scores reflect publicly available information. A low score may indicate limited transparency rather than poor safety practices.