
Softmax

Data gathering in progress

AI alignment research lab developing organic alignment - AI systems that learn to collaborate with humans through coordination principles inspired by biological systems.

HQ: US
Est.: 2024
Size: 1-10
EU AI Act: Limited Risk
Website: softmax.com
Score: 41.3 / 100
Evidence: 4 items

Developing safety practices - core foundations in place with room for improvement.

Strengths: Governance Maturity, External Engagement
Weaknesses: Technical Safety, Risk Assessment, Regulatory Readiness
Focus Areas
alignment research, AI safety, multi-agent RL, cooperation

Security Assessment

Security-relevant indicators for vendor evaluation

Security Posture: 43

TS-01 (dim: 45) - Red Teaming & Pre-deployment Testing: adversarial testing before deployment
TS-05 (dim: 45) - Robustness & Adversarial Resilience: resistance to adversarial attacks
RA-01 (dim: 40) - Sector-Specific Risk Assessment: risk analysis for deployment context
RA-03 (dim: 40) - Dual-Use & Misuse Risk: dangerous capability awareness
RA-07 (dim: 40) - Incident History & Track Record: past incidents and response quality
EE-04 (dim: 55) - Vulnerability Disclosure Program: bug bounty or CVE reporting process
Incident History
Softmax incident records sourced from AIAAIC Repository and public reporting.
Integration: AIAAIC, OECD AI Incidents Monitor
Third-Party Audits
External audit reports, SOC 2 attestations, and ISO certifications verified where published.
Sources: Company filings, registry lookups
CVE & Disclosures
Known vulnerabilities and security advisories from NVD, GitHub Security Advisories, and vendor pages.
Sources: NVD, GHSA, vendor disclosure pages
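The CVE sources named above can be checked programmatically. A minimal sketch of building a keyword query against NVD's public CVE 2.0 REST API follows; the endpoint is NVD's published one, but the keyword "softmax" and the helper name `nvd_query_url` are illustrative assumptions, not part of this scorecard's methodology.

```python
from urllib.parse import urlencode

# Public NVD CVE 2.0 REST endpoint (see NVD API documentation).
NVD_CVE_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_query_url(keyword: str, results_per_page: int = 20) -> str:
    """Build a keyword-search query URL for NVD's CVE 2.0 API.

    The caller can fetch the URL with any HTTP client; results come
    back as JSON with a `vulnerabilities` array.
    """
    params = {"keywordSearch": keyword, "resultsPerPage": results_per_page}
    return f"{NVD_CVE_API}?{urlencode(params)}"

# Hypothetical usage: fetch advisories mentioning the vendor name.
url = nvd_query_url("softmax")
```

A real integration would also rate-limit requests and cross-check GHSA, per the sources listed above.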

Dimension Breakdown

GM - Governance Maturity (medium)
Published policies, corporate structure, safety mandate, whistleblowing, executive commitment.
Score: 50 - 1 evidence item (GM-01)

TS - Technical Safety (medium)
Benchmarks, adversarial robustness, fine-tuning safety, watermarking, model cards, research output.
Score: 45 - 1 evidence item (TS-01)

RA - Risk Assessment (low)
Dangerous capability evaluations, thresholds, external testing, bug bounty, halt conditions.
Score: 40 - 1 evidence item (RA-01)

RR - Regulatory Readiness (low)
ISO 42001, EU AI Act compliance, GPAI obligations, international commitments, incident reporting.
Score: 20 - no evidence items

EE - External Engagement (medium)
Survey participation, research support, transparency, behavior specs, open-source contributions.
Score: 55 - 1 evidence item (EE-01)
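The overall score of 41.3 does not equal the simple mean of the five dimension scores (which is 42.0), so some weighting is presumably applied. A minimal sketch of how such an aggregate could be computed is below; the equal weights shown are an illustrative assumption, not the scorecard's published methodology.

```python
# Dimension scores as listed in the breakdown above.
DIMENSION_SCORES = {
    "GM": 50,  # Governance Maturity
    "TS": 45,  # Technical Safety
    "RA": 40,  # Risk Assessment
    "RR": 20,  # Regulatory Readiness
    "EE": 55,  # External Engagement
}

# Assumed equal weighting for illustration only; the published 41.3
# implies a different, unpublished weighting scheme.
WEIGHTS = {dim: 1.0 for dim in DIMENSION_SCORES}

def overall_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted mean of dimension scores, rounded to one decimal place."""
    total_weight = sum(weights[d] for d in scores)
    weighted_sum = sum(scores[d] * weights[d] for d in scores)
    return round(weighted_sum / total_weight, 1)

print(overall_score(DIMENSION_SCORES, WEIGHTS))  # → 42.0 under equal weights
```

With equal weights this yields 42.0, not 41.3, which is consistent with the scorecard down-weighting or up-weighting some dimensions.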

Social Impact & Safety Profile

Emerging

Softmax focuses on alignment research, which addresses the fundamental question of ensuring AI systems act in accordance with human values. This is inherently a social impact endeavour, but as an early-stage research organisation, formal policies and measurable commitments are aspirational rather than documented.

alignment research, value alignment
Why it matters for safety

Scalable oversight is one of the core unsolved problems in AI alignment. If oversight techniques cannot scale with model capabilities, safety guarantees degrade as models become more powerful.

Civilizational Risk Awareness

2/3

Research focus on scalable oversight demonstrates implicit awareness that AI systems could become dangerously misaligned at scale. The work is motivated by the alignment problem itself.

Responsible Scaling Policy

None

No RSP. Research-stage company. Not a model developer. The research output could inform RSP design for frontier labs.

Mission Drift Protection

1/3
  • Alignment-focused research mission
  • Halcyon portfolio alignment
  • No PBC status
  • No structural governance mechanisms
  • Research companies face pressure to commercialise, potentially shifting from alignment to capability

Vulnerability Disclosure

None

No CVD programme. Research-stage company.

Safety Reporting

None

No structured safety reporting. Research-stage.

Dual-Use Risk

Not applicable - this company does not develop dual-use AI systems.

Need a detailed report for Softmax?

Subscribe to express interest in indicator-level evidence, peer benchmarking, and regulatory gap analysis - or reach out to request a full company overview brief.