Softmax
AI alignment research lab developing organic alignment - AI systems that learn to collaborate with humans through coordination principles inspired by biological systems.
Developing safety practices - core foundations in place with room for improvement.
Security Assessment
Security-relevant indicators for vendor evaluation
Dimension Breakdown
Social Impact & Safety Profile
Emerging
Softmax focuses on alignment research, which addresses the fundamental question of ensuring AI systems act in accordance with human values. This is inherently a social impact endeavour, but as an early-stage research organisation, formal policies and measurable commitments are aspirational rather than documented.
Scalable oversight is one of the core unsolved problems in AI alignment. If oversight techniques cannot scale with model capabilities, safety guarantees degrade as models become more powerful.
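The degradation dynamic described above can be illustrated with a toy model. This sketch is purely illustrative and not drawn from Softmax's research: it assumes a fixed-capacity overseer whose chance of catching a flawed output falls off sigmoidally as task complexity outgrows its capacity.

```python
import math

def detection_prob(overseer_capacity: float, task_complexity: float) -> float:
    """Toy model (illustrative assumption): probability that a
    fixed-capacity overseer catches a flaw decays sigmoidally as
    task complexity exceeds the overseer's capacity."""
    return 1.0 / (1.0 + math.exp(task_complexity - overseer_capacity))

overseer = 5.0  # fixed oversight capability (arbitrary units)
for complexity in [2.0, 5.0, 8.0]:
    p = detection_prob(overseer, complexity)
    print(f"complexity={complexity}: P(catch flaw)={p:.2f}")
    # prints 0.95, 0.50, 0.05 respectively: as model capability
    # outpaces static oversight, safety guarantees erode
```

The point of the sketch is only the shape of the curve: without oversight techniques that scale alongside model capabilities, the probability of catching misaligned behaviour trends toward zero.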
Civilizational Risk Awareness
Research focus on scalable oversight demonstrates implicit awareness that AI systems could become dangerously misaligned at scale. The work is motivated by the alignment problem itself.
Responsible Scaling Policy
No RSP. Research-stage company. Not a model developer. The research output could inform RSP design for frontier labs.
Mission Drift Protection
- ✓ Alignment-focused research mission
- ✓ Halcyon portfolio alignment
- ○ No PBC status
- ○ No structural governance mechanisms
- ○ Research companies face commercial pressure that can shift focus from alignment research to capabilities work
Vulnerability Disclosure
No CVD programme. Research-stage company.
Safety Reporting
No structured safety reporting. Research-stage.
Dual-Use Risk
Not applicable - this company does not develop dual-use AI systems.