
Musubi

Data gathering in progress

AI content moderation and fraud detection platform with human-in-the-loop oversight for social platforms and marketplaces.

HQ: US
Est.: 2023
Size: 11-50
EU AI Act: Limited Risk
musubilabs.ai
Score: 48.5 / 100
Evidence: 5 items

Moderate safety posture with developing governance frameworks and active risk management.

Strengths: Governance Maturity, Regulatory Readiness
Weaknesses: Technical Safety, Risk Assessment, External Engagement
Focus Areas
content moderation, trust and safety, fraud detection, compliance

Security Assessment

Security-relevant indicators for vendor evaluation

Security Posture: 47
  • TS-01 (dim: 48) Red Teaming & Pre-deployment Testing: adversarial testing before deployment
  • TS-05 (dim: 48) Robustness & Adversarial Resilience: resistance to adversarial attacks
  • RA-01 (dim: 45) Sector-Specific Risk Assessment: risk analysis for deployment context
  • RA-03 (dim: 45) Dual-Use & Misuse Risk: dangerous capability awareness
  • RA-07 (dim: 45) Incident History & Track Record: past incidents and response quality
  • EE-04 (dim: 35) Vulnerability Disclosure Program: bug bounty or CVE reporting process
Incident History
Musubi incident records sourced from AIAAIC Repository and public reporting.
Integration: AIAAIC, OECD AI Incidents Monitor
Third-Party Audits
External audit reports, SOC 2 attestations, and ISO certifications verified where published.
Sources: Company filings, registry lookups
CVE & Disclosures
Known vulnerabilities and security advisories from NVD, GitHub Security Advisories, and vendor pages.
Sources: NVD, GHSA, vendor disclosure pages
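The CVE lookup described above can be reproduced against NIST's public NVD CVE API 2.0 (the endpoint and `keywordSearch` parameter are part of NVD's published API; the vendor keyword here is purely illustrative):

```python
from urllib.parse import urlencode

# NVD CVE API 2.0 base endpoint (published by NIST).
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def build_nvd_query(keyword: str, results_per_page: int = 20) -> str:
    """Build an NVD keyword-search URL for a vendor or product name."""
    params = {"keywordSearch": keyword, "resultsPerPage": results_per_page}
    return f"{NVD_API}?{urlencode(params)}"

# Illustrative vendor keyword; fetch the URL with any HTTP client.
print(build_nvd_query("Musubi"))
```

GHSA and vendor disclosure pages need separate queries; the same keyword-first pattern applies, but each source has its own API shape.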

Dimension Breakdown

GM: Governance Maturity (medium)
Published policies, corporate structure, safety mandate, whistleblowing, executive commitment.
Score: 52
Evidence: 1 item (GM-01)

TS: Technical Safety (medium)
Benchmarks, adversarial robustness, fine-tuning safety, watermarking, model cards, research output.
Score: 48
Evidence: 2 items (TS-03, TS-06)

RA: Risk Assessment (low)
Dangerous capability evaluations, thresholds, external testing, bug bounty, halt conditions.
Score: 45
Evidence: none

RR: Regulatory Readiness (low)
ISO 42001, EU AI Act compliance, GPAI obligations, international commitments, incident reporting.
Score: 55
Evidence: 1 item (RR-05)

EE: External Engagement (medium)
Survey participation, research support, transparency, behavior specs, open-source contributions.
Score: 35
Evidence: 1 item (EE-01)
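The overall score presumably aggregates the five dimension scores above, but the weighting is not published. A minimal sketch assuming a weighted mean (the weights are hypothetical, not the scorecard's actual method):

```python
def composite_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted mean of dimension scores; weights need not sum to 1."""
    total_weight = sum(weights[d] for d in scores)
    return sum(scores[d] * weights[d] for d in scores) / total_weight

# Dimension scores from the breakdown above.
dims = {"GM": 52, "TS": 48, "RA": 45, "RR": 55, "EE": 35}
equal = {d: 1.0 for d in dims}
print(composite_score(dims, equal))  # 47.0
```

Equal weighting yields 47.0, slightly below the published 48.5, which suggests the scorecard weights some dimensions (likely the evidence-backed ones) more heavily.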

Social Impact & Safety Profile

Emerging

Musubi works on content moderation and trust-and-safety tooling, which inherently addresses social harms from AI-generated content. However, the company has not published formal social impact policies or measurable commitments. Social impact awareness is acknowledged but not systematised.

content moderation, trust and safety
Why it matters for safety

Additional alignment research capacity contributes to the broader effort to solve the alignment problem. Each research team brings a unique perspective and approach.

Civilizational Risk Awareness

1/3

Implied safety awareness through positioning in the AI safety ecosystem. Insufficient public information to assess depth of catastrophic risk commitment.

Responsible Scaling Policy

None

No RSP. Research-stage company. Not a model developer.

Mission Drift Protection

0/3
  • No PBC status
  • No structural governance mechanisms
  • No public safety mission statement
  • Limited public information on mission commitment

Vulnerability Disclosure

None

No CVD programme. Research-stage company.

Safety Reporting

None

No structured safety reporting. Limited public research output.

Dual-Use Risk

Not applicable: this company does not develop dual-use AI systems.

Need a detailed report for Musubi?

Subscribe to express interest in indicator-level evidence, peer benchmarking, and regulatory gap analysis, or reach out to request a full company overview brief.