
IBM Research — Trustworthy AI

Pioneering responsible-AI research with open-source toolkits (AI Fairness 360, ART) and an enterprise governance platform (watsonx.governance).

HQ: 🇺🇸 US
Established: 1911
Size: 10,001+ employees
Website: research.ibm.com

Score: 52.0 / 100
Evidence: 10 items
Confidence: medium

Strong safety posture with established governance frameworks and active risk management.

Strengths: Governance Maturity, Technical Safety, Regulatory Readiness
Weaknesses: Risk Assessment, External Engagement
Focus Areas: trustworthy AI, AI fairness, watsonx.governance, AI Fairness 360

Strengths

  • Strong social impact assessment
  • High evidence coverage (10 items)
  • Security posture assessed at 60

Risks

No significant risks identified


Security Assessment

Security-relevant indicators for vendor evaluation

Security Posture: 50

  • TS-01 (dim: 55) Red Teaming & Pre-deployment Testing: adversarial testing before deployment
  • TS-05 (dim: 55) Robustness & Adversarial Resilience: resistance to adversarial attacks
  • RA-01 (dim: 45) Sector-Specific Risk Assessment: risk analysis for deployment context
  • RA-03 (dim: 45) Dual-Use & Misuse Risk: dangerous capability awareness
  • RA-07 (dim: 45) Incident History & Track Record: past incidents and response quality
  • EE-04 (dim: 48) Vulnerability Disclosure Program: bug bounty or CVE reporting process
Incident History
IBM Research — Trustworthy AI incident records sourced from the AIAAIC Repository and public reporting.
Integrations: AIAAIC, OECD AI Incidents Monitor

Third-Party Audits
External audit reports, SOC 2 attestations, and ISO certifications, verified where published.
Sources: company filings, registry lookups

CVE & Disclosures
Known vulnerabilities and security advisories from NVD, GitHub Security Advisories, and vendor pages.
Sources: NVD, GHSA, vendor disclosure pages

Dimension Breakdown

  • GM Governance Maturity (preliminary): 58
    Published policies, corporate structure, safety mandate, whistleblowing, executive commitment.
  • TS Technical Safety (preliminary): 55
    Benchmarks, adversarial robustness, fine-tuning safety, watermarking, model cards, research output.
  • RA Risk Assessment (preliminary): 45
    Dangerous capability evaluations, thresholds, external testing, bug bounty, halt conditions.
  • RR Regulatory Readiness (preliminary): 55
    ISO 42001, EU AI Act compliance, GPAI obligations, international commitments, incident reporting.
  • EE External Engagement (preliminary): 48
    Survey participation, research support, transparency, behavior specs, open-source contributions.
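
Under equal weighting, the five dimension scores above average to 52.2, close to the published overall score of 52.0. The sketch below shows such a composite; equal weights are an assumption for illustration, since the screener's actual weighting is unpublished.

```python
# Sketch of a composite score over the five dimension scores listed above.
# Equal weighting is an ASSUMPTION, not the screener's published methodology.
SCORES = {"GM": 58, "TS": 55, "RA": 45, "RR": 55, "EE": 48}

def composite(scores, weights=None):
    """Weighted mean of dimension scores; weights are normalized to sum to 1."""
    if weights is None:
        weights = {k: 1.0 for k in scores}  # equal weights by default
    total = sum(weights.values())
    return sum(scores[k] * weights[k] / total for k in scores)

print(round(composite(SCORES), 1))  # 52.2 under equal weights (published: 52.0)
```

The small gap between 52.2 and the published 52.0 suggests the real weighting is close to, but not exactly, uniform.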

Social Impact & Safety Profile

Strong

IBM Research has been a leader in trustworthy AI, publishing foundational work on fairness, explainability, and robustness. Open-source toolkits (AI Fairness 360, Adversarial Robustness Toolbox) are widely used in industry and academia. watsonx.governance provides enterprise AI governance. Strong regulatory engagement and standards participation (ISO, NIST).

AI fairnessadversarial robustnessAI governance toolingstandards
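
The fairness metrics that toolkits like AI Fairness 360 implement can be illustrated without the library itself. Below is a minimal pure-Python sketch of the disparate-impact ratio, one of the metrics AIF360 ships; this is an illustration of the concept, not AIF360's actual API.

```python
# Minimal pure-Python illustration of the disparate-impact metric that
# toolkits like AI Fairness 360 implement; this is NOT the AIF360 API.
def disparate_impact(outcomes, groups, unprivileged, privileged):
    """Ratio of favorable-outcome rates: P(y=1 | unprivileged) / P(y=1 | privileged).

    A value near 1.0 indicates parity; the common "80% rule" flags
    ratios below 0.8 as potentially discriminatory.
    """
    def rate(group):
        members = [y for y, g in zip(outcomes, groups) if g == group]
        return sum(members) / len(members)
    return rate(unprivileged) / rate(privileged)

# Toy data: 1 = favorable outcome (e.g. loan approved)
y = [1, 0, 1, 1, 0, 1, 1, 1]
g = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(disparate_impact(y, g, unprivileged="a", privileged="b"))  # 1.0 (parity)
```

AIF360 wraps the same computation in dataset and metric classes; the ratio itself is all the arithmetic involved.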

Peer Comparison

  • Confident Security: C+ (47), Robustness & Adversarial
  • Aim Security: C (43.6), Robustness & Adversarial
  • Noma Security: C (41.1), Robustness & Adversarial
  • Vendict: C (40), Governance Tooling

Data Sources & Methodology

Scoring methodology v0.1 · 40 indicators · 6 frameworks

Last assessment: 2026-03-23 · Confidence: medium · Evidence: 10 items

NIST AI RMF · EU AI Act · ISO 42001 · FLI AI Safety Index · MLCommons AILuminate · METR

Scores reflect publicly available information. A low score may indicate limited transparency rather than poor safety practices.