AI Signals Archive

All signals tracked by Mappera. Updated weekly.

Period13 signals
Market
17 Mar 2026
UK’s "Quantum leap"  to help beat disease, deliver high-paid jobs, and strengthen national security, as first country in the world to roll out Quantum computers at scale 

Published by: Department for Science, Innovation and Technology, HM Treasury. UK unveils new package of measures to become the first country in the world to roll out Quantum computers at scale.

governance
Alert
12 Mar 2026
AI Security CVE: CVE-2026-32247 — Graphiti is a framework for building and querying temporal context graphs for AI agents. Graphiti versions before 0.28.2

CVE-2026-32247 (CVSS 8.1 — HIGH). Graphiti is a framework for building and querying temporal context graphs for AI agents. Graphiti versions before 0.28.2 contained a Cypher injection vulnerability in shared search-filter construction for non-Kuzu backends. Attacker-controlled label values supplied through SearchFilters.node_labels were concatenated directly into Cypher label expressions without validation. In MCP deployments, this was exploitable not only through direct untrusted access to the Graphiti MCP server, but also through prompt injection against an LLM client that could be induced t

robustness
Alert
12 Mar 2026
AI Security CVE: CVE-2026-27940 — llama.cpp is an inference of several LLM models in C/C++. Prior to b8146, the gguf_init_from_file_impl() in gguf.cpp is

CVE-2026-27940 (CVSS 7.8 — HIGH). llama.cpp is an inference of several LLM models in C/C++. Prior to b8146, the gguf_init_from_file_impl() in gguf.cpp is vulnerable to an Integer overflow, leading to an undersized heap allocation. Using the subsequent fread() writes 528+ bytes of attacker-controlled data past the buffer boundary. This is a bypass of a similar bug in the same file - CVE-2025-53630, but the fix overlooked some areas. This vulnerability is fixed in b8146.

robustness
Alert
11 Mar 2026
AI Security CVE: CVE-2026-31854 — Cursor is a code editor built for programming with AI. Prior to 2.0 ,if a visited website contains maliciously crafted i

CVE-2026-31854 (CVSS 8.7 — HIGH). Cursor is a code editor built for programming with AI. Prior to 2.0 ,if a visited website contains maliciously crafted instructions, the model may attempt to follow them in order to “assist” the user. When combined with a bypass of the command whitelist mechanism, such indirect prompt injections could result in commands being executed automatically, without the user’s explicit intent, thereby posing a significant security risk. This vulnerability is fixed in 2.0.

robustness
Alert
11 Mar 2026
AI Security CVE: CVE-2026-30741 — A remote code execution (RCE) vulnerability in OpenClaw Agent Platform v2026.2.6 allows attackers to execute arbitrary c

CVE-2026-30741 (CVSS 9.8 — CRITICAL). A remote code execution (RCE) vulnerability in OpenClaw Agent Platform v2026.2.6 allows attackers to execute arbitrary code via a Request-Side prompt injection attack.

robustness
Grant
15 Mar 2025
Coefficient Giving: $40M+ Technical AI Safety RFP

Request for proposals across 21 research areas in technical AI safety. Largest single-round safety RFP to date.

alignment$40M
Grant
22 Jan 2025
UK AISI: £8.5M systemic safety grants + £5M Challenge Fund

UK AI Safety Institute awards grants for systemic AI risk research and safety evaluation challenges.

evaluations$17M
Funding
15 Jan 2025
Goodfire raises $50M Series A for interpretability tools

Enterprise interpretability platform backed by Lightspeed Venture Partners.

interpretability$50M
Grant
15 Jan 2025
DARPA: $450K to REGENTS OF THE UNIVERSITY OF CALIFORNIA, THE — SPECIAL YEAR ON LARGE LANGUAGE MODELS AND TRANSFORMERS

DARPA COOPERATIVE AGREEMENT (B) to REGENTS OF THE UNIVERSITY OF CALIFORNIA, THE. SPECIAL YEAR ON LARGE LANGUAGE MODELS AND TRANSFORMERS

governance$450K
Grant
8 Jan 2025
SFF 2025: $29M to AI safety (86% of total allocation)

Survival and Flourishing Fund allocates $29M of $34.3M total to AI safety organisations in its largest round.

alignment$29M
Funding
15 Sep 2024
The Martian raises $32M Series A for LLM routing via interpretability

Intelligent model routing using representation analysis. Backed by NEA, Sequoia, a16z.

interpretability$32M
Funding
4 Sep 2024
Safe Superintelligence raises $1B

Pure safety research lab raises $1B at $5B valuation. Backed by a16z, Sequoia, DST Global, and others.

alignment$1.0B
Market
5 Aug 2024
Cisco acquires Robust Intelligence

Enterprise networking giant acquires AI security startup, signalling maturation of robustness/security into mainstream enterprise procurement.

robustness
Data from public announcements, press releases, government filings, and financial databases