Simon Willison’s Weblog: Cato CTRL™ Threat Research: PoC Attack Targeting Atlassian’s Model Context Protocol (MCP) Introduces New “Living off AI” Risk

Source URL: https://simonwillison.net/2025/Jun/19/atlassian-prompt-injection-mcp/
Source: Simon Willison’s Weblog
Title: Cato CTRL™ Threat Research: PoC Attack Targeting Atlassian’s Model Context Protocol (MCP) Introduces New “Living off AI” Risk

Feedly Summary: Cato CTRL™ Threat Research: PoC Attack Targeting Atlassian’s Model Context Protocol (MCP) Introduces New “Living off AI” Risk
Stop me if you’ve heard this one before:

A threat actor (acting as an external user) submits a malicious support ticket.
An internal user, linked to a tenant, invokes an MCP-connected AI action.
A prompt injection payload in the malicious support ticket is executed with internal privileges.
Data is exfiltrated to the threat actor’s ticket or altered within the internal system.

It’s the classic lethal trifecta exfiltration attack, this time against Atlassian’s new MCP server, which they describe like this:

With our Remote MCP Server, you can summarize work, create issues or pages, and perform multi-step actions, all while keeping data secure and within permissioned boundaries.

That’s a single MCP that can access private data, consume untrusted data (from public issues) and communicate externally (by posting replies to those public issues). Classic trifecta.
It’s not clear to me if Atlassian have responded to this report with any form of a fix. It’s hard to know what they can fix here – any MCP that combines the three trifecta ingredients is insecure by design.
My recommendation would be to shut down any potential exfiltration vectors – in this case that would mean preventing the MCP from posting replies that could be visible to an attacker without at least gaining human-in-the-loop confirmation first.
Tags: atlassian, security, ai, prompt-injection, generative-ai, llms, exfiltration-attacks, model-context-protocol

AI Summary and Description: Yes

Summary: The text discusses a newly identified threat involving a “Living off AI” risk associated with Atlassian’s Model Context Protocol (MCP). It highlights how malicious actors can exploit the MCP through prompt injection attacks, allowing them to exfiltrate data under internal permissions. This scenario illustrates significant vulnerabilities in AI systems and reinforces the need for enhanced security measures within such infrastructures.

Detailed Description:
The provided text outlines a critical vulnerability involving Atlassian’s Model Context Protocol (MCP), a feature that leverages AI for task automation and data handling. The threat actor’s method of exploitation reflects significant security risks in AI integration within enterprise frameworks. Key points include:

– **Attack Overview**:
– A threat actor submits a malicious support ticket as an external user.
– An internal user triggers an AI action connected to the MCP.
– A prompt injection payload from the malicious ticket is executed, leveraging internal privileges.
– This results in either data exfiltration or alteration within systems, demonstrating a trifecta of exfiltration attack components.

– **Nature of the MCP**:
– The MCP facilitates various operational tasks, allowing users to summarize work and perform multi-step actions.
– It manages both private and untrusted external data, which increases the risk of exploitation if not secured properly.

– **Insecurity by Design**:
– The text emphasizes that any MCP with components such as access to private data and the ability to consume external untrusted data is inherently insecure.
– This assessment points to a fundamental design flaw in how MCPs operate, raising questions about the viability of defenses against prompt injection attacks.

– **Recommendations**:
– Strengthen security measures by eliminating potential data exfiltration points.
– Incorporate human verification (human-in-the-loop) before allowing publicly visible replies or actions that could expose data to threats.

This analysis underscores the necessity for professionals in security and compliance, especially those working within the domains of AI security and infrastructure protection. It highlights urgent considerations for designing more secure systems and stresses the importance of understanding the implications of AI usage in operational contexts. The ‘living off AI’ risk presents immediate challenges that must be addressed collaboratively among developers, security teams, and compliance regulators.