AI/ML, Vulnerability Management, Data Security

Microsoft 365 Copilot ‘zero-click’ vulnerability enabled data exfiltration

(Credit: vladim_ka – stock.adobe.com)

Microsoft patched a “zero-click” flaw in its Microsoft 365 Copilot retrieval-augmented generation (RAG) tool that could have allowed for exfiltration of sensitive data, according to Aim Security.

The vulnerability is tracked as CVE-2025-32711, which has a critical CVSS score of 9.3, Aim Security told SC Media in an email. Microsoft said in its disclosure that the AI command injection vulnerability has not been exploited in the wild and requires no further user action to resolve.

The flaw, dubbed “EchoLeak,” would have allowed an attacker to extract potentially sensitive information from a user’s connected Microsoft 365 services, such as their Outlook email, OneDrive storage, Office files, SharePoint sites and Microsoft Teams chat history, by sending a specially crafted email that bypasses several security measures, Aim Security explained.

“The EchoLeak discovery by Aim Labs exposes a critical shift in cybersecurity risk, highlighting how even well-guarded AI agents like Microsoft 365 Copilot can be weaponized through what Aim Labs correctly terms an ‘LLM Scope Violation,’” SOCRadar Ensar Seker said in an email to SC Media.

The proof-of-concept exploit chain developed by Aim Security starts by bypassing Copilot’s cross-prompt injection attack (XPIA) classifiers by addressing the instructions in the email to the receiver rather than the targeted large language model (LLM).

The attackers would then need to get past Copilot’s link redaction feature, which prevents external markdown links from appearing in the Copilot chat. The researchers discovered that links marked as references (i.e. [ref] in markdown) are not redacted, allowing them to be output by the chatbot.

Rather than tricking a user into clicking the link, the attacker could leverage an external markdown image to trigger an automated GET request for the image. However, the content security policy (CSP) for image embeds on the Microsoft 365 Copilot webpage only allows images from a set list of domains related to Microsoft services.

The researchers discovered this could be bypassed leveraging a specific Microsoft Teams URL format that allows the attacker’s external URL to be accessed via the “/urlp/v1/url/content” endpoint.

As mentioned in a recent comment by a Microsoft employee on the Teams Developer Tech Community page, “Microsoft Teams’ link unfurling uses a proxy service (/urlp/v1/url/content) to retrieve and cache external images.”

An attacker could abuse this link preview feature to cause Copilot to contact the attacker’s site while bypassing the CPS guardrail via the trusted Teams domain. By sending the victim an email that covertly instructs Copilot to append sensitive M365 data to the end of the image URL as query string parameters, this data is transmitted to the attacker’s external server via the GET request for the image.

While the attack does not depend on the victim to click on a malicious link, a video on EchoLeak published by Aim Security demonstrates the victim sending Copilot a message referencing a subject mentioned in the attacker’s email, which triggers the markdown image output.

Aim Security noted the attacker can increase the likelihood that the malicious email will be referenced by Copilot by either sending many emails referencing different topics relevant to the victim, or by sending a single long email separated into chunks that cover a wide range of relevant topics (ex. employee onboarding, human resources FAQ, leave of absence management etc.).

“What stands out especially is that this isn’t limited to Copilot. As Aim Labs warns, any RAG-based agent that processes untrusted inputs alongside internal data is vulnerable to scope violations,” Seker noted. “This signals a broader architectural flaw across the AI assistant space – one that demands runtime guardrails, stricter input scoping, and inflexible separation between trusted and untrusted content.”

Seker recommends organizations defend against similar attacks by disabling external email ingestion by RAG tools like Copilot, enforcing data loss prevention (DLP) tags to flag requests involving sensitive information, and applying prompt-level filters that can block suspicious links and structured outputs.

An In-Depth Guide to AI

Get essential knowledge and practical strategies to use AI to better your security program.

Get daily email updates

SC Media's daily must-read of the most current and pressing daily news

By clicking the Subscribe button below, you agree to SC Media Terms of Use and Privacy Policy.

You can skip this ad in 5 seconds