Follow ZDNET: Add us as a favorite source On Google.
ZDNET Highlights
- An attack called “reprompt” used a URL parameter to steal user data.
- One click was enough to trigger the entire attack chain.
- Attackers can still extract sensitive CoPilot data even after the window is closed.
Researchers have uncovered a new attack that requires only a single click to execute, which bypasses Microsoft Copilot security controls and enables the theft of user data.
Too: How to remove Copilot AI from Windows 11 today
meet reprompt
Published by Varonis Threat Labs on Wednesday new research Documentation of Repprompt, a new attack method that affected Microsoft’s Copilot AI assistant.
The reprompt affected Microsoft CoPilot Personal and, according to the team, “gave threat actors an invisible entry point to execute a data-exfiltration chain that completely bypasses enterprise security controls and accessed sensitive data without detection – all with a single click.”
Too: AI PCs aren’t selling and Microsoft’s PC partners are scrambling
No user interaction with Copilot or plugins was required to trigger this attack. Instead, victims had to click on a link.
Following this single click, Repprompt can bypass security controls by abusing the ‘queue’ URL parameter to feed prompt and malicious actions through Copilot, potentially allowing an attacker to ask for data previously submitted by the user – including personally identifiable information (PII).
“The attacker retains control even when Copilot Chat is closed, allowing the victim’s session to be silently logged out without any interaction after that first click,” the researchers said.
How did Repprompt work?
Reprompt combined three techniques:
- Parameter 2 Prompt (P2P Injection):By exploiting the ‘queue’ URL parameter, an attacker can inject a prompt from a URL and inject crafted, malicious instructions that force Copilot to take actions, including data exfiltration.
- double request: While Copilot had security measures in place that prevented direct data intrusions or leaks, the team found that repeating a request for an action twice would force it to perform.
- chain-request: Once the initial prompt (repeated twice) is executed, the reprompt attack chain server issues follow-up instructions and requests, such as seeking additional information.
According to Varonis, this method was difficult to detect because users and client-side monitoring tools could not see it, and it bypassed built-in security mechanisms while hiding the data being infiltrated.
“Copilot gradually leaks data, allowing a threat to use each reply to generate the next malicious instruction,” the team said.
There is a proof-of-concept (POC) video demonstration Available.
Microsoft’s response
Reprompt was quietly revealed to Microsoft on August 31, 2025. Microsoft fixed the vulnerability before public disclosure and confirmed that enterprise users of Microsoft 365 Copilot were not affected.
Too: Want Microsoft 365? Just don’t choose premium – here’s why
“We commend Varonis Threat Labs for responsibly reporting this issue,” a Microsoft spokesperson told ZDNET. “We have introduced security addressing the described scenario and are implementing additional measures to strengthen safeguards against similar technologies as part of our defense in depth approach.”
how to stay safe
AI assistants – and browsers – are relatively new technologies, so hardly a week goes by without a security issue, design flaw, or vulnerability being discovered.
Phishing is one of the most common vectors of cyber attacks, and this particular attack requires the user to click on a malicious link. So, your first line of defense is to be cautious when it comes to links, especially if you don’t trust the source.
Too: Gemini vs Copilot: I compared AI tools on 7 everyday tasks, and there’s a clear winner
As with any digital service, you should use caution when sharing sensitive or personal information. For AI assistants like Copilot, you should also check for any unusual behavior, such as suspicious data requests or strange prompts that may appear.
Varonis suggested that AI vendors and users should remember that trust in new technologies can be exploited and said that “Reprompt represents a broad class of critical AI assistant vulnerabilities driven by external input.”
As such, the team suggested that URLs and external inputs should be treated as untrusted, and therefore validation and security controls should be implemented throughout the process chain. Additionally, safeguards should be implemented that reduce the risk of rapid chaining and repeated actions, and should not stop at the initial signal.
