Follow ZDNET: Add us as a favorite source On Google.
ZDNET Highlights
- Researchers found a high-severity bug in Chrome’s Gemini feature.
- This gives extensions the ability to spy on you or steal your data.
- update Now.
A new vulnerability affecting Google Chrome’s Gemini Agentic AI feature has been disclosed – patch now to be safe.
Too: MIT study shows AI agents are fast, loose, and out of control
Gal Weissman, senior principal security researcher at Palo Alto Networks’ Unit 42 team, revealed the browser vulnerability affects Google Chrome. Gemini AI feature, an artificial intelligence (AI) agentic browser assistant.
Vulnerability, explained
tracked as CVE-2026-0628 And deemed high severity, the vulnerability is described as an issue with “insufficient policy enforcement in the WebView tag in Google Chrome” that, prior to version 143.0.7499.192 of the browser, “allowed an attacker who convinced a user to install a malicious extension to inject script or HTML into a privileged page via a crafted Chrome extension.”
Also: Why scammers don’t say anything when they call – and how to respond safely
The team found that an extension with access to the basic permission set, via Declarative.NET Request APIPermissions that an attacker can use to inject JavaScript code into the new Gemini Panel browser component.
According to researchers, this vulnerability can be exploited as part of a broader attack series Targeting Google Chrome users.
For example, if an attacker can convince a target to download and install an innocent-looking browser extension, a malicious extension can exploit the policy issue to hijack Gemini. The AI assistant can then perform actions without permission, including granting the cybercriminal access to webcams and microphones, taking screenshots, and providing access to local files and directories. The panel can also be hijacked for phishing purposes.
“Since Gemini apps depend on them to function for legitimate purposes, hijacking the Gemini panel allows privileged access to system resources that an extension would not normally have,” the researchers said.
how to stay safe
Following private disclosures made by Palo Alto Networks to Google last October, it has been up to the team to test, iterate and fix the bug.
Included in the January patch of Google Chrome notesThe Chrome security team developed a workaround and included it in the 143.0.7499.192/.193 Windows and macOS stable channels (143.0.7499.192 for Linux). there were extras too security patch It has since been implemented, including fixes for vulnerabilities such as out-of-bounds bugs.
Also: Destroyed servers and DoS attacks: what can happen when OpenClaw AI agents interact
The best advice is simple: As soon as you see an alert that a new version of Chrome is available — usually to the right of your address bar on desktop — accept the update. Not only can you benefit from performance improvements and new features, but the patches included in these releases will reduce the risk to your browser and data.
Agentic Browser Security – What’s the Big Deal?
Agentic browsers could be the benchmark for browser experiences in the future. However, while they are still evolving, we are facing new cybersecurity challenges that are increasing the attack surface and putting our privacy and data at risk.
Agent AI features, typically created through messaging and chat windows, are intended to answer our questions, receive information on our behalf, fill out forms for us, and help us manage our workflow. But the security implications of giving often unused, untested and unsecured AI-powered tools the keys to online accounts and services – not to mention the power to act on our behalf – has created a security nightmare for defenders.
Too: Microsoft and ServiceNow’s exploitable agents reveal a growing – and preventable – AI security crisis
Why are they worried? In addition to the usual vulnerabilities, disclosures, and patch management that are required for software today, AI browsers and agents may be susceptible to early-injection attacks. Malicious instructions can be hidden in source content and websites, which then hijack these devices, forcing them to hand over the user’s sensitive information, monitoring and performing all kinds of illegal activities.
There is also a question of trust. How much should you trust an AI system with personal data, and how would it affect you if it were exposed or leaked?
A recent MIT study found serious shortcomings in the race to “fast and loose” agentic AI development with respect to security testing, and so such technologies should be treated with caution.
Also: This new phone scam has ‘carriers’ calling to exchange your device – don’t fall for it
We have not yet seen the full security risks posed by agentic AI, but we have yet to see its true potential. The true challenge will be to manage its benefits while balancing the risks – and this applies to both consumers and businesses.
“Innovation cannot come at the expense of security,” commented Anupam Upadhyay, senior vice president of product management for Palo Alto Networks’ Prisma SASE. “If organizations choose to deploy agentic browsers, they need to treat them as high-risk infrastructure with runtime visibility, enforced policy controls, and rigid guardrails built in from day one. Anything less than that invites compromise.”
