ChannelLife New Zealand - Industry insider news for technology resellers
Illustration open laptop data streams leaking warning symbols ai vulnerabilities

Seven ChatGPT flaws expose user data to attack, Tenable warns

Thu, 6th Nov 2025

Tenable Research has identified seven vulnerabilities in ChatGPT, initially observed in ChatGPT-4o and with several issues persisting in ChatGPT-5, that could allow the theft of personal data and compromise user privacy.

The research, conducted under responsible disclosure protocols, revealed a series of flaws dubbed collectively as "HackedGPT", highlighting multiple routes through which attackers could exfiltrate user data by exploiting ChatGPT's web browsing and memory functions. While OpenAI has resolved some of the vulnerabilities, others remained unaddressed at the time of reporting, leaving certain exploit paths accessible to potential attackers.

New class of attack

Central to Tenable's findings is a security weakness known as indirect prompt injection. In this method, attackers embed hidden instructions within online content - such as comments on blogs or message boards - which are then unwittingly executed by ChatGPT when it processes those pages. This means that the model can be coerced into taking unauthorised actions simply by retrieving data from the web, thereby bypassing user intent and safety restrictions.

The identified vulnerabilities expose several entry points for attack, including "0-click" scenarios where no user interaction is needed, and "1-click" attacks, requiring only a click on a malicious link. Particularly significant is what researchers call Persistent Memory Injection, where hazardous instructions are saved within ChatGPT's memory and remain active across sessions, creating opportunities for ongoing private data leakage.

"HackedGPT exposes a fundamental weakness in how large language models judge what information to trust," said Moshe Bernstein, Senior Research Engineer at Tenable. "Individually, these flaws seem small - but together they form a complete attack chain, from injection and evasion to data theft and persistence. It shows that AI systems aren't just potential targets; they can be turned into attack tools that silently harvest information from everyday chats or browsing."

Breakdown of vulnerabilities

Tenable's research details seven specific techniques and vulnerabilities:

  • Indirect prompt injection via trusted sites: Attackers conceal instructions in legitimate-appearing content online. When ChatGPT encounters this material, it may follow those commands without user intervention.
  • 0-click indirect prompt injection in search context: Users can be exposed to attacks simply by asking questions, as ChatGPT's web search might retrieve a page carrying hidden malicious instructions, leading to a single-prompt compromise.
  • Prompt injection via 1-click: Adversaries can embed directives in what looks like harmless links. A single click is sufficient to trigger the model into carrying out unintended actions, potentially giving attackers control over the chat session.
  • Safety mechanism bypass: ChatGPT normally filters unsafe links, but attackers can obscure the destination by employing trusted redirect URLs, causing the model to interact with malicious sites.
  • Conversation injection: Exploiting ChatGPT's division between search and conversation features, attackers may use search-generated content to insert instructions the conversational model then follows, even if the instructions were never directly provided by the user.
  • Malicious content hiding: Formatting bugs make it possible to hide commands within code snippets or markdown text, making them invisible to users but still actionable by ChatGPT.
  • Persistent memory injection: Malicious instructions can be stored long-term within ChatGPT's memory function, causing ongoing data leaks until the stored memory is cleared.

Risks and implications

Given ChatGPT's widespread adoption for business, academic, and personal communication, the potential consequences of these vulnerabilities include unauthorised insertion of commands into conversations, theft of sensitive information from chat logs or linked accounts, exfiltration through browsing integration, and manipulation of AI-generated responses.

While some vulnerabilities have been addressed, Tenable noted that several remain unpatched in ChatGPT-5. The company has recommended that vendors fortify their systems against such attacks by ensuring safety mechanisms are robust and by isolating browsing, search, and memory features to mitigate against cross-context exploitation.

Advice for security professionals

Tenable's recommendations to IT security teams include approaching AI systems as active attack surfaces, conducting regular auditing and monitoring for manipulation or data leaks, investigating anomalies that might indicate prompt injection, and establishing strict governance and data classification for AI use.

"This research isn't just about exposing flaws - it's about changing how we secure AI," Bernstein added. "People and organisations alike need to assume that AI tools can be manipulated and design controls accordingly. That means governance, data safeguards, and continuous testing to make sure these systems work for us, not against us."
Follow us on:
Follow us on LinkedIn Follow us on X
Share on:
Share on LinkedIn Share on X