According to BlockBeats, AI researcher Abi Raghuram has identified a prompt injection vulnerability in Notion's newly released AI Agents. The flaw allows attackers to embed hidden instructions, such as text rendered in a white font, in files like PDFs. When a user asks the AI Agent to process such a file, the hidden prompts may be executed, potentially causing sensitive information to be transmitted to an external address.
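The reason such payloads slip past human review is that text extraction sees a PDF's full text layer, while a person sees only what is rendered visibly. As a rough illustration, the sketch below (a minimal example, not from the report; the file name, the pattern list, and the `scan_pdf_for_injection` helper are all hypothetical) pulls the text layer with the `pypdf` library and flags phrases commonly seen in prompt-injection payloads:

```python
# Minimal sketch: flag instruction-like phrases hidden in a PDF's text layer.
# Assumes `pip install pypdf`; file name and patterns are illustrative only.
import re
from pypdf import PdfReader

# Phrases often seen in prompt-injection payloads (a non-exhaustive heuristic).
SUSPICIOUS_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"do not (tell|inform|alert) the user",
    r"send .* to https?://",
    r"this is (urgent|an emergency)",
]

def scan_pdf_for_injection(path: str) -> list[str]:
    """Return suspicious snippets found in the PDF's extracted text.

    Extraction reads the raw text layer, so white-on-white or otherwise
    invisible text is captured even though it never appears on screen.
    """
    hits = []
    reader = PdfReader(path)
    for page in reader.pages:
        text = page.extract_text() or ""
        for pattern in SUSPICIOUS_PATTERNS:
            for match in re.finditer(pattern, text, re.IGNORECASE):
                hits.append(match.group(0))
    return hits

if __name__ == "__main__":
    for snippet in scan_pdf_for_injection("untrusted.pdf"):
        print("possible injection payload:", snippet)
```

A pattern list like this is only a heuristic; determined attackers can paraphrase around it, which is why the recommendations below also limit what the Agent is permitted to do.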
Researchers note that such attacks often rely on social-engineering tactics, including impersonating an authority, manufacturing urgency, and offering false assurances of safety, to increase their success rate. Experts advise users to avoid uploading PDFs or other files from unknown sources to the AI Agent, and to strictly limit the Agent's internet access and data-export permissions. Suspicious files should have hidden content stripped out and be reviewed manually before processing. Additionally, requiring the AI Agent to display a clear confirmation prompt before any external submission can help mitigate the risk of sensitive data leaks.
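That last recommendation, a confirmation gate on external submissions, could look something like the following sketch (the `send_external` wrapper and approval flow are hypothetical, not Notion's API; a real agent framework would wire this into its tool-calling layer):

```python
# Minimal sketch of a human-in-the-loop gate for outbound agent actions.
# Names and flow are illustrative assumptions, not a real agent's interface.
import urllib.request

def confirm(action: str) -> bool:
    """Show the operator exactly what the agent wants to do and ask for approval."""
    answer = input(f"Agent wants to: {action}\nApprove? [y/N] ")
    return answer.strip().lower() == "y"

def send_external(url: str, payload: bytes) -> None:
    """Transmit data externally only after an explicit, visible confirmation."""
    summary = f"POST {len(payload)} bytes to {url}"
    if not confirm(summary):
        raise PermissionError(f"blocked unconfirmed external call: {summary}")
    req = urllib.request.Request(url, data=payload, method="POST")
    urllib.request.urlopen(req)  # runs only on explicit approval
```

The point of the gate is that even if a hidden prompt convinces the model to exfiltrate data, the outbound request still surfaces to the user as a concrete, reviewable action before anything leaves the system.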