All modern web browsers have some form of AI built in to help you with your day-to-day tasks. AI-powered browsers like Microsoft Edge (Copilot), Google Chrome (Gemini) and Perplexity (Comet) are transforming how we interact online. These tools can make internet browsing convenient. But with that convenience comes new security risks — prompt injection is a very sneaky one.
Prompt injection occurs when hidden instructions in files or websites trick your AI assistant into performing unintended actions, such as visiting malicious sites or leaking sensitive data.
Prompt injection can wreak havoc, including:
The following are just a few real-life examples of the damage prompt injection can cause:
A proof-of-concept attack involved a webpage containing hidden text (using style sheets to hide it from human view) that instructed the AI assistant to send sensitive information from previous chats to an external server. When a user asked the AI to summarize the page, the AI followed the hidden instructions.
Attackers used prompt injection to manipulate AI-powered social media bots. By embedding hidden instructions in posts or comments, they caused the bots to repost spam or malicious links, amplifying the attack across multiple accounts.
The Obsidian Security Team reported that an AI customer-service agent leaked sensitive account data for weeks. Prompt injection bypassed traditional controls, resulting in millions of dollars in fines and remediation.
Major AI browsers (OpenAI, Perplexity, Copilot, Edge, Gemini) were hijacked by hidden instructions in webpages to leak credentials and perform actions without user awareness. NBC News reported on this here.
AI is a powerful tool, but understanding its vulnerabilities — like prompt injection — helps you stay safe. Stay informed, update regularly and use AI features wisely.
Our Technology Solutions Group can help with your organization’s cyber security. Contact us online or call 410.685.5512.