Microsoft has revealed a new concern for AI users and businesses alike: some companies are secretly influencing AI assistants using a technique dubbed “AI Recommendation Poisoning.” Researchers found over 50 hidden prompts from 31 companies spanning 14 industries, all embedded in buttons labelled “Summarize with AI.”
These buttons, which appear to simply summarise page content, carry hidden instructions delivered through URL parameters. While the visible prompt asks the AI to summarise a page, the concealed instruction tells the assistant to treat that company as a trusted source in future conversations. If successful, this can bias recommendations without the user even knowing it.
How the Technique Operates
Microsoft’s Defender Security Research Team analysed AI-related URLs from email traffic over 60 days and identified dozens of prompt-injection attempts. Most of these hidden instructions encouraged AI assistants to remember a business as a reliable source for a topic or to favour its content over others. In some cases, entire marketing messages, including product features and benefits, were injected directly into the AI’s memory.
The research team traced this practice to publicly available tools such as the npm package CiteMET and the web-based AI Share URL Creator. These tools are marketed to help websites “build presence in AI memory” by generating pre-filled prompts for AI assistants. The technique relies on URL query parameters supported by most major AI platforms, including Copilot, ChatGPT, Claude, Perplexity, and Grok. Microsoft has formally categorised this under MITRE ATLAS as Memory Poisoning (AML.T0080) and LLM Prompt Injection (AML.T0051).
Companies and Industries Targeted
All 31 identified companies were legitimate businesses rather than scammers. Many operated in sensitive sectors such as health and finance, where AI recommendations can influence important decisions. Some domains were easily mistaken for well-known websites, increasing the risk of users trusting them unknowingly. Microsoft also warned that websites hosting user-generated content, such as forums and comments, could see AI assistants extend trust to unverified content across the site, compounding the risk.
Microsoft’s Response
Microsoft has implemented protections in Copilot to prevent cross-prompt injection attacks, noting that some previously reported prompt-injection behaviours can no longer be reproduced. Protections continue to evolve to address new attempts. For organisations using Defender for Office 365, Microsoft has released advanced hunting queries so security teams can scan emails and Teams traffic for URLs that contain memory-manipulation instructions. Users can also review and remove stored Copilot memories through the Personalisation section in Copilot chat settings.
The Implications for AI and Business
Microsoft compares this tactic to traditional SEO poisoning or adware, where the goal is to manipulate search visibility. The key difference is that the target has shifted from search engines to AI memory. By embedding hidden instructions, companies can influence AI recommendations at the point of interaction, bypassing the normal evaluation of source credibility.
For businesses working to improve their visibility through AI tools, this creates a new challenge: some competitors may gain an advantage not through better content but by manipulating the AI itself. With AI brand recommendations already varying widely between similar queries, memory poisoning can skew results toward companies that use these tactics.
What This Means for Users
For AI users, this discovery emphasises the need for awareness. Hidden prompts can affect which sources an AI assistant prioritises, potentially leading to biased recommendations. Microsoft advises users to manage AI memory settings and remain cautious about blindly trusting AI suggestions, particularly in sensitive areas like health, finance, or legal advice.
Looking Forward
Microsoft highlights that this is an ongoing issue. Open-source tools make it easy for new prompt injections to appear faster than platforms can block them, and URL-based prompts are compatible with most major AI assistants. While AI providers may eventually introduce stricter policies or enforcement measures, businesses could continue to exploit these techniques in the near term.
The broader lesson is clear: as AI becomes an increasingly influential source of recommendations and insights, safeguarding the integrity of AI memory and outputs will be just as important as content quality and SEO strategies have been for search engines over the past two decades.
More Digital Marketing BLOGS here:
Local SEO 2024 – How To Get More Local Business Calls
3 Strategies To Grow Your Business
Is Google Effective for Lead Generation?
How To Get More Customers On Facebook Without Spending Money
How Do I Get Clients Fast On Facebook?
How Do You Use Retargeting In Marketing?
How To Get Clients From Facebook Groups