- By Prateek Levi
- Fri, 22 Aug 2025 04:02 PM (IST)
- Source: JND
AI security lapses are nothing new these days, yet despite constant mishaps, some tech giants aren't learning their lesson. Now, thanks to Grok's share feature, more than 37,000 user conversations, many of which contain confidential and personal information, are getting indexed on search engines like Google, DuckDuckGo and Bing.
A report from Forbes confirms the issue: the unique URLs created via Grok's “Share” button carry no privacy safeguards. No “noindex” tags, no access restrictions, just bare links accessible to anyone online.
What Got Revealed?
People now turn to AI for even their most personal queries. Search results are surfacing Grok chat transcripts that include everything from password exchanges and personal health details to discussions of illicit topics such as bomb-making instructions. While the transcripts may be anonymised, identifiers left in the conversations could still expose users to investigators, bad actors, or trolls.
Privacy Lessons Not Learnt
This isn’t the first time an AI platform has slipped up. OpenAI faced a similar backlash when ChatGPT’s shared links started showing up in Google results. Grok appears to have overlooked that lesson, and until xAI puts stronger safeguards in place, every shared Grok link has the potential to become a serious privacy risk.
What You Can Do Right Now
- Stop using the “Share” button for now — assume links are public.
- Audit and delete old shared links and, if needed, request removal through Google’s Content Removal Tool.
- Stick to screenshots for sharing — they don’t create traceable URLs and stay offline.
What Grok and xAI Need to Fix
- Display a clear warning that shared chats may become public.
- Add noindex tags or short-lived secure links to stop search engines from picking them up (see the sketch after this list).
- Conduct regular audits of shared content to prevent sensitive or illegal material from slipping through.
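To make the second point concrete, here is a minimal sketch of what a "noindex plus short-lived link" approach could look like on the server side. This is purely illustrative: Flask, the `/share/<token>` route, the in-memory store and the 24-hour expiry are assumptions for the example, not xAI's actual implementation.

```python
# Illustrative sketch only (not xAI's code): serve a shared-chat page with
# explicit "do not index" signals and an expiring, unguessable link.
import secrets
import time

from flask import Flask, abort, make_response

app = Flask(__name__)

# Hypothetical in-memory store: share token -> (chat_html, expiry_timestamp).
SHARED_CHATS = {}
LINK_TTL_SECONDS = 24 * 60 * 60  # short-lived links: expire after 24 hours


def create_share_link(chat_html: str) -> str:
    """Create an unguessable, expiring token for a shared chat."""
    token = secrets.token_urlsafe(32)
    SHARED_CHATS[token] = (chat_html, time.time() + LINK_TTL_SECONDS)
    return f"/share/{token}"


@app.route("/share/<token>")
def view_shared_chat(token: str):
    entry = SHARED_CHATS.get(token)
    if entry is None or time.time() > entry[1]:
        abort(404)  # expired or unknown links simply stop resolving

    resp = make_response(entry[0])
    # Tell crawlers not to index, cache, or snippet this page. The same effect
    # can be achieved in the page itself with <meta name="robots" content="noindex">.
    resp.headers["X-Robots-Tag"] = "noindex, noarchive, nosnippet"
    return resp
```

The key idea is that the header (or an equivalent robots meta tag) keeps compliant search engines from indexing the page, while the random, expiring token limits how long a leaked link stays useful.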
Bottom Line
As AI becomes more deeply woven into everyday life, leaks like this become a serious problem: they breach the trust users place in these platforms. Sharing a chat should not feel risky; it should simply serve the user's intent. One thing is clear: platforms like Grok need to build stronger privacy protections, and until they do, users should assume that anything they share could end up public.