Source: JND

Anthropic is making a major change to how it handles user data on Claude. For the first time, the company will begin using consumer conversations to train its AI models — unless users opt out by September 28.

The update introduces new consent requirements and a much longer retention window, with chats potentially stored for up to five years. For many, it marks a significant departure from Anthropic’s earlier approach, under which interactions were automatically deleted after 30 days unless flagged.


What’s Changing for Claude Users

Up until now, Anthropic kept consumer conversations separate from training. That’s about to shift. Starting this fall, chats and coding sessions from users who don’t opt out may be included in the company’s AI training pipeline.

• Retention period: Data may be stored for as long as five years.

• Scope: Applies to Claude Free, Pro, Max, and Claude Code.

• Exemptions: Enterprise clients, including Claude Gov, Claude for Work, Claude for Education, and API users, remain unaffected, echoing OpenAI’s decision to shield business customers.

Why Anthropic Says It’s Making the Change

The company argues that real-world user interactions will make Claude more accurate and reliable. Anthropic points to benefits in areas like reasoning, coding, and analysis, presenting the policy shift as a way for everyday users to directly strengthen future models.

Industry watchers, however, see another layer: competition. Rival firms such as OpenAI and Google rely heavily on user data to refine their systems. By tapping into Claude’s conversations, Anthropic could gain a sharper edge in the race to build more advanced AI.

The Consent Question

The rollout has sparked debate around transparency. New users will be asked to choose at sign-up, but existing users will encounter a pop-up in which the “Accept” button is prominently highlighted, while the smaller toggle that permits training on their data is switched on by default.

Critics say this design risks nudging many into agreeing without full awareness. Privacy advocates also note that consent becomes murky when policies are wrapped in complex legal language or buried in fine print.


Regulators are watching closely. The U.S. Federal Trade Commission has already warned AI companies against making quiet, hard-to-find changes to their data practices. Anthropic’s new policy could become another test case for how consent is defined in the age of generative AI.