Anthropic is rolling out a major shift in how it handles user data. Starting September 28, 2025, unless users actively opt out, their new or resumed chat and code sessions across all consumer tiers will be used to train its AI models, with data retention extended to five years. The training-permission toggle appears in a prompt during sign-up for new users and in a conspicuous pop-up for existing users, defaulted to "On." This change raises concerns about the meaningfulness of consent, even though users can change their preference later (data already used for training cannot be retroactively removed) and Anthropic says it does not sell user data and applies automated filtering to protect privacy.
Sources: Hindustan Times, The Verge, GHacks.net
Key Takeaways
– Consent shifted to opt-out: Anthropic used to retain chat data for just 30 days and required opt-in consent for training. Now, data is used unless users take explicit action to opt out.
– Extended data retention: For consenting users, Anthropic will hold onto chat and coding data for five years, a substantial change in retention period.
– User control limited and irreversible: While users can update settings anytime, any data already used for training can’t be “un-used” or removed from model learning.
In-Depth
Starting September 28, 2025, Anthropic, maker of the AI assistant Claude, is implementing a new policy that fundamentally changes how user data fuels its AI development. Under the old system, chats were retained for 30 days unless users gave opt-in consent. Now the default setting opts users in, storing new or resumed conversations for up to five years and using them to train its models, unless they actively opt out.
This change arrives via a strikingly formatted prompt: a large "Accept" button dominates, while the smaller toggle that permits training use is preset to "On." New users must make a choice during account creation; existing users see a pop-up they can dismiss with "Not now," but they face a hard deadline at the end of September. Once data has been used for training, no amount of subsequent toggling can rewind that usage.
Anthropic maintains that it filters or obfuscates sensitive content and does not monetize user data. Still, critics worry that the switch from opt-in to opt-out undermines informed consent. The extended retention period of five years is long compared with industry norms, raising further privacy red flags. Meanwhile, the minimal friction required to remain opted in effectively ensures that more user data flows into Anthropic's model pipeline, boosting Claude's capabilities while fueling debate over data ethics.
This represents a sharper turn into the AI training arms race: more real-world data means smarter models—but at the cost of user control and privacy clarity. Whether Anthropic’s safeguards are enough to justify this trade-off will likely be tested in both public opinion and legal scrutiny in the months ahead.

