
    Anthropic Shifts Privacy Policy: User Chats May Be Used for AI Training Unless You Act

Updated: December 25, 2025 · 3 Mins Read

Anthropic is rolling out a major shift in how it handles user data. Starting September 28, 2025, new or resumed chat and code sessions across all consumer tiers will be used to train its AI models unless users actively opt out, and data retention for those sessions is extended to five years. The setting appears as a toggle in a prompt shown at sign-up, or in a conspicuous pop-up for existing users, and it is preset to permit training. The change raises concerns about how meaningful that consent is: users can update their preference later, but data already used for training cannot be withdrawn. Anthropic says it does not sell user data and uses automated filtering to protect privacy.

    Sources: Hindustan Times, The Verge, GHacks.net

Key Takeaways

    – Consent shifted to opt-out: Anthropic used to retain chat data for just 30 days and required opt-in. Now, data is used unless users take explicit action to opt out.

    – Extended data retention: For consenting users, Anthropic will hold onto chat and coding data for five years, a substantial change in retention period.

– User control is limited and partly irreversible: users can update settings at any time, but any data already used for training can’t be “un-used” or removed from model learning.

    In-Depth

    Starting September 28, 2025, Anthropic—maker of the AI assistant Claude—is implementing a new policy that fundamentally changes how user data fuels its AI development. Under the old system, chats were retained for 30 days unless opt-in consent was given. Now, Anthropic’s default setting opts you in, storing new or resumed conversations for up to five years and using them to train its models—unless you actively opt out.

This change arrives via a strikingly formatted prompt: a large “Accept” button dominates, while the smaller toggle that permits training is preset to “On.” New users must make a choice during account creation; existing users are shown a pop-up they can “Not now” away, but they face a hard deadline at the end of September. Once data is harvested for training, no amount of subsequent toggling can rewind that usage.
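The consent mechanics described above are straightforward to model. Below is a minimal sketch in Python of that logic as the article describes it: the training toggle defaults to “on,” opting out only affects future sessions, and data already used for training cannot be pulled back. All names here (TrainingConsent, opt_out, the constants) are hypothetical illustrations, not Anthropic’s actual code or API.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative constants drawn from the article, not from any real API.
POLICY_DEADLINE = date(2025, 9, 28)   # date the new policy takes effect
NEW_RETENTION_YEARS = 5               # retention for users who stay opted in
OLD_RETENTION_DAYS = 30               # previous default retention window

@dataclass
class TrainingConsent:
    # The prompt's toggle is preset to "on", so doing nothing means opting in.
    allow_training: bool = True
    data_already_used: bool = False

    def opt_out(self) -> None:
        """Stop future use of chats for training; past use cannot be undone."""
        self.allow_training = False
        # data_already_used stays True if training has already happened.

consent = TrainingConsent()           # user clicks "Accept" or ignores the pop-up
consent.data_already_used = True      # a later session gets used for training
consent.opt_out()                     # toggling off only affects future sessions
print(consent.allow_training, consent.data_already_used)  # False True
```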

Anthropic maintains it filters or obfuscates sensitive content and doesn’t monetize user data. Still, critics worry that the switch from opt-in to opt-out undermines informed consent. The extended five-year retention period is long compared with industry norms, raising further privacy red flags. Meanwhile, keeping the friction to remain opted in minimal effectively ensures more user data flows into Anthropic’s model pipeline, boosting Claude’s capabilities while fueling debate on data ethics.

    This represents a sharper turn into the AI training arms race: more real-world data means smarter models—but at the cost of user control and privacy clarity. Whether Anthropic’s safeguards are enough to justify this trade-off will likely be tested in both public opinion and legal scrutiny in the months ahead.
