    Tech

    Google Pushes Back On Claims It Used Gmail Threads To Train AI


    Google is firmly rejecting a flurry of recent allegations that its email service, Gmail, along with its associated chat and meeting platforms, has been scanned and used to train its AI models. The claims stem from a proposed class-action lawsuit filed earlier this month. The tech giant clarified that its “smart features” (such as autocomplete, predictive text, and calendar/itinerary extraction) have existed for years, and that enabling those features does not amount to granting consent for AI-model training on private email or chat content. At the same time, critics continue to question the transparency of those features, noting that they are often enabled by default and that the user settings and disclosures around personalization were recently rewritten in ways that fueled confusion. Reporting underscores that while Google calls the allegations misleading, the broader debate over how automated “smart” features operate, and what data they actually access, remains very much live.

    Sources: TBS News, Times of India

    Key Takeaways

    – Google’s public statement: Gmail content is not used for training its Gemini AI model, despite viral claims to the contrary.

    – Many of the contested smart features are enabled by default and were recently resurfaced or renamed, prompting user confusion about what they actually entail.

    – Even if AI training isn’t occurring, the fact that user content is leveraged for features such as Smart Compose, Smart Reply, and calendar/itinerary extraction keeps privacy and consent concerns front and center.

    In-Depth

    Google’s recent declaration that it does not use your Gmail messages or attachments to train its AI models marks a significant moment in the ongoing debate over user data, consent and the hidden workings of tech giants. On the surface, the announcement appears straightforward: the company states the core change alleged in a class-action privacy suit — that Gmail, Chat and Meet content was being automatically used to train AI models like Gemini without explicit opt-in consent — is incorrect. According to a Google spokesperson, the company has made “no change” to its settings and maintains that enabling smart features does not equate to handing over the content of your communications for AI training.

    Digging a layer deeper, however, some concerns remain valid. The focus here is on what Google classifies as “smart features,” such as Smart Compose, Smart Reply, predictive itinerary extraction, order tracking, and cross-product personalization in apps like Maps or Wallet. While these services may not train AI models, they clearly process user-generated content for personalization—so the line between “feature” and “training” becomes murkier. Many users are discovering that the checkboxes for these features are turned on by default, rather than requiring active opt-in, and that Google recently rewrote, repositioned, or renamed several of the related settings—actions which triggered the bulk of the controversy and subsequent media coverage.
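    To make that murky line concrete, here is a minimal, hypothetical sketch (the function names and regex are invented for illustration and bear no relation to Google’s actual systems) of why “processing content for a feature” and “retaining content for model training” are different data flows, even though both read the same email:

        import re

        def extract_itinerary(email_body: str):
            """Ephemeral 'smart feature' path: scan one message for a
            flight confirmation, surface the detail, retain nothing."""
            match = re.search(r"[Ff]light\s+([A-Z]{2}\d{2,4})", email_body)
            return match.group(1) if match else None

        def add_to_training_corpus(email_body: str, corpus: list) -> None:
            """Hypothetical 'training' path: the raw content is kept and
            later used to fit a model, a categorically different flow."""
            corpus.append(email_body)

        message = "Your flight UA1234 departs at 9:05 AM."
        print(extract_itinerary(message))  # UA1234; the message is discarded
        corpus = []
        add_to_training_corpus(message, corpus)  # the message is retained

    The privacy concern the critics raise is that, from the user’s side, both paths begin identically: software reading the contents of a private message.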

    From a conservative perspective concerned with privacy, individual choice, and transparency, the heart of the matter isn’t just whether training is happening, but how much control users have over what happens to their data in the first place. Tech firms may argue that data used for “smart” features is distinct from data used for “AI training,” but for the end-user, the practical difference is minimal if both uses involve parsing email content and personal communications. This makes clarity around default settings, meaningful consent and the ability to truly opt out critical.

    Moreover, the optics don’t help: the lawsuit alleges that Google secretly enabled its Gemini AI model to access private communications starting around October 2025, a claim the company denies but one that adds fuel to public suspicion. Security firms have described the situation as a “perfect storm of misunderstanding”: the features at issue aren’t new, but their presentation and default-setting behavior have changed enough that many users concluded consent had been assumed rather than obtained.

    For users, the takeaway is straightforward: if you want to mitigate risk, log into your Google account now and audit the settings for “Smart features in Gmail, Chat & Meet,” “Smart features in Google Workspace,” and “Smart features in other Google products.” Turn off anything you don’t want, knowing that some conveniences, like Smart Compose, may stop working. The broader lesson echoes beyond Google: in the age of ever more capable AI and machine-learning systems, what counts isn’t merely what companies say they won’t do, but what they can do by default, and whether users truly understand and consent to that. For an ecosystem rightly built on trust and choice, defaults should empower users rather than presume their consent.
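    To see why defaults matter so much, here is a generic sketch (not Google’s code; all names are invented) of an opt-out versus an opt-in feature flag. Because most users never change a setting, whichever value ships as the default effectively decides whose content gets processed:

        from dataclasses import dataclass

        @dataclass
        class FeatureFlag:
            name: str
            default_on: bool  # the design choice at the heart of the dispute

        def is_active(flag: FeatureFlag, user_choice=None) -> bool:
            """An explicit user choice always wins; the default governs the
            common case where the user never touched the setting."""
            return flag.default_on if user_choice is None else user_choice

        # Opt-out design: processing happens unless the user finds the toggle.
        opt_out = FeatureFlag("smart_features", default_on=True)
        # Opt-in design: nothing runs until the user affirmatively consents.
        opt_in = FeatureFlag("smart_features", default_on=False)

        print(is_active(opt_out))  # True  -> content processed by default
        print(is_active(opt_in))   # False -> content untouched by default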
