    Tech

    Japanese Media Giants Demand OpenAI Halt Use of Their Content


A coalition of prominent Japanese entertainment companies, led by Studio Ghibli and including Square Enix and Bandai Namco, has formally asked OpenAI, through the trade body Content Overseas Distribution Association (CODA), to cease training its AI models on their copyrighted works without permission. CODA argues that using copyrighted material for AI model training without explicit prior licensing may breach Japanese copyright law, which generally does not allow retroactive opt-outs. The letter follows public concern over AI-generated imagery and video drawing on Japanese IP likenesses and styles, including content generated by OpenAI’s video tool Sora 2. While OpenAI has indicated it will offer more granular content controls, CODA maintains that the “opt-out” approach is legally insufficient and demands full cessation of unauthorized use.

    Sources: GamesRadar, Variety

    Key Takeaways

    – The demand by Japanese rights-holders underscores a growing global tension between generative AI firms and content creators over training data and intellectual-property rights.

    – Japanese copyright law emphasizes prior authorization over retroactive permission or opt-out models, meaning that “train now, ask later” approaches may not satisfy legal standards in Japan.

    – While OpenAI has promised more control and revenue-sharing options for rights holders, the lack of transparency over what data was used and how may increase litigation risk and regulatory scrutiny.

    In-Depth

The move by Studio Ghibli and its peers is a clear signal that the culture war over AI and creativity is entering a more adversarial phase. On one side stands OpenAI, a technology company whose tools, such as GPT-4 and the newer Sora 2, have shown a remarkable ability to mimic artistic styles, characters, even film sequences. On the other side stand creators and rights-holders, especially in Japan, arguing that their work, painstakingly crafted over decades, is being used without consent, fair compensation, or meaningful control.

The heart of the dispute rests on one question: what happens when an AI model is trained on copyrighted content? OpenAI may argue that its tools rely on publicly available data or fall within broad fair-use regimes, but Japanese rights-holders say that under Japan’s copyright regime the default is prior permission, not after-the-fact remediation. By sending a letter via CODA to OpenAI, the companies are not only protesting but signalling intent to enforce legal standards. The list of complainants is heavy-hitting: Studio Ghibli, Square Enix, Bandai Namco, Aniplex, Toei, and Universal Music, among others. They contend that OpenAI’s “opt-out” policy (whereby creators must identify their work and request non-use) is inadequate because Japanese law typically does not recognise liability-avoidance through after-the-fact objection.

From the tech-industry vantage, OpenAI’s willingness to adopt broader content controls and explore revenue-sharing models offers some concession. CEO Sam Altman has publicly admitted that “people have got to get paid” when their work powers AI systems. Yet the question remains: did OpenAI already train on copyrighted content without agreement, and if so, can it unwind that usage now? An investigation by The Washington Post, for example, found that Sora 2 replicated identifiable logos, characters, and visual trademarks, suggesting the use of scraped, copyrighted content in ways the company has not fully disclosed.

For users, especially creative professionals, this development carries deep implications. If Japanese rights-holders succeed in holding AI firms to stricter standards, it could force the entire industry to adopt more transparent datasets, explicit licensing regimes, or layered compensation mechanisms for artists and studios. On the flip side, the weight of entrenched AI-training workflows may push companies to relocate data-gathering efforts to jurisdictions with looser IP enforcement, or to adopt “style-only” training rather than exact replication.

For the US and other Western jurisdictions, the Japanese stand serves as a cautionary tale. If rights-holders around the world begin mounting unified resistance, the cost of operating large-scale, unfettered training regimes may increase dramatically. The battle ahead will likely involve lawsuits, regulatory investigations, and perhaps new international norms governing how creative works can be used to train generative AI.

    From a conservative, creator-rights perspective, the demands from Japan appear quite reasonable: if a machine is being trained on your work to produce outputs that closely mimic your style or characters, you deserve to know about it, you should have choice in whether it’s used, and you should share in any upside. Technology, after all, must respect property rights—not override them. If we allow AI firms to absorb the cultural capital of creators without consent or compensation, we risk hollowing out the incentive structure that sustains innovation and art.

    In short, the open letter from Japanese media heavyweights to OpenAI isn’t merely symbolic—it may become a watershed moment in how generative AI is regulated, how creators are treated, and how the balance between innovation and intellectual property is struck. Without meaningful resolution, many artists may feel that their work has been commandeered in service of someone else’s profit machine. And if creators lose faith in the system, the downstream effects on culture, originality, and creative investment could be profound.

© 2026 Tallwire.