    AI News

    Altman U-Turns on Sora Copyright Policy — Moves from Opt-Out to Granular Opt-In Controls

    Updated: December 25, 2025 | 4 Mins Read

    OpenAI CEO Sam Altman has announced a major pivot in how the company’s AI video app Sora will handle copyrighted content: rather than allowing studios and creators to opt out of having their IP appear by default, OpenAI will now require opt-in permission and offer granular controls over how characters and works are used. In a blog update, Altman said rights holders will be able to specify usage limits or block inclusion altogether, and OpenAI intends to explore revenue sharing with those who opt in. This reversal comes after widespread backlash over unauthorized use of iconic characters and concerns that Sora’s earlier policy placed too much burden on creators. Meanwhile, OpenAI is also rolling out more user safeguards around how users’ own likenesses appear in generated videos, letting people restrict contexts, language, or placement of their digital “cameos.”

    Sources: The Guardian, The Verge

    Key Takeaways

    – OpenAI’s shift to opt-in copyright control signals a retreat from a controversial default-permission model and aims to place control back with content owners.

    – Granular settings will let rights holders dictate how, where, or whether their IP is used in Sora videos, with a pathway toward revenue sharing for those who participate.

    – Alongside IP changes, OpenAI is tightening user-facing safeguards — letting people limit where and how their AI likeness appears — as Sora faces scrutiny over misuse and deepfake risks.

    In-Depth

    OpenAI’s video generation tool, Sora, is stirring up the AI and entertainment worlds with a freshly announced policy reversal on handling copyrighted material. What started as a bold bet that Sora could rely on an opt-out system — where copyrighted characters would be included by default unless rightsholders demanded exclusion — has now pivoted to a more cautious, rights-sensitive approach. Sam Altman recently stated that OpenAI will deploy granular, opt-in controls that let rights holders control usage, including the option to block inclusion entirely. This move follows a wave of backlash from creators, studios, and legal observers alike.

    Under the original plan, IP owners had to proactively submit opt-out requests to keep their characters or content out of Sora-generated videos. Many saw this as unfair: it put the burden squarely on creators to monitor and police use of their own work. That approach also risked flooding Sora with unauthorized renditions of beloved characters, raising copyright infringement fears and legal exposure. High-profile misuse cases — like Sora-generated videos featuring SpongeBob in edgy, unofficial scenarios — added fuel to the fire and intensified pressure. Some of these derivative clips even veered dangerously close to depicting illicit actions or using characters in contexts far removed from their original works.

    Facing this backlash, Altman’s announcement indicates a tactical retreat. With the new system, rights holders must opt in for their work to be used, and OpenAI will introduce fine-grained rules over how characters are used — whether limiting contexts, forbidding certain behaviors, or blocking them entirely. OpenAI also plans to experiment with revenue-sharing models for creators who grant permission, to make participation more appealing and equitable.
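
    To make that concrete, the sketch below shows, in Python, what a granular opt-in record could look like in principle. It is purely illustrative: the field names, contexts, and checks are assumptions made for explanation, not OpenAI's actual schema, interface, or API.

        from dataclasses import dataclass, field

        # Hypothetical record of an opt-in grant from a rights holder. The fields
        # mirror the controls described above (permitted contexts, forbidden
        # behaviors, full blocks, revenue sharing); none of the names come from OpenAI.
        @dataclass
        class RightsHolderPolicy:
            character: str
            opted_in: bool = False                       # default: no permission at all
            allowed_contexts: set = field(default_factory=set)
            forbidden_behaviors: set = field(default_factory=set)
            revenue_share: bool = False

        def generation_allowed(policy: RightsHolderPolicy, context: str, behavior: str) -> bool:
            """Allow a request only if the rights holder opted in and the request
            stays within the contexts and behaviors they permitted."""
            if not policy.opted_in:
                return False
            if policy.allowed_contexts and context not in policy.allowed_contexts:
                return False
            if behavior in policy.forbidden_behaviors:
                return False
            return True

        # Example: a character opted in for comedic fan content only, with revenue sharing.
        policy = RightsHolderPolicy(
            character="ExampleCharacter",
            opted_in=True,
            allowed_contexts={"comedy", "fan_art"},
            forbidden_behaviors={"violence", "political_endorsement"},
            revenue_share=True,
        )
        print(generation_allowed(policy, "comedy", "dancing"))        # True
        print(generation_allowed(policy, "political_ad", "speech"))   # False

    The point of the example is simply that opt-in flips the default: absent an explicit grant, the check fails.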

    In parallel, OpenAI is enhancing the controls users have over their own digital likenesses. Sora’s “cameo” feature lets users upload biometric data, enabling AI-generated video versions of themselves. Now, users can restrict where their likeness appears (for example, disallowing political content), block certain words or contexts, or impose custom filtering. These updates are intended to curb misuse and restore a measure of trust, especially given growing concerns about deepfake proliferation.
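
    In the same illustrative spirit, the short snippet below sketches the kind of per-user cameo restriction described above (blocked topics and phrases). It uses naive substring matching for brevity and is an assumption for illustration, not Sora's actual settings model.

        # Hypothetical per-user cameo restrictions: reject generation requests whose
        # prompts touch topics or phrases the cameo owner has disallowed.
        def cameo_request_allowed(prompt: str, blocked_topics: set, blocked_phrases: set) -> bool:
            lowered = prompt.lower()
            if any(topic in lowered for topic in blocked_topics):
                return False
            if any(phrase in lowered for phrase in blocked_phrases):
                return False
            return True

        # A user who disallows political content and a specific slogan:
        blocked_topics = {"political", "election"}
        blocked_phrases = {"vote for"}
        print(cameo_request_allowed("me giving a speech at a political rally",
                                    blocked_topics, blocked_phrases))   # False
        print(cameo_request_allowed("me juggling oranges on a beach",
                                    blocked_topics, blocked_phrases))   # True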

    But the shift is not without challenges. OpenAI admits there will still be “edge cases” — content slips that bypass restrictions — and that the revenue model will require iteration. The legal landscape remains uncertain: even with opt-in controls, generating new content with recognizable characters may still spark disputes over derivative works or fair use boundaries. Studios and IP holders will need to engage proactively — reviewing permissions, drafting clauses, and monitoring usage. Meanwhile, OpenAI’s balancing act is delicate: it must maintain the creative promise of Sora, while ensuring it doesn’t trample creators’ rights.

    In short, this about-face in policy reflects both technological ambition and caution. OpenAI is acknowledging that, for Sora to succeed sustainably, it must offer stronger protections, transparent consent, and fair incentives for rights holders. Whether this recalibration will placate critics or shift legal risk remains to be seen.
