A new investigation reveals that the AI video app Sora 2, released by OpenAI, is already being used to create fetish-style videos featuring individuals’ faces without full informed consent: one tech journalist found that ten of her top twenty-five cameos were fetish-oriented, including belly-inflation and giantess content. The app lets users opt in to “cameos” of their own face, but some users leave those settings open to strangers, opening the door to unsettling uses. OpenAI has acknowledged that misuse is occurring, and although the app prohibits nudity and overtly sexual content, niche fetish videos appear to be proliferating. Sora 2 also allows users to generate videos featuring well-known figures (including deceased ones) and is now under fire from celebrity estates; the estate of Martin Luther King Jr., for instance, requested a pause on the use of his likeness. Critics say the platform’s safeguards around consent, likeness rights, and deepfake-style misuse remain inadequate. The technology raises new questions about privacy, digital identity, and the future of AI-driven content creation in the social-media era.
Sources: Business Insider, The Guardian
Key Takeaways
– The Sora 2 app lets users upload a “cameo” of their own face and voice to appear in generated videos, but when the cameo settings are open to the public, that exposure can lead to non-consensual use in fetish or deepfake-style content.
– Despite OpenAI’s stated ban on nudity and sexual content, niche fetish outputs (belly-inflation, giantess, feet, etc.) are reportedly widespread—suggesting the guardrails may be too weak or too narrowly defined.
– Celebrity and estate push-back (such as the demands around Martin Luther King Jr.’s likeness) highlights that the implications extend beyond private individuals to public figures and likeness/copyright law, raising serious privacy, ethical, and regulatory issues for generative video platforms.
In-Depth
The launch of Sora 2 by OpenAI marks a leap in AI-driven video generation, but with that leap comes a substantial set of risks—and right-leaning watchers might question whether the promise of innovation is outpacing the protections for individuals, especially when it comes to explicit or quasi-explicit content and the use of human likenesses. According to the investigation published by Business Insider, a reporter allowed her face to be used in the “cameo” setting of Sora 2, thinking it would be a fun way to explore the app. Instead she discovered that many of the most popular videos using her face involved fetish content—belly-inflation sequences, giantess tropes, foot fetish clips—all created by strangers using her likeness without meaningful consent. The fact that ten out of her top 25 most-viewed “cameos” were fetish videos suggests a pattern—not random isolated misuse but an ecosystem inclined toward this kind of content. The app’s design—allowing anyone (if you permit it) to create a clip with your likeness—means the door is wide open.
From a conservative vantage, this raises immediate concerns. The first is the erosion of individual control: consent should be clear, informed, and revocable, yet opting into a cameo and then having your likeness reused in graphic or fetish contexts falls short of that standard. In a world where digital identity is increasingly precious, handing over “face rights” loosely is akin to relinquishing personal control. The second is that the volume of niche fetish content suggests the platform’s content definitions are misaligned with common-sense expectations. Nudity and overt sexual acts may be banned, but fetish content sits in a gray zone, and in practice that gray zone is becoming saturated. When a platform allows others to place a user’s face in fetish contexts, it opens the door to reputational harm, emotional distress, and even legal risk.
Moreover, the issue extends to public figures and deceased individuals. The Guardian reports growing concern from actors such as Bryan Cranston, who have found their likenesses used in Sora 2 videos they did not authorize. In response, OpenAI has promised stronger safeguards and has publicly supported legislation like the NO FAKES Act, which aims to ban unauthorized AI-generated likenesses. Even more striking, the estate of Martin Luther King Jr. requested a pause on the use of his likeness after “disrespectful” deepfakes appeared; OpenAI complied, but only after public backlash. That pattern reveals a reactive rather than proactive posture on safeguards.
On the business side, OpenAI’s decision to launch Sora 2 as a social-video app (rather than simply a research tool or API) is strategic: vertical-scroll feeds, celebrity-style viral content, and user-generated “viral moment” potential tap directly into the growth mechanics that power the largest social platforms. That is innovation, but from a policy lens it means the platform is exposed to mass-scale misuse almost immediately. The combination of advanced video generation, voice and face encoding, and social-feed mechanics creates a potent mix of creative empowerment and threat to personal and public rights.
A right-leaning viewpoint would question whether regulation is simply reacting too slowly. Rapid innovation in Big Tech often outpaces rights protections, and platforms like Sora 2 show that even “responsible AI” firms risk becoming conduits for misuse if guardrails are inadequate. The defaults should lean toward individual control and explicit consent, especially when one’s face or voice can appear in content they never envisioned. Platforms should anticipate worst-case uses, from fetish imagery to impersonation and deepfake fraud, and ship likeness settings that start closed and require explicit opt-in rather than being open by default. Content policies covering niche fetish contexts should also be clearer; if misuse is happening because the rules are fuzzy or enforcement is weak, liability should tilt toward stricter standards.
In practical terms for individuals, the Sora 2 scenario is a cautionary tale: be very careful about enabling your likeness on any AI platform, especially if you open it to “anyone.” The promise of creative fun can mask serious consequences. For developers and platforms, the takeaway is that video generation is a different domain from still-image generation: motion, voice, and likeness combined raise far deeper personal-rights issues. And for regulators and policymakers, this case underscores that AI content platforms should face the same kind of pre-emptive scrutiny that broadcast, film, and print media once did, not merely after-the-fact review.
In sum, while Sora 2 may represent a breakthrough in creative technology, the way it is currently being used, especially for fetish content and deepfakes of real people’s faces, raises serious ethical and regulatory concerns. If a user’s face can be swept into a video by an app they barely understand, and emerge in a context they would never choose, then the societal cost of innovation can be high. As generative video becomes mainstream, ensuring individual consent, controlling public-figure impersonation, and clarifying the gray zone of fetish content should be treated not as optional but as foundational.

