Sora 2’s Realistic AI Videos Raise Fears Of Deepfake Abuse And Disinformation
Within just three days of its invite-only launch, OpenAI’s new app Sora 2 has become the centre of a storm.
The text-to-video tool, capable of generating lifelike footage from a single prompt, has already been used to create fabricated scenes of ballot fraud, violent crimes, protests, and immigration raids — none of which actually occurred.
The platform also lets users upload their own images and voices, enabling hyperrealistic digital replicas of themselves in fictional scenarios.
While OpenAI says its system includes safety checks, The New York Times found the app could still generate convincing deepfakes of children, deceased celebrities and non-political public figures, raising major ethical and security concerns.
Experts Warn Of Deepfake Disinformation And The ‘Liar’s Dividend’
AI researchers have sounded the alarm over how quickly Sora’s ultra-realistic videos could blur the line between truth and fabrication.
Hany Farid, professor of computer science at the University of California, Berkeley, said,
“It’s worrisome for consumers who every day are being exposed to God knows how many of these pieces of content. I worry about it for our democracy. I worry for our economy. I worry about it for our institutions.”
Lucas Hansen from CivAI called it “the death of digital proof,” saying that convincing AI footage now allows anyone to dismiss real events as fake — a psychological effect experts call the “liar’s dividend.”
Although Sora videos include a moving watermark, analysts noted it can be removed with editing software, making false footage nearly indistinguishable from reality.
OpenAI Defends Its Safety Measures Amid Growing Backlash
OpenAI maintains that Sora underwent “extensive safety testing” before launch.
The company’s statement stressed:
“Our usage policies prohibit misleading others through impersonation, scams or fraud, and we take action when we detect misuse.”
In internal tests, the app refused to produce violent or explicit political content, and rejected requests involving global leaders like US President Donald Trump.
However, it still produced a rally video featuring a voice resembling that of former President Barack Obama when prompted with vague political terms.
A spokesperson said the app’s “thoughtful and iterative approach” aims to reduce risks, but admitted the rollout would evolve with feedback.
Users Exploit Loopholes With Deepfakes Of Celebrities And Executives
Despite the restrictions, Sora 2’s feed was soon filled with deepfakes of deceased public figures including Michael Jackson, Tupac Shakur and painter Bob Ross.
One viral clip created by an OpenAI developer depicted CEO Sam Altman shoplifting from Target — a post that reignited debates over the credibility of digital media.
Users also began generating satirical clips of TV characters, including full AI-made episodes of South Park and SpongeBob SquarePants scenes that parodied illegal activities.
OpenAI initially allowed such content under a loose “fair use” policy, but later tightened the platform’s filters dramatically.
Users complained that the new restrictions made Sora “literally unusable for anything even remotely creative.”
Copyright Confusion Deepens As Rights Holders Push Back
Reports from The Wall Street Journal revealed that OpenAI initially asked studios and agencies to “opt out” if they did not want their intellectual property appearing in Sora-generated videos — a policy that was later reversed after widespread criticism.
Now, the company says rights holders must “opt in” for their characters to appear and will gain “more granular control” over how they are used.
Varun Shetty, OpenAI’s head of media partnerships, said,
“We’ll work with rights holders to block characters from Sora at their request and respond to takedown requests.”
Sam Altman, OpenAI’s CEO, wrote in a blog post that rights holders could also earn a share of revenue from videos featuring their creations.
“People are generating much more than we expected per user. We are going to try sharing some of this revenue with rightsholders who want their characters generated by users.”
He admitted that “some edge cases of generations” may still slip through, but promised to refine the process.
“The exact model will take some trial and error to figure out, but we plan to start very soon.”
Can Sora Survive Its Own Hype?
Coinlive believes Sora’s rapid rise exposes a fundamental problem in OpenAI’s strategy — a rush to dominate the AI video space before securing the social and legal safeguards needed to sustain it.
What began as an ambitious leap in creativity has quickly turned into a legal and ethical minefield.
OpenAI now finds itself balancing innovation against accountability, while the world questions whether any technology capable of rewriting visual truth can be responsibly managed.
In the race to own the future of AI media, Sora may be proving that the most dangerous deepfakes are not the ones it creates — but the illusions its creators believed they could control.