OpenAI Stops Sora from Generating MLK Deepfake Videos, Responds to Family Complaints of Disrespectful Depictions

October 18, 2025
Sora
5 min

Abstract

OpenAI announced on October 17, 2025 (ET) that it would suspend the creation of video content depicting Martin Luther King Jr. with its AI video generation tool, Sora. The decision came in response to complaints from the Estate of Martin Luther King, Jr., Inc. about the widespread dissemination of "disrespectful deepfake videos" on social media. In the three weeks since the Sora app's release, users had used the tool to produce a large volume of hyper-realistic fake videos showing Dr. King engaging in vulgar, offensive, or racially discriminatory acts, including content that reinforced racial stereotypes, such as stealing and evading police pursuit.

Background

OpenAI officially launched the Sora application to the public in late September 2025. This AI tool generates short videos from text prompts. The application features a "Cameo" function, allowing users to upload multi-angle videos and voice recordings to create deepfake videos of themselves or others. However, in its initial release, the system did not implement sufficient restrictions on the use of historical figures' and celebrities' likenesses, leading users to generate unauthorized fake video content of historical figures including Princess Diana, John F. Kennedy, Kurt Cobain, and Malcolm X.

Controversial Content

Since Sora's launch, a large number of deepfake videos involving Dr. Martin Luther King Jr. quickly appeared on social media. These videos depicted the civil rights leader uttering vulgar remarks, engaging in criminal behavior, or reinforcing racial stereotypes. Specifically, this included:

  • Fictionalized scenes of Dr. King shoplifting in a grocery store.
  • Fake videos of him evading police pursuit.
  • Clips of him making offensive or racist remarks.
  • Other absurd or disparaging scenarios that distorted his historical legacy.

OpenAI's Response

On the evening of October 17 (ET), OpenAI and the Estate of Martin Luther King, Jr., Inc. issued a joint statement announcing the suspension of all AI video generation features depicting Dr. King.

In the statement, OpenAI said: "While there are strong free speech interests in depicting historical figures, OpenAI believes that public figures and their families should ultimately have control over how their likeness is used."

The company pledged to "strengthen safeguards for historical figures" and allow authorized representatives or estate owners to request an opt-out from appearing in Sora videos.

Family's Appeal

Dr. King's daughter, Bernice King, had previously posted on the social media platform X, writing simply, "Please stop," expressing the family's dissatisfaction with these inappropriate uses of her father's image.

Similar situations have occurred with the families of other deceased celebrities. Zelda Williams, daughter of the late comedian Robin Williams, posted on Instagram: "Please stop sending me AI videos of my dad... This is not what he wanted."

Legal and Ethical Considerations

Kristelia García, a professor of intellectual property law at Georgetown University Law Center, noted that OpenAI only acted after complaints from the estate, which aligns with the company's consistent "act first, ask forgiveness later" approach.

García stated: "The AI industry seems to be moving very quickly, and seizing market opportunities is clearly the currency of the day (certainly prioritized over a thoughtful, ethics-focused approach)."

She pointed out that varying state laws on the right of publicity and defamation may not always apply to deepfake content, meaning "there's very little legal risk for companies to continue operating unless someone complains."

In states offering strong protections, such as California, a public figure's heirs or estate hold the right of publicity for 70 years after the person's death.

Policy Adjustments

Within days of the Sora application's release, OpenAI CEO Sam Altman announced modifications to the application, changing the use of rights holders' likenesses from allowed by default to an opt-in model.

However, this policy shift did not entirely quell the controversy. Hollywood studios and talent agencies also expressed concerns about OpenAI launching the Sora application without obtaining consent from copyright holders.

Broader Implications

This incident reflects broader challenges facing the field of AI-generated content:

  1. Copyright and Likeness Rights Protection: OpenAI adopted a similar approach during ChatGPT's development, reaching licensing agreements with some publishers only after extensively using copyrighted content, a practice that has led to multiple copyright lawsuits.
  2. Misinformation Risk: Critics point out that deepfake technology is blurring the lines between real and fake, exacerbating the problem of "AI garbage content" and threatening the information ecosystem.
  3. Ethical Review Mechanisms: Legal experts and researchers are calling for AI developers to establish more comprehensive ethical review mechanisms before product launches, rather than adopting a reactive problem-solving approach.
  4. Establishment of Industry Standards: This incident may prompt the entire AI industry to establish unified ethical standards regarding the use of deepfake technology.

Sora Application Status

Sora currently remains in an invite-only phase, accessible only to a select group of users. According to OpenAI, the application garnered over 1 million downloads in less than five days after its launch, demonstrating strong market interest in AI video generation tools.

However, the application's "shoot first, aim later" strategy regarding safety and protective measures has raised alarms among intellectual property lawyers, public figures, and misinformation researchers.

Future Outlook

OpenAI stated it will continue to expand Sora's functionalities while strengthening content control mechanisms. The company plans to incorporate more advanced harmful content detection systems and may collaborate with other estate management organizations to establish more robust likeness rights protection mechanisms.

This incident highlights the need to find a balance between technological innovation, freedom of speech, and the protection of individual dignity in the rapidly evolving era of AI. As deepfake technology continues to advance, ongoing dialogue is required among tech developers, ethicists, and cultural heritage guardians to ensure that innovation does not come at the expense of dignity.
