Can You Libel the Dead? Why Deepfaking Them Is Unethical

Zelda Williams’ emotional plea on Instagram – begging fans to stop sending her AI-generated videos of her late father, Robin Williams – strikes at the heart of a growing ethical crisis. ‘It’s dumb, it’s a waste of time and energy, and believe me, it’s NOT what he’d want,’ she wrote. Her anguish arrives alongside the launch of Sora 2, OpenAI’s advanced video generation model, which can create highly realistic deepfakes – synthetic media that makes it appear as though someone said or did something they never did. While Sora restricts deepfakes of living people without consent, the deceased face no such protections. Legally, this exists in a gray zone – ‘you can’t libel the dead’ [1] – leaving figures like Robin Williams, John Lennon, and Richard Nixon vulnerable to digital puppetry. As the Sora 2 model [2] floods feeds with uncanny recreations, we must ask: just because we can, should we?

The Emotional Toll on Families

The emotional toll on families when AI recreates the likenesses of their deceased loved ones without consent is profound and often overlooked. Zelda Williams, daughter of the late Robin Williams, has become a poignant voice in this ethical debate, pleading with the public to stop sending her AI-generated videos of her father. ‘Please, just stop sending me AI videos of Dad. Stop believing I wanna see it or that I’ll understand. I don’t and I won’t,’ she wrote on Instagram, capturing the raw grief and frustration felt by those whose loved ones are digitally resurrected without permission. OpenAI’s Sora 2, which allows users to generate deepfake videos of deceased individuals – including Robin Williams – exacerbates this pain by treating the dead as fair game, since defamation laws do not protect them. This lack of legal and ethical guardrails means historical figures like Martin Luther King Jr., John Lennon, and Alex Trebek are also being subjected to AI-generated content that distorts or trivializes their legacies. For families, this is not innovation – it’s violation. Zelda Williams described the experience as ‘maddening’: watching her father’s legacy reduced to content that ‘vaguely looks and sounds like’ him, churned out for TikTok trends. The psychological impact is immense: a sense of powerlessness, compounded by the knowledge that their loved one’s image is being manipulated in ways they would never have condoned. As AI capabilities grow, the absence of consent mechanisms for the deceased sets a dangerous precedent, turning real human legacies into digital playthings. This issue was foreshadowed in our earlier analysis, ‘The Ethics of Synthetic Media: Who Owns a Digital Likeness?’ [1], which warned that without regulation, grief itself becomes a commodity for algorithmic exploitation.

The unsettling reality is that legal frameworks do not currently protect against defamation or misuse of likenesses of deceased individuals. This gap creates a moral and ethical vacuum where technology, particularly AI, can operate without meaningful constraint. OpenAI restricts generating videos of living people without consent but lacks similar protections for the deceased, leaving figures like Robin Williams vulnerable to digital puppetry. While Sora 2 blocks attempts to generate Jimmy Carter or Michael Jackson, it freely permits deepfakes of Robin Williams – a glaring inconsistency that reveals the arbitrariness of corporate guardrails. Why one deceased icon is shielded while another is not remains unexplained, and OpenAI has offered no public rationale for these selective restrictions. This inconsistency isn’t merely technical – it’s deeply ethical. When a company allows the digital resurrection of a beloved comedian for viral TikTok clips, it reduces a human legacy to algorithmic entertainment. Zelda Williams’ plea – “It’s NOT what he’d want” – underscores the human cost of this legal void. The absence of posthumous rights means corporations face no legal liability, but they are not immune to reputational risk. Public backlash is mounting, and as deepfake realism improves, so too will societal demand for ethical boundaries. The question is no longer whether we can recreate the dead, but whether we should. Treating public figures – or any human being – as digital playthings normalizes a culture of posthumous exploitation. Without legal reform or ethical self-regulation, we risk setting a precedent where grief, memory, and dignity are algorithmically negotiable. As explored in our previous analysis, ‘The AI Race: Investing in Environments for Training AI Agents’ [2], the unchecked expansion of AI capabilities demands proportional ethical infrastructure – especially when the subjects can no longer speak for themselves.

Counterarguments and Alternative Perspectives

While the ethical outcry over deepfakes of deceased celebrities like Robin Williams is understandable, a growing chorus of technologists and cultural archivists argue that the technology, when governed by consent and context, could serve as a powerful tool for preserving legacies. Deepfake technology could be used responsibly to preserve cultural or historical legacies with proper consent mechanisms, allowing future generations to interact with historical figures in educational or artistic contexts – imagine a classroom where students debate civil rights with a simulated Martin Luther King, Jr., whose estate approved the use. This perspective challenges the notion that all posthumous digital recreations are inherently exploitative. Furthermore, existing copyright and personality rights frameworks may already offer some recourse for celebrity estates, as noted by legal scholars who argue that well-established copyright law safeguards the rights of creators and applies here [1]. Yet, a thornier question remains: whose wishes should prevail – the deceased’s, expressed or implied, or those of their surviving family? Families of the deceased may not always represent the wishes of their loved ones, complicating claims about misuse. Zelda Williams’ plea, while deeply personal, may not reflect what Robin Williams himself might have endorsed had he lived in an era where such technology was commonplace. Critics of heavy-handed regulation also warn that stifling innovation in generative media could prevent beneficial applications, from historical reenactments to therapeutic uses for grieving families. As explored in our earlier piece ‘The AI Race: Investing in Environments for Training AI Agents’ [2], the trajectory of AI development is accelerating beyond the reach of reactive legislation. The challenge, then, is not to ban deepfakes outright, but to build ethical guardrails that honor intent, context, and consent – before the technology renders those distinctions obsolete.

At NeuroTechnus, we view this research as a landmark moment demanding urgent ethical recalibration in AI development. Angela Pernau, our lead ethicist, insists that the unchecked proliferation of deepfake technologies – especially those exploiting the likenesses of the deceased – poses a profound moral crisis. ‘Consent doesn’t expire with death,’ she argues. ‘To treat human legacy as public domain is to erode the very foundation of dignity in the digital age.’ We advocate for mandatory opt-in policies governing posthumous representation, ensuring that no one’s image or voice is resurrected without explicit prior authorization. This isn’t merely about legal loopholes; it’s about aligning innovation with enduring societal values like respect, autonomy, and compassion. As platforms like Sora push the boundaries of synthetic media, the tech industry must pivot from reactive damage control to proactive ethical architecture. Embedding consent at the core of AI systems isn’t a constraint – it’s the only sustainable path forward. For deeper insights into how industry leaders are navigating these dilemmas, explore our curated guide on AI ethics in the article ‘Top Robotics and AI Blogs to Follow in 2025’ [1]. The future of AI must be built not just with intelligence, but with integrity.
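To make the opt-in principle concrete, here is a minimal sketch of what a posthumous consent registry could look like in code. All names and structures here are illustrative assumptions, not any real platform’s API: the key design choice is that the absence of a grant means denial, not permission – the inverse of today’s denylist-style guardrails.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentGrant:
    # Hypothetical record of an estate's explicit authorization.
    estate_contact: str
    permitted_uses: set  # e.g. {"education", "documentary"}

@dataclass
class ConsentRegistry:
    # Maps a person's name to the grant their estate registered, if any.
    grants: dict = field(default_factory=dict)

    def register(self, person: str, grant: ConsentGrant) -> None:
        self.grants[person] = grant

    def may_generate(self, person: str, use: str) -> bool:
        # Opt-in semantics: no grant on file means the answer is no.
        grant = self.grants.get(person)
        return grant is not None and use in grant.permitted_uses

registry = ConsentRegistry()
registry.register(
    "Historical Figure (estate-approved)",
    ConsentGrant(estate_contact="estate@example.org",
                 permitted_uses={"education"}),
)

print(registry.may_generate("Historical Figure (estate-approved)", "education"))  # True
print(registry.may_generate("Historical Figure (estate-approved)", "parody"))     # False: use not granted
print(registry.may_generate("Robin Williams", "entertainment"))                   # False: no opt-in on file
```

Under this default-deny design, the inconsistency seen in Sora 2 – where some deceased figures are blocked and others are not – cannot arise, because every likeness is protected until an estate affirmatively says otherwise.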

Conclusion: Navigating the Future of AI-Generated Content

The debate over AI-generated deepfakes – especially those depicting the deceased – reveals a profound ethical crossroads. On one hand, the technology enables unprecedented creative expression and historical reenactment; on the other, it risks eroding dignity, consent, and trust. As Zelda Williams’ plea illustrates, the emotional toll on families is real and immediate. Three potential futures emerge: In the positive scenario, society adopts clear ethical guidelines and regulations around AI-generated content, ensuring respectful and consensual use while fostering technological progress. The neutral path sees public debate continue without significant regulatory changes, leaving both creators and critics navigating a gray area with mixed outcomes. The negative outcome? Misuse of deepfake technology leads to widespread distrust, lawsuits, and stringent regulations that hinder future AI development. Without proactive governance, we risk normalizing the exploitation of personal legacies for entertainment or profit. The solution lies not in stifling innovation, but in anchoring it to human values. As we hurtle toward more powerful models like Sora, the imperative is clear: establish guardrails now – grounded in consent, transparency, and respect – or face a future where no image, living or dead, can be trusted.

Frequently Asked Questions

Why is Zelda Williams asking people to stop sending her AI-generated videos of her father?

Zelda Williams is pleading with the public to stop because these AI-generated videos of her late father, Robin Williams, cause her profound emotional pain. She emphasizes that such content is not what her father would have wanted and reduces his legacy to algorithmic entertainment.

What legal protections exist for the likenesses of deceased individuals against AI deepfakes?

Legally, there are virtually no protections — you cannot libel the dead, leaving deceased figures vulnerable to digital manipulation. While OpenAI restricts deepfakes of living people, it offers no such safeguards for the deceased, creating an ethical and legal gray zone.

How does OpenAI’s Sora 2 model handle deepfakes of deceased celebrities like Robin Williams?

Sora 2 allows users to generate deepfakes of deceased individuals like Robin Williams, while inconsistently blocking others like Jimmy Carter or Michael Jackson. This arbitrary approach reveals a lack of clear ethical or legal guidelines for posthumous digital representation.

What ethical solution does NeuroTechnus propose for AI-generated likenesses of the deceased?

NeuroTechnus, through lead ethicist Angela Pernau, advocates for mandatory opt-in policies that require explicit prior consent for posthumous digital representation. They argue that consent doesn’t expire with death and that human dignity must anchor AI innovation.

Could deepfake technology ever be used ethically to preserve the legacies of deceased individuals?

Yes, some argue that with proper consent and context — such as educational simulations approved by estates — deepfakes could preserve cultural or historical legacies. However, this requires clear ethical guardrails to honor the deceased’s intent and avoid exploitation.
