
Media ethics are under fire as Jim Acosta’s “interview” with an AI-generated avatar of a Parkland shooting victim ignites outrage over exploiting tragedy for political gain.
Story Snapshot
- Jim Acosta published an interview with an AI replica of Parkland victim Joaquin Oliver to promote gun control activism.
- The AI avatar was created by Oliver’s parents in 2024 for legislative advocacy, raising major ethical concerns.
- The unprecedented use of a digital likeness of a deceased minor in political media has sparked intense debate over consent, manipulation, and media responsibility.
- This event may set a precedent for future political campaigns using AI-generated representations of the dead, with unknown consequences for truth and public trust.
AI Avatars Enter the Political Arena: An Unsettling Precedent
On August 4, 2025, former CNN host Jim Acosta released an interview on his Substack that featured not a living subject, but an AI-generated avatar of Joaquin Oliver, one of the students killed in the 2018 Parkland school shooting. This digital likeness, created by Oliver’s parents, has been used since 2024 as a tool for gun control advocacy, delivering messages to lawmakers and the public. The interview, published on what would have been Oliver’s 25th birthday, blends activism, journalism, and artificial intelligence in a way never before seen, pushing the boundaries of ethical media practice and public discourse.
The use of an AI-generated version of a deceased child for political advocacy is unprecedented and widely debated. While digital recreations have appeared in entertainment and limited memorial projects, their deployment for legislative lobbying and media interviews marks a significant escalation. The Olivers’ stated intention is to keep their son’s memory alive and to intensify pressure on Congress for stricter gun control legislation. However, this approach raises profound questions about consent, authenticity, and the emotional impact on survivors, families, and the public at large. Media outlets and gun control advocacy groups have amplified the avatar’s messages, while critics decry what they see as emotional manipulation and a violation of journalistic ethics.
Media Ethics and Activism: Blurring the Line Between Memory and Manipulation
Jim Acosta’s choice to “interview” an AI representation of a deceased minor to advance a particular policy agenda is fueling a new debate over media responsibility and the ethical use of technology. The interview gave the AI avatar a platform to share personal anecdotes and advocate for legislative changes, a move some hail as innovative activism and others view as manipulative and exploitative. The power dynamic is clear: the Olivers and their supporting advocacy groups control the avatar’s narrative, while Acosta amplifies it to a much wider audience. This not only transforms how advocacy can be conducted but also sets a potentially dangerous precedent for using artificial intelligence to simulate the voices of the dead in public debate.
Industry analysts at the AI Now Institute have raised concerns about what they term “digital resurrection”—the use of AI-generated likenesses of deceased individuals. Scholars specializing in AI ethics and media studies at institutions such as MIT Media Lab emphasize risks related to consent, emotional manipulation, and the erosion of public trust in journalism and advocacy. Creating lifelike avatars of those who have passed may enable the deployment of deepfakes in political or social campaigns. In this instance, using an AI avatar of a murdered child to advocate for gun control has prompted serious questions about where lines should be drawn in combining activism with journalistic presentation.
Public Reaction and Long-Term Implications: Erosion of Trust and Unintended Consequences
The immediate aftermath of Acosta’s interview drew widespread attention and sparked public debate. David Hogg, co-founder of March For Our Lives and a survivor of the Parkland shooting, expressed support for the AI initiative, stating that unconventional tools like AI are necessary to bring renewed focus to gun violence. Critics, including media ethicists and advocacy organizations, cautioned that using the image and voice of a deceased teenager for advocacy risks exploitation and may undermine the dignity of victims. Some conservative commentators, as reported by The Guardian, described the segment as an example of how media activism paired with AI could blur important ethical boundaries. The episode has prompted a broader discussion about media responsibility, consent, and the appropriate use of emerging technologies in public discourse.
Incredibly Disturbing: Disgusting Hack Jim Acosta 'Interviews' AI Version of School Shooting Victim https://t.co/8iLko1dQjA
— RedState (@RedState) August 4, 2025
There are also fears that this precedent could normalize the use of AI avatars in political campaigns and news media—making it increasingly difficult for the public to distinguish between authentic voices and manufactured narratives. This could further erode public trust in journalism and accelerate calls for stricter ethical guidelines and regulatory oversight of AI in advocacy. The debate is not just about gun control—it is about the integrity of the media, the boundaries of technology, and the fundamental values that should guide a free society.
Sources:
Jim Acosta ‘Interviews’ AI-Generated Avatar of Deceased Teen to Promote Gun Control – Outkick
Jim Acosta interviews AI avatar of deceased Parkland shooting victim Joaquin Oliver – AI Topics