
A startling survey reveals that one in four people remain unconcerned by sexual deepfakes, raising alarms about digital abuse and privacy violations.
Story Highlights
- Survey shows 25% of people unconcerned by non-consensual sexual deepfakes.
- Deepfakes predominantly target women and minors, exacerbating digital abuse.
- Law enforcement and advocacy groups call for stronger regulations and awareness.
- Technology platforms face scrutiny for insufficient safety measures.
Rising Threat of Sexual Deepfakes
A recent survey commissioned by UK police and conducted by Crest Advisory has revealed a concerning complacency among the public towards non-consensual sexual deepfakes. Despite a staggering 1,780% increase in such content between 2019 and 2024, 25% of respondents said they were neutral about, or unconcerned by, these violations. The findings point to a significant gap in public awareness and a normalization of digital exploitation that disproportionately affects women and girls.
Deepfake technology, which emerged in the late 2010s, has rapidly evolved, allowing the creation of realistic yet fabricated images and videos. While the technology was initially used for humorous or political content, sexual deepfakes now account for 98% of all deepfake videos online. Social media and adult websites are the primary platforms for their dissemination, and the lack of robust legal frameworks exacerbates the problem.
Challenges for Law Enforcement and Advocacy
Law enforcement agencies and advocacy groups are increasingly vocal about the urgent need for stronger regulatory measures. The UK police, along with organizations such as Save the Children and Thorn, emphasize the psychological and reputational damage to victims, many of whom are minors or vulnerable individuals. These groups are calling for improved digital literacy and consent education to combat the normalization of digital sexual exploitation.
The survey’s findings have prompted renewed calls for action, with experts warning that current legal and technological measures are insufficient. The rise in category A AI-generated sexual content (the most severe classification under UK law), which now accounts for 56% of all illegal AI material, highlights the critical need for updated laws and reporting mechanisms to address these abuses effectively.
Implications and Future Directions
Short-term implications of the deepfake phenomenon include increased victimization and psychological harm, while long-term effects threaten to erode societal norms around consent and privacy. The economic impact, encompassing legal fees and mental health costs, compounds the social and political challenges faced by families and communities.
A recent survey reveals 1 in 4 people are unfazed by non-consensual sexual deepfakes. As AI pushes boundaries, our ethical compass must keep up. Innovation demands responsibility. https://t.co/DiKiTPHVb4 #DeepfakeDilemma
— AI Capital (@aicapital_io) November 24, 2025
As the issue continues to unfold, tech platforms are under pressure to enhance safety features and content moderation. Advocacy groups are pushing for tech accountability and stronger legal frameworks to protect individuals from digital exploitation. The survey serves as a wake-up call, urging policymakers and communities to take decisive action against the growing threat of sexual deepfakes.
Sources:
Police warn of rising threat from sexual deepfakes
Thorn Deepfake Nudes & Young People Report
Save the Children: AI Deepfakes Impact on Youth
Childlight: AI Deepfakes and Child Safety