Deepfake Abuse: New Law & Calls for Stronger Protection for Victims
- A new law criminalising the creation of explicit deepfake images came into effect on February 6th, 2026, prompting both relief and continued calls for broader protections against the rapidly evolving threat of AI-generated abuse.
- The offence was introduced as an amendment to the Data (Use and Access) Act 2025, which received royal assent last July, but was not brought into force until Friday.
- The legal change comes amid growing alarm over the proliferation of non-consensual intimate imagery (NCII) generated using artificial intelligence.
A new law criminalising the creation of explicit deepfake images came into effect on February 6th, 2026, prompting both relief and continued calls for broader protections against the rapidly evolving threat of AI-generated abuse. While welcomed by victims and campaigners, concerns remain about the law’s scope, particularly for those in the commercial sex industry, and about the delays in its implementation.
The offence was introduced as an amendment to the Data (Use and Access) Act 2025, which received royal assent last July, but was not brought into force until Friday. This delay, according to campaigners, left millions vulnerable to abuse. Jodie, a victim of deepfake abuse whose perpetrator, Alex Woolf, received a 20-week prison sentence after she testified against him, expressed frustration. “We had these amendments ready to go with royal assent before Christmas,” she said. “They should have brought them in immediately. The delay has caused millions more women to become victims, and they won’t be able to get the justice they desperately want.”
The legal change comes amid growing alarm over the proliferation of non-consensual intimate imagery (NCII) generated using artificial intelligence. A bipartisan coalition of 47 attorneys general, led by Utah Attorney General Derek Brown, recently urged tech companies to strengthen safeguards against the creation and spread of deepfakes. The coalition highlighted the failures of search engines and payment platforms to adequately limit the creation of these images and called for measures such as warnings and redirection of users away from harmful content, as well as the removal of payment authorization for deepfake NCII content. Utah’s own legislation, S.B. 66 enacted in 2024, already criminalizes the creation, distribution, and possession of deepfakes without consent.
The scale of the problem is significant. UNICEF estimates that at least 1.2 million youngsters have had their images manipulated into sexually explicit deepfakes in the past year, based on a study across 11 countries conducted with INTERPOL and the ECPAT global network. In some nations, this represents one in 25 children, or the equivalent of one child per classroom. The UN agency has unequivocally labelled deepfake abuse as abuse, stating that “there is nothing fake about the harm it causes.”
However, the new law does not address all concerns. Madelaine Thomas, founder of tech forensics company Image Angel, while acknowledging that the law’s arrival marked “a very emotional day” for victims, pointed out its shortcomings regarding the protection of sex workers. “When commercial sexual images are misused, they’re only seen as a copyright breach,” Thomas explained. “By discounting commercialised intimate image abuse, you are not giving people who are going through absolute hell the opportunity to get the help they need.” Thomas has had her own intimate images shared without consent on a daily basis for the past seven years, and said the initial discovery left her feeling suicidal.
The broader context reveals a growing societal problem. According to domestic abuse organisation Refuge, one in three women in the UK have experienced online abuse. This underscores the need for comprehensive solutions beyond criminalization, including improved relationships and sex education, and adequate funding for specialist support services like the Revenge Porn Helpline, as advocated by Stop Image-Based Abuse, a movement comprising the End Violence Against Women Coalition, the #NotYourPorn campaign group, Glamour UK, and Durham University law professor Clare McGlynn.
The Ministry of Justice has indicated further action is planned. A spokesperson stated that the government is “going after the companies behind these ‘nudification’ apps, banning them outright so you can stop this abuse at source.” The technology secretary has designated the creation of non-consensual sexual deepfakes as a priority offence under the Online Safety Act, placing increased responsibility on platforms to proactively prevent the dissemination of such content.
Leicestershire Police’s decision in January to open an investigation into sexually explicit deepfake images created by Grok AI highlights the evolving nature of the threat and the challenges law enforcement faces in addressing it. The effectiveness of the new law and the planned measures under the Online Safety Act will be crucial in determining whether the tide can be turned against this increasingly prevalent form of abuse and exploitation.
