Australia’s online safety regulator has issued a stark warning to major technology companies – Apple, Google, Meta, and Microsoft among them – stating they continue to fall short in detecting and preventing online child sexual exploitation and abuse (CSEA), despite acknowledging some recent improvements. The findings, released in a new transparency report by the eSafety Commissioner, highlight critical gaps in safety protections across a range of platforms and content types, including livestreamed abuse and AI-generated material.
The report stems from legally enforceable transparency notices issued in July 2024 under Australia’s Online Safety Act. These notices required the eight providers covered – Apple, Discord, Google, Meta, Microsoft, Skype, Snap, and WhatsApp – to disclose their safety practices. While the report acknowledges “welcome improvements,” it underscores a persistent inconsistency in applying even “basic” technology to protect children online.
A key area of concern identified by the eSafety Commissioner is the lack of proactive tools for detecting live abuse on video calling and livestreaming services. Platforms like Apple’s FaceTime, Microsoft Teams, Snapchat, WhatsApp, Discord, and Google Meet currently lack the capability to identify abuse and intervene in real time while it is occurring. This represents a significant vulnerability, as livestreaming has become an increasingly popular avenue for perpetrators.
The report also revealed inconsistencies in how companies approach the detection of newly created abusive content. Apple, notably, does not utilize automated systems to detect such material across any of its services. Other companies, while employing detection tools, do so inconsistently, creating uneven levels of protection for users.
eSafety Commissioner Julie Inman Grant expressed her concern over the lack of substantial progress, particularly given the technological capabilities of these major platforms. “These companies are no strangers to innovation and have the technological capability to develop new technologies to detect this harm in the fight to end live online abuse of children,” she stated, framing the issue not merely as a matter of legal compliance, but as a question of corporate conscience and accountability.
The findings come at a time of growing concern over the evolving tactics employed by perpetrators of online child sexual abuse. The report specifically calls out the increasing prevalence of AI-generated child sexual abuse material, grooming, and sexual extortion as areas requiring urgent attention. The ability to create realistic, synthetic content using artificial intelligence presents a new and complex challenge for platforms attempting to identify and remove harmful material.
The Online Safety Act 2021, which came into effect in January 2022, has empowered the eSafety Commissioner to demand greater transparency from tech companies regarding their safety practices. The transparency reports are intended to provide a clearer picture of the measures companies are taking to protect users and to identify areas where improvements are needed. This latest report suggests that, despite some progress, significant work remains to be done.
The report’s release follows a broader trend of increased scrutiny of tech companies’ handling of harmful content online. In early February, an online safety watchdog highlighted the ongoing failures of tech giants to effectively detect and stop child sexual abuse material on their platforms, despite possessing the necessary technology and resources. This underscores a global challenge in balancing freedom of expression with the need to protect vulnerable individuals from online harm.
While the eSafety Commissioner’s report focuses specifically on the actions of eight major tech companies, the implications extend to the broader online ecosystem. The findings serve as a reminder that proactive safety measures are essential to combatting online child sexual exploitation and abuse, and that tech companies have a critical role to play in creating a safer online environment for children. The call for increased accountability and the consistent application of safety technologies is likely to intensify as regulators and advocacy groups continue to push for greater protection.
The report doesn’t detail specific technical solutions, but implicitly calls for wider deployment of automated detection systems, particularly for live content. The challenge lies not only in developing these technologies, but also in ensuring they are accurate and do not infringe on privacy rights. Finding this balance will be crucial as tech companies work to address the concerns raised by the eSafety Commissioner and other stakeholders.
