California Leads the Way in Regulating AI-Generated Police Reports
Published October 16, 2025, at 08:02:30 AM PDT
The Rise of AI in Policing and the Concerns It Raises
California Governor Gavin Newsom has signed Senate Bill 524 (S.B. 524), marking a significant first step in regulating the increasingly common practice of using artificial intelligence to draft police reports. The Electronic Frontier Foundation (EFF) strongly supported this bill, having spent the past year vocally criticizing companies offering AI-generated police report services.
Products like Draft One, an AI tool marketed to law enforcement, automate the report-writing process. While proponents argue this saves officers time, critics raise serious concerns about potential biases embedded in the AI algorithms, the accuracy of the generated reports, and the lack of transparency surrounding their creation. These concerns extend to potential violations of record retention laws.
The EFF has published a guide to help the public file public records requests to investigate how police departments are utilizing AI in report writing.
What Does S.B. 524 Do?
S.B. 524 requires law enforcement agencies in California to disclose when AI is used to create police reports. Specifically, the bill mandates that any report generated with the assistance of AI must clearly state that fact. This transparency requirement is intended to allow defendants and the public to assess the potential influence of AI on the report’s content.
The text of S.B. 524 can be found on the California Legislative Data website: S.B. 524.
While S.B. 524 is a positive step, it is best understood as a starting point. The long-term implications of AI-written police reports for the criminal justice system remain unclear, and further consideration is needed of more comprehensive regulation, or even outright prohibition, of this application of generative AI.
The Broader Implications and Future Regulation
California and Utah are currently leading the charge in addressing the risks associated with AI in law enforcement. However, the issue is national in scope. The potential for algorithmic bias to perpetuate existing inequalities within the criminal justice system is a significant concern. If AI systems are trained on biased data, they may generate reports that unfairly target certain communities or individuals.
Moreover, the use of AI raises questions about accountability: if an AI-assisted report contains errors or fabrications, it is unclear who bears responsibility — the officer who signed it, the agency that deployed the tool, or the vendor that built it.
