from the that’s-not-AI,-that’s-LSD dept
AI can be useful. But so many people seem to feel it’s nothing more than an unpaid intern you can lean on to do all the work you don’t feel like doing yourself. (And the less said about its misuse to generate a webful of slop, the better.)
Like everyone everywhere, police departments are starting to rely on AI to do some of the menial work cops don’t like doing themselves. And it’s definitely going poorly. More than a year ago, it was already apparent that law enforcement agencies were just pressing the “easy” button, rather than using it wisely to work smarter and faster.
Axon – the manufacturer of Tasers and a line of now-ubiquitous body cameras – has pushed hard for AI adoption. Even it knows AI use can swiftly become problematic if it’s not properly backstopped by humans. But the humans it sells its products to don’t seem to care about anything other than its ability to churn out paperwork with as little human involvement as possible.
The report notes that Draft One includes a feature that can intentionally insert silly sentences into AI-produced drafts as a test to ensure officers are thoroughly reviewing and revising the drafts. However, Axon’s CEO mentioned in a video about Draft One that most agencies are choosing not to enable this feature.
Yep. They just don’t care. If it means cases get tossed because sworn statements have been AI auto-penned, so be it. If someone ends up falsely accused of a crime or falsely arrested because of something an AI whipped up, that’s just the way it goes. And if it adds a layer of plausible deniability between an officer and their illegal actions, even better.
Not only is the tech apparently not saving anyone much time, it’s also being abused by law enforcement officers to justify their actions.
For a case study in the increasing use of artificial intelligence by Utah police departments, look no further than a recent report from Fox13Now. The article originally ran under a less-than-informative headline, but now reads: “Ribbit ribbit! Artificial intelligence programs used by Heber City police claim officer turned into a frog.”
The headline change at least acknowledges the software malfunction, but it doesn’t address the core issue: there’s no demonstrable evidence that these AI programs are actually improving public safety. The report details the testing of two AI platforms – Code Four, created by young MIT alumni, and Draft One, an Axon product – but never illustrates how either one contributes to safer streets.
This reflects a broader set of concerns surrounding police adoption of AI. The intention may be to enhance efficiency and objectivity, but the reality often falls short: the focus is on deployment rather than proven effectiveness. The article itself is a microcosm of this trend, prioritizing access to police and officials over critical examination of the technology.
This isn’t an isolated incident. Draft One, as previously covered, aims to automate report writing for officers, and the potential for bias and inaccuracy inherent in such systems remains a significant concern. The Heber City incident – complete with its claim of an officer transforming into a frog – underscores just how unreliable these tools can be.
Also to be considered: the Fox13Now report, despite its revised headline, offers little evidence to support the claim that AI is making Utah communities safer. It serves as a cautionary tale, one that emphasizes the need for rigorous testing, transparency, and a critical assessment of the benefits and risks before AI sees widespread adoption in law enforcement. What’s desperately needed is a thorough framework for responsible AI implementation in policing – one that prioritizes public safety and accountability over technological novelty.
