Concerns about AI-written police reports spur states to regulate the emerging practice
AI-generated police reports promise to save cops time, but they also raise a host of legal and technical concerns.

Police are getting a boost from artificial intelligence, with algorithms now able to draft police reports in minutes. The technology promises to make reports more accurate and comprehensive while saving officers time.
The idea is simple: Take the audio transcript from a body camera worn by a police officer and use the predictive text capabilities of large language models to write a formal police report that could become the basis of a criminal prosecution. Mirroring other fields that have allowed ChatGPT-like systems to write on behalf of people, police can now get an AI assist to automate much-dreaded paperwork.
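To make the idea concrete, here is a minimal sketch of such a pipeline using the general-purpose OpenAI Python client for both transcription and drafting. The model names, the prompt and the file name are all illustrative assumptions; commercial products such as Axon's Draft One run on their own proprietary systems, and this sketch describes none of them.

```python
# Conceptual sketch of an AI report-drafting pipeline: transcribe
# body-camera audio, then ask a large language model to draft a
# report narrative from the transcript. Illustrative only; it does
# not depict how any vendor's product actually works.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Step 1: transcribe the body-camera audio (file name is hypothetical).
with open("bodycam_audio.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Step 2: have an LLM draft a report narrative from the transcript.
draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Draft a formal police incident report narrative "
                    "from the following body-camera transcript."},
        {"role": "user", "content": transcript.text},
    ],
)

# Step 3: label the output so reviewers know AI produced the first
# draft, the kind of disclosure Utah and California now require.
print("DRAFT - GENERATED WITH AI ASSISTANCE; OFFICER REVIEW REQUIRED\n")
print(draft.choices[0].message.content)
```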
The catch is that unlike the first draft of a college English paper, this document can determine someone’s liberty in court. An error, omission or hallucination can compromise the integrity of a prosecution or, worse, justify a false arrest. While police officers must sign off on the final version, the bulk of the text, structure and formatting is AI-generated.
Who – or what – wrote it
Until October 2025, only Utah required that police even disclose they were using an AI assistant to draft their reports. On Oct. 10, that changed when California became the second state to require transparent notice that AI was used to draft a police report.
Gov. Gavin Newsom signed SB 524 into law, requiring every police report drafted with AI assistance to be marked as such. The law also requires law enforcement agencies to maintain an audit trail identifying the person who used AI to create a report and any video or audio footage used in creating it. Agencies must also retain the first AI-generated draft for as long as the official report is retained, and a draft created with AI cannot constitute an officer’s official statement.
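In data terms, the law’s record-keeping requirement amounts to one structured audit entry per report. The sketch below imagines what such an entry might contain; the field names are hypothetical, since the statute specifies what information must be kept, not any particular format.

```python
# Hypothetical audit-trail record reflecting SB 524's requirements:
# who used the AI, which footage fed the draft, and the retained
# first draft. Field names are illustrative, not statutory.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AIReportAuditRecord:
    report_id: str                     # links to the official report
    officer_id: str                    # person who used AI to create it
    created_at: datetime
    source_footage: list[str] = field(default_factory=list)  # video/audio used
    ai_disclosure: bool = True         # report marked as AI-assisted
    first_draft_text: str = ""         # retained as long as the report is

# Example entry for a single AI-assisted report (all values invented).
record = AIReportAuditRecord(
    report_id="RPT-2025-001",
    officer_id="OFC-1234",
    created_at=datetime.now(),
    source_footage=["bodycam_audio.mp3"],
    first_draft_text="<retained AI first draft>",
)
```

Retention under the law would then simply mirror the official report’s own retention schedule.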
The law is a significant milestone in the regulation of AI in policing, but its passage also signals that AI is set to become a major part of the criminal justice system.
If you are sitting behind bars based on a police report, you might have some questions. The first question that Utah and California now answer is “Did AI write this?” Basic transparency that an algorithm helped write an arrest report might seem the minimum a state could do before locking someone up. Yet even though leading police technology companies like Axon recommend including such disclaimers in their reports, no other state requires them.
Police departments in Lafayette, Indiana, and Fort Collins, Colorado, intentionally turned off the transparency defaults on their AI report generators, according to an investigative news report. Similarly, some police chiefs using Axon’s Draft One product did not even know which reports were drafted by AI, because officers were simply cutting and pasting the AI-generated narrative into reports they indicated they had written themselves. The practice bypassed all AI disclaimers and audit trails.
Many questions
Transparency is only the first step. Understanding the risks of relying on AI for police reports is the second.
Technological questions arise about how the AI models were trained and the possible biases baked into a reliance on past police reports. Transcription questions arise about errors, omissions and mistranslations, because police stops take place in chaotic, loud and frequently emotional contexts, often in multiple languages.
Finally, trial questions arise about how an attorney is supposed to cross-examine an AI-generated document, or whether the audit logs need to be retained for expert analysis or turned over to the defense.
Risks and consequences
The significance of the California law is not simply that the public needs to be aware of AI risks, but that California is embracing AI risk in policing. I believe it is likely that people will lose their liberty based on documents largely generated by AI, without the hard questions having been satisfactorily answered.
Worse, in a criminal justice system that relies on plea bargaining for more than 95% of cases and is overwhelmingly dominated by misdemeanor offenses, there may never be a chance to check whether the AI report accurately captured the scene. In fact, in many of those lower-level cases, the police report will be the basis of charging decisions, pretrial detention, motions, plea bargains, sentencing and even probation revocations.
I believe that a criminal legal system that relies so heavily on police reports has a responsibility to ensure that police departments are embracing not just transparency but justice. At a minimum, this means more states following Utah and California to pass laws regulating the technology, and police departments following the best practices recommended by the technology companies.
But even that may not be enough without critical assessments by courts, legal experts and defense lawyers. The future of AI policing is just starting, but the risks are already here.
Andrew Guthrie Ferguson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.