Police departments across the country are turning to generative AI to save time on the drudgery of writing up police reports. At least seven departments in the U.S. are using software called Draft One, made by the police technology company Axon, which takes transcribed audio from Axon’s body cameras and automatically generates a template for police reports.

A POLITICO survey, however, found that once those reports go into the system, some departments can no longer track which ones were originally drafted by AI and which by humans.

“There is no way for us to search for these on our end,” Lafayette, Indiana, police captain Brian Gossard wrote in an email.

Gossard was responding to a public-records request that POLITICO sent to police departments across the U.S., including in Fort Collins, Colorado, and Lafayette, Indiana. Both departments use Axon’s Draft One — and when POLITICO asked to see their AI-generated police reports, both said they were unable to respond because they couldn’t parse out which reports were written by AI. The departments told POLITICO that they don’t label or disclose when AI is used to write reports, and that it would be impossible to find them as a result.

The widespread adoption of generative AI, from major Hollywood studios to your local police department, raises an important question: How do I know what I’m looking at was made by a real person?

While AI-generated images and videos sometimes have certain tells — AI generators are notoriously bad at drawing hands, for example — AI-generated text in documents like police reports can be harder to spot. Lawmakers and platforms like Meta are trying to implement policies requiring disclosures or watermarks noting when something is made by AI, but no such regulations currently exist in the U.S. In the public sector, this means that AI-generated content is already becoming part of the public record without a way for people to tell the difference.

The Fort Collins and Lafayette departments both tested Axon’s Draft One before the company publicly launched the product in April, and had high praise for it, with one officer noting in an online “customer testimonial” that the technology had written about 90 percent of a report he filed.

In criminal justice, the inability to discern whether a report was written by an officer or an AI raises concerns among experts who are already skeptical of the technology’s accuracy. Without the ability to tell whether or not a report was written by AI, police can’t compare its accuracy against human-written reports, and defendants can’t properly challenge the evidence against them, said Andrew Guthrie Ferguson, author of the book “The Rise of Big Data Policing” and a law professor at American University.

“It feels so irresponsible that the supervisors and chiefs in a police department don’t know if their officers are using this technology or not,” Ferguson told POLITICO.

He said that while defense attorneys can often check the accuracy of police reports against the body camera footage they’re based on, defendants accused of low-level crimes often do not go to trial and will not have that opportunity. For them, a police report is often the sole document driving their plea bargains and pre-trial arrangements.

Judges across the country have issued rulings requiring lawyers to disclose when they use generative AI to write legal documents, but police reports don’t face the same scrutiny.
While Axon’s software requires officers to sign an acknowledgment that a report was generated using its AI, that disclosure is not required to be part of the final police report that is published and used in criminal proceedings. The company told POLITICO it keeps a record of which reports are generated by AI and supplies them if a department requests it.

“Axon has a longstanding commitment to provide training and resources to support the effective use of public safety technology,” an Axon spokesperson said in an emailed statement.

The administrator of the police department in Frederick, Colorado, which uses Axon’s software, confirmed that the department couldn’t distinguish the AI reports itself, but received a list from the company when it sent in a help ticket.

When the AI paper trail goes cold

Axon pointed out that it has multiple safety features built into Draft One to ensure that the AI-generated reports are accurate. The software transcribes audio from its body cameras, then uses that text to create a narrative template for police to fill in the blanks. It blocks officers from exporting the text until all the blanks are filled in.

There are settings that Draft One customers can enable to ensure that police are properly reviewing the auto-generated text for accuracy, like intentionally inserting obvious errors that police have to find and remove before they can move forward. Police officers also have to sign an acknowledgment statement on every report saying they’ve reviewed the results and that the report was generated by AI.

That disclosure does not always make it to the published report, however, and Axon does not require its customers to include it in the final document. Once an officer takes the generated text and pastes it into the department’s own records management system, the AI trail disappears for the police department and for public-records requests.

Not every police department that uses Axon’s AI-generated police reports is in the dark about its use. In Mt. Vernon, Illinois, chief of police Robert Brands said he requires all officers to disclose at the end of a report if it was generated by Draft One. He said he was a little surprised that other law enforcement agencies did not have the same requirement.

While there are policy pushes to label AI-generated content in political ads, as well as disclosures on AI-generated images, videos and audio, no lawmakers have proposed regulations on AI-generated police reports.

“This would be an area that they should look into and ask some really hard questions,” Ferguson said, “but right now it’s completely unregulated.”