RCMP quietly testing AI-drafted reports from body camera audio
OTTAWA — Some RCMP body cameras in Alberta and B.C. aren’t just recording people anymore; they’re writing about them too. For months, the police force has been testing technology in which artificial intelligence writes a draft report based on body camera audio of an interaction.
In July, the RCMP launched a year-long pilot project in eight B.C. and two Alberta detachments where audio from officers’ body cameras is uploaded into an AI transcription service to automatically generate a draft incident report. The force budgeted up to $200,000 for the test.
The AI software is Draft One by Axon, the U.S.-based public safety giant that has supplied the RCMP’s body cameras since the national police force began rolling out the technology to officers in late 2024.
Despite launching the pilot over six months ago, the RCMP’s first public reference to the use of Axon’s Draft One AI tool appears to be a cursory mention in its recently published 2026-2027 Departmental Plan report.
The testing comes as police forces look for ways to harness artificial intelligence, even as special interest groups and security experts have raised concerns about the technology’s impact on civil rights, privacy and the risks associated with increased surveillance.
“A potential time-saver, Draft One uses artificial intelligence to automatically draft report narratives based on the audio captured from body-worn cameras,” RCMP spokesperson Marie-Eve Breton told National Post in response to questions about the pilot.
“The pilot will evaluate whether Draft One can improve and reduce the amount of time officers spend writing reports, freeing up more time to do active policing, rather than administrative tasks. The pilot and ongoing evaluation of it is still ongoing,” she added.
The force will not use video captured from the cameras to feed the AI-generated draft, nor will it test Axon’s facial recognition feature like the Edmonton Police Service, she noted.
Breton also said that an officer must sign off on a report drafted by AI before it can be submitted. That process includes mandatorily changing a minimum of 10 per cent of the draft and removing “obvious errors” inserted intentionally by Draft One.
“Once these conditions have been met and the draft is fully reviewed, officers are required to sign off on the accuracy of the report via an electronic acknowledgement,” Breton said.
Christopher Schneider, a professor at Manitoba-based Brandon University who has studied how body cameras affect police work for more than a decade, said he has many concerns about the RCMP’s testing of AI-generated draft reports.
A police officer’s report is normally informed by what they hear, see and analyze during an interaction; far more information than simply audio captured by a body camera’s microphone, he noted.
Furthermore, police discretion is a crucial part of policing. But AI can’t exercise discretion in its reporting, stripping away another valuable element of the first draft of a report, he added.
Finally, Schneider said that “hallucinations” — AI-generated responses that present false, misleading or misinterpreted information as fact — could well make their way into court evidence if not caught by an officer submitting a report first drafted by AI.
“I think we really need to slow down here and consider the possible consequences on people’s lives with the use of these technologies in industries like policing, and I don’t think that’s being done,” said Schneider, who recently published the book Police Body-Worn Cameras: Media and the New Discourse of Police Reform.
“Rather, I think government and police officials are seduced by the idea of artificial intelligence, they’re seduced by the idea of body worn cameras. And again, the evidence is inconclusive that the cameras even work themselves.”
Axon bills its Draft One AI tool as a way to enable police to get a “head start” on drafting reports, arguing that the software can help an officer save one hour of paperwork per shift.
The company says the software, based on an OpenAI model, ensures the draft is proofread by including “insert” clauses that require an officer to manually add information in places suggested by the AI tool.
“The model was calibrated by the Axon team to remove creativity or embellishments — often referred to as ‘hallucinations’ — that may be more common in consumer-grade AI solutions,” reads the company’s website.
But the technology is not foolproof, as a police force in Utah recently learned when Axon’s Draft One tool claimed in a draft report that a police officer had magically shape-shifted into a frog.
“The body cam software and the AI report writing software picked up on the movie that was playing in the background, which happened to be ‘The Princess and the Frog,'” a Heber City police spokesperson told FOX 13 News in December.
“That’s when we learned the importance of correcting these AI-generated reports.”
Last summer, the U.S.-based Electronic Frontier Foundation published a report concluding that Axon’s Draft One “seems deliberately designed to avoid audits that could provide any accountability to the public,” largely because it’s virtually impossible to tell which parts of a police report were drafted by the AI versus a human.
National Post
cnardi@postmedia.com