
Will AI tools make better police officers?

Police officers often work with partial information under severe time constraints in situations that can change in seconds. Whether investigating a crime or patrolling a neighbourhood, they regularly have to make predictions based on instinct.

This “gut policing” isn’t just guesswork – it’s fast pattern recognition. It comes from training and years of dealing with real incidents, learning from colleagues, and building an instinctive sense of what matters and what doesn’t.

But instincts are no longer the only way police connect the dots. Many police forces are investing in AI-enabled tools, including predictive policing algorithms that forecast crime hotspots and offender assessment systems designed to support decision-making.


Read more: A ‘black box’ AI system has been influencing criminal justice decisions for over two decades – it’s time to open it up


This reflects a wider global trend: police forces are integrating AI into everyday policing. These AI-enabled tools draw on large volumes of data and patterns that would be impossible for any single officer to analyse in real time. The aim is straightforward: to help ensure decisions are based on strong evidence and reliable data rather than on instinct or experience alone.

Many people appear to accept the use of AI technology by police forces – so long as there are clear guidelines in place first.


AI has long been discussed as a threat to jobs and livelihoods. But what’s the reality? In this series, we explore the impact AI is already having on specific occupations – and how people in these jobs feel about their new AI assistants.


In England, police forces are already using AI tools in day-to-day work. These include Untrite Thrive, which helps staff in police control rooms decide how to allocate resources. Another example is Qlik Sense, used by Avon and Somerset Police to assess the likelihood of individuals reoffending or committing a crime. These developments align with a broader government agenda focused on efficiency and cost reduction.

But once you swap human judgment for more automated predictions, the value of officers’ traditional connect-the-dots police logic can be lost. There have been plenty of examples where AI tools have flagged the wrong people, the wrong places, or the wrong risks.

Unverified information

A House of Commons select committee recently highlighted serious failings in West Midlands Police’s use of the AI assistant Microsoft Copilot in its decision to stop Israeli fans of Maccabi Tel Aviv football club from travelling to Birmingham for a Europa League match against Aston Villa last November.

Claims made by this force about alleged disorder involving Maccabi fans at past matches were based on inaccurate information generated by Copilot, including a supposed game between the Israeli club and West Ham United that never happened.

“Information that showed the Maccabi fans to be a high risk was trusted without proper scrutiny,” explained the committee’s chair Karen Bradley. “Shockingly, this included unverified information generated by AI.”

This inaccurate AI‑generated information was repeated by senior police officers in safety advisory group meetings and even in oral evidence to MPs, demonstrating a lack of due diligence and overreliance on unverified AI outputs. The case is now subject to an investigation by the Independent Office for Police Conduct.

Video: Channel 4 News.

And this was not an isolated incident. The Harm Assessment Risk Tool deployed by Durham Constabulary was found to have displayed many flaws, from overestimation of the likelihood of reoffending to discrimination in its datasets.

And the Metropolitan Police’s now-discontinued Gang Matrix, a database that recorded intelligence related to alleged gang members, was heavily criticised by the Information Commissioner’s Office for unfairly labelling young black men as high‑risk based on flawed scoring.

Relying on AI-driven tools can be a double-edged sword in policing. They can improve decisions, but can also reinforce bias and amplify mistakes. In our experience of working with police forces in England, AI‑supported decision‑making works best when police officers combine their operational experience with data‑driven insights.

Reinforcing biases

Our ongoing study of AI use in policing shows that uncritical reliance on AI risks reinforcing existing biases, disproportionately affecting the poorest and most marginalised communities.

Our research, which is yet to be published, suggests that effective use of AI requires a difficult balance: officers must both trust and mistrust AI recommendations at the same time, maintaining a vigilant mindset.

To prevent biases creeping into AI‑supported decisions, police forces should invest in bias‑awareness training that prepares officers to question AI outputs regularly and constructively.

The National Police Chiefs’ Council covenant mandates that AI should support rather than replace human judgment. This is a step in the right direction. Yet even this principle can backfire if police officers treat AI recommendations as objective truth, rather than as guidance that requires careful scrutiny.

These concerns take on renewed urgency in light of the government’s introduction of a national predictive policing prototype, announced in August 2025. The system, scheduled for nationwide deployment by 2030, combines AI-powered crime mapping with behavioural-pattern analysis, supported by a £4 million initial investment.

It draws on data from police forces, local councils and social services, and builds directly on the expanding fleet of live facial recognition vans now operating in seven forces across England and Wales.


Read more: Facial recognition technology used by police is now very accurate – but public understanding lags behind


At the same time, developments inside policing organisations highlight the limits of technological oversight. The Met was recently reported to have begun using AI tools to flag potential officer misconduct by analysing internal data such as sickness records, absences and overtime patterns.

While the Met argues that such systems help raise standards and rebuild public trust, critics warn that such monitoring risks misclassifying workplace pressures as misconduct and eroding accountability rather than strengthening it.

Ultimately, whether AI technology improves policing outcomes depends on the governance surrounding it. Ensuring there is a vigilant human in every AI loop should be a non-negotiable safeguard.

Federico Iannacci has received funding from the British Academy for a small research grant entitled "Investigating the future of work in policing: a Qualitative Comparative Analysis of police forces in England and Wales."

Stan Karanasios does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
