Business Daily Media

Will AI tools make better police officers?

  • Written by Federico Iannacci, Senior Lecturer in Management, University of Sussex Business School, University of Sussex

Police officers often work with partial information under severe time constraints in situations that can change in seconds. Whether investigating a crime or patrolling a neighbourhood, they regularly have to make predictions based on instinct.

This “gut policing” isn’t just guesswork – it’s fast pattern recognition. It comes from training and years of dealing with real incidents, learning from colleagues, and building an instinctive sense of what matters and what doesn’t.

But instincts are no longer the only way police connect the dots. Many police forces are investing in AI-enabled tools[1], including predictive policing algorithms[2] that forecast crime hotspots and offender assessment systems[3] designed to support decision-making.

Read more: A ‘black box’ AI system has been influencing criminal justice decisions for over two decades – it’s time to open it up[4]

This reflects a wider global trend: police forces are integrating AI into everyday policing. These AI-enabled tools draw on large volumes of data and patterns that would be impossible for any single officer to analyse in real time. The aim is straightforward: to help ensure decisions are based on strong evidence and reliable data, rather than relying solely on instinct or experience.

Many people appear to accept the use of AI technology[5] by police forces – provided there are clear guidelines in place[6] first.

Will AI tools make better police officers?
AI has long been discussed as a threat to jobs and livelihoods. But what’s the reality? In this series[7], we explore the impact AI is already having on specific occupations – and how people in these jobs feel about their new AI assistants.

In England, police forces are already using AI tools in day-to-day work. These include Untrite Thrive[8], which helps staff in police control rooms decide how to allocate resources. Another is Qlik Sense[9], used by Avon and Somerset Police to monitor the likelihood of someone reoffending or committing a crime.

These developments align with a broader government agenda focused on efficiency and cost reduction. But once human judgment is swapped for more automated predictions, the value of officers’ traditional connect-the-dots policing logic can be lost. There have been plenty of examples of AI tools flagging the wrong people, the wrong places, or the wrong risks.

Unverified information

A House of Commons select committee recently highlighted[10] serious failings in West Midlands Police’s use of the AI assistant Microsoft Copilot in its decision to stop Israeli fans of Maccabi Tel Aviv football club from travelling to Birmingham for a Europa League match against Aston Villa last November.

Claims made by the force about alleged disorder involving Maccabi fans at past matches were based on inaccurate information generated by Copilot, including[11] a supposed game between the Israeli club and West Ham United that never happened.

“Information that showed the Maccabi fans to be a high risk was trusted without proper scrutiny,” explained the committee’s chair, Karen Bradley. “Shockingly, this included unverified information generated by AI.”

This inaccurate AI-generated information was repeated by senior police officers in safety advisory group meetings and even in oral evidence to MPs, demonstrating a lack of due diligence and an overreliance on unverified AI outputs.
The case is now subject to an investigation by the Independent Office for Police Conduct.

Video: Channel 4 News.

And this was not an isolated incident. The Harm Assessment Risk Tool deployed by Durham Constabulary was found to have many flaws[12], from overestimating the likelihood of reoffending to discrimination embedded in its datasets. And the Metropolitan Police’s now-discontinued Gangs Matrix, a database that recorded intelligence on alleged gang members, was heavily criticised[13] by the Information Commissioner’s Office for unfairly labelling young black men as high-risk based on flawed scoring.

Relying on AI-driven tools can be a double-edged sword in policing. They can improve decisions, but they can also reinforce bias and amplify mistakes[14]. In our experience of working with police forces in England, AI-supported decision-making works best when officers combine their operational experience with data-driven insights.

Reinforcing biases

Our ongoing study of AI use in policing[15] shows that uncritical reliance on AI risks reinforcing existing biases, disproportionately affecting the poorest and most marginalised communities. Our research, which is yet to be published, suggests[16] that effective use of AI requires a difficult balance: officers must trust and mistrust AI recommendations at the same time, maintaining a vigilant mindset.

To prevent biases creeping into AI-supported decisions, police forces should invest in bias-awareness training that prepares officers to question AI outputs regularly and constructively.

The National Police Chiefs’ Council covenant[17] mandates that AI should support rather than replace human judgment. This is a step in the right direction. Yet even this principle can backfire if officers treat AI recommendations as objective truth rather than as guidance requiring careful scrutiny.
These concerns take on renewed urgency in light of the government’s introduction of a national predictive policing prototype[18], announced in August 2025. The system, scheduled for nationwide deployment by 2030, combines AI-powered crime mapping with behavioural-pattern analysis, supported by an initial £4 million investment. It draws on data from police forces, local councils and social services, and builds directly on the expanding fleet of live facial recognition[19] vans now operating across seven forces in England and Wales.

Read more: Facial recognition technology used by police is now very accurate – but public understanding lags behind[20]

At the same time, developments inside policing organisations highlight the limits of technological oversight. The Met was recently reported[21] to have begun using AI tools to flag potential officer misconduct by analysing internal data such as sickness records, absences and overtime patterns. While the Met argues that such systems help raise standards and rebuild public trust, critics warn that this kind of monitoring risks misclassifying workplace pressures as misconduct, eroding accountability rather than strengthening it.

Ultimately, whether AI technology improves policing outcomes depends on the governance surrounding it. Ensuring there is a vigilant human in every AI loop should be a non-negotiable safeguard.

References

  1. ^ AI-enabled tools (www.rusi.org)
  2. ^ predictive policing algorithms (www.tandfonline.com)
  3. ^ offender assessment systems (theconversation.com)
  4. ^ A ‘black box’ AI system has been influencing criminal justice decisions for over two decades – it’s time to open it up (theconversation.com)
  5. ^ accept the use of AI technology (www.gov.uk)
  6. ^ clear guidelines in place (www.biometricupdate.com)
  7. ^ this series (theconversation.com)
  8. ^ Untrite Thrive (untrite.com)
  9. ^ Qlik Sense (www.avonandsomerset.police.uk)
  10. ^ recently highlighted (committees.parliament.uk)
  11. ^ including (news.sky.com)
  12. ^ displayed many flaws (www.fairtrials.org)
  13. ^ heavily criticised (ico.org.uk)
  14. ^ reinforce bias and amplify mistakes (www.theguardian.com)
  15. ^ study of AI use in policing (www.diva-portal.org)
  16. ^ suggests (www.youtube.com)
  17. ^ National Police Chiefs’ Council covenant (science.police.uk)
  18. ^ national predictive policing prototype (www.biometricupdate.com)
  19. ^ live facial recognition (theconversation.com)
  20. ^ Facial recognition technology used by police is now very accurate – but public understanding lags behind (theconversation.com)
  21. ^ recently reported (www.theguardian.com)

Read more https://theconversation.com/will-ai-tools-make-better-police-officers-277258
