To Address Officer Misconduct, Police Departments Turn to Artificial Intelligence

By Zuhair Riaz

In the wake of the Black Lives Matter protests across the country, police departments have taken steps to address growing concerns about officer misconduct. Departments have turned to artificial intelligence to detect officers who are likely to engage in problematic or unconstitutional behavior. The use of artificial intelligence is expected to ease the implementation of early intervention systems in police departments.

The national attention that the killings of Breonna Taylor and George Floyd have garnered has shone further light on police departments’ need to root out problematic officers. Indeed, police departments, as well as national security advisors, have been vocal in their desire to weed out “bad apples” in response to widespread calls to defund the police. Police departments hope that advances in early intervention systems can address growing concerns about police malfeasance and rebuild trust within the communities they serve.

What do these AI systems look like?

The goal of early intervention systems is to pinpoint officers who engage in problematic behavior, typically in the form of policy violations, and then place those officers in an interventional program designed to correct their performance issues. There have been previous iterations of early intervention systems; the use of artificial intelligence, however, is a new development. Artificial intelligence allows early intervention systems to analyze metrics faster and to weigh a wider variety of factors. It can track an officer’s training history, major policing events in an officer’s life, and an officer’s engagement with the community. This wealth of information provides a holistic view of officer behavior and potential warning signs of misconduct.
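The vendors described here have not published their data models, but the kinds of information the article lists (training history, major events, community engagement) can be pictured as a per-officer record. The following is a purely illustrative sketch; every field name is hypothetical.

```python
# Hypothetical per-officer record combining the categories of data the
# article says such systems track. Field names are invented for
# illustration and do not reflect any vendor's actual schema.
from dataclasses import dataclass, field

@dataclass
class OfficerRecord:
    officer_id: str
    trainings_completed: list[str] = field(default_factory=list)
    major_events: list[str] = field(default_factory=list)  # e.g. use-of-force incidents
    community_engagements: int = 0
    complaints: int = 0

record = OfficerRecord("A-102", trainings_completed=["de-escalation"], complaints=2)
print(record.complaints)  # -> 2
```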

The purpose behind early intervention systems is to enforce constitutional policing. These systems gather data on officers in order to determine which officers are likely to infringe on the constitutional rights of the people they serve. The system then recommends a proper course of action for dealing with those officers. It is then up to supervisors in the police department to implement those changes.

Benchmark Analytics LLC developed a one-of-a-kind system that aims to transform early intervention in police force management. Benchmark’s artificial intelligence technology uses a series of predictive algorithms that identify patterns of officer conduct that lead to problematic behavior in policing. These algorithms are used to create comprehensive reports for review by supervisors. The University of Chicago has played a key role in developing the artificial intelligence behind Benchmark’s system, which is based on several longitudinal studies and analyses of officer conduct.

First Sign, the system that Benchmark Analytics designed, is an ever-evolving monitoring system that collects data on an ongoing basis. It uses machine learning to compare an officer’s previous conduct against the past actions of officers who used excessive force or exhibited potentially dangerous behavior. The system allows supervisors to determine whether officers are out of compliance with policies. Each officer is given a risk score based on the system’s metrics. Supervisors then use the risk scores to determine whether an officer needs counseling, training, or termination.
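Benchmark has not disclosed First Sign’s model or thresholds, but the triage step described above (a risk score mapped to a supervisor action) can be sketched as follows. The cutoffs and action labels here are illustrative assumptions, not Benchmark’s.

```python
# Hypothetical risk-score triage, sketching the workflow the article
# describes. First Sign's actual model, features, and thresholds are
# not public; these cutoffs are placeholders.

def recommend_intervention(risk_score: float) -> str:
    """Map a model-produced risk score in [0.0, 1.0] to a supervisor action."""
    if risk_score >= 0.9:
        return "refer for termination review"
    if risk_score >= 0.6:
        return "mandatory retraining"
    if risk_score >= 0.3:
        return "counseling"
    return "no action"

print(recommend_intervention(0.72))  # -> mandatory retraining
```

Note that, as the article stresses, the score itself decides nothing: the mapping from score to consequence is only effective if supervisors actually act on it.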

The Charlotte-Mecklenburg police department instituted an early intervention system based on advanced machine learning technology, which allows the department to tailor training, counseling, and other disciplinary measures to the officers who need them most.

The Chicago Police Department has announced that it will implement a newly defined early intervention system. This system will analyze records of personnel complaints, excessive force, and other data to identify officers who require additional training. The system, however, is framed as an effort to ensure officer wellness: it identifies only minor problems, which result in retraining.

What will these systems accomplish?

A pervasive concern about police officers is the lack of training they receive. Benchmark’s system documents and manages training history for every officer and cross-references that data with the police guidelines required by the state. Furthermore, First Sign offers research-based training programs that help officers identify, understand, and address opportunities to improve their performance. These training regimens are an essential part of curbing police officer misconduct. The artificial intelligence program tracks each officer’s history and recommends proper training protocols. An important caveat of First Sign’s approach, however, is that it relies on police supervisors to actively impose the proper disciplinary consequences. This means that supervisors must vigorously monitor First Sign and follow its recommendations.
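The cross-referencing step described above (comparing an officer’s completed training against state-mandated requirements) is simple to illustrate. The course names and the requirements set below are invented for the example; actual state guidelines vary.

```python
# Illustrative cross-reference of training records against a hypothetical
# set of state-mandated courses. Course names are invented.

STATE_REQUIRED = {"de-escalation", "use-of-force policy", "implicit bias"}

def missing_trainings(completed: set[str]) -> set[str]:
    """Return the state-required courses the officer has not yet completed."""
    return STATE_REQUIRED - completed

print(sorted(missing_trainings({"de-escalation"})))
# -> ['implicit bias', 'use-of-force policy']
```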

A widespread allegation against police, however, is that corruption runs throughout departments. While early intervention systems may detect which officers are prone to infringing on constitutional rights, it is still the obligation of police supervisors to take action. The systems will not be able to yield the results that society demands unless the personnel in charge of them capitalize on these technological developments and implement the correct remedies.

This fear underscores the limits on the effect that artificial intelligence can have on police departments. The algorithms rely on reports, interviews, and complaints that are entered into the system by officers themselves. As a result, the goal of early intervention systems can be thwarted before it ever has an opportunity to succeed.

Following the killing of George Floyd, the Minneapolis Police Department was at the forefront of the Black Lives Matter protests. The department announced its intent to partner with Benchmark Analytics to implement its system across the police force. In a news conference, Police Chief Medaria Arradondo highlighted how real-time data can improve department leaders’ ability to identify warning signs of officer misconduct. However, plans to implement this initiative fell through due to funding concerns.

Police departments across the country have agreed to implement Benchmark’s First Sign system. In July, the Nashville police department implemented First Sign. Police departments in San Jose and Albuquerque have also decided to use First Sign in an attempt to protect their communities and their officers. These departments will serve as the litmus test for whether artificial intelligence in early intervention systems can prove successful.

The use of artificial intelligence to address police misconduct is still evolving, and the tangible results of Benchmark Analytics’ new early intervention system are unknown. Given that police departments have only recently begun to implement First Sign, the hope is that it will limit and ultimately eliminate officers’ infringement on constitutional rights.

It is clear that early intervention systems serve as a great resource for police departments to monitor their forces. However, the success of these programs is heavily reliant on the actions of supervisors and department heads. Will the addition of artificial intelligence provide a means to prevent officer misconduct? It remains to be seen whether these systems will have the impact they seek.