Artificial Intelligence and Criminal Justice

As our country looks for a quick fix to the many problems plaguing our criminal justice system, some courts and police departments are turning to artificial intelligence as a cure-all. One flawed idea is that courts could use the technology to take bias out of sentencing decisions. Police departments could also use it to determine which officers would be safest to send into a given situation. Or cops could use it for what they call “predictive policing”: using computers to anticipate who is likely to commit crimes and then taking proactive steps to prevent them. In fact, this technology is already being used around the country in each of these ways. In the abstract, these are interesting solutions to very real problems with the criminal justice system, like mass incarceration, racial profiling by police officers, rampant police shootings, and unequal sentencing based on race. But in practice, the technology has built-in biases and disturbing implications.

First, consider “predictive policing.” Dozens of police departments, from Los Angeles to Chicago, are currently receiving federal funding to give “predictive policing” a whirl. In our best-intentioned imagination, computer predictions would transform cops into superheroes, able to swoop in and talk people out of committing violent crimes before they even happened. But in reality, “predictive policing” is just a fancy, high-tech way of justifying racial profiling and excessive, selective policing of people who live in poor, high-crime neighborhoods. The prediction part rests on the assumption that people surrounded by violence are more likely to commit violence, so the programs tend to flag most inner-city youth as potential criminals, baking dramatic bias into the programs from the start. The policing part then involves arresting the computer-flagged “potential criminals” for any minor infraction and seeking the harshest possible charges and sentences, both to get them off the streets before they commit crimes and to scare other potential criminals straight. This, of course, is just a way of using a computer algorithm to justify racist over-policing of inner-city minority communities.
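To make that logic concrete, here is a deliberately simplified, hypothetical sketch. The data fields, weights, and threshold are invented for illustration only and do not come from any actual vendor’s software; the point is how a place-based risk score can flag a person almost entirely because of where they live.

```python
# Hypothetical toy model (not any real vendor's system) of a place-based
# "predictive policing" risk score. Weights and threshold are invented.

from dataclasses import dataclass

@dataclass
class Resident:
    name: str
    neighborhood_crime_rate: float   # reported crimes per 1,000 residents
    prior_police_contacts: int       # stops/arrests, themselves shaped by patrol patterns

def risk_score(person: Resident) -> float:
    """Toy score: the inputs are about the neighborhood, not the person's conduct."""
    return 0.6 * person.neighborhood_crime_rate + 0.4 * person.prior_police_contacts

def flag_for_intervention(person: Resident, threshold: float = 20.0) -> bool:
    # Anyone over the threshold gets extra police attention,
    # regardless of anything they have personally done.
    return risk_score(person) > threshold

# Two people with identical histories, different zip codes:
a = Resident("A", neighborhood_crime_rate=45.0, prior_police_contacts=2)
b = Resident("B", neighborhood_crime_rate=8.0, prior_police_contacts=2)

print(flag_for_intervention(a))  # True  -- flagged simply for living in a high-crime area
print(flag_for_intervention(b))  # False
```

In this toy version, the only thing separating the person who gets flagged from the person who does not is the neighborhood they live in, which is exactly the objection to these systems.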

Similarly, it’s great to imagine computer programs ensuring that only certain “types” of police officers respond to potentially lethal crime scenes, in order to prevent violence. But sorting officers by who would be less volatile in a given situation ignores the human element: a cop’s ability to cope with a situation depends on all sorts of intangibles, like what sort of day they’re having and how they’re feeling. It also ignores that many encounters, like routine traffic stops, are not predictably violent, yet that is often where violence occurs. Most importantly, using artificial intelligence in this way is only a band-aid for the real problems: police forces need to be trained in de-escalation rather than a shoot-first mentality, there needs to be better screening before officers are hired, and there needs to be accountability and consequences for police brutality.

Finally, consider how artificial intelligence works in the sentencing context. Some courts around the country are using computer-generated scores to eliminate sentencing biases, but the results show that the bias is built right into the artificial intelligence. The computer assessments are supposed to predict how likely an offender is to commit future crimes and to give the person a sentencing score based on that prediction. But the programs tend to score black people as more likely to re-offend than white people and therefore more deserving of longer prison sentences. And by factoring in things like exposure to violence, the assessments unquestionably stack the deck against poor people.
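A simplified, hypothetical scoring function illustrates how this happens. The inputs and weights below are invented and are not the formula used by any actual court tool; the sketch only shows how socioeconomic proxies, rather than individual conduct, can drive the “risk” number a judge sees.

```python
# Hypothetical recidivism-score sketch (invented inputs and weights, not any
# real court tool) showing how socioeconomic proxies inflate a "risk" score.

def recidivism_score(prior_convictions: int,
                     exposed_to_neighborhood_violence: bool,
                     stable_employment: bool) -> int:
    score = 2 * prior_convictions
    if exposed_to_neighborhood_violence:
        score += 3          # penalizes growing up in a poor, high-crime area
    if not stable_employment:
        score += 3          # penalizes poverty and joblessness
    return score

# Two defendants with the same criminal history, different economic circumstances:
defendant_1 = recidivism_score(1, exposed_to_neighborhood_violence=True,  stable_employment=False)
defendant_2 = recidivism_score(1, exposed_to_neighborhood_violence=False, stable_employment=True)

print(defendant_1, defendant_2)  # 8 vs. 2 -- the "high risk" label tracks poverty, not conduct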

We must remember that at the heart of all of these artificial intelligence programs are computer algorithms, and sexism, racism, socio-economic bias, and other forms of discrimination are being built right into those systems. Moreover, the artificial intelligence systems now being used in other contexts, like marketing and employment, tend to become more racist and sexist over time. Artificial intelligence is not going to provide a quick fix for the flaws in our criminal justice system; it is only going to further entrench the problems.
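That worsening over time is a feedback loop, and a toy simulation (invented numbers, not any real system) shows the mechanism: flagged neighborhoods get more patrols, more patrols produce more recorded arrests, and those arrests become the “evidence” for flagging the same neighborhoods again.

```python
# Hypothetical feedback-loop sketch: two areas with identical underlying offense
# rates, but a slight initial over-policing of area_A that the "model" amplifies.

patrols = {"area_A": 11, "area_B": 9}   # slight initial disparity
true_rate = 0.1                         # identical real offense rate in both areas

for year in range(5):
    # Recorded arrests are proportional to where police are looking,
    # not to where crime actually happens.
    arrests = {area: round(true_rate * patrols[area] * 10) for area in patrols}
    # "Retraining": shift a patrol unit toward whichever area had more recorded arrests.
    hot = max(arrests, key=arrests.get)
    cold = min(arrests, key=arrests.get)
    if hot != cold:
        patrols[hot] += 1
        patrols[cold] -= 1

print(patrols)  # {'area_A': 16, 'area_B': 4} -- the disparity grows every year
```

Even though both areas have the same underlying crime rate, the system’s own decisions generate the data it learns from, so the initial bias compounds rather than corrects.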

(Via Geekwire): A “Predictive Policing” software screengrab from Elgin, just northwest of Chicago.

