A couple of weeks ago, when Judge T.S. Ellis III, a federal district court judge in Virginia, sentenced former Trump campaign chairman Paul Manafort to roughly four years in prison on eight counts of bank and tax fraud, my social media pages blew up with comments like:
“My client, who hasn’t been convicted of anything, has spent more time at CCJ waiting for his day in court than what Manafort has to serve.”
“Manafort will spend less time in prison than Crystal Mason, who was sentenced to 5 years for voting while on probation.” and
“Adnan Syed must remain in prison while waiting for a new trial but white, wealthy, and non-Muslim Paul Manafort gets 47 months in prison when sentencing guidelines required 19 to 24 years.”
You don’t need a law degree to recognize the fundamental unfairness of Manafort’s short sentence compared with the sentences others receive for lesser crimes (or no crime at all). Similar postings appeared when CPD officer Jason Van Dyke was sentenced to 81 months (of which he was expected to serve about half with good-conduct credit) for shooting and killing Laquan McDonald. At first glance, these comparatively short sentences support the idea of using algorithms and artificial intelligence (AI) to determine sentencing. After all, computers don’t have biases the way humans do. But we shouldn’t be too quick to trust that AI will do a better job than humans.
For the most part, the algorithms used in the criminal justice system are based on historical data. AI analyzes that data, identifies patterns, and uses those patterns to predict outcomes such as the likelihood of re-offending. Those with higher predicted recidivism scores get harsher sentences. This is problematic for a number of reasons. First, AI is learning from historical data that is tainted with bias, and it has no way of knowing that the data is flawed. If anything, AI reinforces the biases already embedded in the criminal justice system rather than eliminating them. Second, AI provides statistical correlations, not causation. In other words, AI does not explain why people break the law, nor does it try to find solutions. It only produces demographic profiles of the people likely to offend based on flawed data (doesn’t this sound like a modified form of racial profiling?). Using the examples from above, Crystal Mason, a black, 43-year-old, middle-class citizen with a prior conviction, would receive a higher score from the algorithm than Manafort, a white, 69-year-old, upper-class man. AI wouldn’t consider the fact that Ms. Mason didn’t know she wasn’t allowed to vote, or that Manafort knowingly and repeatedly committed fraud for over a decade. If you think about it, the algorithm would be rewarding Manafort for the very crimes he was convicted of: he made millions of dollars by committing bank and tax fraud, but according to the algorithm, a high income means he is less likely to re-offend and deserves a good score.
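To see how this plays out mechanically, here is a deliberately simplified sketch. The function name, features, and weights are all invented for illustration – real risk tools are proprietary and far more complex – but the core mechanic is the same: the score is computed from correlations a model would extract from historical data, so a prior conviction and a lower income bracket push the score up, while age and wealth push it down.

```python
# Hypothetical sketch of a recidivism-style "risk score." The weights
# below are invented for illustration only; they mimic the kinds of
# patterns a model would learn from historical arrest data. Because
# that data reflects decades of unequal policing and sentencing, the
# "patterns" are biased before the model ever runs.

def risk_score(age: int, prior_convictions: int, income_bracket: int) -> float:
    """Return a 0-10 'likelihood to re-offend' score.

    income_bracket: 0 = low, 1 = middle, 2 = high.
    """
    score = 5.0
    score += 2.0 if prior_convictions > 0 else 0.0  # any prior raises the score
    score -= 0.05 * (age - 40)                      # youth reads as risk
    score -= 1.5 * income_bracket                   # wealth reads as safety
    return max(0.0, min(10.0, score))

# Two hypothetical defendants loosely modeled on the article's comparison:
mason_like = risk_score(age=43, prior_convictions=1, income_bracket=1)
manafort_like = risk_score(age=69, prior_convictions=0, income_bracket=2)

print(f"Mason-like defendant:    {mason_like:.1f}")    # the higher score
print(f"Manafort-like defendant: {manafort_like:.1f}")  # the lower score
```

Notice that nothing in this calculation asks why the prior conviction exists or whether the wealth came from the very fraud being sentenced; the score simply echoes the historical pattern it was given.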
In fairness to AI, there is a statistical correlation between income and crime. But rather than giving those in poverty harsher sentences, we should be looking for solutions to the underlying causes. Poverty means fewer opportunities for quality education and jobs, and with more lower-skilled jobs being replaced by AI and automated machines, it is becoming harder and harder for those living in poverty to climb above the poverty line or even earn enough to cover basic human needs. If there’s no legitimate way to meet those needs, some will turn to crime as a means to make ends meet. But what if every adult in this country received $1,000 each month to help cover the cost of living – would that lower crime rates? Would it decrease the rate of recidivism?
I think it might. I have heard countless stories about people released from jail with nothing more than a bus card. They are told to go find a job and figure it out. For those without skilled training, adding a conviction to their record makes it nearly impossible to compete for jobs, many of which can now be done by a machine. Unable to find work, they see re-offending as the only way to survive.
The idea of a universal basic income (UBI) is not new, but it is getting more attention these days. In fact, there’s even a 2020 presidential candidate running on a UBI platform. That’s probably because each year, regardless of party affiliation, hundreds of thousands of average citizens are losing their jobs to AI. Some experts predict that automation could replace 40% of jobs within 15 years. Whether you are a middle-aged white male truck driver from Iowa or a black female cashier at a Walgreens in Chicago, you can understand and relate to the stress of being replaced by a machine. Add to that the stress of starting over with nothing – no food, no clothes, no shelter – and you have the circumstances that many individuals released from prison face. So perhaps instead of a bus pass, a $1,000 monthly check would do more to help get them back on their feet.
I realize it may sound like I’m against technology and advances in the industry. I’m not. I think technology is wonderful, and it has made my life better in many ways. I do think, however, that we need to understand the technology we are using and the consequences of AI. The current algorithms used to determine sentencing do not solve the problem of unfair sentencing driven by bias. And it is becoming increasingly difficult for lower-skilled workers recently released from jail to reintegrate into society without assistance. These are realities we must accept and work to solve.