The 21st century has witnessed AI (Artificial Intelligence) accomplishing tasks like handily defeating humans at chess or quickly teaching them foreign languages.
A more advanced task for a computer is predicting an offender's likelihood of committing another crime. That's the job of an AI system called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions). But it turns out the tool is no better than an average bloke, and can be racially biased too. That's exactly what a research team discovered after extensively studying the AI system, which is widely used by judicial institutions.
The researchers found that, with considerably less information than COMPAS (only 7 features compared to COMPAS's 137), a small crowd of nonexperts is as accurate as COMPAS at predicting recidivism.
COMPAS clocked an overall accuracy of 65.4% in predicting recidivism (the tendency of a convicted criminal to re-offend), slightly below the human participants' collective accuracy of 67%.
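To make those percentages concrete, here is a minimal Python sketch of how such an overall accuracy score is computed: the share of recidivism predictions that match the actual outcomes. The values below are made up for illustration and are not data from the study.

```python
def accuracy(predictions, outcomes):
    """Fraction of binary predictions (1 = will re-offend) that match reality."""
    correct = sum(p == o for p, o in zip(predictions, outcomes))
    return correct / len(outcomes)

# Toy example with made-up values, not the study's data:
predicted = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
actual    = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]
print(f"accuracy = {accuracy(predicted, actual):.1%}")  # -> 70.0%
```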
Now take a moment to reflect on the fact that this AI system, which fares no better than an average person, has been used by courts to predict recidivism.
In the researchers' words, commercial software that is widely used to predict recidivism is no more accurate or fair than the predictions of people with little to no criminal justice expertise who responded to an online survey.
Although both sides fall drastically short of an acceptable accuracy score, the fact that courts rely on an AI tool that is no better than an average human raises many questions.