AI used to Predict Criminality based on a Face Entices Controversy
A press release in early May this year from Harrisburg University claimed that researchers there were developing an AI that could determine whether someone would become a criminal. Critics of the research have called the proposed technology “Race Science”.
Two professors and a graduate student developed a facial recognition program aimed at predicting in advance whether someone is likely to become a criminal. Their paper, titled “A Deep Neural Network Model to Predict Criminality Using Image Processing,” was set to be published in a collection by Springer Nature, a major academic publisher.
The AI was reported to predict criminality with eighty percent accuracy and no racial bias, based solely on an image of an individual’s face. The press release has since been removed from the Harrisburg University website amid widespread criticism and backlash focused on the racial bias the AI seemed to exhibit.
In response to the Harrisburg University press release, more than one thousand machine-learning specialists signed a public letter (1) condemning the research, and Springer Nature subsequently confirmed that it would not publish the paper.
The consortium that penned the open letter, calling itself the Coalition for Critical Technology (CCT), stated that the paper’s claims “are based on unsound scientific premises, research, and methods which … have [been] debunked over the years.” The CCT’s central objection appears to be that the measure of criminality itself is racially biased.
The open letter argues that the racially skewed nature of policing in the USA will bias the algorithm’s predictions. As a team of algorithm developers ourselves, we at Tower wonder whether politics has gotten in the way of statistics this time around.