Can a computer predict who will commit a crime?
A research team from McMaster University in Canada and Shanghai Jiao Tong University in China originally set out to disprove the idea that there could be a link between facial features and criminality. However, in the 2016 paper they ended up writing, they claim their experiment proved otherwise: that a computer could, in fact, identify criminals from their facial features.
Critics in the field are assailing the research. They have questioned the legitimacy of the project, pointing out that it was skewed in various ways (for example, the photos of criminals were taken from identification documents while the non-criminal photos came from social media profiles: does anyone look better on their driver’s license than they do on social media?). Even though the researchers tried to standardize the images by converting them to greyscale and resizing them, other scientists feel that wasn’t enough to make up for the experiment’s design flaws. Others have simply called the experiment bad science and say that adding the computer component only lends unearned legitimacy to the question, the experiment, and the conclusions.
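To give a sense of what that kind of standardization looks like in practice, here is a minimal sketch using the Pillow imaging library (the file names and target size are hypothetical; the paper’s exact pipeline is not reproduced here). Converting to greyscale and resizing normalizes colour and resolution, but it cannot erase differences that come from where the photos were sourced, such as lighting, expression, or the flat framing of an ID photo versus a curated social media shot.

```python
from PIL import Image  # Pillow imaging library


def standardize(path, size=(128, 128)):
    """Convert an image to greyscale and resize it to a fixed resolution.

    This mirrors the normalization described above: it removes colour and
    resolution differences between photos, but not differences in lighting,
    pose, or expression introduced by how each photo was taken.
    """
    img = Image.open(path).convert("L")  # "L" = 8-bit greyscale
    return img.resize(size)


# Hypothetical file names, for illustration only.
id_photo = standardize("id_card_photo.jpg")
social_photo = standardize("social_media_photo.jpg")
```

The point critics make is visible even in this sketch: the two standardized images end up the same size and colour depth, yet one still looks like an ID photo and the other like a social media picture, and a classifier trained on such data can learn exactly that difference rather than anything about criminality.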
The danger of these experiments is clear. Non-criminals are at risk of being labeled unfairly and, more generally, we risk validating the idea that a person can “look like a criminal.” Physiognomy (the assessment of character traits from outward appearance) has long been regarded as a pseudoscience, and it goes back at least to the Greek philosopher Aristotle. It regained favor in the European Middle Ages and Renaissance, and it was revived once more in the United States as part of the eugenics movement.
Researchers know that publishing their findings will invite questions from the press and the public, especially when the results are controversial. That leaves scientists debating the ethics of publishing results such as these. Algorithms are created by humans, so why wouldn’t they be susceptible to the same biases? And with the proliferation of cameras in public places, are we in danger of labeling people as criminals straight off the street, before any crime takes place? That would be quite a twist on predictive policing.
Resources:
Automated inference on criminality using face images (Original research article, 2016)
Concerns as face recognition tech used to “identify” criminals (New Scientist, 2016)
A new program judges if you’re a criminal from your facial features (Motherboard, 2016)
Return of physiognomy? Facial recognition study says it can identify criminals from looks alone (RT News, 2016)