“Springer Nature” versus unscientific book on facial recognition
New York, 26.6.2020
Springer Nature is opposing the publication of a book claiming that a facial recognition system can predict whether someone is likely to become a criminal. The publisher reacted after hundreds of AI researchers sent it an open letter demanding that the work be retracted and calling the technology racist.
The Harrisburg University study behind the book claims that the system can predict, with 80% accuracy and without racial bias, “whether someone is likely to be a criminal.”
The open letter criticizes the study as unscientific and argues that crime-prediction technologies based on machine learning are racist. It was drafted by five researchers from MIT, the AI Now Institute, Rensselaer Polytechnic Institute and McGill University.
The signatories call on all academic publishers to stop publishing studies claiming that AI algorithms can predict a person’s criminality. Springer stated that the paper had been rejected after a “thorough peer review process”.
Researchers at Google and Princeton had already refuted a similar 2017 study from Jiao Tong University in Shanghai, which claimed that an algorithm could predict criminality based on facial features.