AI on a par with human experts in medical diagnoses?

10 October 2019, by Horst Buchwald

London, 10.10.2019

The potential of artificial intelligence in healthcare has generated enthusiasm: advocates say it will free up resources, create more time for doctor-patient interaction, and even support the development of tailored treatments. Last month the UK government announced £250m of funding for a new NHS artificial intelligence laboratory.

Other scientists, however, caution that the latest findings rest on a small number of trials, because the field is littered with poor-quality research.

One emerging area is the use of AI to interpret medical images, which relies on deep learning: a sophisticated form of machine learning in which labeled images are fed into algorithms that pick out features and learn to classify similar images. This approach has proven successful in diagnosing diseases ranging from cancer to eye disease.
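The labeled-image training loop described above can be sketched in miniature. The toy below uses a nearest-centroid classifier on synthetic 8x8 "images" rather than a real deep network, and every dataset, threshold, and name is invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for labeled medical images: class 0 ("healthy")
# images are darker on average, class 1 ("diseased") brighter.
# Each image is 8x8 pixels, flattened to a 64-element vector.
n_per_class, pixels = 50, 64
healthy = rng.normal(0.3, 0.1, size=(n_per_class, pixels))
diseased = rng.normal(0.7, 0.1, size=(n_per_class, pixels))
images = np.vstack([healthy, diseased])
labels = np.array([0] * n_per_class + [1] * n_per_class)

# "Training": learn one feature vector (the mean image) per class.
centroids = np.array([images[labels == c].mean(axis=0) for c in (0, 1)])

def classify(image: np.ndarray) -> int:
    """Assign the class whose learned centroid is nearest."""
    distances = np.linalg.norm(centroids - image, axis=1)
    return int(distances.argmin())

# Classify a new, unseen "image" resembling the diseased class.
new_image = rng.normal(0.7, 0.1, size=pixels)
print(classify(new_image))  # prints 1
```

A real deep-learning system replaces the hand-built mean-image features with many layers of learned features, but the supervised pattern is the same: labeled examples in, a classifier of new images out.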

However, it remains unclear how such deep-learning systems compare with human performance. Now researchers say they have conducted the first comprehensive review of published studies on the subject and found that humans and machines perform on a par.

Prof. Alastair Denniston, of University Hospitals Birmingham NHS Foundation Trust and co-author of the study, said the results were encouraging, but that the study was a reality check on whether the current AI hype is justified. Dr. Xiaoxuan Liu, the study’s lead author, agreed: “There are a lot of headlines about AI outperforming humans, but our message is that, at best, it can be equivalent.”

Writing in The Lancet Digital Health, Denniston, Liu and colleagues focused on research published since 2012 – a crucial year for deep learning.

An initial search turned up more than 20,000 relevant studies. However, only 14 studies – all concerning human disease – reported high-quality data, tested the deep-learning system on images from a dataset separate from the one it was trained on, and showed the same images to human experts.

The team pooled the most promising results from each of the 14 studies, finding that deep-learning systems correctly detected disease in 87% of cases, compared with 86% for healthcare professionals, and correctly gave the all-clear in 93% of cases, compared with 91% for human experts. However, the physicians in these studies were not given the additional patient information they would have in the real world to inform their diagnoses.

Prof. David Spiegelhalter, chairman of the Winton Centre for Risk and Evidence Communication at the University of Cambridge, said the field was flooded with poor research. “This excellent review shows that the massive hype about AI in medicine obscures the lamentable quality of almost all evaluation studies,” he said. “Deep learning can be a powerful and impressive technique, but clinicians and commissioners should be asking the crucial question: what does it actually add to clinical practice?”

Denniston, however, remained optimistic about the potential of AI in healthcare: such systems could serve as a diagnostic tool and help clear the backlog of scans and images. They could also be useful, Liu added, in places that lack experts to interpret images.

Liu said it was important to test deep-learning systems in clinical trials to determine whether patient outcomes actually improve compared with current practice.

Dr. Raj Jena, an oncologist at Addenbrooke’s Hospital in Cambridge who was not involved in the study, said deep-learning systems would be important in the future, but stressed that they need robust real-world testing. It is also important, he added, to understand why such systems sometimes get an assessment wrong.
