The New York Times pits man against machine in a CT-scan interpretation challenge – and the machine wins. The AI produced fewer false positives and fewer false negatives than a team of six medical experts:
“We have some of the biggest computers in the world,” said Dr. Daniel Tse, a project manager at Google and an author of the journal article [in Nature]. “We started wanting to push the boundaries of basic science to find interesting and cool applications to work on.”
In the new study, the researchers applied artificial intelligence to CT scans used to screen people for lung cancer, which caused 160,000 deaths in the United States last year, and 1.7 million worldwide. The scans are recommended for people at high risk because of a long history of smoking.
Studies have found that screening can reduce the risk of dying from lung cancer. In addition to finding definite cancers, the scans can also identify spots that might later become cancer, so that radiologists can sort patients into risk groups and decide whether they need biopsies or more frequent follow-up scans to keep track of the suspect regions.
But the test has pitfalls: It can miss tumors, or mistake benign spots for malignancies and push patients into invasive, risky procedures like lung biopsies or surgery. And radiologists looking at the same scan may have different opinions about it.
The researchers thought computers might do better. They created a neural network, with multiple layers of processing, and trained it by giving it many CT scans from patients whose diagnoses were known: Some had lung cancer, some did not, and some had nodules that later turned cancerous.
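The Nature paper describes Google's own end-to-end 3D deep-learning model, which isn't reproduced in the article. As a rough illustration of the general idea – a multi-layer convolutional network mapping a labeled CT volume to a malignancy score – here is a minimal PyTorch sketch; the layer sizes, inputs, and training step are placeholders, not the published architecture:

```python
import torch
import torch.nn as nn

# Minimal 3D CNN sketch: not the published model, just the general shape of
# a multi-layer network that maps a CT volume to a cancer probability.
class TinyLungNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # CT volumes are single-channel
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
        )
        self.classifier = nn.Linear(32, 1)  # one logit: malignant vs. not

    def forward(self, x):  # x: (batch, 1, depth, height, width)
        h = self.features(x)
        h = h.mean(dim=(2, 3, 4))  # global average pool over the volume
        return self.classifier(h)

model = TinyLungNet()
loss_fn = nn.BCEWithLogitsLoss()  # labels: 1 = cancer, 0 = no cancer
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One hypothetical training step on a batch of labeled scans.
scans = torch.randn(4, 1, 64, 64, 64)  # stand-in for preprocessed CT volumes
labels = torch.tensor([[1.], [0.], [0.], [1.]])
loss = loss_fn(model(scans), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```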
Then, they began to test its diagnostic skill.
“The whole experimentation process is like a student in school,” Dr. Tse said. “We’re using a large data set for training, giving it lessons and pop quizzes so it can begin to learn for itself what is cancer, and what will or will not be cancer in the future. We gave it a final exam on data it’s never seen after we spent a lot of time training, and the result we saw on final exam — it got an A.”
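Dr. Tse's lessons / pop quizzes / final exam analogy maps onto the standard train / validation / test split in machine learning. A generic scikit-learn sketch, using made-up case IDs rather than the study's data, makes the correspondence concrete:

```python
from sklearn.model_selection import train_test_split

# Hypothetical case IDs with known diagnoses (not the study's data).
case_ids = list(range(10_000))
labels = [i % 2 for i in case_ids]  # stand-in diagnoses: 1 = cancer, 0 = not

# "Lessons" vs. "final exam": hold out a test set the model never
# sees during training.
train_ids, test_ids, train_y, test_y = train_test_split(
    case_ids, labels, test_size=0.2, stratify=labels, random_state=0)

# "Pop quizzes": carve a validation set out of the training cases to
# monitor learning without ever touching the final-exam data.
train_ids, val_ids, train_y, val_y = train_test_split(
    train_ids, train_y, test_size=0.1, stratify=train_y, random_state=0)
```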
Tested against 6,716 cases with known diagnoses, the system was 94 percent accurate. Pitted against six expert radiologists, when no prior scan was available, the deep learning model beat the doctors: It had fewer false positives and false negatives. When an earlier scan was available, the system and the doctors were neck and neck.
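To make those numbers concrete: a false positive flags a healthy patient as having cancer (risking an unnecessary biopsy), a false negative misses a real tumor, and accuracy is the fraction of cases called correctly. A small sketch with made-up predictions, not the study's results:

```python
def screening_metrics(predictions, truths):
    """Count errors for binary labels: 1 = cancer, 0 = no cancer."""
    fp = sum(p == 1 and t == 0 for p, t in zip(predictions, truths))
    fn = sum(p == 0 and t == 1 for p, t in zip(predictions, truths))
    correct = sum(p == t for p, t in zip(predictions, truths))
    return {"false_positives": fp,
            "false_negatives": fn,
            "accuracy": correct / len(truths)}

# Toy example with made-up calls, not the study's 6,716 cases.
print(screening_metrics([1, 0, 1, 0, 1], [1, 0, 0, 0, 1]))
# {'false_positives': 1, 'false_negatives': 0, 'accuracy': 0.8}
```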
—
You can read the original study here, at Nature.
[via /r/science]