Typically, it works like this: Researchers develop an algorithm using “deep learning” — where a computer system mimics the brain’s neural networks. It’s exposed to a large number of images — digital mammograms, for example — and it teaches itself to recognize key features, such as signs of a tumor.
Other studies have suggested that AI can outperform humans in diagnosing certain cancers. One found that computers bested dermatologists in distinguishing harmless moles from melanoma skin cancer. Another found that AI was typically better than pathologists at finding breast tumor cells in lymph node samples.
This latest AI model was “trained” by exposing it to mammograms from more than 90,000 women whose outcomes were known. The researchers then tested the model on a separate dataset: mammograms from more than 25,000 U.K. women and more than 3,000 U.S. women.
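The key design point is that the test mammograms were kept separate from the training ones, so the model is graded on cases it has never seen. A minimal sketch of that split, using made-up record IDs rather than any real patient data:

```python
import random

# Hypothetical records: (record_id, known_outcome) pairs standing in
# for mammograms from women whose outcomes were already known.
records = [(i, i % 2) for i in range(1000)]

random.seed(42)
random.shuffle(records)

# Hold out a set the model never sees during training, mirroring the
# study's design of testing on a separate dataset.
split = int(0.8 * len(records))
train, test = records[:split], records[split:]

# No record appears in both groups.
overlap = {r[0] for r in train} & {r[0] for r in test}
print(len(train), len(test), len(overlap))  # 800 200 0
```

Without that separation, a model can look accurate simply by memorizing the cases it was trained on.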
Overall, the model reduced false positive and false negative results. The improvement was greater in the United States. While it’s not certain why, Etemadi pointed to one potential reason: In the United Kingdom, it’s standard for two radiologists to analyze a mammogram, which generally improves the accuracy.
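“False positive” and “false negative” rates are simple tallies: healthy scans flagged as cancer, and cancers that were missed. A small illustration with invented labels (these numbers are hypothetical, not the study's results):

```python
# Hypothetical ground truth (1 = cancer present) and two sets of calls,
# to show how false-positive and false-negative rates are computed.
truth  = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
reader = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
model  = [1, 0, 1, 1, 0, 1, 1, 0, 0, 1]

def rates(truth, preds):
    # False positive: no cancer, but flagged. False negative: cancer missed.
    fp = sum(1 for t, p in zip(truth, preds) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(truth, preds) if t == 1 and p == 0)
    return fp / truth.count(0), fn / truth.count(1)

print("reader FP/FN rates:", rates(truth, reader))  # (0.4, 0.2)
print("model  FP/FN rates:", rates(truth, model))   # (0.2, 0.0)
```

Lowering both rates at once is the hard part: a system can trivially eliminate false negatives by flagging everything, at the cost of flooding radiologists with false alarms.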
But while the AI model performed well in this “controlled environment,” it remains to be seen how it will work in the real world, said Dr. Stamatia Destounis.
She is a spokesperson for the Radiological Society of North America and a clinical professor of imaging sciences at the University of Rochester, in New York.
“What’s needed are clinical studies in real day-to-day practice to see if these findings can be reproduced,” Destounis said.
Even in this controlled setting, the AI model was not foolproof. It did not detect all cancers or eliminate false positives. And sometimes it lost out to humans.
In a separate experiment, the researchers pitted the AI model against six U.S. radiologists. Overall, the computer was better, but there were cases where the doctors correctly saw a tumor the machine missed.
So what did the AI model overlook? And what did it see that doctors didn’t? No one knows, Etemadi said.
“At this point, we can only observe the patterns,” he said. “We don’t know the ‘why.’”