Machine Learning in Radiology: What Does the Future Hold?
Artificial intelligence looms large in radiology's future, but how it will be used remains unclear.
Radiologists create novel applications for existing imaging technologies and help develop new ones. By its nature, diagnostic imaging is technology-centric, so progress in radiology is measured in terms of how much faster, safer, more comfortable, and less invasive the imaging experience is for patients. We could also look at the ease of performing the study, the consistency of the image quality, and the ease and accuracy of detection and diagnosis. When all these factors are successfully aligned, the overall cost of an episode of care can decrease, even if the imaging itself is not an inexpensive component of that care. Because minimizing waste and improving accuracy are laudable goals, the desire to use machine learning to overcome the fallibility of human perception and judgment is not surprising.
The expanding concept of intelligence outside the confines of the cerebral cortex has necessitated a new focus of scientific inquiry related to the ethics of using autonomous machines. Ethics related to technologic innovation generally lag behind the development of the technology itself. There is no broad professional and societal consensus about which emerging technologies and applications are appropriate, legally and morally. The potential benefits of each technologic advance are initially touted with great fanfare by its promoters. But then the nascent cloud of conflict starts to dim the bright light of hype when someone inevitably asks a question that starts with the phrase "But what happens if..."
Given the recent worldwide data kidnapping via ransomware, the health care industry is acutely aware of the vulnerability of electronic information to increasingly sophisticated malicious attacks. Hospital computer networks are particularly vulnerable to cyberattacks because they can tolerate only minimal downtime, which delays software upgrades and security patches.
So ask yourself: What could possibly go wrong if a deliberate attack (or human error) changed a few lines of code in machine-learning software? It's like asking, What could possibly go wrong if I changed one base pair of someone's DNA? If undetected, a seemingly insignificant change could produce significant and costly downstream effects, in terms of health and well-being as well as dollars and cents.
In Robin Cook's medical thriller Cell, a health care app monitors, collects, and sends data to a computer that collates, integrates, and analyzes vital signs, lab values, results of imaging studies, etc. For example, not only is blood-glucose monitoring for diabetes automated, but the software independently downloads adjustments to implanted insulin pumps. Tailored to the patient, with no human input needed. Sounds great, right? But, as expected in a medical thriller, there's a dark twist. The app's seemingly well-intentioned (but possibly nefarious) creators designed part of its algorithm to minimize pain and suffering. At some point, having continuously received information via feedback loops from many patients, the code for this algorithm adjusts itself ("machine learning") and the app goes rogue, with unintended harmful results. Yes, it's a novel, a product of the author's mind, but that doesn't mean the story lacks prescience.
Perhaps one day machine learning will improve the quality of life for everyone, including radiologists. As math-averse Calvin, the comic strip kid in "Calvin and Hobbes," says, "Given the pace of technology, I propose we leave math to the machines and go outside and play." Although in general I'd decry Calvin's feigned math allergy, on a beautiful spring day like today, his proposal sounds about right.
Yes, sometimes I just want to bury my head in the sand and not think about the future of machine learning in radiology, especially given unanswered questions about how it will actually be used and how vulnerable it will be to coding-related errors and perhaps even to sabotage. But this is not one of those times. Why? Because this month's JACR tweetchat is about Machine Learning and Big Data.
Join a trio of moderators, namely Geraldine B. McGinty, MD, MBA, FACR, Arun Krishnaraj, MD, MPH, and Christoph Wald, MD, PhD, FACR, tomorrow, Tuesday, May 23, at noon EDT. Don't forget to search for and use #JACR in your tweets. And if you're at ACR 2017, be sure to join us for the live tweet up on the mezzanine level of the Marriott Wardman Park.
The three discussion questions are as follows:
1. If machine learning algorithms surpass radiologists in diagnostic accuracy, how will the role of the radiologist change?
2. Will patients be comfortable receiving a diagnosis generated by a machine?
3. Do radiologists have an ethical obligation to their patients to pursue new technology?
The ACR Bulletin has posted a succinct online primer about machine learning. Just click on each of the four topics: Machine Learning 101, Seismic Shifts, Sizing Up Technology Symbiosis, and Riding the Technology Wave.