Reexamining Radiology Quality Measures
Are these metrics driving performance improvement or causing performance anxiety?
For quality measures to be meaningful, radiologists need to reach consensus about defining quality.
The Radiology Firing Line team is at it again, bringing you their third podcast, a lively and thoughtful exchange between Firing Line regular Saurabh (Harry) Jha, MBBS, and guest Jason N. Itri, MD, PhD, from the University of Cincinnati, recorded at RSNA 2015. C. Matthew Hawkins, MD, moderates.
Here's an interesting experiment. The next time you're chatting with colleagues, innocently insert the following question into the conversation: "What exactly is quality?" Will you get a baker's dozen different answers? Awkward silence? Or an eyeroll-shoulder shrug combo?
Yes, good luck defining quality, much less measuring it. By quality, do we mean diagnostic accuracy? Is it better to be sensitive or specific? What constitutes a major miss versus a minor one? Who decides the meaningfulness of a minor finding? What about reporting language? What about costs to the medical system as a whole? There are so many potential determinants of what one can define as quality.
Like the story of the blind men and the elephant, no one person's viewpoint can possibly provide the whole picture. So, ultimately, who gets to decide? Whose opinions carry more weight? For now, a single agreed-upon definition of quality in radiology is as elusive as the Higgs boson. But if we radiologists don't define quality, it may be defined for us.
Which brings us back to that elusive question: How can you measure what you can't even define? Yet measure we must. The drive to measure quality in health care has great momentum. So how we define quality will determine which measurements will be used. Like other types of initiatives for collecting data to demonstrate impact, quality improvement programs run the risk of measuring for measuring's sake rather than gathering meaningful data.
The aphorism "not everything that counts can be counted, and not everything that can be counted counts," generally attributed to Albert Einstein, fits quality measures in radiology well.
We should focus not on what we can measure, but rather on what we should measure. Without good standards for the hard-to-measure things, we end up with easy-to-capture data, like turnaround times, that are not sophisticated enough to serve as proxies for the complex concept of quality.
In the podcast, Drs. Jha, Hawkins, and Itri delve into the opportunities and challenges surrounding defining and measuring quality. Their discussion raises many important but as-yet-unanswered questions: Do we address the quality conundrum by creating registries and analyzing big data to determine the impact of diagnostic radiologists on patient outcomes? If benchmarks are created, would they be used to compare and contrast individual radiologists or to drive performance improvement in systems? Can we modernize our historically apprenticeship-based model of training with a standardized residency curriculum? I believe that 2016 will be a pivotal year for tackling these questions (and many more) to achieve consensus on quality and its measures.
As you engage your colleagues and the broader radiology community in consensus-building, consider the many articles the JACR has published on quality. Here are a few recent ones.
Data Drives Quality Improvement, by Nadja Kadom, MD, and Paul Nagy, PhD
How Do Publicly Reported Medicare Quality Metrics for Radiologists Compare With Those of Other Specialty Groups?, by Andrew B. Rosenkrantz, MD, MPA, Danny R. Hughes, PhD, and Richard Duszak Jr., MD
An Introduction to Basic Quality Metrics for Practicing Radiologists, by Jonathan B. Kruskal, MD, PhD, and Ammar Sarwar, MD
Fundamentals of Quality and Safety in Diagnostic Radiology, by Michael A. Bruno, MD, and Paul Nagy, PhD
Beginner's Guide to Practice Quality Improvement Using the Model for Improvement, by Cindy S. Lee, MD, and David B. Larson, MD, MBA