Attachment B -- Sample Expert Commentary
Perspective
Is the Measurement Mandate Diverting the Patient Safety Revolution?
By: Robert M. Wachter, MD
"You can't manage what you don't measure" is a business-world truism. And it is true: measurement focuses our attention, informs goal-setting, creates accountabilities, and allows the determination of success and failure.
In health care, the emphasis on measurement has had some notable successes. For example, the use of beta blockers after acute myocardial infarction is now accomplished over 90% of the time in health plans reporting HEDIS data (1). This increase, from a mean of only 60% in 1996, surely translates into many thousands of patients with better outcomes following acute MI.
But there is a dark side to measurement: it tends to skew the attention and resources of individuals and organizations toward that which is measured, creating the possibility of selectively ignoring that which is not. During this National Patient Safety Week, it is worth reflecting on whether measurement — generally a good thing — is steering us away from some important safety targets.
Take, for example, the tremendous focus on hospital-acquired infections in most patient safety initiatives, including those promoted by the Institute for Healthcare Improvement (2) and the Centers for Medicare & Medicaid Services (3). Who can argue against aggressively attacking central line-associated bloodstream infections, nosocomial urinary tract infections, or ventilator-associated pneumonias (VAP)? And undoubtedly some of the focus in these areas stems from studies showing that the widespread adoption of a series of commonsensical practices (handwashing, sterile draping, head-of-bed elevation, etc.) can reduce these life-threatening and expensive infections (4).
But some of this attention may be due to the fact that the rates of these infections can be measured without expensive and tedious chart review or, even more burdensome, direct observation. Decades of work by the Centers for Disease Control and Prevention (CDC) and others have created a series of standard definitions that allow us to monitor and benchmark hospital infection rates (notwithstanding ongoing definitional challenges in conditions such as VAP [5]). Perhaps just as importantly, the presence of infection control officers, with epidemiologic training and pre-existing measurement resources, has facilitated efforts to measure, trend, and attack hospital-acquired infections (6).
The end result has been that infection control has become a strong, perhaps even the dominant, focus of the entire field of patient safety, trumping such important but less easily measurable problems as medication errors, handoff errors, and diagnostic errors. With this focus has come a reframing of certain aberrant clinical practices as "medical errors." For example, although it seems natural today, who would have guessed as recently as a decade ago that failure to clean one's hands would be dubbed an egregious medical mistake, eliciting the opprobrium of peers and the submission of incident reports? Or who could have predicted that a catheter-associated bloodstream infection would trigger a root cause analysis and even, in some states, a report to a state health authority?
The triumph of measurement is also responsible for the primacy of quality measurement (and its capitalistic cousin, pay-for-performance) initiatives over other aspects of patient safety. Take, for example, a patient admitted to the hospital with severe community-acquired pneumonia. Whether the patient received flu vaccine and guideline-concordant antibiotics is easily measurable, and thus has become the substrate for public reporting, pay-for-performance, and audit and feedback — in short, all the strategies in our collective toolboxes to catalyze improved performance. Meanwhile, in the absence of measures, major safety problems in the care of the same patient — medication errors, communication errors, patients crashing during transport or failing to receive their outpatient antibiotics — fly under our collective radar screen, in great danger of receiving too little attention and too few resources.
It should also be noted that measuring that which is easy to measure (e.g., using administrative data) can paint a different picture of the quality of care than measurement using more comprehensive and resource-intensive methods such as medical record review and physician over-read. For certain conditions, such as falls, malnutrition, and pain management, there are no easy-to-measure processes. And even within a single condition, one often reaches a different conclusion about quality when the data come from a few administrative measures than from a more comprehensive set of measures (7).
Perhaps the area most endangered by the measurement microscope is diagnostic error — an area in which there are virtually no agreed-upon measures, no state or national reporting, and limited focus. Interestingly, in this area (unlike the rest of safety), the malpractice system probably remains the most potent driver of improvement, along with physician professionalism and the implementation of electronic medical records. But as long as a system or doctor can look good on public reports by giving "pneumonia" patients Pneumovax while remaining unscrutinized for misdiagnosing half of those patients, diagnostic errors are likely, in the words of Rodney Dangerfield, to "get no respect" (8).
The bottom line is that we need a rational way to apportion resources and attention across different types of medical error, and between patient safety and broader issues of quality improvement. As we do this, it is important to take measurability into account, but equally crucial that we not overlook those dimensions of care that are harder to measure. Where we lack measures (or where the measurement burden is too onerous or expensive), the right focus may be on promoting the scientific research needed to develop measures that capture these errors, allowing them to compete successfully with their better-developed counterparts. In other cases (diagnostic, transition, and communication errors come to mind), we may need to take their importance on faith. If we do, it is likely that strategies to improve care in these areas will still be worth our time, money, and attention, notwithstanding the absence of accurate, user-friendly ways to measure the full dimensions of the problem or the extent of our progress.
Author
Robert M. Wachter, MD
University of California, San Francisco
Disclaimer
The views and opinions expressed are those of the author and do not necessarily state or reflect those of the National Quality Measures Clearinghouse™ (NQMC), the Agency for Healthcare Research and Quality (AHRQ), or its contractor ECRI Institute.
Potential Financial Conflicts of Interest
Dr. Wachter is the Project Director and Lead Contractor of AHRQ WebM&M and AHRQ Patient Safety Network, for which he receives compensation. He is also a paid member of Google's Healthcare Advisory Board and a member of the Board of Directors of the American Board of Internal Medicine.
References
1. Lee TH. Eulogy for a quality measure. N Engl J Med. 2007;357:1175-7.
2. Berwick DM, Calkins DR, McCannon CJ, Hackbarth AD. The 100,000 Lives Campaign: setting a goal and a deadline for improving health care quality. JAMA. 2006;295:324-7.
3. Williams SC, Schmaltz SP, Morton DJ, Koss RG, Loeb JM. Quality of care in U.S. hospitals as reflected by standardized measures, 2002-2004. N Engl J Med. 2005;353:255-64.
4. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006;355:2725-32.
5. Klompas M, Platt R. Ventilator-associated pneumonia—the wrong quality measure for benchmarking. Ann Intern Med. 2007;147:803-5.
6. Gerberding JL. Hospital-onset infections: a patient safety issue. Ann Intern Med. 2002;137:665-70.
7. MacLean CH, Louie R, Shekelle PG, et al. Comparison of administrative data and medical records to measure the quality of medical care provided to vulnerable older patients. Med Care. 2006;44:141-8.
8. Graber M. Diagnostic errors in medicine: a case of neglect. Jt Comm J Qual Saf. 2005;31:106-13.