
Summary

Overview

Patient safety has become a major concern of the general public and of policymakers at the State and Federal levels. This interest has been fueled, in part, by news coverage of individuals who were the victims of serious medical errors and by the publication in 1999 of the Institute of Medicine's (IOM's) report To Err Is Human: Building a Safer Health System. In its report, the IOM highlighted the risks of medical care in the United States and shocked the sensibilities of many Americans, in large part through its estimates of the magnitude of deaths related to medical errors (44,000 to 98,000 deaths per year) and other serious adverse events.

The report prompted a number of legislative and regulatory initiatives designed to document errors and begin the search for solutions. But Americans, who now wondered whether their next visit to a doctor or hospital might harm rather than help them, began to demand concerted action.

Three months after publication of the IOM report, an interagency Federal government group, the Quality Interagency Coordination Task Force (QuIC), released its response, Doing What Counts for Patient Safety: Federal Actions to Reduce Medical Errors and Their Impact. That report, prepared at the President's request, both inventoried ongoing Federal actions to reduce medical errors and listed more than 100 action items to be undertaken by Federal agencies.

An action promised by the Agency for Healthcare Research and Quality (AHRQ), the Federal agency leading efforts to research and promote patient safety, was "the development and dissemination of evidence-based, best safety practices to provider organizations." To begin fulfilling this promise, AHRQ commissioned the University of California, San Francisco (UCSF)-Stanford University Evidence-based Practice Center (EPC) in January 2001 to review the scientific literature regarding safety improvement. To accomplish this, the EPC established an Editorial Board that oversaw development of this report by teams of content experts who served as authors.

Defining Patient Safety Practices

Working closely with AHRQ and the National Forum for Quality Measurement and Reporting (the National Quality Forum, or NQF)—a public-private partnership formed in 1999 to promote a national health care quality agenda—the EPC began its work by defining a patient safety practice as:

A type of process or structure whose application reduces the probability of adverse events resulting from exposure to the health care system across a range of diseases and procedures.

This definition is consistent with the dominant conceptual framework in patient safety, which holds that systemic change will be far more productive in reducing medical errors than will targeting and punishing individual providers. The definition's focus on actions that cut across diseases and procedures also allowed the research team to distinguish patient safety activities from the more targeted quality improvement practices (e.g., practices designed to increase the use of beta-blockers in patients who are admitted to the hospital after having a myocardial infarction). The editors recognize, however, that this distinction is imprecise.

This evidence-based review focuses on hospital care as a starting point because the risks associated with hospitalization are significant, strategies for improvement are better documented there than in other health care settings, and maintaining patient trust is paramount. The report, however, also considers evidence regarding other sites of care, such as nursing homes, ambulatory care, and patient self-management.

The results of this EPC study will be used by the NQF to identify a set of proven patient safety practices that should be used by hospitals. Identification of these practices by NQF will allow patients throughout the nation to evaluate the actions their hospitals and/or health care facilities have taken to improve safety.

Reporting the Evidence

As is typical for evidence-based reviews, the goal was to provide a critical appraisal of the evidence on the topic. This information would then be available to others to ensure that no practice unsupported by evidence would be endorsed and that no practice substantiated by a high level of proof would lack endorsement. Readers familiar with the state of the evidence regarding quality improvement in areas of health care where this has been a research priority (e.g., cardiovascular care) may be surprised, and even disappointed, by the paucity of high-quality evidence for many patient safety practices. One reason for this is the relative youth of the field. Just as there had been little public recognition of the risks of health care prior to the IOM report, there had been relatively little attention paid to such risks—and strategies to mitigate them—among health professionals and researchers.

Moreover, there are a number of methodologic reasons why research in patient safety is particularly challenging. First, many practices (e.g., the presence of computerized physician order entry systems, modifying nurse staffing levels) cannot be the subject of double-blind studies because their use is evident to the participants. Second, capturing all relevant outcomes, including "near misses" (such as a nurse catching an excessive dosage of a drug just before it is administered to a patient) and actual harm, is often very difficult. Third, many effective practices are multidimensional, and sorting out precisely which part of the intervention works is often quite challenging. Fourth, many of the patient safety problems that generate the most concern (wrong-site surgery, for example) are uncommon enough that demonstrating the success of a "safety practice" in a statistically meaningful manner with respect to outcomes is all but impossible.

Finally, establishing firm epidemiologic links between presumed (and accepted) causes and adverse events is critical and frequently difficult. For instance, in studying an intuitively plausible "risk factor" for errors, such as "fatigue," analyses of errors commonly reveal the presence of fatigued providers (because many health care providers work long hours and/or late at night). The question is whether fatigue is over-represented in the situations that lead to errors, relative to how common it is in health care generally. The point is not that the problem of long work hours should be ignored, but rather that strong epidemiologic methods need to be applied before concluding that an intuitive cause of errors is, in fact, causal.

Researchers now believe that most medical errors cannot be prevented by perfecting the technical work of individual doctors, nurses, or pharmacists. Improving patient safety often involves the coordinated efforts of multiple members of the health care team, who may adopt strategies from outside health care. The report reviews several practices whose evidence came from the domains of commercial aviation, nuclear safety, and aerospace, and the disciplines of human factors engineering and organizational theory. Such practices include root cause analysis, computerized physician order entry and decision support, automated medication dispensing systems, bar coding technology, aviation-style preoperative checklists, promoting a "culture of safety," crew resource management, the use of simulators in training, and integrating human factors theory into the design of medical devices and alarms. In reviewing these practices, the research team sought to be flexible regarding standards of evidence, and included research evidence that would not have been considered for medical interventions. For example, the randomized trial that is appropriately hailed as the "gold standard" in clinical medicine is not used in aviation safety research, where such a design would not capture all relevant information; instead, detailed case studies and industrial engineering research approaches are used.

Methodology

To facilitate identification and evaluation of potential patient safety practices, the Editorial Board divided the content for the project into different domains. Some cover "content areas," including traditional clinical areas such as adverse drug events, nosocomial infections, and complications of surgery, but also less traditional areas such as fatigue and information transfer. Other domains consist of practices drawn from broad (primarily nonmedical) disciplines likely to contain promising approaches to improving patient safety (e.g., information technology, human factors research, organizational theory). Once this list was created—with significant input from patient safety experts, clinician-researchers, AHRQ, and the NQF Safe Practices Committee—the editors selected teams of authors with expertise in the relevant subject matter and/or familiarity with the techniques of evidence-based review and technology appraisal.

The authors were given explicit instructions regarding search strategies for identifying safety practices for evaluation (including explicit inclusion and exclusion criteria) and criteria for assessing each practice's level of evidence for efficacy or effectiveness in terms of study design and study outcomes. Some safety practices did not meet the inclusion criteria because of the paucity of evidence regarding efficacy or effectiveness but were included in the report because an informed reader might reasonably expect them to be evaluated or because of the depth of public and professional interest in them. For such high-profile topics (such as bar coding to prevent misidentifications), the researchers tried to fairly present the practice's background, the experience with the practice thus far, and the evidence (and gaps in the evidence) regarding the practice's value.

For each practice, authors were instructed to research the literature for information on:

  • Prevalence of the problem targeted by the practice.
  • Severity of the problem targeted by the practice.
  • Current utilization of the practice.
  • Evidence on efficacy and/or effectiveness of the practice.
  • The practice's potential for harm.
  • Data on cost, if available.
  • Implementation issues.

The report presents the salient elements of each included study (e.g., study design, population/setting, intervention details, results), and highlights any important weaknesses and biases of these studies. Authors were not asked to formally synthesize or combine the evidence across studies (e.g., perform a meta-analysis) as part of their task.
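
As a rough illustration only, the following sketch (in Python) shows one way the per-practice information described above could be captured in a structured record. The field names and the example values are hypothetical assumptions for illustration; they are not drawn from the report itself.

# Hypothetical sketch: a structured record for the per-practice evidence
# summary described above. Field names and the example entry are illustrative
# assumptions, not data from the report.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StudySummary:
    design: str                      # e.g., randomized trial, before-after study
    population_setting: str          # population and care setting studied
    intervention: str                # details of the intervention
    results: str                     # salient findings
    limitations: List[str] = field(default_factory=list)  # weaknesses and biases

@dataclass
class PracticeEvidence:
    practice: str
    problem_prevalence: str          # prevalence of the problem targeted by the practice
    problem_severity: str            # severity of the problem targeted by the practice
    current_utilization: str         # how widely the practice is used today
    efficacy_effectiveness: List[StudySummary] = field(default_factory=list)
    potential_for_harm: Optional[str] = None
    cost_data: Optional[str] = None  # data on cost, if available
    implementation_issues: Optional[str] = None

# Example with hypothetical values:
example = PracticeEvidence(
    practice="Hypothetical practice X",
    problem_prevalence="common among hospitalized patients",
    problem_severity="can cause serious morbidity",
    current_utilization="uneven across institutions",
)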

The Editorial Board and the Advisory Panel reviewed the list of domains and practices to identify gaps in coverage. Submitted chapters were reviewed by the Editorial Board and revised by the authors, aided by feedback from the Advisory Panel. Once the content was finalized, the editors analyzed and ranked the practices using a methodology summarized below.

Summarizing the Evidence and Rating the Practices

Because the report is essentially an anthology of a diverse and extensive group of patient safety practices with highly variable relevant evidence, synthesizing the findings was challenging, but necessary to help readers use the information. Two of the most obvious uses for this report are:

  1. To inform efforts of providers and health care organizations to improve the safety of the care they provide.
  2. To inform AHRQ, other research agencies, and foundations about potential fruitful investments for their research support.

Other uses of the information are likely. In fact, the National Quality Forum plans to use this report to help identify a list of patient safety practices that consumers and others should know about as they choose among the health care provider organizations to which they have access.

In an effort to assist both health care organizations interested in taking substantive actions to improve patient safety and research funders seeking to spend scarce resources wisely, AHRQ asked the EPC to rate the evidence and rank the practices by opportunity for safety improvement and by research priority. This report, therefore, contains two lists.

To create these lists, the editors aimed to separate the practices that are most promising or effective from those that are least so on a range of dimensions, without implying any ability to calibrate a finely gradated scale for those practices in between. The editors also sought to present the ratings in an organized, accessible way while highlighting the limitations inherent in their rating schema. Proper metrics for more precise comparisons (e.g., cost-effectiveness analysis) require more data than are currently available in the literature.

Three major categories of information were gathered to inform the rating exercise:

  • Potential Impact of the Practice. Based on prevalence and severity of the patient safety target, and current utilization of the practice.
  • Strength of the Evidence Supporting the Practice. Including an assessment of the relative weight of the evidence, effect size, and need for vigilance to reduce any potential negative collateral effects of the practice.
  • Implementation. Considering costs, logistical barriers, and policy issues.

For all of these data inputs into the practice ratings, the primary goal was to find the best available evidence from publications and other sources. Because the literature has not been previously organized with an eye toward addressing each of these areas, most of the estimates could be improved with further research, and some are informed by only general and somewhat speculative knowledge. In the summaries, the editors have attempted to highlight those assessments made with limited data.

The four-person editorial team independently rated each of the 79 practices using general scores (e.g., High, Medium, Low) for a number of dimensions, including those listed in the section above. The editorial team convened for 3 days in June 2001 to compare scores, discuss disparities, and come to consensus about ratings for each category.

In addition, each member of the team considered the totality of information on potential impact and support for a practice and scored the practice on a 0 to 10 scale (creating a "Strength of the Evidence" list). For these ratings, the editors took the perspective of a leader of a large health care enterprise (e.g., a hospital or integrated delivery system) and asked the question, "If I wanted to improve patient safety at my institution over the next 3 years and resources were not a significant consideration, how would I grade this practice?" The Editorial Board explicitly chose not to formally consider the difficulty or cost of implementation in this rating. Rather, the rating simply reflected the strength of the evidence regarding the effectiveness of the practice and the probable impact of its implementation on reducing adverse events related to health care exposure. If the patient safety target was rated as "High" impact and there was compelling evidence (i.e., "High" relative study strength) that a particular practice could significantly reduce (e.g., "Robust" effect size) the negative consequences of exposure to the health care system (e.g., hospital-acquired infections), raters were likely to score the practice close to 10. If the studies were less convincing, the effect size was less robust, or there was a need for a "Medium" or "High" degree of vigilance because of potential harms, then the rating would be lower.

At the same time, the editors also rated the usefulness of conducting more research on each practice, emphasizing whether there appeared to be questions that a research program might have a reasonable chance of addressing successfully (creating a "Research Priority" list). Here, they asked themselves, "If I were the leader of a large agency or foundation committed to improving patient safety, and were considering allocating funds to promote additional research, how would I grade this practice?" If there was a simple gap in the evidence that could be addressed by a research study or if the practice was multifaceted and implementation could be eased by determining the specific elements that were effective, then the research priority was high. (For this reason, some practices are highly rated on both the "Strength of the Evidence" and "Research Priority" lists.) If the area was one of high potential impact (i.e., large number of patients at risk for morbid or mortal adverse events) and a practice had been inadequately researched, then it would also receive a relatively high rating for research need. Practices might receive low research scores if they held little promise (e.g., relatively few patients are affected by the safety problem addressed by the practice or a significant body of knowledge already demonstrates the practice's lack of utility). Conversely, a practice that was clearly effective, low cost, and easy to implement would not require further research and would also receive low research scores.

In rating both the strength of the evidence and the research priority, the purpose was not to report precise 0 to 10 scores, but to develop general "zones" or practice groupings. This approach matters because more precise methods for making comparative ratings do exist, but they require data inputs that are largely unavailable in the current literature; the relative paucity of the evidence dissuaded the editors from using a more precise and sophisticated, but ultimately unfeasible, approach.
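
To make the idea of grouping scores into zones concrete, the minimal sketch below shows one way four editors' independent 0 to 10 scores for a practice could be collapsed into coarse zones. The aggregation rule (a median) and the cutoff values are assumptions for illustration only; the editors actually arrived at their ratings through discussion and consensus, not a formula.

# Illustrative sketch only: the report does not describe a scoring algorithm.
# The median aggregation rule and the zone cutoffs below are assumptions.
from statistics import median
from typing import Dict, List

def zone(editor_scores: List[float]) -> str:
    """Collapse independent 0-10 editor scores into a coarse zone."""
    score = median(editor_scores)
    if score >= 7:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

# Hypothetical example: four editors' independent 0-10 scores per practice.
scores: Dict[str, List[float]] = {
    "Hypothetical practice A": [9, 8, 9, 8],
    "Hypothetical practice B": [4, 5, 3, 5],
}
for practice, s in scores.items():
    print(f"{practice}: median {median(s):.1f} -> {zone(s)} zone")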

Clear Opportunities for Safety Improvement

The following 11 patient safety practices were the most highly rated (of the 79 practices reviewed in detail) in terms of strength of the evidence supporting more widespread implementation. Practices appear in descending order, with the most highly rated practices listed first. Because of the imprecision of the ratings, the editors did not further divide the practices, nor indicate where there were ties.

  • Appropriate use of prophylaxis to prevent venous thromboembolism in patients at risk.
  • Use of perioperative beta-blockers in appropriate patients to prevent perioperative morbidity and mortality.
  • Use of maximum sterile barriers while placing central intravenous catheters to prevent infections.
  • Appropriate use of antibiotic prophylaxis in surgical patients to prevent perioperative infections.
  • Asking that patients recall and restate what they have been told during the informed consent process.
  • Continuous aspiration of subglottic secretions (CASS) to prevent ventilator-associated pneumonia.
  • Use of pressure relieving bedding materials to prevent pressure ulcers.
  • Use of real-time ultrasound guidance during central line insertion to prevent complications.
  • Patient self-management for warfarin (Coumadin™) to achieve appropriate outpatient anticoagulation and prevent complications.
  • Appropriate provision of nutrition, with a particular emphasis on early enteral nutrition in critically ill and surgical patients.
  • Use of antibiotic-impregnated central venous catheters to prevent catheter-related infections.

This list is generally weighted toward clinical rather than organizational matters, and toward care of the very ill rather than the mildly or chronically ill. Although more than a dozen of the practices considered were general safety practices that have been the focus of patient safety experts for decades (e.g., computerized physician order entry, simulators, creating a "culture of safety," crew resource management), most research on patient safety has focused on more clinical areas. The potential application of practices drawn from outside health care has excited the patient safety community, and many such practices have apparent validity. However, clinical research has been promoted by the significant resources applied to it through Federal, foundation, and industry support. Since this study went where the evidence took it, more clinical practices rose to the top as potentially ready for implementation.

Clear Opportunities for Research

Until recently, patient safety research has had few champions, and even fewer champions with resources to bring to bear. The recent initiatives from AHRQ and other funders are a promising shift in this historical situation, and should yield important benefits.

In terms of the research agenda for patient safety, the following 12 practices were rated most highly:

  • Improved perioperative glucose control to decrease perioperative infections.
  • Localizing specific surgeries and procedures to high volume centers.
  • Use of supplemental perioperative oxygen to decrease perioperative infections.
  • Changes in nursing staffing to decrease overall hospital morbidity and mortality.
  • Use of silver alloy-coated urinary catheters to prevent urinary tract infections.
  • Computerized physician order entry with computerized decision support systems to decrease medication errors and adverse events primarily due to the drug ordering process.
  • Limitations placed on antibiotic use to prevent hospital-acquired infections due to antibiotic-resistant organisms.
  • Appropriate use of antibiotic prophylaxis in surgical patients to prevent perioperative infections.
  • Appropriate use of prophylaxis to prevent venous thromboembolism in patients at risk.
  • Appropriate provision of nutrition, with a particular emphasis on early enteral nutrition in critically ill and post-surgical patients.
  • Use of analgesics in the patient with an acutely painful abdomen without compromising diagnostic accuracy.
  • Improved handwashing compliance (via education/behavior change; sink technology and placement; or the use of antimicrobial washing substances).

Of course, the vast majority of the 79 practices covered in this report would benefit from additional research. In particular, some practices with longstanding success outside of medicine (e.g., promoting a culture of safety) deserve further analysis, but were not explicitly ranked due to their unique nature and the present weakness of the evidentiary base in the health care literature.

Conclusions

This report represents a first effort to approach the field of patient safety through the lens of evidence-based medicine. Just as To Err Is Human sounded a national alarm regarding patient safety and catalyzed other important commentaries regarding this vital problem, this review seeks to plant a seed for future implementation and research by organizing and evaluating the relevant literature. Although all those involved tried hard to include all relevant practices and to review all pertinent evidence, inevitably some of both were missed. Moreover, the effort to grade and rank practices, many of which have only the beginnings of an evidentiary base, was admittedly ambitious and challenging. It is hoped that this report provides a template for future clinicians, researchers, and policymakers as they extend, and inevitably improve upon, this work.

In the detailed reviews of the practices, the editors have tried to define (to the extent possible from the literature) the associated costs—financial, operational, and political. However, these considerations were not factored into the summary ratings, nor were judgments made regarding the appropriate expenditures to improve safety. Such judgments, which involve complex tradeoffs between public dollars and private ones, and between saving lives by improving patient safety versus doing so by investing in other health care or non-health care practices, will obviously be critical. However, the public reaction to the IOM report, and the media and legislative responses that followed it, seem to indicate that Americans are highly concerned about the risks of medical errors and would welcome public and private investment to decrease them. It seems logical to infer that Americans value safety during a hospitalization just as highly as safety during a transcontinental flight.

