Early Evidence on the Patient-Centered Medical Home

February 2012
AHRQ Publication No. 12-0020-EF
Prepared For:
Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services, 540 Gaither Road, Rockville, MD 20850, www.ahrq.gov

Contract Number: HHSA290200900019I/HHSA29032002T, HHSA290200900019I/HHSA29032005T
Prepared By: Mathematica Policy Research, Princeton, NJ
Authors: Deborah Peikes, Ph.D., M.P.A., Mathematica Policy Research; Aparajita Zutshi, Ph.D., Mathematica Policy Research; Janice Genevro, Ph.D., Agency for Healthcare Research and Quality; Kimberly Smith, Ph.D., M.P.A., Mathematica Policy Research; Michael Parchman, M.D., Agency for Healthcare Research and Quality; and David Meyers, M.D., Agency for Healthcare Research and Quality

Disclaimers

This document is in the public domain and may be used and reprinted without permission, except for those copyrighted materials that are clearly noted in the document. Further reproduction of those copyrighted materials is prohibited without the specific permission of the copyright holder.

None of the authors has any affiliations or financial involvement that conflicts with the material presented in this report.

Suggested Citation

Peikes D, Zutshi A, Genevro J, Smith K, Parchman M, Meyers D. Early Evidence on the Patient-Centered Medical Home. Final Report (Prepared by Mathematica Policy Research, under Contract Nos. HHSA290200900019I/HHSA29032002T and HHSA290200900019I/HHSA29032005T). AHRQ Publication No. 12-0020-EF. Rockville, MD: Agency for Healthcare Research and Quality. February 2012.

Acknowledgments

We would like to thank a number of people for their assistance with this paper. The authors of many of the studies patiently answered questions about the interventions and their evaluation methods. Kristin Geonnotti and Melissa Azur at Mathematica Policy Research provided in-depth analyses of some of the papers included in this review. Michael Barr at the American College of Physicians; Robert Reid at Group Health Research Institute; and Randall Brown, Christopher Trenholm, Silvie Colman, Brian Goesling, and Tim Novak at Mathematica provided helpful comments and guidance during the development of this paper.

Abstract

Purpose: The patient-centered medical home (PCMH, or medical home) aims to reinvigorate primary care and achieve the triple aim of better quality, lower costs, and improved experience of care. This study systematically reviews the early evidence on effectiveness of the PCMH.

Methods: Out of 498 studies published or disseminated from January 2000 through September 2010 on U.S.-based interventions, 14 evaluations of 12 interventions met our inclusion criteria: the evaluation (1) tested a primary-care, practice-based intervention with three or more of five key PCMH principles and (2) used quantitative methods to examine effects on either (a) a triple aim outcome (quality of care, costs [or hospital use or emergency department use, two major cost drivers], and patient and caregiver experience) or (b) health care professional experience. We use a formal rating system to identify interventions that were evaluated using rigorous methods and synthesize the evidence from these evaluations. We also provide guidance to inform current efforts and structure future evaluations to maximize learning.

Results: The results indicate that we need more evaluations of the medical home to assess and refine the model. The Joint Principles that first defined the PCMH were released in 2007, and we reviewed evidence through September 2010. Reflecting the time required to evaluate and publish findings on the model, the interventions most often cited in support of the medical home can be viewed as precursors to the medical home. While the interventions varied, most essentially tested the addition of a care manager operating from within the primary care practice rather than a fundamentally transformed practice. Most interventions were evaluated in practices that were part of larger delivery systems and targeted patients who were older and sicker than average. Turning to the evaluations, fewer than half assessed all triple aim outcomes. Evaluations of 6 of the 12 interventions provide rigorous evidence on one or more outcomes. This evidence indicates some favorable effects on all three triple aim outcomes, a few unfavorable effects on costs, and mostly inconclusive results (either because sample sizes were too small to detect effects that may exist or because statistical significance was uncertain when analyses did not account for clustering of patients within practices).

Conclusions: Improving primary care is the linchpin for achieving the triple aim outcomes. The PCMH is a promising innovation, and the model is rapidly evolving. Stronger evaluations are needed to provide guidance on how to refine and target the model to ensure that the substantial efforts of practices and payers needed to adopt the model are most effective.

Background

Reinventing primary care is a task that is “far too important to fail” (Meyers and Clancy, 2009) and is central to reforming health care delivery. While patient-centered primary care once was the backbone of our health care system, over time the system has become more specialized and technologically sophisticated (Bodenheimer and Pham, 2010), and fewer residents are choosing to become primary care physicians (Bodenheimer, 2006). The current health care system, with its incentives to furnish more care, has produced highly fragmented care that emphasizes specialty and acute care over coordination, patient-centeredness, and population health management (Berenson and Rich, 2010b; Bodenheimer and Pham, 2010; Dentzer, 2010; Rittenhouse, Shortell, Fisher, 2009; Howell, 2010). Although 93 percent of Americans want one place or doctor who provides primary care and coordinates care with specialists, only half report having such an experience (Schoen, Osborn, Doty, et al., 2007; Stremikis, Schoen, and Fryer, 2011). The patient-centered medical home (PCMH) is a promising model that aims to reinvent primary care so that it is “accessible, continuous, comprehensive, and coordinated and delivered in the context of family and community” (American Academy of Family Physicians, American Academy of Pediatrics, American College of Physicians, et al., 2007), and, in so doing, to improve the triple aim outcomes of quality, affordability, and patient and caregiver experience, as well as health care professional experience.

The medical home concept first arose in the 1960s as a way of improving care for children with special needs, and policy interest outside of pediatrics grew over time (Kilo and Wasson, 2010). In 2007, primary care physician societies endorsed the “Joint Principles” of this primary care delivery model. Intrigued by the potential of the PCMH model, major employers, private insurers, and State Medicaid agencies across the Nation are rolling out pilots and demonstrations of the concept. The Centers for Medicare & Medicaid Services, the Department of Veterans Affairs, and other Federal agencies are also testing the model (visit /page/federal-pcmh-activities).1 It will likely be many years before results of current evaluations become available. Transforming care will require recognizing and addressing many barriers to change using lessons from these evaluations (Landon, Gill, Antonelli, et al., 2010).

Purpose

Against this backdrop, decisionmakers must consider whether the evidence supporting the model is strong enough to proceed with widespread adoption, or whether gathering additional evidence is warranted. To contribute to this discussion, researchers at the Agency for Healthcare Research and Quality and Mathematica Policy Research undertook a systematic review of quantitative evaluations of the medical home model to inform current efforts and to structure future evaluations to maximize learning (see Zutshi, Peikes, Smith, et al., 2012, for a more detailed description of this review, and Peikes, Zutshi, Genevro, et al., 2012, for a peer-reviewed article on this review). Given that interest in the model is recent, the expectation was that only precursors to the PCMH would have been evaluated so far. At the same time, these early evaluations present a valuable opportunity to inform stakeholders about the current state of the evidence on PCMH effectiveness on quality, cost, and patient and professional experience.

The review limits synthesis of findings to interventions evaluated using rigorous methods. While much can be learned from rapid-cycle evaluations of small pilots and from evaluations of specific components of the PCMH, this review intends to fulfill stakeholders’ need for high-quality quantitative evidence on broad medical home-like interventions that test multiple components of the PCMH and are costly for payers and providers to implement.2

Some readers may not consider an evidence review of the PCMH to be necessary because they believe that the evaluations conducted to date, combined with the vast cross-sectional literature on the positive relationship between more primary care and better outcomes, provide ample evidence to proceed with widespread adoption of the model. Others may feel that the model is being held to a higher standard than many clinical interventions that are currently being used without strong evidentiary support. However, we believe that, given the significant investments required to revitalize our primary care system, many decisionmakers will appropriately demand high-quality and rigorous evidence of the effectiveness of the PCMH. Qualitative evaluations can also provide valuable insights into the implementation of PCMH interventions and provide context for generalizing findings; they were excluded from this review, however, because we focus on outcomes and because existing evaluations rarely documented their implementation experiences.

Historically, a number of promising health care interventions have been shown not to work when evaluated using rigorous methods. For example, telephonic disease management seemed to address obvious problems in coordination and patient self-management, but a number of randomized trials showed that many programs were ineffective and pointed the way toward refining the model to offer better integration with providers, more in-person contact, and careful focusing of efforts on those most likely to benefit (McCall and Cromwell, 2011; Peikes, Chen, Schore, et al., 2009; Peikes, Peterson, Brown, et al., 2010). Similarly, rigorous evidence regarding the effectiveness of the PCMH model and how best to refine it is critical given the substantial investments this model requires, and the need to learn how to adapt the model to best meet local needs.

This review makes two important methodological contributions. First, we limited the review to multi-component interventions by requiring them to contain at least three of five principles of the PCMH model. Earlier reviews typically included results from interventions that had as few as one feature of the PCMH, due in large part to the infancy of the model. Homer, Klatka, Romm, et al. (2008) found that only 1 of the 33 studies they reviewed was of an intervention modeled after the medical home while the others tested selected components. Rosenthal (2008), the Robert Graham Center (2007), and DePalma (2007) each reviewed the literature on individual components of the medical home such as team-based care, rather than reviewing multi-component interventions that more closely resemble the PCMH model.

Second, we limited the synthesis of the evidence to that generated by rigorous evaluations, which we assessed using a systematic review process. Three previous reviews did not consider the rigor of the evidence (Grumbach and Grundy, 2010; Fields, Leshen, Patel, 2010; and DePalma, 2007). Two conducted a limited assessment by focusing on comparison group studies and peer-reviewed studies, respectively (Homer, Klatka, Romm, et al., 2008; Friedberg, Lai, Hussey, et al., 2009), but neither assessed the strength of the analytical methods used by the studies or excluded studies that did not use rigorous methods from their syntheses of the evidence.

Methods

We conducted the review by first identifying evaluations of interventions that met our inclusion criteria, then rating the rigor of these evaluations, and finally synthesizing the evidence on PCMH effectiveness using only rigorous evaluations.

Inclusion Criteria

We identified 498 citations of primary care interventions in the United States based on a search of published and gray literature from January 2000 through September 2010, input from experts in the field, and a review of 100 relevant Web sites (see Peikes, Zutshi, Smith, et al., 2012 for more details). Out of these citations, we found 14 evaluations of 12 interventions that met the following criteria:

  1. The evaluation tested a primary-care, practice-level intervention with three or more of the five medical home principles defined by AHRQ (delivering care that is patient-centered, comprehensive, coordinated, accessible, and that uses a systems-based approach to quality and safety). We excluded evaluations of care coordination and disease management interventions that met these criteria but were not provided from within, or in close partnership with, the practice (for example, interventions delivered by off-site care managers via telephone).3
  2. The evaluation used quantitative methods to examine effects on either (a) a triple aim outcome (quality of care, costs4 [or hospital use or emergency department use, two major cost drivers], and patient or caregiver experience) or (b) health care professional experience (given that the success of primary care transformation and improvements in care delivery are contingent on the well-being and ongoing engagement of health care personnel).

Rating the Rigor of the Evaluations

We developed a systematic approach to assess the rigor of the methods used to generate evidence on PCMH effectiveness. We drew broadly from the U.S. Preventive Services Task Force (USPSTF) review methods and supplemented them with specific criteria from well-regarded evidence reviews.5

Rather than give a global rating to each evaluation, we individually rated the internal validity of each analysis undertaken by the evaluation as high, moderate, low, or excluded. We rated individual analyses because evaluations often used different designs, samples, and methods (and sometimes different subgroups of patients) for different outcomes and followup periods. Therefore, to allow for the possibility that the evaluation of a single intervention could provide more rigorous evidence on some outcomes than on others, we conducted a separate assessment of the evidence for each outcome measure at each followup period and, if applicable, for each subgroup of patients. We view evidence rated high and moderate as rigorous evidence.

We did not factor generalizability (or external validity) into the rating because most interventions included in this review targeted a specific subpopulation of primary care patients, were implemented in unique settings, and either purposefully selected practices or relied on them to volunteer; therefore, findings from nearly all interventions have limited generalizability. We summarize the characteristics of patients and practice settings in the rigorously evaluated interventions to alert decisionmakers to the possibility that findings may differ in other populations and settings.

We rated each analysis using a sequence of criteria, starting with the most general (evaluation design) and ending with the most specific (such as whether the analysis controlled for outcome values before the start of the intervention, or “at baseline”). Analyses were rated excluded if the methods were not described in sufficient detail to enable assessment. Analyses were rated low if they did not employ a control or comparison group6 (and instead used a pre-post or cross-sectional design). Such designs often make it difficult to assess what the sample’s outcomes would have been absent the intervention. (The purpose of a control or comparison group is to establish that counterfactual, a necessary condition for obtaining an unbiased impact estimate.) Analyses from randomized controlled trials (RCTs) and nonexperimental comparison group evaluations were assessed for the strength of their methods to identify causal effects and produce unbiased estimates of the interventions’ effects, and were rated high, moderate, or low accordingly. In many cases, because of the limits of what study authors can include in a journal article, we sought additional details from authors to be able to rate the analyses.

Analyses from RCTs were given a high rating if they had all of the following:

  • No systematic confounders.
  • No endogenous subgroups.
  • Low attrition.
  • Adjustment for any statistically significant baseline differences in the outcome between the intervention and control groups.

Analyses from comparison group evaluations, and from RCTs with high attrition or with endogenous subgroups, were given a moderate rating if they had all of the following:

  • No systematic confounders.
  • Baseline equivalence of the outcome between the intervention and comparison groups.
  • Adjustment for baseline outcomes.

Analyses from RCTs and comparison group evaluations were given a low rating if they did not meet the criteria for high and moderate ratings.
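To make the rating sequence concrete, the sketch below (in Python, with hypothetical field names that do not come from the review instrument) encodes the decision logic described above. It is an illustration of the criteria, not the tool used for this review.

    from dataclasses import dataclass

    @dataclass
    class Analysis:
        """One analysis (an outcome at a followup period, possibly for a subgroup)."""
        methods_described: bool             # enough detail reported to assess the methods
        has_comparison_group: bool          # False for pre-post and cross-sectional designs
        randomized: bool                    # True for RCTs
        low_attrition: bool
        endogenous_subgroups: bool
        systematic_confounders: bool
        adjusts_baseline_differences: bool  # adjusts for significant baseline differences in the outcome
        baseline_equivalence: bool          # intervention and comparison groups similar at baseline
        adjusts_baseline_outcome: bool      # controls for the outcome measured at baseline

    def rate(a: Analysis) -> str:
        """Apply the rating sequence: excluded, then low, then high, then moderate, else low."""
        if not a.methods_described:
            return "excluded"
        if not a.has_comparison_group:
            return "low"        # pre-post and cross-sectional designs
        if (a.randomized and a.low_attrition and not a.endogenous_subgroups
                and not a.systematic_confounders and a.adjusts_baseline_differences):
            return "high"       # RCTs meeting all four criteria above
        if (not a.systematic_confounders and a.baseline_equivalence
                and a.adjusts_baseline_outcome):
            return "moderate"   # comparison group designs; RCTs with attrition or subgroup issues
        return "low"

    # Example: a nonexperimental comparison group analysis with baseline equivalence
    # and adjustment for the baseline outcome is rated "moderate".
    example = Analysis(methods_described=True, has_comparison_group=True, randomized=False,
                       low_attrition=True, endogenous_subgroups=False, systematic_confounders=False,
                       adjusts_baseline_differences=False, baseline_equivalence=True,
                       adjusts_baseline_outcome=True)
    print(rate(example))

Note that, in this scheme, an RCT with high attrition or endogenous subgroups falls through to the moderate test rather than failing outright, mirroring the criteria listed above.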

Synthesizing Evidence with a High or Moderate Rating

Next, we synthesized findings from analyses rated high or moderate. We did not synthesize findings from analyses rated low because we believe that if these interventions were evaluated using better methods, the results might differ substantially. For example, results could change from suggesting an intervention did not work to suggesting it worked, or vice versa. Evaluations rated as low represent important efforts to build the evidence base and may provide important insights about how best to refine a specific intervention; however, their usefulness in determining the quantitative effectiveness of the model is limited.

We categorized findings from analyses rated high or moderate as being (1) statistically significant and favorable, (2) statistically significant and unfavorable, (3) inconclusive (that is, they fail to indicate whether or not the intervention worked) because they were not statistically significant, or (4) inconclusive because their statistical significance was uncertain due to lack of adjustment for clustering of patients within practices. While “inconclusive” may be a frustrating label for decisionmakers, it accurately reflects the lack of certainty about whether or not the intervention worked.

We consider findings that are not statistically significant to be inconclusive rather than evidence of no effects because we suspect that most evaluations had inadequate power to detect effects that might have existed. None of the rigorously evaluated practice-level interventions was implemented in more than 11 practices. As discussed in another AHRQ white paper, Building the Evidence Base for the Medical Home: What Sample and Sample Size Do Studies Need? (Peikes, Dale, Lundquist, et al., 2011), assuming a moderate amount of clustering, an intervention that is tested in 20 intervention practices (with 20 control practices) and targets all patients would need to reduce costs by 45 percent or more (a very large effect) to have an 80 percent chance of detecting the effect. If cost were measured among the chronically ill, as many of these evaluations did, the intervention might still need to reduce costs by 20 percent or more for the evaluation to have an 80 percent chance of detecting it. These are large effects for an intervention to achieve, and an evaluation would need even larger sample sizes to detect smaller, more plausible effects.
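To illustrate the sample size argument, the sketch below applies the standard minimum detectable effect calculation for a practice-randomized design, inflating the usual two-sample variance by the design effect 1 + (m - 1) * ICC. All parameter values (practices per arm, patients per practice, intracluster correlation, and the coefficient of variation of costs) are assumptions chosen for illustration, not the values used in the white paper cited above.

    import math

    def mde_cluster(practices_per_arm, patients_per_practice, icc, cv,
                    z_alpha=1.96, z_power=0.84):
        """Minimum detectable effect, as a fraction of mean cost, for a two-arm
        practice-randomized design with 80 percent power and a two-sided 5 percent test.
        cv is the coefficient of variation (standard deviation / mean) of costs."""
        deff = 1 + (patients_per_practice - 1) * icc    # design effect from clustering
        n_per_arm = practices_per_arm * patients_per_practice
        se = math.sqrt(2 * cv**2 * deff / n_per_arm)    # SE of the difference in means, relative to the mean
        return (z_alpha + z_power) * se

    # Assumed values: 20 practices per arm, 200 patients each, ICC = 0.05, CV = 2.0.
    print(f"All patients:    MDE = {mde_cluster(20, 200, 0.05, 2.0):.0%}")
    # Restricting to chronically ill patients: fewer patients per practice (40), but a
    # less dispersed cost distribution (CV = 1.0) allows a smaller detectable effect.
    print(f"Chronically ill: MDE = {mde_cluster(20, 40, 0.05, 1.0):.0%}")

With these assumed inputs, the detectable effects are roughly 40 percent of costs among all patients and roughly 25 percent among the chronically ill, the same order of magnitude as the figures cited above; detecting smaller, more plausible effects would require many more practices.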

We also viewed findings as inconclusive when evaluations of practice-level interventions did not correctly account for clustering of patients within practices, leaving their tests of statistical significance inaccurate, and the significance of results uncertain. Peikes, Dale, Lundquist, et al. (2011) show that, if there is moderate clustering, statistical tests that ignore clustering have a false positive rate of 65 percent or more. Although we adjusted tests of statistical significance for clustering for cost and service use when possible, there was too little published information for us to make similar adjustments for other outcomes.
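The adjustment itself can be sketched with a simple design-effect correction, shown below with assumed numbers: the standard error from a patient-level analysis that ignores clustering is inflated by the square root of the design effect before the significance test is recomputed (a normal approximation is used for simplicity).

    import math

    def cluster_adjusted_test(estimate, naive_se, patients_per_practice, icc):
        """Inflate an independence-assuming standard error by sqrt(design effect) and
        return the adjusted SE and two-sided p-value (normal approximation)."""
        deff = 1 + (patients_per_practice - 1) * icc
        adj_se = naive_se * math.sqrt(deff)
        z = estimate / adj_se
        p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # 2 * P(Z > |z|)
        return adj_se, p

    # Assumed numbers: a 10 percent cost reduction with a naive SE of 4 percent looks
    # significant when clustering is ignored (z = 2.5, p of about 0.01) ...
    adj_se, p = cluster_adjusted_test(estimate=-0.10, naive_se=0.04,
                                      patients_per_practice=50, icc=0.05)
    print(f"adjusted SE = {adj_se:.3f}, adjusted p = {p:.2f}")
    # ... but is no longer significant once the design effect (about 3.45) is applied.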

Results

Table 1. Overview of the 12 interventions reviewed

This table includes an overview of the 12 interventions reviewed and the sources that describe the interventions and evaluations.

Evaluations to date have assessed PCMH precursors. The Joint Principles that first defined the PCMH were released in 2007, and it takes time to design an intervention, implement it, evaluate it, and publish findings. In other words, the modern PCMH is a very young model. As a result, we found that many of the 12 interventions included in the review were developed before the recent interest in the medical home. Most of them essentially tested the addition of a care manager operating within the primary care practice, rather than a fundamentally transformed practice (see Table 1). Most of these early interventions included each of the five AHRQ medical home principles, but they did so in a less integrated and comprehensive way than current demonstrations do and are therefore best viewed as precursors to the PCMH model.7 This reflects the rapidly evolving field and serves as a reminder that the evidence that is commonly cited on the PCMH is actually on precursors and needs to be interpreted in that context.

Few evaluations comprehensively assessed triple aim outcomes. Among these early evaluations, only 5 of the 14 were able to examine each of the triple aim outcomes (cost, quality, and patient experience). In particular, only five evaluations examined patient experience, which may reflect the relatively high cost of collecting survey data or the fact that these models predated the current interest in the PCMH, which emphasizes patient-centeredness.

Table 2. Number of evaluations that assessed each triple aim outcome and health care professional experience

This table lists the number of evaluations that assessed each triple aim outcome and health care professional experience using any method, and rigorous methods.

Many evaluations did not use rigorous methods. Six of the 14 evaluations met formal criteria for a high or moderate rating on at least one outcome. Among the evaluations that examined a given outcome, typically only a subset did so using rigorous methods (see Table 2). The lack of an appropriate comparison group was the most common reason for a low rating (see Tables 3.1 and 3.2). Appropriate comparison groups (that are similar to the intervention group in terms of baseline patient outcomes, as well as practice variables like the mix of patients, number of providers, and key infrastructure such as electronic health records) are important for establishing the counterfactual.8 In general, an evaluation that compares patients in pioneering, high-performing practices that chose to participate in an intervention with patients in practices that performed at an average level before the intervention and did not choose to change may artificially make the intervention look more effective than it truly is. Two evaluations were excluded from the synthesis of evidence because they tested the intervention in a single practice. While such a design can represent an important opportunity to pilot a new intervention and break ground toward a larger evaluation, it cannot distinguish the effects of the intervention from other characteristics of the particular practice that implemented it, thereby undermining the ability to attribute an observed effect to the intervention.

The rigorous evidence on the effectiveness of PCMH precursors contains some favorable results for all triple aim outcomes, some unfavorable results on costs, and many inconclusive results for all outcomes. Table 4 presents a snapshot of the evidence, and Appendix Table 1 provides more detail. For each outcome, the interventions, target populations, implementation settings, and outcome measures varied widely, which precluded a meta-analysis. Below, we summarize the rigorous evidence on each outcome.

Table 3.1. Evidence ratings by outcome: high or moderate

This table presents the evaluation design and evidence rating by outcome for each intervention whose evaluation had at least one outcome that was rated as having high or moderate rigor.

Improving the Quality of Care

  • Processes of care. Evaluations of three interventions (Improving Mood–Promoting Access to Collaborative Treatment for Late-Life Depression [IMPACT], Geriatric Resources for Assessment and Care of Elders [GRACE], and Care Management Plus [CMP]) provided rigorous evidence. Of these three, only the evaluation of IMPACT found favorable effects. The evaluations of GRACE and CMP did not adjust statistical significance for clustering, so their findings are inconclusive.
  • Health outcomes. Two of the three rigorous evaluations of functional status and other health outcomes (IMPACT, GRACE, and Veterans Affairs Team-Managed Home-Based Primary Care [VA TM/HBPC]) found that the interventions made some improvements. The evaluation of IMPACT reported the strongest evidence of these effects, and the evaluation of GRACE found favorable effects on some of these measures. The evaluation of VA TM/HBPC is inconclusive because the results were not statistically significant.
  • Mortality. While mortality effects would not be expected in the general patient population over short followups, they are theoretically possible in the high-risk patients served by some of these interventions. The results from the GRACE and CMP evaluations, which examined mortality among their target populations of high-risk Medicare patients, were not statistically significant and are therefore inconclusive.

Table 3.2. Evidence ratings by outcome: low or excluded

This table presents the evaluation design and evidence rating by outcome for interventions that were evaluated using methods that were rated as low or that were excluded from the synthesis due to limited information.

Reducing the Costs of Care

  • Costs (including intervention costs). The evaluation of GRACE was the only one of four rigorous evaluations to find any evidence of savings, and these were limited to the evaluation’s high-risk subgroup of Medicare patients in the post-intervention year. The 23 percent savings were enough to offset cost increases for patients who were not high risk, leaving the intervention cost neutral that year. However, GRACE increased total costs (by 28 percent and 14 percent) for its full sample of patients during both years of the intervention. Similarly, the VA TM/HBPC intervention increased total costs by 12 percent during its one year of operation. The other two interventions, Guided Care and IMPACT, both reported lower costs, but the results were not statistically significant and are therefore considered to be inconclusive.
  • Hospital use. One of the five rigorous evaluations of hospital use found that the intervention reduced the number of hospitalizations by 18 percent for all patients (GHS ProvenHealth Navigator, which served Medicare Advantage patients). In addition, GRACE and VA TM/HBPC had some favorable effects on the number of hospitalizations for high-risk subgroups of their enrollees. GRACE reduced hospitalizations by 40 percent and 44 percent in the second and third years, but results were not statistically significant in the first year. Similarly, VA TM/HBPC reduced readmissions by 22 percent in the first 6 months, although the reduction was not sustained through the rest of the year, as the results were no longer statistically significant over 12 months. In contrast, the findings on Guided Care and CMP are inconclusive. Guided Care did not have a statistically significant effect on the number of hospitalizations over the first 8 or 20 months. In the case of CMP, results among all patients and the subgroup without diabetes were not statistically significant, and results among the subgroup with diabetes had uncertain statistical significance due to lack of adjustment for clustering, rendering all these findings inconclusive.
  • Emergency Department (ED) use. The evaluation of GRACE is the only one of three rigorous evaluations of ED use to find some favorable effects; the intervention reduced the number of ED visits by 24 percent among its target population of Medicare patients in the second year, driven by reductions of 35 percent among the high-risk Medicare patients. However, results from GRACE are inconclusive in the first year, because they were not statistically significant. Similarly, evidence on Guided Care, where results were not statistically significant, and CMP, where results were either not statistically significant or had not been adjusted for clustering, is inconclusive.

Table 4. Snapshot of findings from rigorous evaluations

This table provides a snapshot of findings from rigorous evaluations by classifying each finding as statistically significant and favorable, statistically significant and unfavorable, not statistically significant and therefore inconclusive, or uncertain statistical significance and therefore inconclusive.

Improving the Experience of Care

  • Patient and caregiver experience. Two of the three rigorous evaluations of patient experience (VA TM/HBPC and IMPACT) found a preponderance of favorable effects. The third evaluation (Guided Care) did not adjust statistical significance for clustering so its findings are inconclusive.

    The evaluation of VA TM/HBPC found favorable effects on some measures of caregiver experience. However, results for other measures are inconclusive, as are the results for Guided Care, because they were either not statistically significant or had uncertain statistical significance due to a lack of adjustment for clustering.

Improving Professional Experience

  • Health care professional experience. The lone evaluation to provide rigorous evidence on professional experience (Guided Care) is inconclusive because results either were not statistically significant or had uncertain statistical significance due to lack of adjustment for clustering.

Placing the Findings in Context

The findings are less favorable than those of most prior reviews. We found some promising results across all three triple aim outcomes; however, the majority of findings were inconclusive. The conclusions we draw are consistent with those of Friedberg, Lai, Hussey, et al. (2009), who described the evidence in favor of the medical home as “scant.” Our conclusions are more tentative than those of Homer, Klatka, Romm, et al. (2008); Fields, Leshen, and Patel (2010); and Grumbach and Grundy (2010), who claimed overwhelming evidence in support of the medical home. We conclude that more work, including additional well-designed, well-implemented evaluations of the full PCMH model, is needed to guide decisions regarding this young and rapidly evolving model.

Findings from the rigorous evaluations reflect unique contexts and populations. The rigorously evaluated interventions were not tested on the average patient population in U.S. primary care practices. All were tested in practices that were part of larger delivery systems and targeted patients who were older and sicker than average (see Tables 5.1 and 5.2). As a caveat, we expect it will be harder to generate effects of the same size among healthier patients, who do not use many services.

The improvements in cost and service use may have been concentrated among the sickest patients. Two of the six rigorous evaluations examined outcomes for different subgroups of patients among their target population of older or sicker patients.9 The evaluation of GRACE reported that, even among its low-income, elderly patients, improvements were concentrated among the sickest patients. The evaluation of the VA TM/HBPC intervention found favorable effects among severely disabled patients but not among other high-risk patients; it is unclear whether this reflects lack of power to detect effects (due to small samples), lack of long enough followup periods for effects to emerge, or a true lack of effects.

These results, while limited, raise the question of whether conducting separate analyses on sicker patients could be a useful approach for future evaluations. The highest-risk patients present providers with more opportunities to take action to reduce service use and costs in the relatively short followup periods observed, because a medical home intervention is likely to reduce hospitalizations more for patients who are frequently hospitalized. In addition, there is better power to detect effects among the highest-risk patients than among all patients, reducing the likelihood of missing important beneficial effects (Peikes, Dale, Lundquist, et al., 2011). This does not imply that the PCMH should be targeted only to patients with complex medical needs. The PCMH is a whole-practice-level intervention and is expected to improve care for all. It is critical not to confuse the goal and purpose of the intervention with suggestions for refining evaluations.

Table 5.1. Overview of the target populations, among interventions with rigorous evidence

This table provides an overview of the target populations, including whether the intervention served all patients, Medicare patients only, patients with chronic illness, and patients with both fee-for-service and managed insurance coverage.

Table 5.2. Overview of the practice settings, among interventions with rigorous evidence

This table describes the practice settings, including whether the practices were part of a larger delivery system, the number of practices, and the use of electronic health records.

Findings from more complete medical home interventions in other settings will likely differ. The findings on effectiveness will likely differ when the full medical home model is implemented, and when it is implemented with different practices, markets, and patients. For example, implementing the PCMH model in markets or delivery settings where there is overuse of care could produce different results than in areas where there is underuse of care. Similarly, modifications of the interventions might alter outcomes. For example, it is possible that adding certain components of the medical home such as health information technology (IT) and stronger financial incentives to practices could improve outcomes. In addition, program designers may be able to identify areas to increase efficiency to achieve cost neutrality or generate savings. For example, although this information was not provided in reports of these evaluations, a careful review of which team members can provide which parts of interventions, and deploying them accordingly, could lower the costs of providing care.

Guidance to Improve the Future Evidence Base

This review highlights opportunities to identify effective ways to improve primary care by improving the evidence base on the PCMH. There is a large risk that research currently under way on PCMH interventions (not reviewed here) will fail to support decisionmakers’ information needs. A recent survey of 26 medical home pilots under way in 18 States concluded that only 40 percent of them had well-developed evaluation plans. Among those with plans, only about 40 percent planned to use a comparison group design, with the remainder planning to use pre-post designs (Bitton, Martin, and Landon, 2010), which typically provide weak evidence.

The challenges to conducting strong evaluations are not unique to the PCMH. In 2011, the U.S. Government Accountability Office criticized evaluations of 127 diverse health care interventions for having weak evaluation designs and limited generalizability, and for not reporting on the outcomes of interest (in that review, quality and cost) (U.S. Government Accountability Office, 2011). Below we describe a number of steps that can be taken to improve the evidence base. Some of these are specific to the PCMH field, and others are general best practices for conducting rigorous health service evaluations:

  • Use strong evaluation designs and methods. Current and future evaluators of PCMH interventions have an opportunity to fill knowledge gaps and contribute to the ongoing learning on PCMH effectiveness. Weak designs and analytical methods severely limit the potential of a strong intervention to produce rigorous evidence for decisionmakers. One challenge for a good evaluation of the medical home is to make sure the practices and patients in the intervention and comparison groups are comparable prior to the medical home startup. Otherwise it is difficult to distinguish effects that are due to the medical home model from pre-existing differences between the intervention and comparison practices and patients. Evaluations should also use rigorous analytical methods, including adjusting analyses for clustering of patients within practices (see Peikes, Dale, Lundquist, et al., 2011).
  • Conduct comprehensive implementation studies. We found that most evaluations did not report how the intervention was implemented. While undertaking an implementation evaluation requires additional expertise and resources, it adds tremendous value in identifying barriers and facilitators to improving outcomes, how findings might generalize to other contexts, and ways to refine the model. Implementation evaluations can provide powerful insights on their own, as well as when combined with quantitative outcome studies (a mixed-methods approach).10
  • Test the model in an adequate number of practices and measure different outcomes for different subgroups of patients. Because the PCMH is a practice-level intervention, it must be tested in a large number of practices or the evaluation is likely to lack the statistical power to identify effects even when they exist. As discussed in the methods section, measuring cost and service use among sicker patients permits detection of smaller effects than among all patients. In contrast, measures of quality of care and patient and provider experience typically take on a small number of values resulting in less variation; therefore, effects on these outcomes can more easily be detected among all patients (Peikes, Dale, Lundquist, et al., 2011).
  • Follow outcomes for longer periods of time. Evaluations examined outcomes for 1 to 3 years, with most following patients for 2 years. While most decisionmakers are eager to obtain results, given the dramatic changes many practices need to undergo to become medical homes, a short followup period might provide an overly pessimistic view of the medical home by capturing the negative effects of disruptive transformation. Consistent with this possibility, GRACE substantially increased costs by 28 percent early in the evaluation, but became cost neutral a year after the intervention ended. However, the VA TM/HBPC evaluation found that short-term favorable effects on readmissions dissipated over time. Evaluation designs should also explicitly consider the periods of time needed to observe the effects of complex interventions on health care processes and subsequently on different health outcomes; information from early evaluations may be useful in modeling time paths of effects on different outcomes.
  • Improve reporting and documentation. Many evaluations were not documented well enough to assess the strength of their methods. To allow objective assessment of the evidence, evaluation results—even preliminary results or results from pilot studies—should be accompanied by a detailed description of the methods used.
  • Independently evaluate the models to ensure objectivity. Many evaluations were conducted by intervention developers. While developers have deep knowledge of their initiatives and commitment to learning about them, independent evaluations may provide more credible evidence. At a minimum, peer review of evaluations conducted by developers would build a better evidence base.
  • Test the model in typical practices and among typical patients. All six interventions with rigorous evidence were tested exclusively in practices in larger delivery systems, which had some degree of integration across providers. Therefore, these results may not apply to independent practices. Ideally, future research would test the PCMH model with practices that are representative of the Nation’s primary care landscape. In terms of patients, all six interventions were tested on patients who were older or sicker than average. Also, while testing effects for specific types of patients is appropriate for answering specific research questions, the PCMH is a practice-level intervention and must be implemented in practices serving more diverse populations. Decisionmakers still require evidence of effectiveness for the general patient population.
  • Examine a core set of outcome measures and develop standardized measures of PCMH components. Estimating effects on a standard list of outcome measures would enable a meta-analysis of findings across different interventions (a minimal pooling sketch follows this list). Such an analysis can dramatically improve the power to detect effects compared to individual evaluations, which are often underpowered. The body of evidence would also be improved if researchers use detailed, standardized measures of PCMH components and processes (Crabtree, Chase, Wise, et al., 2011). Such measures would enable meta-analyses to discern which interventions are most effective in which settings and why. The Commonwealth Fund (2011) has convened a collaborative for medical home evaluators to support this type of uniform research infrastructure, and will make the results available in the coming months.
  • Measure effects on all triple aim outcomes and health care professional experience. The PCMH model grew out of the need to improve quality and experience while reducing costs. It is critical that evaluations examine all these outcomes if they are to provide comprehensive information to decisionmakers. Improving one type of outcome may not warrant model adoption if it comes at the expense of deterioration in other outcomes. Examining the full range of outcomes might require addressing a number of barriers, including payer concerns about confidentiality of cost data, limited resources to collect and analyze multiple data sources, and lack of tools to measure certain outcomes.11
  • Explore novel approaches to evaluate PCMH interventions. A number of studies in the past decade have shown that health care interventions can be viewed as complex interventions within a complex adaptive system (CAS), similar to processes in ecology, computer science, and organizational science. Complexity science views the multiple components of complex interventions such as the PCMH as dependent on each other, as well as on the primary care practice and health care setting (Plsek and Greenhalgh, 2001). For example, quality of care delivered by a practice can be viewed as a system-level property that arises over time from the interactions among the members of the practice (Lanham, McDaniel, Crabtree, et al., 2009). As a result, in addition to individual processes or components, the relationships among practice team members become key levers for improving outcomes. Furthermore, the framework’s emphasis on the importance of the external environment underscores the influence of the medical neighborhood on key outcomes. Some evidence indicates that interventions designed and implemented using CAS principles were more effective at improving clinical outcomes (Leykum, Parchman, Pugh, et al., 2010; Leykum, Pugh, Lawrence, et al., 2007).

    Principles of complexity science might be used to create better approaches to evaluate PCMH interventions, including designing more insightful implementation analyses (Litaker, Tomolo, Liberatore, et al., 2006; Campbell, Fitzpatrick, Haines, et al., 2000; Craig, Dieppe, Macintyre, et al., 2008; Stetler, Damschroder, Helfrich, et al., 2011; Damschroder, Aron, Keith, et al., 2009; Nutting, Crabtree, Stewart, et al., 2010; May, Mair, Dowrick, et al., 2007; Cohen, McDaniel, Crabtree, et al., 2004). Measures of the internal and external environment might be useful both to select comparison practices that closely resemble the intervention practices and to help explain why an intervention is more successful in certain contexts than in others. More work is needed to develop such measures.12 In addition, from a complexity framework, attempts to isolate the relative contributions of individual components of the medical home are ill-advised and are likely to result in misleading findings because these components are dependent on each other to achieve the desired outcomes of medical home implementation.

    Applying methods based on complexity frameworks that move the field away from a mechanistic and reductionist perspective may help us evaluate PCMH interventions in more meaningful ways. Similarly, research approaches from the social sciences and other disciplines that have not been applied previously to the PCMH may also be beneficial.
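Returning to the earlier bullet on a core set of outcome measures, the sketch below shows, with made-up numbers, how inverse-variance (fixed-effect) pooling of cluster-adjusted estimates of a common outcome from several evaluations yields a pooled estimate with a much smaller standard error than any single evaluation. It is only an illustration of why standardized measures would permit this kind of synthesis, not a prescription for a particular meta-analytic model.

    import math

    def pool_fixed_effect(estimates, std_errors):
        """Inverse-variance (fixed-effect) pooling of per-evaluation effect estimates
        measured on a common, standardized outcome."""
        weights = [1 / se**2 for se in std_errors]
        pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
        pooled_se = math.sqrt(1 / sum(weights))
        return pooled, pooled_se

    # Made-up cluster-adjusted estimates from four hypothetical evaluations; none is
    # individually significant at the 5 percent level ...
    estimates  = [-0.08, -0.05, -0.12, -0.03]
    std_errors = [ 0.06,  0.07,  0.08,  0.05]
    pooled, pooled_se = pool_fixed_effect(estimates, std_errors)
    print(f"pooled effect = {pooled:.3f}, SE = {pooled_se:.3f}, z = {pooled / pooled_se:.2f}")
    # ... but the pooled estimate (about -0.06, z of about -2.0) approaches the
    # conventional significance threshold, illustrating the gain in power from pooling.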

Looking Forward

The medical home model is a promising innovation to reinvigorate primary care by improving quality, affordability, and patient and provider experience. Many decisionmakers require rigorous assessments of the model’s likely benefits, as well as guidance on how to operationalize and refine the model. Such evidence can guide the substantial efforts of practices and payers to adopt the PCMH and ensure that the revitalized primary care system achieves the triple aim outcomes in a sustainable manner.

References and Included Studies

  1. AcademyHealth State Health Research and Policy Interest Group. The Pennsylvania Chronic Care Initiative. Presentation at the Annual AcademyHealth Research Meeting, 2009. http://www.academyhealth.org/files/interestgroups/shrp/SHRP_Breakfast_2009_Magistro.pdf. Accessed January 2012.
  2. Agar MH, Wilson D. Drugmart: heroin epidemics as complex adaptive systems. Complexity 2002;7(5):44-52.
  3. Agency for Healthcare Research and Quality. Primary care managers supported by information technology systems improve outcomes, reduce costs for patients with complex conditions. AHRQ Health Care Innovations Exchange Profile. Last updated: June 16, 2010. http://www.innovations.ahrq.gov/content.aspx?id=264&tab=1. Accessed January 2012.
  4. American Academy of Family Physicians, American Academy of Pediatrics, American College of Physicians, American Osteopathic Association. Joint principles of the patient-centered medical home. February 2007. http://www.aafp.org/online/etc/medialib/aafp_org/documents/policy/fed/jointprinciplespcmh0207.Par.0001.File.dat/022107medicalhome.pdf. Accessed January 2012.
  5. Barr VJ, Robinson S, Marin-Link B, Underhill L, Dotts A, Ravensdale D, Salivaras S. The expanded chronic care model: an integration of concepts and strategies from population health promotion and the chronic care model. Healthc Q 2003;7(1): 73-82.
  6. Berenson RA, Rich EC. U.S. approaches to physician payment: The deconstruction of primary care. J Gen Intern Med 2010b;25(6):613-8.
  7. Bielaszka-DuVernay C. The “GRACE” model: in-home assessments lead to better care for dual eligibles. Health Aff 2011;30(3):431-434.
  8. Bitton A, Martin C, Landon BE. A nationwide survey of patient centered medical home demonstration projects. J Gen Intern Med 2010;25(6):584-92.
  9. Bodenheimer T, Pham HH. Primary care: Current problems and proposed solutions. Health Aff 2010;29(5):799-805.
  10. Bodenheimer T. Primary care—will it survive? N Engl J Med 2006; 355 (9):861-4.
  11. Boult C, Reider L, Leff B, et al. The effect of Guided Care teams on the use of health services: Results from a cluster-randomized controlled trial. Arch Intern Med 2011;171(5):460-6.
  12. Boyd CM, Reider L, Frey K, et al. The effects of Guided Care on the perceived quality of health care for multi-morbid older persons: 18-month outcomes from a cluster-randomized controlled trial. J Gen Intern Med 2010;25(3):235-42.
  13. Campbell M, Fitzpatrick R, Haines A, Kinmonth AL, Sandercock P, Spiegelhalter D, Tyrer P. Framework for design and evaluation of complex interventions to improve health. BMJ 2000;321:694-6.
  14. Chronic Care Management, Reimbursement and Cost Reduction Commission. Prescription for Pennsylvania. Right state, right plan, right now. Strategic plan. 2008. http://health-equity.pitt.edu/931/1/ChronicCareCommissionReport.pdf. Accessed January 2012.
  15. Cohen D, McDaniel RR Jr, Crabtree BF, Ruhe MC, Weyer SM, Tallia A, Miller WL, Goodwin MA, Nutting P, Solberg LI, Zyzanski SJ, Jaén CR, Gilchrist V, Stange KC. A practice change model for quality improvement in primary care practice. J Healthc Manag 2004 May-Jun; 49(3):155-68.
  16. Counsell SR, Callahan CM, Buttar AB, Clark DO, Frank KI. Geriatric Resources for Assessment and Care of Elders (GRACE): A new model of primary care for low-income seniors. J Am Geriatr Soc 2006;54(7):1136-41.
  17. Counsell SR, Callahan CM, Clark DO, et al. Geriatric care management for low-income seniors: a randomized control trial. JAMA 2007;298(22):2623-33.
  18. Counsell SR, Callahan CM, Tu W, Stump TE, Arling GW. Cost analysis of the Geriatric Resources for Assessment and Care of Elders care management intervention. J Am Ger Soc 2009;57(8):1420-6.
  19. Crabtree BF, Chase SM, Wise CG, et al. Evaluation of patient centered medical home practice transformation initiatives. Med Care 2011 Jan;49(1):10-6.
  20. Craig P, Dieppe P, Macintyre S, Michie S, et al. Developing and evaluating complex interventions: new guidance. London, UK: Medical Research Council; 2008. http://www.mrc.ac.uk/Utilities/Documentrecord/index.htm?d=MRC004871. Accessed January 2012.
  21. Creswell JW, Klassen AC, Plano Clark VL, Smith KC for the Office of Behavioral and Social Sciences Research. Best practices for mixed methods research in the health sciences. National Institutes of Health; August 2011. http://obssr.od.nih.gov/mixed_methods_research. Accessed January 2012.
  22. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Imp Sci 2009;4:50.
  23. Dentzer S. Reinventing primary care: a task that is far “too important to fail.” Health Aff 2010;29(5):757.
  24. DePalma JA. Evidence to support medical home concept for children with special health care needs. HHCMP 2007;19(6):473-5.
  25. Department of Veterans Affairs. Home-based primary care program. VHA Handbook 1141.01. January 31, 2007. http://www1.va.gov/vhapublications/ViewPublication.asp?pub_ID=1534. Accessed January 2012.
  26. Domino ME, Humble C, Lawrence WW, Wegner S. Enhancing the medical homes model for children with asthma. Med Care 2009;47(11):1113-20.
  27. Donaldson MS, Yordy KD, Lohr KN, and Vanselow NA, eds. Primary care: America's health in a new era. Committee on the Future of Primary Care, Division of Health Care Services, Institute of Medicine. Washington, DC: National Academy Press, 1996.
  28. Dorr DA, Wilcox AB, Brunker CP, Burdon RE, Donnelly SM. The effect of technology-supported, multidisease care management on the mortality and hospitalization of seniors. J Am Geriatr Soc 2008;56(12):2195-202.
  29. Fields D, Leshen E, Patel K. Driving quality gains and cost savings through adoption of medical homes. Health Aff 2010;29(5):819-26.
  30. Friedberg MW, Lai DJ, Hussey PS, Schneider EC. A guide to the medical home as a practice-level intervention. Am J Manag Care 2009;15(10 Suppl):S291-9.
  31. Gilfillan RJ, Tomcavage J, Rosenthal MB, et al. Value and the medical home: Effects of transformed primary care. Am J Manag Care 2010;16(8):607-14.
  32. Glasgow RE, Orleans CT, Wagner EH, Curry SJ, Solberg LI. Does the chronic care model serve also as a template for improving prevention? Milbank Q 2001 Dec;79(4): 579–612.
  33. Gold M, Helms D, and Guterman S. Identifying, monitoring, and assessing promising innovations: using evaluation to support rapid-cycle change. The Commonwealth Fund, June 2011.
  34. Graff T. ProvenHealth Navigator ICSI update. Geisinger Health System, May 2009. http://www.icsi.org/icsi_colloquium_-_2009_/proven_health_navigator_.html. Accessed January 2012.
  35. Group Health News. Medical home pays off with improved care and reduced costs. 2010 May 11. http://www.ghc.org/news/news.jhtml?reposid=%2fcommon%2fnews%2fnews%2f20100504-medicalHome.html. Accessed January 2012.
  36. Grumbach K, Grundy P. Outcomes of implementing patient centered medical home interventions: a review of the evidence from prospective evaluation studies in the United States. Patient Centered Primary Care Collaborative 2010 Nov. http://www.pcpcc.net/files/evidence_outcomes_in_pcmh.pdf. Accessed February 2011.
  37. Guided Care Web site. 2010. http://www.guidedcare.org/index.asp. Accessed November 2010.
  38. Harris RP, Helfand M, Woolf SH, et al. Current methods of the U.S. Preventive Services Task Force: A review of the process. Am J Prev Med 2001;20(3 Suppl):21-35. http://www.uspreventiveservicestaskforce.org/uspstf08/methods/procmanual.htm. Accessed January 2012.
  39. Homer CJ, Klatka K, Romm D, et al. A review of the evidence for the medical home for children with special health care needs. Pediatrics 2008; 122(4):e922-37.
  40. Hostetter M. Case study: Aetna’s embedded case managers seek to strengthen primary care. The Commonwealth Fund 2010 Aug/Sept. http://www.commonwealthfund.org/Content/Newsletters/Quality-Matters/2010/August-September-2010/Case-Study.aspx. Accessed January 2012.
  41. Houy M. The Pennsylvania Chronic Care Initiative. Appendix. Presentation prepared for Maryland Health Quality and Cost Council. 2008.
  42. Howell, JD. Reflections on the past and future of primary care. Health Aff 2010 May;29(5):760-765.
  43. Hughes SL, Weaver FM, Giobbie-Hurder A, et al. Effectiveness of team-managed home-based primary care: a randomized multicenter trial. JAMA 2000;284(22):2877-85.
  44. Hunkeler EM, Katon W, Tang L, et al. Long term outcomes from the IMPACT randomised trial for depressed elderly patients in primary care. BMJ 2006;332(7536):259-63.
  45. IMPACT Implementation Center Web site. http://impact-uw.org/.
  46. Katerndahl D. Explaining health care utilization for panic attacks using cusp catastrophe modeling. Nonlinear Dynamics Psychol Life Sci 2008 Oct;12(4):409-24.
  47. Kilo, CM, Wasson, JH. Practice redesign and the patient-centered medical home: history, promises, and challenges. Health Aff 2010 May;29(5):773-8.
  48. Landon BE, Gill JM, Antonelli RC, Rich, EC. Prospects for rebuilding primary care using the patient-centered medical home. Health Aff 2010;29(5):827-34.
  49. Lanham HJ, McDaniel RR Jr, Crabtree BF, Miller WL, Stange KC, Tallia AF, Nutting P. How improving practice relationships among clinicians and nonclinicians can improve quality in primary care. Jt Comm J Qual Patient Saf 2009 Sep;35(9):457-66.
  50. Leff B, Reider L, Frick K, et al. Guided Care and the cost of complex health care: a preliminary report. Am J Manag Care 2009;15(8):555-9.
  51. Levine S, Unützer J, Yip J, et al. Physicians’ satisfaction with a collaborative disease management program for late-life depression in primary care. Gen Hosp Psychiatr 2005;27(6):383-91.
  52. Leykum LK, Pugh J, Lawrence V, Parchman M, Noël PH, Cornell J, McDaniel RR Jr. Organizational interventions employing principles of complexity science have improved outcomes for patients with Type II diabetes. Implement Sci 2007 Aug 28;2:28.
  53. Leykum LK, Parchman M, Pugh J, Lawrence V, Noël PH, and McDaniel RR Jr. The importance of organizational characteristics for improving outcomes in patients with chronic disease: a systematic review of congestive heart failure. Implement Sci 2010, 5:66.
  54. Litaker D, Tomolo A, Liberatore V, Stange KC, Aron D. Using complexity theory to build interventions that improve health care delivery in primary care. J Gen Intern Med 2006 Feb;21 Suppl 2:S30-4.
  55. Lodh M. Mercer Government Human Services Consulting. ACCESS cost savings – state fiscal year 2004 analysis. Letter to Jeffrey Simms, State of North Carolina, Office of Managed Care, Division of Medical Assistance, 2005.
  56. Marsteller JA, Hsu YJ, Reider L, et al. Physician satisfaction with chronic care processes: a cluster-randomized trial of guided care. Ann Fam Med 2010;8(4):308-15.
  57. Mathematica Policy Research. How effective is home visiting? http://www.mathematica-mpr.com/EarlyChildhood/homvee.asp. Accessed January 2012.
  58. May CR, Mair FS, Dowrick CF, Finch TL. Process evaluation for complex interventions in primary care: understanding trials using the normalization process model. BMC Fam Pract 2007;8:42.
  59. McCall N, Cromwell J. Results of the Medicare Health Support Disease Management Pilot Program. N Engl J Med 2011;365:1704-12.
  60. McCarthy D, Nuzum R, Mika S, et al. The North Dakota experience: achieving high-performance health care through rural innovation and cooperation. The Commonwealth Fund Commission on a High Performance Health System 2008. http://www.commonwealthfund.org/Content/Publications/Fund-Reports/2008/May/The-North-Dakota-Experience--Achieving-High-Performance-Health-Care-Through-Rural-Innovation-and-Coo.aspx. Accessed January 2012.
  61. Meyers DS, Clancy CM. Primary care: too important to fail. Ann Intern Med 2009; 150(4):272-273.
  62. Nutting PA, Crabtree BF, Stewart EE, Miller WL, Palmer RF, Stange KC. Effect of facilitation on practice outcomes in the national demonstration project model of the patient-centered medical home. Ann Fam Med 2010;8(Suppl 1):s33-s44.
  63. Palfrey JS, Sofis LA, Davidson EJ, et al. The Pediatric Alliance for Coordinated Care: evaluation of a medical home model. Pediatrics 2004;113(5 Suppl):1507-16.
  64. Parchman ML, Scoglio CM, Schumm P. Understanding the implementation of evidence-based care: a structural network approach. Implement Sci 2011;6:14-23.
  65. Paulus RA, Davis K, Steele GD. Continuous innovation in health care: implications of the Geisinger experience. Health Aff 2008;27(5):1235-45.
  66. Peikes D, Chen A, Schore J, Brown R. Effects of care coordination on hospitalization, quality of care, and health care expenditures among Medicare beneficiaries: 15 randomized trials. JAMA 2009;301(6):603-618.
  67. Peikes D, Dale S, Lundquist E, Genevro J, Meyers D. Building the evidence base for the medical home: what sample and sample size do studies need? White Paper (Prepared by Mathematica Policy Research under Contract No. HHSA290200900019I TO2). AHRQ Publication No. 11-0100-EF. Rockville, MD: Agency for Healthcare Research and Quality. 2011. /page/papers-briefs-and-resources.
  68. Peikes D, Peterson G, Brown R, Schore J, Razafindrakoto C. Results from a radical makeover of a care coordination program show how program design affects success in reducing hospitalizations and costs: evidence from a randomized controlled trial before and after key changes in program design. Presentation at AcademyHealth Annual Conference, 2010 Jun 27.
  69. Peikes D, Zutshi A, Genevro JL, Parchman ML, Meyers DS. Early evaluations of the medical home: building on a promising start. Am J Manag Care 2012;18(2):105-116.
  70. Plsek PE, Greenhalgh T. Complexity science: the challenge of complexity in health care. BMJ 2001;323:625-628.
  71. Reid RJ, Coleman K, Johnson EA, et al. The Group Health Medical Home at year two: cost savings, higher patient satisfaction, and less burnout for providers. Health Aff 2010;29(5):835-43.
  72. Reid RJ, Fishman PA, Yu O, et al. Patient-centered medical home demonstration: a prospective, quasi-experimental, before and after evaluation. Am J Manag Care 2009;15(9):e71-87.
  73. Ricketts TC, Greene S, Silberman P, Howard H, Poley S. Evaluation of Community Care of North Carolina asthma and diabetes initiative: January 2000-December 2002. 2004 Apr 15. A report submitted under a contract agreement with the Foundation for Advanced Health Programs, Inc. 2004. UNC contract # 5-35174. http://www.shepscenter.unc.edu/research_programs/health_policy/Access.pdf. Accessed January 2012.
  74. Rittenhouse DR, Shortell SM, Fisher ES. Primary care and accountable care—two essential elements of delivery-system reform. N Engl J Med 2009;361(24):2301-3.
  75. Robert Graham Center. The patient centered medical home: history, seven core features, evidence and transformational change. 2007 Nov. http://www.aafp.org/online/etc/medialib/aafp_org/documents/about/pcmh. Accessed January 2012.
  76. Rosenthal TC. The medical home: growing evidence to support a new approach to primary care. J Am Board Fam Med 2008;21(5):427-40.
  77. Schoen C, Osborn R, Doty MM, Bishop M, Peugh J, Murukutla N. Toward higher-performance health systems: adults’ health care experiences in seven countries, 2007. Health Aff 2007;26(6):w717-34.
  78. Silvia TJ, Sofis LA, Palfrey JS. Practicing comprehensive care: a physician’s manual for implementing a medical home for children with special health care needs. Boston: Institute for Community Inclusion, Children’s Hospital; 2000. http://www.communityinclusion.org/article.php?article_id=193&type=topic&id=2. Accessed January 2012.
  79. Starfield B. Primary care: concept, evaluation, and policy. New York: Oxford University Press; 1992.
  80. Starfield B. Is the patient-centered medical home the same as “primary care”?: measurement issues. May 20, 2008. http://ncvhs.hhs.gov/080520p01.pdf. Accessed January 2012.
  81. Steele GD, Haynes JA, Davis DE, et al. How Geisinger’s advanced medical home model argues the case for rapid-cycle innovation. Health Aff 2010;29(11):2047-53.
  82. Steiner BD, Denham AC, Ashkin E, Newton WP, Wroth T, Dobson LA. Community Care of North Carolina: improving care through community health networks. Ann Fam Med 2008;6:361-7.
  83. Stetler CB, Damschroder LJ, Helfrich CD, Hagedorn HJ. A guide for applying a revised version of the PARIHS framework for implementation. Implement Sci 2011;6:99.
  84. Stremikis K, Schoen C, Fryer A-K. A call for change: the 2011 Commonwealth Fund survey of public views of the U.S. health system. New York: The Commonwealth Fund; 2011 Apr.
  85. The Commonwealth Fund. The patient-centered medical home evaluators’ collaborative. http://www.commonwealthfund.org/Content/Newsletters/The-Commonwealth-Fund-Connection/2011/Mar/March-18-2011/Announcements/Patient-Centered-Medical.aspx. Accessed January 2012.
  86. Torregrossa AS. Pennsylvania’s efforts to transform primary care. Governor’s Office of Health Care Reform, Commonwealth of Pennsylvania. http://www.cthealthpolicy.org/webinars/20100223_atorregrossa_webinar.pdf. Accessed January 2012.
  87. United States Government Accountability Office. Value in health: key information for policymakers to assess efforts to improve quality while reducing costs. 2011 Jul. http://www.gao.gov/new.items/d11445.pdf. Accessed January 2012.
  88. Unützer J, Katon WJ, Callahan CM, et al. Collaborative care management of late-life depression in the primary care setting: a randomized controlled trial. JAMA 2002;288(22):2836-45.
  89. Unützer J, Katon WJ, Fan MY, et al. Long-term cost effects of collaborative care for late-life depression. Am J Manag Care 2008;14(2):95-100.
  90. Unützer J, Katon WJ, Williams JW, et al. Improving primary care for depression in late life: the design of a multi-center randomized trial. Med Care 2001;39(8):785-99.
  91. What Works Clearinghouse. Procedures and standards handbook (version 2.1). http://ies.ed.gov/ncee/wwc/pdf/reference_resources/wwc_procedures_v2_1_standards_handbook.pdf. Accessed January 2012.
  92. Wilhide S, Henderson T. Community Care of North Carolina: A provider-led strategy for delivering cost-effective primary care to Medicaid beneficiaries. American Academy of Family Physicians 2006. http://www.aafp.org/online/etc/medialib/aafp_org/documents/policy/state/medicaid/ncfull.Par.0001.File.tmp/ncfullreport.pdf. Accessed January 2012.
  93. Wolff JL, Rand-Giovannetti E, Boyd CM, et al. Effects of Guided Care on family caregivers. Gerontologist 2010;50(4):459-70.
  94. Wolff JL, Rand-Giovannetti E, Palmer S, et al. Caregiving and chronic care: the Guided Care program for family and friends. J Gerontol A Biol Sci Med Sci 2009;64(7):785-91.
  95. Zutshi A, Peikes D, Smith K, Genevro J, Azur M, Parchman M, Meyers D. The medical home: what do we know, what do we need to know? A review of the current state of the evidence on the effects of the patient-centered medical home model. Final paper submitted to the Agency for Healthcare Research and Quality. Princeton, NJ: Mathematica Policy Research, 2012.

Footnotes

  1. We note that pilots and demonstrations are testing different variants of the model. The variants reflect different ways of operationalizing the principles that we refer to collectively as the PCMH model.
  2. For example, a practice interested in decreasing the time between the receipt of laboratory results and patient notification need not wait for the results of a rigorous, controlled evaluation. It could convene the practice team members to redesign their workflow and measure changes in outcomes of interest (such as the percentage of results delivered within 2 days) before and after implementation of the redesigned process. This approach provides quick answers for a low-cost initiative. While decisionmakers may require solid evidence on outcomes to justify large, transformative investments in primary care, for smaller initiatives, overreliance on rigorous evaluations carries the risk of delaying beneficial changes (Gold, Helms, and Guterman, 2011).
  3. The AHRQ definition also emphasizes the central role of health information technology, workforce development, and fundamental payment reform. It builds on the traditional definition of primary care established by the Institute of Medicine and Barbara Starfield (Donaldson, Yordy, Lohr, et al., 1996; Starfield, 1992, 2008) and incorporates aspects of the expanded care model (Barr, Robinson, and Marin-Link, 2003; Glasgow, Orleans, Wagner, et al., 2001). It is similar to the definition of the medical home provided in the joint principles, but with a greater emphasis on team-based care.

    This first criterion excludes two studies of medical home interventions—the American Academy of Family Physicians’ National Demonstration Project (NDP), which is often cited in the medical home literature, and the Illinois Medical Home Project—because rather than testing the effect of a medical home, they tested the effect of facilitation as an intervention for practice redesign efforts. In other words, they tested the effect of helping practices redesign themselves to become medical homes relative to the effect of practices becoming medical homes on their own. While not included in this review, the NDP provided rich insights about its implementation experience.
  4. None of the studies reported effects on out-of-pocket patient costs or practice revenues.
  5. In addition to the USPSTF methods (see Harris, Helfand, Woolf, et al., 2001), we drew specific operational criteria from the What Works Clearinghouse (WWC) review of educational interventions, which also typically employ clustered designs like many of the practice-level interventions reviewed here (see http://ies.ed.gov/ncee/wwc/pdf/reference_resources/wwc_procedures_v2_1_standards_handbook.pdf), and from an evidence review of home visiting programs for families with pregnant women and children (see http://www.mathematica-mpr.com/EarlyChildhood/homvee.asp).
  6. The term “control group” is used exclusively when the group was assigned using a randomized, controlled trial. The term “comparison group” indicates the group was selected using nonexperimental comparison group methods.
  7. See Zutshi, Peikes, Smith, et al. (2012) for a detailed categorization of the interventions using the AHRQ PCMH principles.
  8. Because most studies do not report all of this information, our formal rating criteria were more liberal: we assessed the comparability of intervention and comparison groups only on baseline values of the outcome.
  9. CMP did so, too, but results are inconclusive due to the lack of adjustment for clustering.
  10. Creswell, Klassen, Clark, et al. (2011) provide useful guidance on mixed methods. Crabtree, Chase, Wise, et al. (2011) emphasize the necessity of a mixed-methods approach when evaluating the PCMH model.
  11. The recent release from AHRQ of the PCMH-Consumer Assessment of Healthcare Providers and Systems (PCMH-CAHPS) survey, designed to assess patient experience with the PCMH, may address one barrier and enable future evaluators to more easily measure patient experience. Built on the existing, well-validated Clinician and Group survey, it covers topics such as provider-patient communication, coordination of care, and shared decisionmaking, and is available in adult and child versions, and in English and Spanish (https://www.cahps.ahrq.gov/Surveys-Guidance/CG/PCMH.aspx).
  12. For example, measures of the external environment within which a PCMH operates could build on Parchman, Scoglio, and Schumm’s (2011) modeling of health care delivery across a network of providers.

Appendix - Supplemental Table on Findings From Evaluations With High and Moderate Ratings

Additional Tables

This table presents results by outcome for the outcomes rated high or moderate. Results are classified as statistically significant and favorable, statistically significant and unfavorable, not statistically significant and therefore inconclusive, or of uncertain statistical significance and therefore inconclusive.
