The Medical Home: What Do We Know, What Do We Need to Know? A Review of the Earliest Evidence on the Effectiveness of the Patient-Centered Medical Home Model

March 2013
AHRQ Publication No. 12(14)-0020-1-EF
Prepared For:
Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services, 540 Gaither Road, Rockville, MD 20850, www.ahrq.gov

Contract Numbers: HHSA290200900019I/HHSA29032002T, HHSA290200900019I/HHSA29032005T
Prepared By: Mathematica Policy Research, Princeton, NJ; Aparajita Zutshi, Ph.D., Deborah Peikes, Ph.D., M.P.A., Kimberly Smith, Ph.D., M.P.A., Melissa Azur, Ph.D. (Mathematica Policy Research), Janice Genevro, Ph.D., Michael Parchman, M.D., David Meyers, M.D. (Agency for Healthcare Research and Quality).

Appendix A

AHRQ’S Definition of the Patient-Centered Medical Home

The patient-centered medical home (PCMH, or medical home) model holds promise as a way to improve health care in America by transforming how primary care is organized and delivered. Building on the work of a large and growing community, the Agency for Healthcare Research and Quality (AHRQ) defines a PCMH not simply as a place but as a model of the organization of primary health care that delivers the core functions of primary health care (AHRQ, 2012).

The PCMH encompasses five functions and attributes:

  1. Patient-centered. The PCMH provides relationship-based primary health care that is oriented toward the “whole person.” Partnering with patients and their families requires understanding and respecting each patient’s unique needs, culture, values, and preferences. The PCMH practice actively supports patients in learning to manage and organize their own care at the level the patient chooses. Recognizing that patients and families are core members of the care team, PCMH practices ensure that they are fully informed partners in establishing care plans.
  2. Comprehensive care. The PCMH is accountable for meeting the bulk of each patient’s physical and mental health care needs, including prevention and wellness, acute care, and chronic care. Comprehensive care requires a team of care providers, possibly including physicians, advanced practice nurses, physician assistants, nurses, pharmacists, nutritionists, social workers, educators, and care coordinators. Although some PCMH practices may bring together large and diverse teams of care providers to meet the needs of their patients, many others, including smaller practices, will build virtual teams linking themselves and their patients to providers and services in their communities.
  3. Coordinated care. The PCMH coordinates care across all elements of the broader health care system, including specialty care, hospitals, home health care, and community services and supports. Such coordination is particularly critical during transitions between sites of care, such as when patients are being discharged from the hospital. PCMH practices also excel at building clear and open communication among patients and families, the medical home, and members of the broader care team.
  4. Superb access to care. The PCMH delivers accessible services with shorter waiting times for urgent needs, enhanced in-person hours, around-the-clock telephone or electronic access to a member of the care team, and alternative methods of communication, such as email and telephone care. The medical home practice is responsive to patients’ preferences regarding access.
  5. A systems-based approach to quality and safety. The PCMH demonstrates a commitment to quality and quality improvement by ongoing engagement in activities such as using evidence-based medicine and clinical decision-support tools to guide shared decisionmaking with patients and families, engaging in performance measurement and improvement, measuring and responding to patient experiences and patient satisfaction, and practicing population health management. Publicly sharing robust quality and safety data and improvement activities is also an important marker of a system-level commitment to quality.

AHRQ recognizes the central role of health IT in successfully operationalizing and implementing the key features of the medical home. In addition, AHRQ notes that building a primary care delivery platform that the Nation can rely on for accessible, affordable, high-quality health care will require significant workforce development and fundamental payment reform. Without these critical elements, the potential of primary care will not be achieved.

 


Appendix B

Methods for Reviewing the Evidence on the Patient-Centered Medical Home

This appendix describes the methods used for reviewing the evidence on the PCMH, beginning with selecting evaluations for inclusion in the review and developing and applying a formal rating system to identify rigorously evaluated interventions.

Evaluation Selection

The review team conducted a broad search to identify English-language studies in the published and grey literature on the PCMH in the United States. To capture published studies, we searched several databases using the Ovid and EBSCO search engines for articles from January 2000 to September 2010 containing the words “medical home” or “primary care transformation.” Using the Ovid search engine, we searched the following databases: Journals@Ovid, HealthSTAR, Ovid MEDLINE, and PsycINFO. We used the EBSCO search engine on the following databases: Academic Search Premier, Business Source Corporate, Cumulative Index to Nursing and Allied Health Literature, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Cochrane Methodology Register, Database of Abstracts of Reviews of Effects, E-Journals, EconLit, Health Technology Assessments, National Health Service Economic Evaluation Database, and Health Policy Reference Center.

We also conducted targeted searches, with no restrictions on start dates, to identify evaluations of initiatives that are widely cited as part of the evidence base for the medical home. We identified additional evaluations by reviewing content on 100 relevant Web sites, examining bibliographies in existing review articles, and gathering input from experts in the field. This search process yielded 498 potentially relevant citations. As with all evidence reviews, owing to publication bias, the evaluations selected and synthesized here may be more likely to report favorable effects (versus no effects or unfavorable effects) than those excluded by our search and synthesis criteria.28

Of the 498 citations, we selected evaluations that met the following two criteria:

  1. The evaluation tested a primary-care, practice-based intervention with three or more of the five medical home components defined by AHRQ (delivering care that is patient-centered, comprehensive and team-based, coordinated, accessible, and systems-based in its approach to quality and safety) described in detail in Appendix A. We excluded evaluations of care coordination and disease management interventions that met these criteria but were not provided from within, or in close partnership with, the practice (for example, interventions delivered by off-site care managers via telephone).
  2. The evaluation used quantitative methods to examine effects on either (a) a triple aim outcome: quality of care, costs29 (or hospital use or emergency department use, two major cost drivers), and patient or caregiver experience; or (b) professional experience.

Because most interventions targeted different subgroups of the U.S. primary care population, our inclusion criteria did not consider the population served. We also did not require that the intervention use health IT or provide enhanced payment.

Using these criteria, the review team identified 14 evaluations of 12 distinct interventions (one intervention, Community Care of North Carolina [CCNC], was evaluated in three distinct evaluations) for inclusion in the review.30 Most of these interventions are best viewed as precursors to the medical home; they share multiple components of the model and are frequently cited as part of the evidence base for it.

Methods to Assess the Rigor of the Evaluations

In this section, we first provide an overview of the rating system and then describe, in detail, the individual criteria that factor into this system.

Rating System

To assess the rigor of the 14 evaluations selected for review, we developed a systematic approach by drawing broadly from the USPSTF review methods and supplementing them with specific criteria used by well-regarded evidence reviews from the fields of education and of home visiting programs for pregnant women and families with children.31

Rather than give a global rating to each evaluation, we individually rated the internal validity of each analysis conducted as part of an evaluation as high, moderate, low, or excluded. We rated individual analyses because evaluations often used different designs, samples, and methods (and sometimes different subgroups of patients) to analyze different outcomes over varying followup periods. Therefore, to allow for the possibility that the evaluation of a single intervention could provide more rigorous evidence on some outcomes than others, we separately assessed the analysis of each outcome measure at each followup period and, if applicable, for each subgroup of patients. We consider analyses rated high or moderate to provide rigorous evidence, and we include only such analyses in our synthesis of the evidence.
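To make the unit of rating concrete, the following sketch (hypothetical Python; the field names and ratings are illustrative and not drawn from any reviewed evaluation) shows how a single evaluation can yield several separately rated analyses, one per combination of outcome, followup period, and subgroup.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AnalysisRating:
    """One rated analysis within an evaluation (illustrative field names)."""
    evaluation: str          # name of the evaluation
    outcome: str             # e.g., "hospitalizations"
    followup_months: int     # followup period for this analysis
    subgroup: Optional[str]  # None if the full sample was analyzed
    rating: str              # "high", "moderate", "low", or "excluded"

# A single evaluation can contribute analyses with different ratings:
ratings = [
    AnalysisRating("Example intervention", "hospitalizations", 12, None, "moderate"),
    AnalysisRating("Example intervention", "total costs", 24, None, "low"),
    AnalysisRating("Example intervention", "patient experience", 12, "high-risk patients", "excluded"),
]

# Only analyses rated high or moderate enter the evidence synthesis.
rigorous = [r for r in ratings if r.rating in ("high", "moderate")]
```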

Our rating of each analysis is based solely on an assessment of its internal validity. We do not factor generalizability (external validity) into the rating because most interventions included in this review targeted a specific subset of primary care patients, were implemented in unique settings, and either purposively selected practices or relied on them to volunteer; therefore, nearly all these evaluations have limited generalizability. We summarize the characteristics of patients and practice settings used in the rigorous evaluations to alert decisionmakers to the possibility that findings may differ if interventions are implemented in other populations and settings.

We rated each analysis using a sequence of criteria, starting with the most general (evaluation design) and ending with the most specific (such as whether the analysis controlled for outcome values prior to the start of the intervention, in other words, at baseline). As a first step in assessment, we considered only analyses conducted as part of randomized controlled trials (RCTs) and nonexperimental comparison group evaluations for a high or moderate rating, based on the strength of the methods the evaluations used to produce unbiased estimates of the effects of the interventions. If they failed to meet criteria for either a high or a moderate rating, they received a low rating. Analyses from evaluations that did not include a control or comparison group32 (for example, pre-post or cross-sectional evaluations) always received a low rating. This is because such designs make it difficult to assess what the outcomes would have been absent the intervention (the purpose of a control/comparison group is to establish this counterfactual). Analyses were rated excluded if the evaluation design or methods were not described in sufficient detail to permit assessment of their internal validity. In many cases, because of the limits on what study authors can include in a journal article, we sought additional details from authors to be able to rate analyses.

We note that the rating of the internal validity of the evidence does not take into account whether an evaluation has sufficient power to detect policy-relevant effects, or whether tests of statistical significance in clustered designs (that is, ones that intervene with entire practices or sets of providers) account appropriately for clustering. However, because these are important considerations for the interpretation of findings, we do consider them when we synthesize the findings, as described in Chapter 2.
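As one illustration of why clustering matters when interpreting statistical significance, the snippet below applies the standard design-effect formula; the review does not prescribe a particular adjustment, and the numbers here are assumptions chosen only to show the magnitude of the issue.

```python
def effective_sample_size(n_patients: int, patients_per_practice: float, icc: float) -> float:
    """Effective sample size under the standard design effect, DEFF = 1 + (m - 1) * ICC.

    n_patients: total number of patients analyzed
    patients_per_practice: average cluster size (m)
    icc: intracluster correlation of the outcome (an assumed value)
    """
    deff = 1 + (patients_per_practice - 1) * icc
    return n_patients / deff

# Illustrative numbers only: 2,000 patients clustered in practices of 100,
# with an ICC of 0.05, behave like roughly 336 independent observations.
print(round(effective_sample_size(2000, 100, 0.05)))
```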

Below, we define the ratings and the criteria they are based on.

High rating. A high rating reflects high confidence that the analysis accurately estimated the effect of the intervention (where the effect might be favorable, unfavorable, or zero). A high rating was reserved for analyses from RCTs with no systematic confounding factors, no endogenous subgroups, high maintenance of the intervention and control groups at followup through low attrition rates, and use of regression analysis to control for reported statistically significant baseline differences between the intervention and control groups in the outcome. (These terms are defined in more detail below.)

Moderate rating. A moderate rating reflects moderate confidence that the analysis accurately estimated the effect of the intervention. Future research based on more rigorous evaluation designs or methods might alter the estimated effect. A moderate rating was assigned to analyses from RCTs that fulfilled all criteria for a high rating but failed to control for reported statistically significant baseline intervention-control differences in the outcome. A moderate rating was also given to analyses from comparison group designs that (1) had no systematic confounding factors, (2) were based on intervention and comparison groups with equivalent outcomes at baseline, and (3) used regression analysis to control for baseline values of the outcome.33 Analyses from RCTs that suffered from high attrition or were based on endogenous subgroups were treated similarly to those from comparison group evaluations and had to meet the same criteria as a comparison group evaluation.

Low rating. A low rating reflects low confidence that the analysis accurately estimated the effect of the intervention. Future research based on more rigorous evaluation designs or methods is likely to alter the estimated effect. A low rating was given to analyses from RCTs and comparison group designs that suffered from systematic confounding. It was also given to analyses from comparison group designs and from RCTs with high attrition or endogenous subgroups under two conditions: (1) the intervention and control/comparison group analysis samples did not have equivalent baseline values of the outcome; or (2) if that condition was met, the analysis did not control for baseline values of the outcome. Finally, analyses from pre-post and cross-sectional evaluations always received a low rating.

Excluded rating. Some evaluations provided insufficient information to establish whether the estimates accurately reflect the effect of the intervention. Analyses were rated excluded if the design or methods were not described in sufficient detail to enable assessment. In this case, we cannot know with certainty whether the reported effects are a result of the intervention.

Table 16. Definition of ratings
High: RCTs (including cluster-RCTs) with no systematic confounding factors, no endogenous subgroups, no sample reassignment from the control to the intervention group or vice versa,34 and low attrition at followup. To receive a high rating, RCTs that meet all these criteria also need to control for any reported baseline difference between the intervention and control groups on the outcome.

Moderate:
  • Comparison group evaluations (including case control and cohort studies with comparison groups) with no systematic confounding factors, analysis showing the intervention and comparison groups have equivalent outcomes at baseline, and controls for baseline values of the outcome.
  • RCTs with:
    • No systematic confounding factors, no endogenous subgroups, no sample reassignment, and low attrition at the unit of analysis, but that fail to control for reported statistically significant baseline differences in the outcome between the intervention and control groups.
    • No systematic confounding factors, but with (1) endogenous subgroups, (2) high attrition of the analysis sample at followup, or (3) sample reassignment. These are reviewed as comparison group evaluations and receive a moderate rating if they meet applicable criteria for a comparison group evaluation.

Low: RCTs that did not meet the criteria for a moderate or high rating and comparison group evaluations that did not meet the criteria for a moderate rating, as well as pre-post and cross-sectional evaluations.

Excluded: Analyses from evaluations for which the design and/or methods were not described in sufficient detail to enable assessment.

Description of the Individual Criteria

Here, we describe in detail the key criteria that factor into the rating system.

Evaluation design. The highest rating is reserved for analyses from evaluations that randomly assigned subjects to the evaluation’s research groups. Evaluations using random assignment can—if well implemented and analyzed—provide the strongest evidence that differences in the outcomes between the intervention and control groups can be attributed to the intervention.

Comparison group evaluations can achieve a moderate rating at best. In such evaluations, subjects are sorted into intervention and comparison groups in a nonrandom way; therefore, even if the groups have comparable observed characteristics before the intervention, they still may differ on unmeasured characteristics. We therefore cannot rule out the possibility that the findings are attributable to unmeasured differences between the intervention and comparison groups. Certain RCTs (as described in Table 16) are treated similarly to comparison group evaluations and, at best, considered for a moderate rating.

For a stepwise illustration of the rating process for RCTs and comparison group evaluations, see Figures 3 and 4, respectively.

Attrition among RCTs. We assess attrition in RCTs but not in comparison group evaluations. Comparison group evaluations examine outcomes based on the final analysis samples, from which there is, by definition, no attrition.

In RCTs, loss of data on some evaluation participants can bias the evaluation’s impact estimates by creating, over time, differences in the characteristics of the intervention and control groups that had originally been comparable because of randomization. Bias can arise from overall attrition (the percentage of evaluation participants lost among the total evaluation sample) and differential attrition (the difference in attrition rates between the intervention and control groups).

We use “liberal standards” employed by the What Works Clearinghouse to assess the level of attrition for each outcome examined in a given evaluation. To determine whether attrition may be a source of bias in the impact estimates, this assessment takes into account both overall attrition and differential attrition between the intervention and control groups. Figure 5 shows the cutoffs for combinations of overall and differential attrition used to determine “low” or “high” attrition. Evaluations with a relatively high level of differential attrition can still meet standards for the “low” attrition category if they have a relatively low level of overall attrition, whereas evaluations with a relatively high level of overall attrition require a lower level of differential attrition to meet standards. For example, as Figure 5 indicates, if the rate of attrition is the same for the intervention and control groups (that is, there is zero differential attrition), the evaluation can fall in the low attrition category even with 60 percent overall attrition. However, even a small amount of differential attrition (say 10 percent) requires the overall attrition rate to be very low (in this case, less than 13 percent) to meet the standards for low attrition.35
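The arithmetic behind these two quantities is straightforward. The sketch below (illustrative Python with hypothetical sample sizes) computes overall and differential attrition for a two-arm RCT; whether a given combination counts as low or high attrition is determined by the What Works Clearinghouse boundary shown in Figure 5, which is not reproduced here.

```python
def attrition_rates(randomized_tx: int, analyzed_tx: int,
                    randomized_ctrl: int, analyzed_ctrl: int) -> tuple[float, float]:
    """Return (overall, differential) attrition, in percent.

    Overall attrition: share of the full randomized sample missing from the analysis sample.
    Differential attrition: absolute difference in attrition rates between the two arms.
    """
    attrition_tx = 1 - analyzed_tx / randomized_tx
    attrition_ctrl = 1 - analyzed_ctrl / randomized_ctrl
    overall = 1 - (analyzed_tx + analyzed_ctrl) / (randomized_tx + randomized_ctrl)
    differential = abs(attrition_tx - attrition_ctrl)
    return overall * 100, differential * 100

# Hypothetical trial: 500 patients randomized per arm; 420 intervention and
# 380 control patients remain in the analysis sample at followup.
overall, differential = attrition_rates(500, 420, 500, 380)
print(f"overall = {overall:.0f}%, differential = {differential:.0f}%")  # overall = 20%, differential = 8%
```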

We consider attrition due to mortality inconsequential for analyses of claims-based outcomes, such as service use and costs, and do not apply it as a rating criterion, because we know with certainty that people who have died incur no service use or costs. For survey-based outcomes, however, we treat mortality like any other type of attrition and factor it into the rating process.

Only RCTs meeting the standard for acceptable combinations of overall and differential attrition are considered for the high rating. RCTs that do not meet these standards are considered for the moderate rating.

Baseline equivalence of the intervention and control or comparison groups. To obtain a moderate rating, RCTs with high attrition or endogenous groups and comparison group evaluations must (1) demonstrate baseline equivalence of the two research groups, and (2) control for baseline values of the outcome when estimating the effect of the intervention. We use the first criterion because the use of comparable intervention and control/comparison groups minimizes the bias in the estimated effect. We examine statistical tests of the difference in means to show baseline equivalence.36 Evaluations must establish baseline equivalence using the analysis sample at followup (as opposed to the sample at the start of the intervention). The second criterion ensures that any differences at baseline do not bias the estimated effects at followup. For example, if a comparison group evaluation examines effects on two outcomes (costs and hospitalizations) but finds baseline equivalence only on costs and not on hospitalizations, only costs will be considered for a moderate rating, while hospitalizations will receive a low rating. To actually receive a moderate rating on costs, the analysis would also need to control for baseline costs. Finally, if the outcomes of the intervention and control/comparison groups are not equivalent at baseline, then the analysis will receive a low rating even if it controlled for baseline values of the outcome. This is because controlling for baseline values of the outcome will not account for the potential differences in unobserved characteristics between the intervention and control/comparison groups that can bias the estimated effect.37
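A minimal sketch of these two steps for a single outcome, assuming patient-level data are available (the variable names, simulated data, and models are hypothetical; the reviewed evaluations may have used different tests and specifications): first test whether baseline means differ between the groups, then estimate the effect while controlling for the baseline value of the outcome.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical patient-level data: baseline and followup costs for two groups.
treat = rng.integers(0, 2, size=400)            # 1 = intervention, 0 = comparison
baseline_cost = rng.normal(5000, 1500, size=400)
followup_cost = 0.8 * baseline_cost - 300 * treat + rng.normal(0, 800, size=400)

# Step 1: baseline equivalence -- test the difference in baseline means between groups.
t_stat, p_value = stats.ttest_ind(baseline_cost[treat == 1], baseline_cost[treat == 0])

# Step 2: estimate the intervention effect, controlling for the baseline value of the outcome.
X = sm.add_constant(np.column_stack([treat, baseline_cost]))
model = sm.OLS(followup_cost, X).fit()

print(f"baseline difference: p = {p_value:.2f}")
print(f"estimated effect on followup costs = {model.params[1]:.0f}")
```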

Systematic confounding. A systematic confounding factor is a component of the research design or methods that undermines the credibility of attributing an observed effect to the intervention. One example of a systematic confounder is the use of only one practice in the intervention or control/comparison group. With a single practice, the variation in outcomes that occurs at the practice level (in addition to the variation that occurs across patients within a practice) cannot be factored into the overall variance of the outcome, so tests of statistical significance cannot determine whether the observed intervention-control difference is due to the intervention or to chance. Another example of a confounding factor is systematic differences in data collection methods for the intervention and control/comparison groups. Because the presence of such confounding factors severely weakens the credibility of an analysis’s findings, a low rating is assigned to analyses from RCTs or comparison group evaluations with such factors.
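The point about a single practice can be seen in a small sketch (illustrative only): with only one practice in a research group, the practice-level component of the outcome’s variance simply cannot be estimated, so significance tests cannot separate a practice effect from an intervention effect.

```python
import numpy as np

def between_practice_variance(practice_means: list[float]) -> float:
    """Sample variance of practice-level mean outcomes; undefined for a single practice."""
    if len(practice_means) < 2:
        raise ValueError("practice-level variation cannot be estimated from one practice")
    return float(np.var(practice_means, ddof=1))

# Works with several practices per group; fails by design with only one.
print(between_practice_variance([0.42, 0.55, 0.48]))   # fine
# between_practice_variance([0.42])                    # raises ValueError
```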

Endogenous subgroups. A subgroup is considered endogenously formed and estimated effects for this subgroup considered biased if the subgroup is based on the followup (or postrandomization) value of an outcome that could be affected by the intervention. The extent of this bias may be small if the intervention and control arms of the subgroup are comparable at baseline, or if the intervention had no effect on the outcome that defines the subgroup. For example, in an intervention aimed at improving depression care, examining satisfaction with depression care among people who reported receiving such care during the intervention constitutes, by definition, analysis of an endogenous subgroup, because the intervention may affect receipt of depression care. Analyses based on endogenous subgroups in an RCT are treated similarly to those from a comparison group evaluation and must meet criteria applicable to a comparison group evaluation to receive a moderate rating.

 

back to top

Figure 3. Rating Criteria for Randomized Controlled Trials

This figure contains the stepwise illustration of the rating process for randomized controlled trials. If the evaluation has a systematic confounding factor, it receives a low rating. If the evaluation examined an outcome for an endogenous subgroup, reassigned sample members, or experienced high attrition, it is reviewed as a comparison group evaluation. If the evaluation has no systematic confounding factors, has no endogenous subgroups, does not reassign sample members, and does not experience high attrition, it can receive one of two ratings: (1) a moderate rating if it does not control for statistically significant baseline differences in the outcome, or (2) a high rating if it controls for statistically significant baseline differences in the outcome.
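The figure’s logic can also be written as a short decision function. The sketch below is a paraphrase of the stepwise process described in this appendix, not the review team’s actual rating instrument.

```python
def rate_rct_analysis(confounding: bool, endogenous_subgroup: bool,
                      sample_reassignment: bool, high_attrition: bool,
                      controls_for_baseline_differences: bool) -> str:
    """Paraphrase of the RCT rating steps in Figure 3 (illustrative only)."""
    if confounding:
        return "low"
    if endogenous_subgroup or sample_reassignment or high_attrition:
        # Such analyses are reviewed against the comparison group criteria (Figure 4).
        return "review as comparison group evaluation"
    return "high" if controls_for_baseline_differences else "moderate"
```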

Figure 4. Rating Criteria for Comparison Group Evaluations

This figure contains the stepwise illustration of the rating process for comparison group design evaluations. If the evaluation has a systematic confounding factor, it receives a low rating. If the evaluation has no systematic confounding factors and does not establish baseline equivalence of the intervention and comparison group samples, it receives a low rating. If the evaluation has no systematic confounding factors and establishes baseline equivalence of the intervention and comparison group samples, it can receive one of two ratings: (1) a moderate rating if it controls for baseline differences in the outcome, or (2) a low rating if it does not control for baseline differences in the outcome.
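As with Figure 3, this decision rule can be paraphrased as a short function (illustrative only); note that a comparison group evaluation can receive at most a moderate rating.

```python
def rate_comparison_group_analysis(confounding: bool, baseline_equivalence: bool,
                                   controls_for_baseline_outcome: bool) -> str:
    """Paraphrase of the comparison group rating steps in Figure 4 (illustrative only)."""
    if confounding or not baseline_equivalence:
        return "low"
    return "moderate" if controls_for_baseline_outcome else "low"
```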

Figure 5. What Works Clearinghouse Liberal Attrition Standards

This figure shows the liberal standards employed by the What Works Clearinghouse to assess the level of attrition in an evaluation. The standards determine whether the level of attrition is “low” or “high” based on a combination of overall attrition and differential attrition (between the intervention and control groups) rates. Overall attrition rate ranges from 0 to 100 percent, while differential attrition rate ranges from 0 to 16 percent. Evaluations with a relatively high level of differential attrition rate can still meet the standard for “low” attrition if they have a relatively low level of overall attrition rate. Similarly, evaluations with a relatively high level of overall attrition require a lower level of differential attrition rate to meet the standard for “low” attrition.

Appendix C

Supplemental Table on Descriptions of the Interventions, by AHRQ PCMH Principles and Facilitators

Table 17. Descriptions of the interventions, by AHRQ PCMH principles and facilitators
Each intervention is described below by an overview and by how it addresses the AHRQ PCMH principles and facilitators: Patient-Centered; Comprehensive Care; Coordinated Care; Access to Care; Systems Approach to Quality and Safety; Payment and Other Resources to the Primary Care Practice; and Health IT.

Aetna’s Embedded Case Managers Program
  • Overview: Assigns nurse case managers to primary care practices to help manage care for Medicare Advantage members and collaborate with the clinical team
  • Patient-Centered: Care plans; disease management coaching; family members can sit in on patient office visits
  • Comprehensive Care: Team-based care, including the nurse case manager and clinical team, who address needs of patients with multiple chronic conditions, including dementia and depression, and provide end-of-life care
  • Coordinated Care: Case manager coordinates care, including hospital discharge plan, and links patients to social services
  • Access to Care: No changes in access to care
  • Systems Approach to Quality and Safety: Case manager uses clinical decision support software to identify gaps in treatment; reviews data weekly with the clinical team and monthly with the medical director
  • Payment and Other Resources to the Primary Care Practice: Program provides nurse case managers; practice receives an extra fee for patients enrolled in program and incentives for meeting quality targets
  • Health IT: Clinical decision support software

Care Management Plus
  • Overview: Nurse care managers, supported by specialized health IT tools within primary care clinics, orchestrate care for chronically ill elderly patients
  • Patient-Centered: Develop care plan with patients and family; teach self-management to patients
  • Comprehensive Care: Team-based approach to patient assessment and care planning
  • Coordinated Care: Care manager coordinates care across providers
  • Access to Care: Patient-specific secure messaging system facilitates communication
  • Systems Approach to Quality and Safety: Care management tracking (CMT) database embeds disease protocols and generates flexible, patient-specific care plans, as well as aggregate statistics
  • Payment and Other Resources to the Primary Care Practice: No payment component. Program provides care manager and specialized IT tools
  • Health IT: Existing electronic health records (EHRs) and CMT to track all contacts with patients, families, and providers; generate reminders; calculate patient statistics; and provide electronic protocols

Community Care of North Carolina
  • Overview: Community-based care management provided through networks of primary care providers (PCPs), a hospital, the Department of Social Services, and the health department. Case managers from a nonprofit work with PCPs to coordinate care and undertake population health management
  • Patient-Centered: Providers and/or case managers (a nurse, social worker, or other clinician) coach and educate patients on disease management and assess psychosocial needs
  • Comprehensive Care: Practice team includes primary care provider and case managers who provide comprehensive case management
  • Coordinated Care: Case manager coordinates with providers, hospitals, health departments, and social service agencies that are part of the network; web-based program used to coordinate care
  • Access to Care: 24/7 on-call assistance; case managers make home visits
  • Systems Approach to Quality and Safety: Random chart reviews to assess adherence to case management protocols; review of claims data and charts to assess clinical improvements
  • Payment and Other Resources to the Primary Care Practice: PCPs receive $2.50 per member per month (PMPM) for medical home and population management activities and the help of the case manager; networks receive $3 PMPM ($5 PMPM for elderly or disabled patients)
  • Health IT: No standardized health IT component; some participating physicians may be using EHRs

Geisinger Health System ProvenHealth Navigator
  • Overview: Geisinger Health Plan (GHP) provided one nurse case manager for every 900 Medicare Advantage patients in each primary care practice to identify high-risk patients, design patient-centered care plans, provide care coordination and care transition support, and monitor patients using patient-accessible EHRs
  • Patient-Centered: Case manager develops individualized care plans; provides self-management education to patient and family; assesses patient satisfaction
  • Comprehensive Care: Care teams composed of PCP, physician assistant, nurse practitioners, nurses, administrative staff, and case manager address patients’ care needs, including medication management and end-of-life planning
  • Coordinated Care: Case manager coordinates care across providers, including during care transitions, and conducts outreach to home health agencies and nursing homes
  • Access to Care: 24/7 access, same-day appointments, self-scheduling using EHR, direct telephone lines to case managers, home interactive voice response for high-risk or postdischarge patients
  • Systems Approach to Quality and Safety: EHRs provide preventive and chronic care reminders and embedded care workflows; program tracks 10 quality-of-care metrics, including chronic and preventive care, postdischarge followup, and patient satisfaction and experience; monthly meetings with primary care practices, navigators, and GHP staff to review results
  • Payment and Other Resources to the Primary Care Practice: Program provided case manager and funding for new services, physician and practice transformation stipends, and staff incentives, including employee stipends and quarterly performance-based payments; program also used a shared savings incentive model based on quality and efficiency performance
  • Health IT: Existing EHR embeds care workflows, captures patient information, tracks patient care, generates reminders, and calculates patient statistics; EHR is patient-accessible via a Web-based interface; Bluetooth scales for daily monitoring of heart failure patients

Geriatric Resources for Assessment and Care of Elders
  • Overview: Advanced practice nurse and social worker (GRACE support team) assess low-income seniors in the home, and develop and implement a care plan with a geriatrics interdisciplinary team, in collaboration with the patient’s primary care provider
  • Patient-Centered: Initial and annual in-home comprehensive geriatric assessment; annual individualized care plan; minimum of 1 in-home visit to review the care plan and 1 face-to-face or telephone contact per month with patients and family members or caregivers
  • Comprehensive Care: Care plan developed and implemented in collaboration with the GRACE interdisciplinary team of a pharmacist, physical therapist, community resource expert, and mental health case manager, led by a geriatrician and the patient’s PCP. The care plan covers physical, mental, and social needs
  • Coordinated Care: The nurse practitioner-social worker team coordinates with the inpatient and nursing home teams for patients who have been hospitalized or are using skilled nursing facility services; the team conducts a home visit and full review of the case after hospital and ED visits. The team also coordinates specialty visits
  • Access to Care: Dedicated telephone line to the GRACE support team
  • Systems Approach to Quality and Safety: Care protocols for evaluation and management of 12 common geriatric conditions
  • Payment and Other Resources to the Primary Care Practice: No payment component. Program provides assistance of the GRACE support team to the primary care practice
  • Health IT: Integrated EHRs and web-based tracking tool support care management and coordination of care

Group Health Cooperative Medical Home
  • Overview: Group Health redesigned one pilot clinic to be a PCMH by changing staffing, scheduling, point of care, patient outreach, health IT, and management; reducing caseloads; increasing visit times; using team huddles; and making rapid process improvements
  • Patient-Centered: Individualized care plans viewable through patient EHRs
  • Comprehensive Care: Care team composed of PCP, nurse care manager, pharmacist, medical assistant, and licensed practical nurse delivers primary care to patients, which includes pre-visit contact to discuss concerns
  • Coordinated Care: Nurse works with PCP to coordinate care across providers, including during transitions between care sites
  • Access to Care: 24-hour telephone access to consulting nurse, same-day appointments, online services, self-scheduling using EHRs, direct telephone lines to case managers
  • Systems Approach to Quality and Safety: EHR provides preventive and chronic care reminders and embeds care workflows
  • Payment and Other Resources to the Primary Care Practice: Physicians paid a salary and shared savings based on quality targets achieved; program provided additional staff
  • Health IT: Existing EHR records patient information and care and generates reminders; its messaging feature is used for real-time specialist consultations. Patients can access EHRs

Guided Care
  • Overview: Guided Care nurse (GCN) joins the primary care practice and provides assessments, care plans, monthly monitoring, and transitional care to the highest-risk Medicare patients
  • Patient-Centered: Home-based assessment; individualized care plan and a patient self-care plan to promote self-management; group classes for caregivers
  • Comprehensive Care: GCN and PCP discuss and modify the individualized care guide. GCN proactively manages patients, mostly by telephone
  • Coordinated Care: GCN coordinates care and provides the care plan to other providers; facilitates care transitions; monitors patients during hospital stays; and facilitates access to community services
  • Access to Care: Telephone access to GCN
  • Systems Approach to Quality and Safety: Evidence-based guidelines, embedded in Guided Care EHRs, used to generate individualized care guides and monthly reports on GCN performance. GCN, study team, and nurse managers met monthly to review performance
  • Payment and Other Resources to the Primary Care Practice: No payment component. Program provides an on-site registered nurse (the GCN)
  • Health IT: EHR embeds evidence-based guidelines; generates individualized care guides based on guidelines and patient information; tracks patients; and sends reminders to GCN

Improving Mood-Promoting Access to Collaborative Treatment for Late-Life Depression
  • Overview: Depression care for elderly depressed patients is integrated into primary care via a depression clinical specialist (DCS) (a nurse or psychologist) who coordinates care between the PCP, consulting PCP, and psychiatrist
  • Patient-Centered: Patient and DCS establish an individualized care plan, which includes education, care management, problem-solving treatment, support for antidepressant use, and relapse prevention
  • Comprehensive Care: DCS, in consultation with the consulting PCP and team psychiatrist, works with the patient and regular PCP to provide depression care. DCS supports antidepressant therapy and behavioral activation
  • Coordinated Care: DCS does not coordinate with external providers (psychiatrist and DCS become part of the internal team)
  • Access to Care: Telephone and in-person contact with DCS
  • Systems Approach to Quality and Safety: Evidence-based treatment algorithm used by DCS and care team. The DCS and psychiatrist review progress weekly over the year-long intervention
  • Payment and Other Resources to the Primary Care Practice: No payment component. Program provides DCS, consulting PCP, and psychiatrist
  • Health IT: Internet-based system used to record patient contacts; electronic reminders alert the DCS when a contact is due or a patient is on ineffective treatment

MeritCare Health System and Blue Cross Blue Shield of North Dakota Chronic Disease Management Pilot
  • Overview: BCBS embedded a chronic disease management nurse in a clinic for patients with diabetes. The nurse assesses patient knowledge of diabetes, sets goals for disease self-management, establishes the need for in-person or telephone followup, and refers patients to services
  • Patient-Centered: Nurse and patients set goals, and nurse provides self-management education
  • Comprehensive Care: Focused on diabetes care
  • Coordinated Care: Nurses make referrals for services such as nutrition counseling
  • Access to Care: Nurse available by telephone (unclear whether 24/7 access is available)
  • Systems Approach to Quality and Safety: EHRs allow patients and physicians to track patient outcomes and provide aggregate performance information to physicians
  • Payment and Other Resources to the Primary Care Practice: $20,000 startup grant and 50% of savings generated in the first year of the pilot. Program provides a disease management nurse in the clinic. After the pilot, BCBS replaced the startup grant and in-kind nurse with a disease management fee
  • Health IT: Existing EHR used by physicians and patients to track patient care

Pediatric Alliance for Coordinated Care
  • Overview: A pediatric nurse practitioner (PNP) from each practice allocates 8 hours per week to coordinate care of children with special health care needs and make expedited referrals to specialists and hospitals; a local parent of a child with special health care needs consults to the practice
  • Patient-Centered: Individualized health plan developed with the patient and family
  • Comprehensive Care: Practice-based team care that includes physicians, PNP, office staff, and family consultants. Provides 8 hours per week of comprehensive case management; social support and activities
  • Coordinated Care: PNP makes expedited referrals and coordinates care across providers (e.g., therapists, school nurses) and across education, social services, and recreation
  • Access to Care: After-hours coverage; PNP conducts home visits
  • Systems Approach to Quality and Safety: PNPs and physicians receive ongoing training. Local parent provides feedback to the practice
  • Payment and Other Resources to the Primary Care Practice: No payment to practices. Stipend to family members serving as consultants. Continuing medical education for physicians
  • Health IT: No health IT component

Pennsylvania Chronic Care Initiative
  • Overview: Integrates the chronic care model and the medical home model for patients with diabetes and pediatric patients with asthma; includes patient-centered care, teaching self-management of chronic conditions, forming partnerships with community organizations, financial incentives for providers, and making data-driven decisions
  • Patient-Centered: Self-management support and coaching
  • Comprehensive Care: Practice-based team care, which includes case managers, physicians, nurses, and office staff
  • Coordinated Care: Referral process to community services
  • Access to Care: Timely or same-day appointments
  • Systems Approach to Quality and Safety: Use of performance measures and evidence-based guidelines to inform planning and treatment
  • Payment and Other Resources to the Primary Care Practice: Providers in practices that meet National Committee for Quality Assurance (NCQA) PCMH standards are eligible for supplemental payment, including an annual payment for clinicians ($40,000 to $95,000), infrastructure payments (starting at $20,895), and provider performance incentives
  • Health IT: Electronic patient registry

Veterans Affairs Team-Managed Home-Based Primary Care
  • Overview: Comprehensive and longitudinal primary care provided by an interdisciplinary team that includes a home-based primary care (HBPC) nurse in the homes of veterans with complex, chronic, terminal, and disabling diseases
  • Patient-Centered: Individualized treatment plan developed in collaboration with patient and caregiver; HBPC nurse teaches both patients and caregivers about the disease, treatment, and self-care; caregiver support provided
  • Comprehensive Care: Patient assessment by HBPC team members from at least three different disciplines (social workers, dietitians, therapists, pharmacists, and paraprofessional aides); weekly team meetings
  • Coordinated Care: HBPC team coordinates patient care across all settings and is involved in hospital discharge planning
  • Access to Care: 24-hour contact for patients
  • Systems Approach to Quality and Safety: Mandatory annual performance improvement plan; quarterly medical record reviews
  • Payment and Other Resources to the Primary Care Practice: No payment component. Physicians are salaried staff who devote a specific percentage of time to the HBPC program
  • Health IT: HBPC information system designed to help HBPC teams manage their patients and resources, as well as to provide VA Central Office with site-specific information for all programs

Footnotes

  • 28 See http://www.cochrane-net.org/openlearning/html/mod15-2.htm for more details on publication bias.
  • 29 None of the studies reported effects on out-of-pocket patient costs or practice revenues.
  • 30 In general, we found that, for most of the interventions, different analyses from the same study design were published in multiple articles.
  • 31 For the USPSTF guideline, see Harris et al., 2001. For the education guidelines, see http://ies.ed.gov/ncee/wwc/. For the home visiting guidelines, see http://www.mathematica-mpr.com/EarlyChildhood/homvee.asp.
  • 32 The term control group is used exclusively when the group was assigned using an RCT. The term comparison group indicates that the group was selected using nonexperimental comparison group methods.
  • 33 Comparison group evaluations, as well as RCTs with high attrition or endogenous subgroups, that show baseline equivalence on the outcome being examined are also required to control for the baseline values of the outcome in their analyses, because this ensures that any small differences at baseline do not bias the impact estimates.
  • 34 In RCTs, deviation from the original random assignment (“sample reassignment”) can bias the study’s impact estimates. This can occur if patients in control practices obtained care from intervention practices, or vice versa. Therefore, for an RCT to receive a high rating, the analysis (in addition to meeting other criteria for a high rating) should be performed on the sample as originally assigned. RCTs that somehow alter the original random assignment must establish baseline equivalence of the intervention and control group members in the analysis sample to be considered for a moderate rating. None of the studies we reviewed reported sample reassignment.
  • 35 More information on the attrition calculations used can be found in the What Works Clearinghouse Procedures and Standards Handbook at http://ies.ed.gov/ncee/wwc/pdf/wwc_procedures_v2_standards_handbook.pdf. Future reviews of primary care interventions could consider whether these attrition standards need to be tailored to the primary care setting.
  • 36 This is a liberal criterion for the evaluations in this review, which have small sample sizes and are likely underpowered; such studies are less likely to find baseline differences to be statistically significant. Conversely, with large samples, even a very small difference can appear statistically significant. To address this possibility, future reviews could establish a threshold (such as 0.25 standard deviations from the pooled mean) below which even statistically significant differences would be considered as meeting the baseline equivalence criterion.
  • 37 RCTs that otherwise meet the criteria for the highest rating are not required to establish baseline equivalence, because randomization is expected to produce intervention and control groups that are equivalent, on average, on both observed and unobserved characteristics. Nevertheless, chance differences between the two groups can arise despite randomization, especially with small samples. As a result, to meet the criteria for the highest study rating, RCTs that showed evidence of statistically significant baseline differences on outcome measures are required to control for these differences in their statistical impact analyses. RCTs that do not control for statistically significant baseline differences in the outcome measure are assigned the moderate rating.

 
