A medical director pulls the quarterly quality report. Breast cancer screening sits at 68%. Colorectal cancer screening at 54%. Diabetes blood sugar control at 61%. The numbers have not moved in two quarters.
She knows what needs to happen. The mammograms need to be scheduled. The colonoscopies need referrals. The A1C labs need to be ordered and the results need to trigger medication adjustments. She has known this since residency.
The report goes back in the folder. The clinic has 22 patients on the schedule today. The MA liaison sent another spreadsheet of open gaps last Tuesday. It sits in a shared drive. No one has opened it since Thursday. The medical assistant who was doing reminder calls left in October. The replacement starts in three weeks.
By the end of the measurement year, the numbers will look roughly the same. The practice will miss its quality bonus threshold. The plan will send another spreadsheet.
I have seen this cycle in dozens of practices. The gap data exists. The clinical knowledge exists. What does not exist is a system that converts knowledge of the gap into closure of the gap, reliably, for every patient, every time.
Care gaps do not persist because physicians lack clinical judgment. Care gaps persist because no one owns the workflow between identifying a gap and closing it. The problem is operational, not clinical. The solution is infrastructure, not education.
What this chapter covers: How care gaps are identified and what actually closes them. Which HEDIS measures matter most for primary care. How the 2026 Star Ratings and the CY 2027 proposed rule change the financial calculus. The measurement year calendar that determines closure strategy by quarter. The data reconciliation problem most practices ignore. The revenue model and cost model for systematic gap closure. Written for physicians and practice operators managing Medicare panels in value-based care arrangements.
What Is a Care Gap in Healthcare
A care gap is a recommended clinical service that a patient is due for and has not received within the specified measurement window. The recommendation comes from clinical guidelines. The measurement comes from HEDIS.
HEDIS, the Healthcare Effectiveness Data and Information Set, is developed and maintained by the National Committee for Quality Assurance. It includes more than 90 measures covering prevention, chronic disease management, behavioral health, medication safety, and utilization. More than 235 million people are enrolled in plans that report HEDIS results.
When a 55-year-old woman with diabetes has not had a retinal exam in 14 months, that is a care gap. When a 62-year-old man has not completed a colonoscopy within the specified screening window, that is a care gap. When a patient with hypertension has no blood pressure reading below 140/90 documented in the measurement year, that is a care gap.
Each open gap represents a failure of execution. Each closed gap represents a clinical service delivered within the measurement window and documented in a way the plan can capture.
The distinction matters. A screening that happens but is not documented in a format the plan can ingest does not close the gap. A referral sent but never completed does not close it. A lab ordered whose result is never acted upon does not close it. Closure requires the full sequence: identification, outreach, scheduling, service delivery, documentation, and data capture.
Why Care Gaps Matter: The Financial Architecture
Care gaps sit at the intersection of two financial systems that determine whether a practice or health plan earns or loses significant revenue.
How Care Gaps Affect Medicare Advantage Star Ratings
CMS publishes Star Ratings annually for every Medicare Advantage contract. The ratings directly determine Quality Bonus Payments. Plans rated four stars or higher receive a 5% bonus on their benchmark payment (42 CFR 422.258). Plans rated 3.5 stars may receive a reduced bonus in qualifying bonus payment counties. In most configurations, plans below four stars receive no quality bonus.
The stakes are large. In 2025, federal spending on MA quality bonuses totaled at least $12.7 billion, more than four times the amount a decade earlier. In 2026, approximately 64% of MA enrollees are in plans rated four stars or higher, and the average MA quality rating is 3.98 stars. Just below the threshold.
For a plan with 100,000 enrollees, the difference between being in bonus and out of bonus can represent tens of millions in annual revenue.
The 2026 Star Ratings include up to 33 Part C measures for MA-Only contracts and up to 45 measures across 9 domains for MA-PD contracts (43 unique). Since the vast majority of MA contracts are MA-PDs, the full measure set is what most practices are working against. The clinical care measures most directly affected by primary care gap closure:
Triple-weighted (weight of 3): Diabetes Care — Blood Sugar Controlled and Controlling Blood Pressure, both intermediate outcome measures.
Single-weighted (weight of 1): Breast Cancer Screening. Colorectal Cancer Screening. Diabetes Care — Eye Exam. Kidney Health Evaluation for Patients with Diabetes (new for 2026). Statin Therapy for Patients with Cardiovascular Disease. Medication Reconciliation Post-Discharge. Care for Older Adults — Medication Review.
On the Part D side, Medication Adherence for Diabetes Medications, Medication Adherence for Hypertension (RAS antagonists), and Medication Adherence for Cholesterol (Statins) are each weighted at 3. A practice that moves these intermediate outcome and adherence measures moves the overall rating more than one that improves five process measures weighted at 1.
For 2027 Star Ratings, Improving or Maintaining Physical Health and Improving or Maintaining Mental Health will increase from a weight of 1 to a weight of 3, adding two more triple-weighted outcome measures.
HEDIS Measure Explorer
Primary care HEDIS measures for Medicare Advantage. Weight ×3 measures have the largest impact on Star Rating calculations. Part D medication adherence measures are payer-side; provider influence on them is indirect.
2027 CMS Proposed Rule: What Changes for Star Ratings
In November 2025, CMS released the Contract Year 2027 proposed rule. Three changes matter for practices focused on gap closure.
Twelve measures removed. Beginning with the 2027 measurement year (affecting 2029 Star Ratings), CMS proposes removing 12 measures, primarily administrative and process measures where performance is uniformly high. Seven focus on operational performance, three on process of care, two on patient experience. Press Ganey's modeling estimates 89% of contracts would see Stars scores decline under these changes if applied to 2026 results.
Depression screening added. A new Depression Screening and Follow-Up measure for Part C will first appear on the display page for 2026 Star Ratings (using 2024 measurement year data), then enter scored Star Ratings for 2029 (based on 2027 measurement year data). The measure assesses both screening rates and follow-up within 30 days for positive screens. Practices have roughly two years before this measure affects their plan's Quality Bonus Payment.
Health equity reward paused. CMS proposes not implementing the Excellent Health Outcomes for All reward (previously the Health Equity Index), which had been finalized to replace the historical reward factor starting with 2027 Star Ratings. The EHO4All was designed to incentivize plans to improve care for dually eligible members, low-income subsidy recipients, and beneficiaries with disabilities. CMS proposes continuing the historical reward factor instead.
The policy rationale involves methodology preferences. The human consequence is simpler. The financial incentive to close care gaps equitably just got weaker. The disparities those patients experience did not get weaker. A practice that only closes gaps for its most reachable patients is optimizing a measure, not delivering equitable care.
The net effect: fewer measures, more weight on clinical outcomes, a new behavioral health measure, and no dedicated financial reward for closing equity gaps. Those relying on high scores from administrative process measures will lose a buffer they may not realize they depend on.
HEDIS Measures and Value-Based Care Contracts
HEDIS measures also drive value-based contract performance between plans and provider groups. A health plan negotiates quality targets with a provider organization. The targets are HEDIS-based. Meeting them determines shared savings, quality bonuses, or withholds.
A provider group with 5,000 attributed MA lives and a quality bonus of $25 PMPY tied to four HEDIS targets stands to gain or lose $125,000 annually. Scale to 20,000 attributed lives and the figure approaches $500,000. These are simplified projections. Actual contract structures vary by market, plan, and negotiating position.
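The bonus arithmetic above is simple enough to sanity-check directly. The sketch below uses the chapter's illustrative figures; the function name and figures are examples, not contract terms.

```python
def quality_bonus_at_stake(attributed_lives: int, bonus_pmpy: float) -> float:
    """Annual quality revenue at stake if HEDIS targets are met versus missed.

    PMPY = per member per year. Illustrative only; actual contract
    structures vary by market, plan, and negotiating position.
    """
    return attributed_lives * bonus_pmpy

print(quality_bonus_at_stake(5_000, 25.0))   # 125000.0
print(quality_bonus_at_stake(20_000, 25.0))  # 500000.0
```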
The financial case for gap closure is not theoretical. It is contractual.
What Practices Should Demand from Their Plans
The plan-provider dynamic around care gaps is often one-directional. Plans send gap lists. Providers close gaps. Plans report quality scores. This puts the operational burden on the practice without always giving it the tools to succeed.
Practices entering or renegotiating VBC contracts should ask for five things.
Timely gap data. Quarterly updates are operationally useless. Monthly refreshes are the minimum. Weekly or real-time gap feeds through an API or portal integration are the standard in high-performing arrangements.
Clear supplemental data submission pathways. If a practice closes a gap through a service documented in the EHR but not captured in claims, the plan must define a submission process with confirmation that the data was received and accepted.
Reconciliation tools. The practice needs to compare its view of patient-level gap status against the plan's view. Discrepancies need a resolution process with a named contact and a defined turnaround time.
Attribution clarity. Which patients are attributed. When attribution changes. How attribution affects which gaps the practice is accountable for.
Performance transparency. The practice should see the same measure-level data the plan uses to calculate quality bonuses, before the bonus is determined. No practice should learn its performance only when the check arrives or does not arrive.
These are not unreasonable asks. They are the minimum conditions for a functional quality partnership.
How Care Gaps Are Identified: Claims, EHR, and Supplemental Data
Care gaps are identified through four data sources, each with limitations.
Claims data is the primary source. When a service is billed, the claim populates the measure numerator. Claims carry a lag of 30 to 90 days. Services rendered but not billed, or billed with incorrect codes, remain invisible.
Electronic health record data captures what claims miss: vital signs, lab results, screening scores. The transition to Electronic Clinical Data Systems (ECDS) reporting makes EHR data increasingly important. For Measurement Year 2025, NCQA transitioned several measures to ECDS-only reporting, including Colorectal Cancer Screening, Childhood Immunization Status, Immunizations for Adolescents, and Cervical Cancer Screening.
Supplemental data is submitted by providers outside the claims process: medical record review results, lab feeds, registry data. For measures transitioning to ECDS-only, compliance from medical records must now be processed through prospective supplemental data rather than the traditional annual chart retrieval. This is a significant operational change that many practices have not fully absorbed.
Patient-reported data includes health risk assessments, CAHPS surveys, and screening instruments. It is inherently limited by what patients choose to report, remember, or understand.
Care Gap Data Reconciliation Between Plans and Providers
The practical challenge for most practices is not gap identification. It is gap reconciliation. The gap list from the plan does not match what the practice sees in its EHR.
A patient completes a colonoscopy at an ambulatory surgery center. The ASC bills the plan directly. The PCP's EHR shows the referral was made but has no documentation of the result. If the plan has not processed the ASC claim by the time it generates the gap report, the gap remains open even though the service was completed.
A patient gets a mammogram during a hospital stay for an unrelated condition. The PCP's gap report still shows breast cancer screening as open because the data sits in a different system.
A practice administers a PHQ-9 during an office visit and documents the score in a progress note. The plan's HEDIS calculation engine looks for specific CPT II codes or structured data elements. If the screening was documented only in free text, the gap remains open.
These are not edge cases. In my experience, 15 to 30% of “open” gaps on a plan-generated list have already been addressed but not captured in a format the plan can ingest. That is not a clinical problem. It is a data problem. Practices that do not reconcile before they outreach waste resources calling patients about services already completed, which erodes patient trust in the outreach itself.
Reconciliation should happen before outreach. Compare the plan's gap list against the practice's EHR. Identify services completed but not captured. Submit supplemental data. Remove confirmed false positives from the outreach list. Then start calling.
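The reconcile-before-outreach step can be sketched as a set operation. This is a minimal model under strong simplifying assumptions; `Gap` and `reconcile` are hypothetical names, and real feeds (claims files, CCDA extracts, plan portals) require far more matching logic than an exact key comparison.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Gap:
    """A hypothetical, simplified gap record keyed by patient and measure."""
    patient_id: str
    measure: str  # e.g., "BCS-E", "COL-E"

def reconcile(plan_gaps: set, ehr_completed: set):
    """Split the plan's gap list before outreach begins.

    False positives: service already completed per the EHR; submit
    supplemental data instead of calling the patient.
    Outreach list: true open gaps that need scheduling.
    """
    false_positives = plan_gaps & ehr_completed
    outreach_list = plan_gaps - ehr_completed
    return false_positives, outreach_list
```

Running the outreach list only after this split is what keeps staff from calling patients about mammograms they already had.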
How to Close a Care Gap: The Six-Step Workflow
Identifying a gap and closing a gap are fundamentally different operations. Identification is an analytics problem. Closure is a workflow problem.
A gap closes when six things happen in sequence.
One. The gap is identified and matched to a specific patient. Not a list. A patient with a name, a phone number, a chart, and a scheduled or schedulable visit.
Two. Someone contacts the patient. A call, a message, a portal notification. The contact must result in either an appointment or a completed service. A letter mailed to a last-known address with no confirmation of receipt does not reliably close gaps.
Three. The service is delivered. The mammogram is completed. The lab is drawn. The blood pressure is measured and documented. The depression screen is administered and scored.
Four. The result is acted upon. An A1C of 9.2 triggers a medication adjustment. A positive PHQ-9 triggers a follow-up plan. A blood pressure of 152/94 triggers an intervention. Delivering the service without acting on the result closes the process gap but may not close the outcome gap.
Five. Completion is confirmed. The mammogram was not just scheduled but attended. The lab was not just ordered but drawn. The referral was not just sent but completed. The gap between ordering a service and confirming its completion is where most closures fail.
Six. The service and result are documented in a format the plan can capture. CPT codes, ICD-10 codes, LOINC codes, structured data fields. A screening documented only in a free-text note may not register.
Most practices fail at steps two, five, and six. They fail to reach the patient, they fail to confirm the service was completed, or they deliver the service but lose the documentation.
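One way to make the six-step sequence operational is to treat closure as all-or-nothing: a gap is closed only when every stage is confirmed, never assumed. The stage names below mirror the chapter; the data shape is a hypothetical sketch, not a standard.

```python
from enum import IntEnum

class Stage(IntEnum):
    """The six stages of gap closure, in sequence."""
    IDENTIFIED = 1
    PATIENT_CONTACTED = 2
    SERVICE_DELIVERED = 3
    RESULT_ACTED_ON = 4
    COMPLETION_CONFIRMED = 5
    DATA_CAPTURED = 6

def is_closed(confirmed: set) -> bool:
    """A gap counts as closed only when every stage is confirmed."""
    return confirmed == set(Stage)
```

A tracking system built on this shape surfaces exactly where each gap stalled, which is how the steps-two-five-and-six failure pattern becomes visible.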
HEDIS Measurement Year Calendar: Q1 Through Q4 Strategy
Gap closure is a time-bound operation. Every HEDIS measure has a measurement year, typically January 1 through December 31. The measurement year calendar is the single most important operational constraint on gap closure strategy, and most practices do not plan around it.
A gap identified in January has 12 months to close. A gap identified in October has 90 days. The tactics are different.
Process measures like cancer screenings have more flexibility. A mammogram completed any time in the measurement year (or within the look-back period, which extends to October 1 of the prior year for BCS-E) closes the gap. The constraint is scheduling capacity, not clinical timing.
Outcome measures like blood pressure control and A1C control follow different logic. HEDIS captures the most recent result documented in the measurement year. A patient whose A1C was 9.1 in March and 7.6 in November closes the gap. A patient whose A1C was 7.6 in March and 9.1 in November does not. The most recent reading wins.
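The most-recent-reading rule can be stated precisely in code. This is a simplified sketch of the logic only: the function name and threshold handling are assumptions, and the actual HEDIS specifications define cut points, exclusions, and data sources in far more detail.

```python
from datetime import date

def blood_sugar_gap_open(a1c_results, poor_control_threshold=9.0):
    """Most recent A1C result in the measurement year wins.

    No result at all, or a latest result above the threshold, leaves
    the gap open. (Simplified; consult HEDIS specs for actual rules.)
    a1c_results: list of (date, value) tuples within the measurement year.
    """
    if not a1c_results:
        return True  # no documented result counts as poor control
    latest_date, latest_value = max(a1c_results, key=lambda r: r[0])
    return latest_value > poor_control_threshold

improved = [(date(2025, 3, 1), 9.1), (date(2025, 11, 1), 7.6)]   # gap closes
worsened = [(date(2025, 3, 1), 7.6), (date(2025, 11, 1), 9.1)]   # gap stays open
```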
This creates a Q4 problem that every quality team lives with. Patients seen early in the year who do not return may have stale values. Patients identified late with poor control have limited time for medication adjustments to produce results.
The measurement year closes December 31. Outcome measure timing is the most common operational error: do not start a medication intervention in Q4 expecting a HEDIS-closing result.
Which HEDIS Measures Matter Most for Primary Care Gap Closure
Not all HEDIS measures are equally actionable for a primary care practice. The highest-yield targets share three characteristics: the service can be ordered or delivered by the primary care team, the population can be identified from existing data, and the documentation requirements are well defined.
Cancer Screenings. Breast Cancer Screening (BCS-E) targets persons 40 to 74 who are recommended for routine screening. For HEDIS MY 2025, NCQA expanded the age range from 50–74 to 40–74, aligning with the 2024 USPSTF recommendation. Colorectal Cancer Screening (COL-E) targets adults 45 to 75. FIT annually, colonoscopy every 10 years, among other options. Both are process measures. The gap is binary: the screening happened or it did not.
Diabetes Care. Glycemic Status Assessment measures the percentage of diabetic patients with documented A1C results. The Blood Sugar Controlled indicator is the triple-weighted Star measure. Blood Pressure Control for Patients with Diabetes requires BP below 140/90. Eye Exam for Patients with Diabetes requires a retinal exam within the measurement year. Kidney Health Evaluation requires both a urine albumin test and an eGFR within the measurement year, now a scored Star measure for 2026.
Blood Pressure Control. Controlling Blood Pressure (CBP) is the triple-weighted Part C Star measure. Most recent blood pressure for hypertensive patients 18–85, below 140/90. The 5-star cut point for 2026 is 86%.
Statin Therapy and Medication Adherence. Statin Therapy for Patients with Cardiovascular Disease (SPC) and Statin Use in Persons with Diabetes (SUPD) are process measures. The adherence measures (Medication Adherence for Diabetes, Hypertension, and Cholesterol) are Part D measures each weighted at 3, requiring 80% or greater proportion of days covered. These are among the most impactful Star Rating measures and among the hardest for providers to influence directly.
Depression Screening. Depression Screening and Follow-Up for Adolescents and Adults (DSF-E) measures screening using a standardized instrument and follow-up within 30 days for positive screens. Practices that build depression screening into every AWV and chronic care visit now will have two measurement years of operational data before the measure starts counting.
Medication Reconciliation. Medication Reconciliation Post-Discharge requires reconciliation within 30 days of hospital discharge. This connects directly to Transitional Care Management workflows.
Care Gap Closure Revenue Model: Direct and Indirect
Gap closure does not have its own billing code. Revenue flows through the encounters that close the gaps and the quality performance that results from closing them.
Direct encounter revenue varies by service. An AWV addressing multiple gaps generates approximately $176 to $282 depending on initial or subsequent visit. An office visit (99213–99215) generates standard E/M reimbursement. Screenings generate additional revenue: PHQ-9 (G0444), alcohol screening (G0442), lab work.
Quality bonus payments in value-based contracts can represent $25 to $100 PMPY. For a practice with 3,000 attributed MA lives and a $50 PMPY quality bonus, meeting HEDIS targets represents $150,000 in annual quality revenue. Missing them represents zero. These are illustrative. Actual terms vary.
Star Rating impact. A plan achieving four-star status gains approximately 5% in benchmark bonus payments. Plans pass a portion to provider groups through richer contracts and preferred network positioning.
Downstream care management revenue. Gaps identified through AWVs feed into CCM and APCM enrollment (Chapters 1–3). A patient whose diabetes gaps are identified during an AWV becomes a CCM-eligible patient generating $42 to $83 per month.
No single gap closure generates transformative revenue. Systematic closure across a panel, sustained over a measurement year, compounds through multiple revenue channels.
The Cost Side
Revenue projections without delivery cost are incomplete. Closing a gap costs money: staff time for outreach, visit time to deliver the service, administrative time for documentation and data submission, technology infrastructure to track the process.
I do not have reliable published data on cost per gap closure, and I have not seen anyone in the industry publish it rigorously. What I can say is that practices should model their own cost before projecting margin.
The inputs: average outreach attempts per patient before a service is scheduled (typically 2 to 4 for phone-based outreach), staff time per attempt, visit time allocated to gap closure beyond the primary reason for the encounter, and administrative time for supplemental data submission. Until a practice knows its cost per closure, it cannot evaluate whether its program generates positive margin or simply generates revenue consumed by the cost of delivery.
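Those inputs combine into a simple per-closure cost model. The function and the rates in the example are illustrative assumptions for a practice to replace with its own numbers, not benchmarks.

```python
def cost_per_closure(attempts, minutes_per_attempt, staff_rate_hr,
                     clinician_minutes, clinician_rate_hr, admin_minutes):
    """Estimated delivery cost for one closed gap.

    attempts: outreach attempts before a service is scheduled (2-4 typical)
    staff_rate_hr / clinician_rate_hr: fully loaded hourly rates (assumed)
    admin_minutes: supplemental data submission time per closure
    """
    outreach = attempts * minutes_per_attempt * staff_rate_hr / 60
    visit = clinician_minutes * clinician_rate_hr / 60
    admin = admin_minutes * staff_rate_hr / 60
    return outreach + visit + admin

# 3 attempts x 5 min at $24/hr, 5 clinician min at $120/hr, 10 admin min:
print(cost_per_closure(3, 5, 24.0, 5, 120.0, 10))  # 20.0
```

Dividing a measure's quality bonus contribution by this number is the margin check the chapter argues for.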
Care Gap Revenue & Cost Estimator
Model the revenue and delivery cost of systematic gap closure for your practice.
These are projections, not guarantees. Actual revenue depends on contract structure, payer mix, documentation accuracy, visit coding, and measurement year timing. Quality bonus estimates assume improved HEDIS scores translate directly to plan-level bonus threshold movement — actual contracts vary. Diabetic patient proportion estimated at 30% and hypertensive at 55% of MA lives; adjust expectations for your panel. Net margin is not profit and does not include technology, supervision, or indirect overhead.
Evidence on Care Gap Closure Programs
Published research on care gap closure is extensive but heterogeneous. Study designs, populations, and interventions vary widely.
A consistent finding: proactive outreach increases completion of preventive services. The effect is strongest for screening measures where the barrier is scheduling rather than clinical complexity. A 2020 randomized trial in JAMA Internal Medicine by Coronado and colleagues examined mailed FIT kits for colorectal cancer screening across 26 community health centers serving predominantly Hispanic and low-income populations. Screening completion was approximately 19 percentage points higher in the intervention group. The finding is robust for that population but may not generalize to different demographics, outreach infrastructure, or baseline screening rates.
The evidence on whether gap closure reduces total cost of care is thinner. Most studies show an association between higher preventive care completion and lower downstream utilization, but the association is confounded by patient selection. Patients who complete screenings tend to be healthier and more engaged. Attributing cost savings to the screening rather than to the characteristics of the patient who completed it remains methodologically difficult. Association, not causation.
The evidence gap that matters most is operational. There is limited published research comparing specific gap closure workflows. Most of what we know about what operationally works comes from health system QI projects, MA plan programs, and vendor case studies with limited external validity. Practices should approach vendor-reported gap closure rates with skepticism and ask for methodology, denominators, and timeframes.
The Goodhart's Law Problem
Any honest discussion of care gap closure must confront a tension the industry avoids. HEDIS measures are proxies for good care. They are not good care itself.
A practice that closes every gap on paper but does not follow up on abnormal results has optimized the metric without improving the outcome. A blood pressure reading taken in the last week of December may reflect a well-managed patient or it may reflect a measurement strategy. An A1C drawn in November after a medication adjustment may show improvement, but whether it persists depends on whether follow-up continues after the measurement year closes.
The goal is not a clean gap report. The goal is a patient whose diabetes is actually managed, whose cancer is actually caught early, whose blood pressure is actually controlled year-round. The measurement year creates urgency. The patient's health does not observe the measurement year boundary.
I name this not to undermine the value of HEDIS measurement but to hold the execution standard higher than the metric itself. The practices that earn trust close gaps because the clinical action matters, not because the quality report is due. The measurement system rewards them. But the reward is a consequence, not the purpose.
Technology-Supported Care Gap Closure Workflow
Here is a concrete model for systematic gap closure.
The physician controls clinical decisions at Stage 4. The system owns the operational workflow at every other stage. The loop closes on confirmation, not on assumption.
Step 1: Data Ingestion, Gap Identification, and Reconciliation. System ingests EMR data, claims history, payer rosters, and supplemental data. Cross-references against HEDIS measure specifications. Reconciles plan-identified gaps against practice-held data to eliminate false positives before outreach. Owner: AI system. Output: Clean, reconciled, patient-level gap list with measure, due date, and priority ranking.
Step 2: Patient Stratification and Outreach. System stratifies by number of open gaps, risk level, time remaining in measurement year, and preferred contact method. Initiates outreach by phone in English and Spanish. Confirms the gap, educates on the service, schedules the appointment. Owner: AI system for outreach. Staff for complex scheduling. Output: Scheduled appointment or documented declination.
Step 3: Pre-Visit Gap Preparation. For patients with scheduled visits, the system generates a pre-visit summary: all open gaps, codes to document, orders to place. The goal is to close every possible gap during the visit rather than requiring a return trip. Owner: AI system generates. Clinical staff reviews. Output: Pre-visit gap summary in the EHR.
Step 4: Visit-Day Execution. The physician addresses open gaps as clinically appropriate. Screenings administered. Orders placed. Results documented in structured fields. Owner: Physician for clinical decisions. Staff for documentation.
Step 5: Post-Visit Follow-Through. The system tracks whether orders were completed. Was the mammogram attended. Was the lab drawn. Was the referral completed. If incomplete, automated follow-up with patient and receiving provider. The loop does not close on an assumption. It closes on confirmation. Owner: AI system for tracking. Staff for escalation.
Step 6: Documentation, Data Capture, and Plan Reconciliation. All completed services documented with correct codes and transmitted to the plan for HEDIS capture. Supplemental data submitted where claims alone will not capture the service. Internal records reconciled against plan gap lists to confirm closure at the plan level, not just in the practice's EHR. Owner: AI system for documentation and reconciliation.
The physician stays in control of every clinical decision. The system owns the operational workflow that ensures those decisions result in completed services, proper documentation, and confirmed closure.
Limitations and Disparities in Care Gap Closure
Disparities in Gap Closure
The patients with the most open care gaps are the least likely to have them closed through standard outreach.
KFF data shows persistent racial and ethnic disparities in preventive service receipt among Medicare beneficiaries. Black beneficiaries received flu vaccinations at 64% compared to 73% for White beneficiaries. Hispanic beneficiaries at 68%. Research from Penn's Leonard Davis Institute using MCBS data (2015–2020) found that Medicare Advantage modestly narrowed Black-White disparities in preventive care compared to traditional Medicare, but significant gaps persisted in both programs.
NCQA now requires race and ethnicity stratification for a growing number of HEDIS measures. For MY 2026, reporting categories will align with updated OMB standards.
The CMS proposed rule for CY 2027 removed the financial incentive specifically designed to reward plans for closing gaps for their most vulnerable members. The disparities did not change with the policy. Practices committed to equitable care need to stratify their own gap data by race, ethnicity, language, and geography regardless of whether a financial incentive exists. Without it, an overall closure rate of 80% can mask 90% for White patients and 55% for Black patients.
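The masking effect is pure arithmetic, which makes it easy to check against a practice's own stratified data. The figures below reproduce the chapter's illustrative 80/90/55 example with assumed panel sizes; the function name is hypothetical.

```python
def closure_rates(strata):
    """Per-group closure rates plus the overall rate that can mask them.

    strata: group -> (gaps_closed, gaps_eligible). Figures illustrative.
    """
    per_group = {g: closed / eligible
                 for g, (closed, eligible) in strata.items()}
    total_closed = sum(c for c, _ in strata.values())
    total_eligible = sum(e for _, e in strata.values())
    return per_group, total_closed / total_eligible

# 90% closure for one group and 55% for another averages to 80% overall
rates, overall = closure_rates({"group_a": (900, 1000),
                                "group_b": (220, 400)})
```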
Structural Barriers
Transportation remains a barrier for screenings requiring in-person visits. Health literacy affects whether patients understand why a screening matters. Language barriers persist despite bilingual outreach. Trust barriers are real and historically grounded. A 2024 qualitative study among older minority Medicare beneficiaries found that participants raised concerns about historical discrimination affecting their willingness to engage with preventive care.
Technology Limitations
AI outreach cannot close gaps that require in-person clinical services. It can schedule the mammogram. It cannot perform it. AI cannot build the relationship that makes a patient willing to complete a colonoscopy preparation. It cannot fix a broken referral network. If the nearest endoscopy suite has a three-month wait, the gap will not close within the measurement year regardless of outreach efficiency. AI cannot ensure documentation accuracy after the service is delivered.
Workforce Constraints
The model assumes someone is available to review pre-visit gap summaries, act on them during the visit, and verify completion. In practices with lean staffing, adding gap closure tasks to a compressed visit feels impossible. The system reduces administrative burden. It does not eliminate the clinical time required.
Measurement Limitations
HEDIS measures capture whether a service was delivered and whether a target was achieved within the measurement year. They do not capture the quality of the interaction, the patient's understanding of the result, or the long-term trajectory. The transition to ECDS reporting improves data capture but creates new challenges. The shift from hybrid reporting to ECDS-only means that if data is not in the electronic system in a structured format, it does not count.
What Changes This Week
First, pull your current HEDIS gap report from every MA plan you contract with. If you do not have current gap data, request it. You cannot close what you cannot see.
Second, reconcile plan-identified gaps against your EHR. For 20 patients with open gaps, check whether the service was completed and the data simply was not captured. Count the false positives. That number tells you how much of your gap problem is a data problem versus a care delivery problem.
Third, identify your three highest-volume open gap categories. For most Medicare-heavy practices: breast cancer screening, colorectal cancer screening, and one diabetes measure. Pick one.
Fourth, assign one person to own gap closure for that single measure. Not “the team.” One person. Give them the list. Set a 30-day deadline. Track how many patients are reached, how many schedule, and how many complete.
Fifth, measure the gap closure rate at 30, 60, and 90 days. Compare it to your rate before the assigned owner existed. That delta is the value of ownership.
The Model Pear Health Builds
Most practices know their gaps. They can pull the report. What they cannot do is convert the report into closed gaps at scale, consistently, without burning out the staff who try.
Pear Health builds the infrastructure between the gap list and the closed gap. The system starts where most analytics platforms stop.
It ingests EMR data, claims history, and payer rosters. It reconciles plan-identified gaps against practice-held clinical data before outreach begins, so the team is not calling patients about mammograms they already had. Reconciliation alone eliminates a meaningful percentage of the outreach list, freeing capacity for gaps that actually need closing.
The AI agent calls patients by phone in English and Spanish. The conversations are specific to the gaps that patient has. It names the screening, explains why it matters, and schedules the appointment. It documents directly into the EHR.
After the visit, the system does the thing most practices cannot sustain manually: it tracks whether the order became a completed service. If the mammogram was scheduled but not attended, the system follows up. If the lab was ordered but not drawn, it follows up. The loop does not close on an assumption. It closes on confirmation.
The physician stays in control of clinical decisions. Pear owns the operational layer that makes those decisions stick across a panel of thousands.
If your practice is tracking care gaps on spreadsheets, assigning follow-up by memory, or discovering in Q4 that gaps you thought were closed are still open in the plan's system, we should talk. Not about reporting. About what changes in the closure rate next month.
Learn how Pear Health closes care gaps systematically for your practice.
This is Chapter 4 of "Preventive Care in the Modern Era," a series on how modern healthcare practices can build systematic, patient-centered prevention programs using the tools, workflows, and technology available today.
