Quality: The Translation Gap ... And Why FHIR Changes Everything - Part 2 of 3
The Strategic Alignment of Professional Organizations with Payer Value Frameworks
In Part 1, I described how our national quality measurement infrastructure, despite significant investment from organizations like NCQA, NQF, and AHRQ, has produced fragmented measures, claims-based approximations, and outcomes research that is at best mixed. The system is not merely underperforming. It is structurally misaligned.
This post digs into why that misalignment happens at the technical level, and why emerging infrastructure such as FHIR, TEFCA, and USCDI now makes a fundamentally different approach possible.
The Fundamental Disconnect
When a clinical guideline committee defines a quality measure, they think in clinical language: patient populations identified by lab values, imaging findings, symptom severity scales, and biomarker thresholds. This is also where the life sciences industry anchors its R&D. A heart failure quality measure might specify a population using ejection fraction below 40%. An oncology measure might require confirmed pathologic staging. These are the criteria that distinguish the patients for whom a recommended intervention is appropriate.
When a payer actuary goes to implement that measure, they face a different reality. They have claims. Claims carry diagnosis codes, procedure codes, and encounter dates. They do not carry ejection fractions, staging results, or biomarker levels. So the actuary approximates using ICD codes that roughly correspond to the clinical population, making assumptions where clinical context is absent, and building logic that cannot fully represent what the guideline intended.
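To make the gap concrete, here is a minimal sketch contrasting the two ways of identifying the same heart failure population. Everything in it — the ICD-10 code family, the field names, the patient records — is illustrative, not drawn from any real measure specification:

```python
# Illustrative sketch: two ways to identify a heart failure measure population.
# All codes, field names, and patient data are hypothetical examples.

# What the guideline intends: patients with ejection fraction below 40%.
def clinical_cohort(patients):
    return [p for p in patients
            if p.get("ejection_fraction") is not None
            and p["ejection_fraction"] < 40]

# What claims allow: patients carrying heart-failure-adjacent ICD-10 codes.
HF_ICD10_PREFIXES = ("I50",)  # e.g., I50.2x systolic (congestive) heart failure

def claims_cohort(patients):
    return [p for p in patients
            if any(dx.startswith(HF_ICD10_PREFIXES) for dx in p["diagnoses"])]

patients = [
    {"id": "A", "ejection_fraction": 32, "diagnoses": ["I50.22"]},  # reduced EF, coded as HF
    {"id": "B", "ejection_fraction": 55, "diagnoses": ["I50.32"]},  # preserved EF, same code family
    {"id": "C", "ejection_fraction": 35, "diagnoses": ["I10"]},     # reduced EF, never coded as HF
]

clinical_ids = {p["id"] for p in clinical_cohort(patients)}
claims_ids = {p["id"] for p in claims_cohort(patients)}
```

Even in this toy example the two cohorts diverge: the claims logic pulls in a preserved-EF patient who happens to carry a heart failure code, and misses a reduced-EF patient who was never coded as such.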
This is the translation gap...and it has profound consequences.
When population identification is inaccurate, measure results are unreliable. Practices get penalized or rewarded based on measurement artifacts rather than actual care quality. One study comparing measurement methods across cardiovascular quality measures found that for smoking cessation, medical record abstraction suggested a 94% probability of meeting the performance target, while EHR-generated reports from the same patient population suggested only 11%. Same patients. Same care. Wildly different conclusions depending on how measurement was operationalized.
That's not a data problem. That's a structural problem with how quality measures are specified and handed off.
Why Single-Site Data Isn't Enough
The translation gap is compounded by the fragmentation of care delivery itself. Research shows that 79% of patients receive care at more than one facility during a calendar year. A quality measure calculated using only the data available to a single practice or health system is working with an incomplete picture.
When health information exchange (HIE) data was incorporated into quality calculations in published studies, 15% of all measure calculations changed, affecting nearly one in five patients. These weren't edge cases; they were systematic errors introduced by incomplete data. Under the current model, neither the clinician nor the payer has visibility into the full picture.
FHIR and TEFCA: Infrastructure That Changes the Equation
The healthcare system now has building blocks that make a different approach not just theoretically possible but practically achievable:
- USCDI (United States Core Data for Interoperability) standardizes the clinical data elements that electronic health systems must make available — demographics, medications, vital signs, diagnoses, laboratory results. This is the vocabulary layer.
- FHIR (Fast Healthcare Interoperability Resources) provides a flexible, modular standard for organizing and exchanging that data through use-case-specific profiles. This is the grammar layer.
- TEFCA (Trusted Exchange Framework and Common Agreement) is the nationwide network enabling secure health information exchange across hundreds of hospitals, thousands of physician offices, and more than 100,000 clinicians. This is the infrastructure layer.
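Concretely, the three layers stack inside a single resource. Here is a minimal sketch of what an ejection fraction might look like as a FHIR Observation, shown as a Python dict — the LOINC code and values are illustrative, and the patient reference is hypothetical:

```python
# A minimal FHIR-style Observation carrying an ejection fraction.
# USCDI names the required data element classes (vocabulary), LOINC supplies
# the code, and FHIR defines the resource shape (grammar). The specific code,
# value, and patient reference here are illustrative.
ef_observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "10230-1",  # left ventricular ejection fraction (illustrative)
            "display": "Left ventricular Ejection fraction",
        }]
    },
    "subject": {"reference": "Patient/example"},  # hypothetical patient
    "valueQuantity": {"value": 32, "unit": "%"},
}

# The clinical fact a claims feed cannot carry is now a first-class field:
ef_value = ef_observation["valueQuantity"]["value"]
```

This is the key contrast with a claim: the ejection fraction is not inferred from a diagnosis code, it is simply present.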
CMS has designated FHIR as the standard for interoperability and now requires payers to establish patient access APIs using it. The Interoperability and Prior Authorization Final Rule creates bidirectional data flow enabling payers to access clinical data for quality measurement while returning claims data to clinicians. The regulatory momentum has arrived.
A Proof of Concept Already Exists
This is not a theoretical future. ASCO, ASTRO, and partners have already demonstrated feasibility through the CodeX Quality Measures for Cancer project, using FHIR standards and specialty-specific data profiles (mCODE) to author, test, and execute oncology quality measures.
Their findings: measures generated accurate results when executed against FHIR repositories, burden was reduced for all parties in the quality measure lifecycle, and the approach creates "a less burdensome path" for everyone involved.
The technology works. The infrastructure is coming. The regulatory mandate is in place. What's missing is specialty-level clinical ownership of quality measure specifications built for this new environment.
A Potential Paradigm Shift
The current paradigm: professional organizations develop quality metrics in narrative form. Payers interpret those narratives using claims data. Interpretation gaps are inevitable, measurement accuracy suffers, and clinicians bear the burden of a system that doesn't reflect their actual practice.
The emerging paradigm that could take shape: professional organizations develop FHIR-based quality measure specifications as executable code, which payers implement directly against clinical data accessed through TEFCA. The measure travels with its clinical logic intact. Population identification uses the same clinical criteria the guideline intended. Calculation is automated and auditable. Interpretation gaps are eliminated by design.
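A minimal sketch of what "the measure travels with its clinical logic intact" could mean in practice. The resource shapes are simplified stand-ins for FHIR resources, and the numerator criterion (an active beta-blocker order) is a hypothetical example, not a real measure:

```python
# Sketch of an executable measure whose population logic matches the guideline.
# Resource shapes are simplified FHIR-like dicts; all codes are illustrative.

EF_LOINC = "10230-1"  # LOINC: left ventricular ejection fraction (illustrative)

def ef_below_40(bundle):
    """Denominator criterion: the same clinical threshold the guideline names."""
    return any(res["resourceType"] == "Observation"
               and res["code"] == EF_LOINC
               and res["value"] < 40
               for res in bundle)

def on_beta_blocker(bundle):
    """Numerator criterion (hypothetical): an active beta-blocker order."""
    return any(res["resourceType"] == "MedicationRequest"
               and res.get("class") == "beta-blocker"
               and res.get("status") == "active"
               for res in bundle)

def compute_measure(patient_bundles):
    """Automated, auditable calculation: counts plus a per-patient trace."""
    denominator, numerator, trace = 0, 0, []
    for pid, bundle in patient_bundles.items():
        in_denom = ef_below_40(bundle)
        in_numer = in_denom and on_beta_blocker(bundle)
        denominator += in_denom
        numerator += in_numer
        trace.append((pid, in_denom, in_numer))
    return {"denominator": denominator, "numerator": numerator, "trace": trace}
```

Because the same function runs wherever the clinical data lives, there is nothing left for a payer to re-interpret, and the per-patient trace makes every inclusion decision auditable.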
This isn't an incremental improvement. It's a structural shift in who controls the quality specification and how accurately care actually gets measured.
What This Asks of Professional Organizations
The opportunity here is significant, but it requires professional organizations to expand their conception of what quality measure development means. Writing a narrative measure specification is not enough. The new standard is developing FHIR implementation guides, authoring Clinical Quality Language (CQL) specifications, mapping specialty-specific data elements to USCDI, and partnering with payers on implementation through TEFCA-enabled exchange.
This is a different kind of work than most specialty societies have done. It requires technical infrastructure, informatics capability, and a genuine commitment to clinical data stewardship...far more than just guideline publication.
The organizations that build this capability first will define quality measurement for their specialties for the next generation and defend the posture of the professional specialty they serve. Those that don't will find their guideline-based measures continuing to be approximated (and misrepresented) by payers working with the only tools available to them...and abrasion will persist in some form or fashion.
----------------------------------------------------------------
In Part 3, I'll outline what a transformed professional organization strategy could actually look like: from integrating cost-effectiveness into guidelines, to building FHIR-based measure specifications, to driving transformational rather than incremental change.
