Invitational Meeting of Ambulatory Care Quality Alliance
September 7-8, 2005
Contents
Introduction
Report of the Reporting Workgroup
Report of the Performance Measurement Workgroup
Report of the Data Sharing and Aggregation Workgroup
Conclusion
Introduction
The fourth meeting on performance measurement, data aggregation, and reporting was convened by the Agency for Healthcare Research and Quality (AHRQ), America’s Health Insurance Plans (AHIP), the American Academy of Family Physicians (AAFP), and the American College of Physicians (ACP). The four groups joined together in September 2004 to lead a collaborative effort for determining how to most effectively and efficiently improve performance measurement, data aggregation, and reporting in the ambulatory care setting.
The effort, termed the Ambulatory Care Quality Alliance (AQA), has as its aim to improve health care quality and patient safety through a collaborative process in which key stakeholders agree on a strategy for measuring performance at the physician level; collecting and aggregating data in the least burdensome way; and reporting meaningful information to consumers, physicians, and other stakeholders to inform choices and improve outcomes. AQA’s mission and goals focus on key areas that can help identify quality gaps, control skyrocketing cost trends, reduce confusion and burdens in the marketplace, and otherwise address the challenges of the current health care system.
The fourth meeting of the Ambulatory Care Quality Alliance convened to review the activities of its three workgroups (on reporting, performance measurement, and data sharing and aggregation). Participants reviewed reporting principles; discussed a proposed definition of efficiency and examined next steps on performance measurement; and reviewed proposals for a health data stewardship board. They also discussed possible pilot projects on data sharing and aggregation.
The timing of this stakeholder process coincides with a growing interest in rewarding high-quality providers (through “pay for performance” or “p4p”) and clinicians’ burgeoning interest in adopting health information technology to enhance the quality, safety, and efficiency of care delivery. AHRQ Director Carolyn Clancy chaired the meeting, and Centers for Medicare & Medicaid Services (CMS) Administrator Mark McClellan delivered keynote remarks.
Opening Remarks
—Carolyn Clancy, AHRQ
Carolyn Clancy welcomed participants to this fourth meeting of the Ambulatory Care Quality Alliance and said she was pleased by the record attendance (almost 140 people RSVP’d). She noted that the AQA has made considerable progress in the past year. The question is no longer “What are we going to do,” she said, but “How are we going to close the gap” between the best possible quality care and the care people typically receive. Our objective now is to move ideas into action, she said.
Clancy announced that the steering committee has been expanded to include new members and supporting organizations. The new governing structure is similar to the governing framework for the Hospital Quality Alliance (HQA).
AQA Steering Group Members:
- AARP
- Agency for Healthcare Research and Quality (AHRQ)
- American Academy of Family Physicians (AAFP)
- American College of Physicians (ACP)
- American College of Surgeons (ACS)
- American Medical Association (AMA)
- American Osteopathic Association (AOA)
- America’s Health Insurance Plans (AHIP)
- National Partnership for Women and Families
- Pacific Business Group on Health (PBGH)
- The Society of Thoracic Surgeons (STS)
AQA Supporting Organizations:
- AFL-CIO
- Association of American Medical Colleges (AAMC)
- American Academy of Pediatrics (AAP)
- Centers for Medicare and Medicaid Services (CMS)
- Medical Group Management Association (MGMA)
- National Committee for Quality Assurance (NCQA)
- National Quality Forum (NQF)
Clancy also announced that the AQA has launched a Web site (www.ambulatoryqualityalliance.org).
Clancy noted that, in the reporting area, AQA has a very simple goal: to reduce the burden on physicians by providing uniformity of reporting. The key difference between this initiative and previous ones, she pointed out, is that two major physician groups are co-conveners.
Regarding performance measurement, Clancy noted that the AQA has approved a starter set of measures and is now turning its attention to efficiency measures. She stressed that there were significant challenges in areas where measures have yet to be developed. With the interest in moving forward on pay for performance, she said, the lack of valid measures is a very big problem. Clancy also noted the need to develop an infrastructure to collect data and report in a way that is fair and equitable.
Representatives from the American College of Physicians, the American Academy of Family Physicians, and America’s Health Insurance Plans also offered brief opening remarks. The ACP representative said that while the challenges were formidable, his organization was committed to getting the work done. He noted that thousands and thousands of practices have yet to see the wave of quality improvement coming at them. One way to help them succeed, he said, is to create a set of performance measures. The AAFP representative added that physicians are at the table and are serious about the process. It’s not just AAFP and ACP, he added, but many physician groups who are seeking the most efficient way to provide quality care with the least burden to providers and patients alike.
Keynote Remarks
—Mark McClellan, Centers for Medicare & Medicaid Services
Mark McClellan opened his remarks by saying that it was very impressive to see how well participants have been collaborating, and he praised the increasing participation of physician groups and other stakeholders. In particular, he noted the inclusion of more specialty groups and said the effort works best when physicians—the experts themselves—are involved. Thank you for helping to make that happen, he said.
Making investments in quality has really started to pay off, said McClellan. He said he was tremendously impressed by the AQA’s work on performance measures to date, and he expressed hope that the AQA will continue to find practical and short-term solutions.
Our goal is not just to get costs down, stressed McClellan, but to best support health care providers in doing the most for patients with the resources available. He noted that the last time he spoke before the AQA, he discussed the need for relevant measures in the full range of medical specialty and ambulatory care settings. We also need resource use performance measures, said McClellan, to achieve an understanding of efficiency and resource use.
McClellan noted that CMS has planned several pilot projects to capture and aggregate data needed for quality and pay for performance programs, and that each project counts on leadership from physician organizations. We need to think about how to provide support for patients with complications, he said. We also want to get more resources to those efforts that avoid complications and help to keep down overall costs—and to help physicians make investments that will lead to achieving quality improvements. These pilot programs will help give us some insight into how to deliver more effective care and avoid unnecessary costs, concluded McClellan.
Questions and Answers
The question-and-answer period began with a question about whether there were any Federal programs or pilots on the horizon to help physicians get up to speed on health information technology. In response, McClellan said that CMS was launching several demonstration programs within the next month. One is a program to support quality improvement efforts in small physician offices. McClellan explained that CMS would pay for quality measurement reporting and for better results. We will also provide technical assistance with health information technology, he said. In conjunction with this, said McClellan, the Quality Improvement Organizations are leading an effort to identify electronic health systems that can be adopted by small offices and used to improve quality.
McClellan also noted that a second CMS demonstration program would focus on Medicare improvement. He noted that there are ongoing quality measurement and outcome efforts at the regional level, including physician consortiums in New York and Santa Barbara designed to accomplish what individuals cannot do alone. He said the aim of the new CMS project would be to provide higher payments to providers who are participating in these regional efforts if they achieve better outcomes and lower costs.
Beyond these demonstration programs, added McClellan, there is a lot of interest in Congress in finding ways to pay for better results and more efficient care—which, he noted, is exactly what health information technology will allow us to do.
McClellan said that he viewed the AQA’s work as directly related to the broader goal of supporting health information technology and other strategies to get to better quality and lower costs. He added that he needed everyone’s help to find clinically sensible and valid ways to shift the focus to better quality and to avoiding unnecessary costs.
In addition, McClellan pointed out that the pilots offered a good opportunity to air and address potential problems associated with aggregating data. He stressed the view that data aggregation offers opportunities not only for quality improvement and measurement, but also to answer questions about the best medical treatment options. What treatments are really working? he asked. What’s the best use of prescription drugs or different technologies? This can be a very valuable approach to augmenting physician practices, he said.
A second participant asked what CMS needed from physicians and other stakeholders.
We’re spending a lot of money, replied McClellan, but our challenge is to make sure we’re spending it on what you need. He noted that the American Medical Association (AMA) and the specialty groups have endorsed moving to better payment and reporting systems. While this is a start, McClellan said, we need to move up the timetable. This is why the work of the AQA is so important, he said.
One participant noted that performance measures are foundational. What about the idea of setting targets? he asked.
McClellan noted that everyone is seeking a dashboard to see how they are doing on performance. Unfortunately, he said, without good measures of what we’re trying to achieve (and good data to guide efforts), it is hard to establish targets.
We need to see all these efforts meshed up, said McClellan. The AQA gives us a clear path, he added, because the measures being developed are those that can be measured validly and are proven steps to get to improvements in performance and quality.
Report of the Reporting Workgroup
—Randy Johnson, Motorola
—Nancy Nielsen, American Medical Association
Randy Johnson reminded participants that the workgroup had been divided into two separate subgroups:
- One to develop principles for provider reporting (chaired by Nancy Nielsen).
- A second to develop principles for consumer and purchaser reporting (chaired by Randy Johnson).
Johnson then reminded participants that purchasers and consumers have been working together for about 5 years to develop a report card to identify the performance level of hospitals, physicians, and health plans—and said that the Consumer Purchaser Disclosure Project has endorsed AQA efforts.
Turning to the principles for reporting to consumers and purchasers, Johnson said that consumer and purchaser groups have reviewed the draft principles, and they are unchanged from those presented at the last AQA meeting. He touched on the specific principles (see report of the third AQA meeting) and stressed that the subgroup believed that the report should reflect the total spectrum of care (e.g., lab tests, x-rays, and prescriptions) to the extent that the physician has a role in that care.
Nielsen said that the principles for reporting to hospitals and physicians also remained unchanged from those agreed to at the last AQA meeting. She noted that because end users are different (consumers/purchasers versus hospitals/physicians), it was appropriate to keep the two reporting principles separate. These are intended to be guideposts, she stressed, and we need to recognize that different groups will implement these principles in different ways.
Like Johnson, Nielsen touched on the specific principles (see report of the third AQA meeting) and said the subgroup recognized that the richest data are in the medical record. As a result, she said, data collection is very important. While we all know that claims and other administrative data are flawed, said Nielsen, we recognize that moving to medical records involves a burden. We need to collect these data in a way that minimizes that burden, she said.
Discussion
The discussion opened with a question about how the reports would rank providers. In response, Johnson said that the workgroup on reporting had not specifically discussed how the report would be generated and presented to consumers. We want a consumer-style report, he said, which might require stars or something else. He added that different reports (with different levels of detail) might be needed for different segments of the consumer population.
Nielsen added that her subgroup was envisioning reports to physicians that would be more along the model of a series of dots in which the individual’s specific place in the ranking was identified. She noted that all providers want to see where they stand relative to everyone else who is performing the same type of services. Because we all think we deserve a lot of stars, she said, it is very disconcerting when we find out that there is a systems issue. Nielsen used the example of a patient who doesn’t get a mammogram because she doesn’t make an appointment for that procedure.
Would the physician receive two reports? asked the same participant.
Not necessarily, replied Nielsen, because it is not clear how often the public reports would be provided. Timeliness is critical for physicians, she said, because I need to know if my patient didn’t get her mammogram.
One participant noted that she wanted consumers to be able to interpret the information that is provided to them. A second person said that it is important to elaborate on the meaning of “literacy” (both health literacy and English-language literacy).
Turning to the data collection principles for reporting to physicians and hospitals, one participant suggested that a third bullet be added to address auditing and evaluation of data submission. Another suggested that skilled nursing facilities, clinical labs, dialysis facilities, and others should probably be included in the comprehensive reporting principles for consumers and providers.
Has there been any discussion of aggregation of measures into an overall index? asked another participant. He noted that while there are lots of specific performance measures, there is a desire to see what these all add up to. Can they be summed to create an index for overall performance? he asked.
This is an important issue, said Nielsen, especially when trying to achieve statistical significance. However, we have not addressed it, she said. Clancy added that while it is possible to sum up almost anything, the question is how to weigh it empirically. It may be less meaningful if we don’t discuss what to fix, she said.
One participant noted that the reporting principles do not address risk adjustment. There is a concern, he said, that a physician with a noncompliant health care population could end up not ranking well because of factors outside his or her control. This needs to be addressed, he said.
Another participant suggested that the first iteration of the reporting principles was architectural. It may be that once we hear from the aggregation and performance measurement groups, she said, it will be time to figure out how to move from architectural principles to what people need to see, how people will be properly informed, and how to ensure that physicians are treated fairly.
What about setting goals for reporting? asked another participant. For example, most hospitals accept a 1-percent infection rate, but others believe it should be zero. We talked about this abstractly, replied Nielsen, as we really want to improve quality for everyone. Our first step is to identify best practices and start to learn from each other.
A participant expressed concern that the tendency to bifurcate takes away from the core question of how to design information that supports improved patient care and fosters informed patient decisionmaking. We have the capacity to do only so much, he continued, and a well-designed set of information tools will stimulate improvement.
Nielsen used the analogy of automobile consumer reports. If you have a report that says a car is crummy, then people don’t buy it, she said. Translating that analogy to physicians, she reiterated the need for meaningful data. The consumer is at the center of everything we’re trying to do, she stressed. Health plans and others are already evaluating physician performance, said Nielsen. These principles are meant to inform and guide the work of these groups. This is not meant to delay consumer reporting, she stressed.
Finally, one participant spoke in favor of bifurcation of the reporting system. She said she would not be likely to change her performance unless she received specific performance data that would allow her to look at her system and how it is working. I need the data in a different format (than needed by consumers) in order to improve my quality, she said.
Report of the Performance Measurement Workgroup
—Kevin Weiss, American College of Physicians
Kevin Weiss reported on the activities of the workgroup on performance measurement and said that he hoped the discussion would help to guide the group’s work moving forward.
At the last AQA meeting, he said, the workgroup was given several tasks—including expanding the AQA performance measures:
- To include surgical specialty and medical subspecialty measures.
- To explore expansion of the AQA measure set to include efficiency measurements.
- To determine objectives for efficiency measurement.
Since then, the workgroup has convened an all-day non-primary care measurement workshop; convened an all-day efficiency measurement workshop with the National Committee for Quality Assurance (NCQA) and the Commonwealth Fund; and considered a broad strategy for expanding the starter set.
Turning to the non-primary care measurement workshop, Weiss said the meeting brought together subspecialty and surgical specialty societies in early August with the aim of assessing these groups’ interest in the AQA process.
Operationally, said Weiss, the workshop participants reached consensus on moving forward with:
- Proposing revisions to the AQA’s performance measurement document on parameters for selecting condition-specific performance measures.
- Proposing new subgroups on acute and chronic illnesses.
- Proposing that the AQA leadership consider further expansion of the steering committee.
- Specialty and subspecialty groups offering existing performance measures for consideration.
Weiss added that the workgroup had used AMA Physicians’ Consortium and CMS documents as a starting point for reviewing existing specialty care measures. He said those that exist are very uneven: some are very broad while others are quite specific. This is something we will have to discuss over time, he said.
AQA Parameters for Selecting Ambulatory Care Performance Measures
Weiss reviewed the revised set of 15 proposed parameters, which, he noted, have been modified somewhat since they were first introduced at the January 2005 AQA meeting. The parameters are as follows:
- Measures should be reliable, valid, and based on sound scientific evidence.
- Measures should focus on areas that have the greatest impact in making care safe, effective, patient centered, timely, efficient, or equitable (the six aims for improvement of the Institute of Medicine [IOM]), and primarily, but not exclusively, where the most improvement can be made (“80/20 rule”).
- Measures should be selected based on strong consensus among stakeholders and predictive value of overall quality performance.
- Measures should reflect processes of care that physicians can influence or impact.
- Measures that have been endorsed by National Quality Forum (NQF) should be used when available.
- Evidence-based quality measures should be evaluated with attention to resource consumption whenever possible.
- Measures of resource consumption should be evaluated with evidence-based quality measures whenever possible.
- Outcome measures should be appropriately risk adjusted and stratified.
- Measures should, as much as possible, be constructed so as to result in minimal or no unintended harmful consequences (e.g., limit adverse impact on access to care).
- When relevant, physician-level measures should as much as possible complement measures in hospital and other health care settings.
- The measurement set should include, but not be limited to, measures that are aligned with the IOM’s priority areas.
- The measurement set should balance completeness and measurement burden and strive to include the minimum number of needed measures.
- The set of measures should reflect a spectrum of care rather than a single dimension of care (e.g., prevention and health promotion, chronic illness, acute care and procedures [diagnostic and surgical]).
- Implementation of measures should be the least burdensome possible (i.e., electronic data systems should be considered whenever possible).
- Performance measures should be developed, selected, and implemented through a transparent process.
Discussion
The discussion on the parameters for selecting ambulatory care performance measures opened with a question about next steps. While the workgroup is focusing on ambulatory care measures, asked the participant, is it perhaps time to start to focus more broadly on those things (including hospital care) that can be measured more broadly?
Weiss noted that the workgroup had thus far focused on ambulatory care because there was so much work to be done in that area alone. Clancy added that it was important not to do too much too fast. The ambulatory care arena alone is very broad. At the same time, she stressed that there ultimately needed to be synergies between the AQA and the Hospital Quality Alliance since it is the same patient that gets care in both settings. I think we should start with specialty measures in ambulatory care, she said, and move on later to link the other components.
Roughly 40 percent of surgery is done in some ambulatory setting, said another participant. At some point, we need to lap over and deal with inpatient as well as outpatient care. When that happens, does it stay in the AQA? In response, Clancy reiterated that it was important not to take on too much at once.
One participant said that the parameter that states that “measures should reflect processes of care that physicians can influence or impact” is very important. He added, however, that he didn’t believe that there was a good process in place yet to work that out.
Did the workgroup focus on the fact that many patients have multiple physicians and multiple diseases? asked another participant. No, replied Weiss. This is a key issue and we need to figure out the best place within AQA to address this. Clancy added that this might be another important empirical issue worth looking into in the context of pilot projects.
One participant expressed concern that the parameters are too physician-centric and don’t address other settings (including nursing care and physical training facilities). Another expressed concern about attribution and the need to understand which pieces of the system have the most influence on performance measurement. He also said it was important to recognize where physicians have influence as opposed to total control.
Finally, one participant suggested that it would be helpful for a subgroup to be tasked to sit down and talk with the HQA about how to address some of these issues.
Efficiency Measurement
Turning to the work done at the efficiency measurement workshop, Weiss discussed a subset of six AQA principles for efficiency measures. A key element of these principles is a definition related to performance measures of efficiency that addresses cost efficiency, economic efficiency, quality of care, and value:
- “Cost efficiency” is a measure of total health care spending (including total resource use and unit prices) by payer and consumer combined, for a single health care service or group of health care services, associated with a specified patient population, time period, and unit(s) of clinical accountability.
- “Quality of care” is a measure of performance on the five other IOM-specified health care aims (safety, timeliness, effectiveness, equity, and patient-centeredness).
- “Economic efficiency” is a measure of the cost efficiency associated with a specified level of quality or the quality associated with a specified level of cost efficiency.
- “Value” is a measure of a specified stakeholder’s (such as an individual patient’s, consumer organization’s, payer’s, provider’s, government’s, or society’s) utility-weighted assessment of a particular combination of quality and cost-efficiency performance scores.
A second participant at the workshop on efficiency measurement offered an example of how these definitions were created. The example addresses four physicians with differences in their average cost per diabetic episode and their percentage rates of control (outcome):
- Physician I charges $1,500 and has a 60-percent rate of control.
- Physician II charges $1,200 and has an 80-percent rate of control (and is thus more cost efficient and provides higher quality care).
- Physician III charges $1,500 and has an 80-percent rate of control. (He is thus more economically efficient than Physician I because he achieves better outcomes at the same level of cost. Physician II is, however, more cost efficient.)
- Physician IV charges $1,600 (and is the least cost efficient of the four) and has a 90-percent rate of control (thus the highest quality care).
Regarding Physician IV, he continued, some consumers might value (and be willing to pay more for) higher outcomes, whereas others might be willing to accept an 80-percent outcome rate. Payers, meanwhile, might value lower costs (and be willing to accept a lower outcome rate).
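The distinction between cost efficiency and economic efficiency in the four-physician example can be sketched in code. This is an illustrative sketch only, not AQA methodology: the function names and the rule that physicians with neither equal cost nor equal quality are "not directly comparable" (absent a stakeholder value weighting) are assumptions made for clarity.

```python
# Four physicians from the diabetes example: average cost per episode
# and rate of control (the outcome used as the quality measure).
physicians = {
    "I":   {"cost": 1500, "control_rate": 0.60},
    "II":  {"cost": 1200, "control_rate": 0.80},
    "III": {"cost": 1500, "control_rate": 0.80},
    "IV":  {"cost": 1600, "control_rate": 0.90},
}

def most_cost_efficient(docs):
    # "Cost efficiency" considers total spending alone: lowest cost wins.
    return min(docs, key=lambda name: docs[name]["cost"])

def more_economically_efficient(docs, a, b):
    # "Economic efficiency" holds one dimension fixed: better quality at
    # the same cost, or lower cost at the same quality.
    da, db = docs[a], docs[b]
    if da["cost"] == db["cost"]:
        return a if da["control_rate"] > db["control_rate"] else b
    if da["control_rate"] == db["control_rate"]:
        return a if da["cost"] < db["cost"] else b
    # Differing on both dimensions requires a value weighting to compare.
    return None

print(most_cost_efficient(physicians))                       # II
print(more_economically_efficient(physicians, "I", "III"))   # III
```

As the example notes, Physician II is the most cost efficient overall, while Physician III is more economically efficient than Physician I (better outcomes at the same cost); comparing Physician IV against the others requires the "value" concept, since stakeholders weight quality and cost differently.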
Discussion
There was considerable discussion about the components of the definition of performance measures of efficiency. One participant asked about differences between economic efficiency and value; a second questioned whether efficiency was being addressed broadly enough (to include allocative and administrative efficiency along with clinical/technical efficiency). A third person expressed concern with the term “cost efficiency.” We’re talking about costs, she said, and it is incorrect to assume that 100 percent of these costs are attributable to physicians.
Weiss stressed that all the definitions need to be vetted within the AQA and also more broadly. We recognize that this definition is not complete, he said.
Administrative efficiency is partly covered in the definition if you believe that this is within the care of the provider, continued Weiss. Allocative efficiency (are the right people getting the right care at the right time?) isn’t addressed, he added, but is very important.
Another participant also stressed the need to vet the definitions and commented that attaching the concept of efficiency to an aggressive spending concept is wrong. We don’t know whether it’s efficient or not until we test, he said.
What I don’t see on the list is whether there is any intention to look at resource use alone, said one participant. She added that it was important to separate out resource use.
Returning to costs, one participant indicated that she felt cost efficiency was a better term than cost. Without that, she said, you lose the notion that this is in comparison to other providers. Perhaps we should consider “relative cost,” she said. Another person said that he was opposed to de-linking cost from resource use. A third commented that costs and value should not be viewed as relative terms, saying that these terms are absolutes. A fourth participant noted that while the definition addresses costs, it does not address financial benefits (i.e., the savings in resources down the road as a result of higher costs today). On the latter point, Weiss noted that those at the workshop had started to look at that issue as part of a larger discussion on longitudinal costs.
What we’re trying to get at is the fully loaded cost (all health care associated with an episode of care), said one participant. Another commented that regardless of the term used, the taxonomy will have meaning to both physicians and consumers at large. We must be careful and deliberate in refining these definitions and the words we use to define them, he said. This is where haste may be the enemy, he warned.
If our ultimate goal is to translate standards into actionable goals for patients, said another participant, then we need to get more clarity around these definitions. We need a benchmark, said yet another person, before we can have relative costs, values, and efficiency.
Weiss noted that there are a number of measures in use in the cost arena by a number of health plans, and asked the health plans to provide copies to the workgroup to inform its work.
Finally, Weiss noted that the workgroup would review everyone’s comments and redraft the principles with the aim of having a proposal that can be voted on at the next AQA meeting.
Objectives for Efficiency Measurement
A member of the workgroup walked participants through the objectives for efficiency measurement. They are to identify and apply a system of tools for:
- Maximizing value and efficiency, driving toward optimal states of health for individual patients, subgroups with like conditions (or health states), populations in a region, and the nation as a whole.
- Improving value and efficiency in the short and long term for consumers and purchasers.
- Reducing variation, waste, misuse, and overuse to achieve optimal resource allocation.
- Making policy decisions around resource allocations within the health care sector and between other sectors of the economy.
- Encouraging innovation in health care delivery and changing physician behavior to improve quality and efficiency.
The workgroup member noted that the second bullet attempts to get at the idea that costs today might result in benefits in the long run. For example, immunization of children may be all costs in the short term but all benefits in the long term, and it is important to think about this when discussing efficiency. The third bullet attempts to get at resource use and optimal allocation of resources; the fourth, to the kind of system that may eventually be useful for rational policymaking. Finally, he stressed that these objectives are a starting point for the discussion.
Discussion
The discussion on the objectives for efficiency measurement started with a participant suggesting that an objective be added on identifying transferable processes or practices to measure efficiency. Another stressed the need to make the first bullet (on maximizing value and efficiency) as broad as possible.
One participant said it was clear that there wasn’t the same evidence base for efficiency that exists around quality. She added that it would be helpful to include somewhere in the document that the AQA encourages research to establish this evidence.
One participant said that it was important to think about quality as relevant to all of these elements. Weiss agreed, and noted that this discussion relates back to the principles discussed earlier. Another participant said that the second bullet (on short- vs. long-term improvements in value and efficiency) is not congruent since the savings may be reaped in nonmedical settings (such as in prisons or schools).
Finally, a participant noted that the objectives lacked a key factor: enabling decisionmaking by consumers and providers. This was certainly intended, said a member of the workgroup, but we can make this explicit.
Next Steps: Models for Accelerating Measurement Implementation
Turning to next steps, Weiss said that he was hoping to move groups of measures forward quickly. He said it would be useful for the workgroup to look at projects that are under way and to bring health plans together to talk about what they are undertaking.
Weiss said that there are three issues to consider moving forward:
- Transparency measures. (Weiss noted that there were a number of proprietary measures on the market that have good, reliable methods. We’d like to have an opportunity to look at these, he said.)
- A standard resource use measurement set. (Weiss asked whether it was possible to create this measure. He said he believed that the AQA would bring forth a transparency measure—and that the proprietary sets would then be value-added.)
- Field testing. (Weiss noted that the workgroup had held very preliminary discussions on developing a demonstration project on efficiency measures.)
Discussion
Opening the discussion, a representative from the Centers for Medicare & Medicaid Services said that CMS is fully committed to a pilot project on efficiency measurement. We are proceeding posthaste on pay-for-performance mechanisms—and the efficiency piece is the most difficult. We need expert help from the external world to achieve this, he said.
Speaking on behalf of employers, one participant stressed that there is huge demand and interest in achieving high performance. At the same time, he said, we have five health plans around the table with different definitions of what this means. We need to push high performance now—and we want a standard measure for efficiency, he said.
One participant said that it would be helpful to have information, even if only anecdotal, on what the health plans are trying to do. He suggested that perhaps some of that information could be posted on the AQA Web site. Another noted that the workgroup had already started on Version 2.0 of performance measures. Even if the deliverable isn’t available by the next meeting, she said, she hoped that an early version would be available for review. A third participant stressed the need to coordinate AQA measurement development activities with the work of other groups. The NCQA is working with the AMA Physicians’ Consortium, he said, and we don’t want physician groups to run off and make up their own measures.
Weiss said that he hoped to get full participation from purchasers and health plans. If both these groups agree to a common set of performance measures, he said, then I can give you a solid progress report at the next AQA meeting. He then asked the health plans to provide copies of their performance measurement standards to the AQA Workgroup on Performance Measurement. In addition, he asked purchaser groups to send in their approaches, as applicable. Weiss also recommended holding a workgroup meeting with technical experts on measurement.
Finally, Weiss addressed the need to look at the expansion of the starter set. The workgroup has proposed that the starter set be expanded to include:
- Efficiency measures.
- Non-primary care measures.
- Measures that address various periods or aspects of the life cycle.
- Cross-cutting measures (e.g., infrastructure support measures).
- Patient experience measures (including patient satisfaction and access to care).
- Composite measures.
- Outcome measures.
- Measures that have been endorsed by NQF but not yet adopted by AQA.
He indicated that the workgroup would start by developing a set of efficiency measures.
Report of the Data Sharing and Aggregation Workgroup
—David Kibbe, American Academy of Family Physicians
—George Isham, Health Partners
Before launching into his presentation, David Kibbe noted that George Isham is now cochairing the workgroup.
The workgroup has three interrelated objectives, said Kibbe:
- To reach consensus on principles.
- To reach consensus on the concept of a National Health Data Stewardship Board.
- To reach consensus on proposed pilot projects.
The first two objectives, said Kibbe, received tentative approval at the last AQA meeting; the third is a new item.
Kibbe noted that the workgroup has undertaken a number of activities since the last AQA meeting, including a face-to-face meeting in Chicago in early June, multiple phone conferences, and the formation of two subgroups to address the National Health Data Stewardship Board and the proposals for pilot projects.
Data Sharing and Aggregation Principles
Kibbe walked participants through the data sharing and aggregation principles, which were revised to incorporate comments from consumer groups. An effective data sharing and aggregation model requires the following:
- Transparency with respect to framework, process, and rules.
- A process that allows provider performance to be compared using standardized metrics and data collection protocols against national or regional benchmarks and otherwise assists in the analysis of assessments of health care quality and efficiency.
- A process that facilitates making the data useful for physicians to improve the quality and efficiency of care they provide to their patients, and to use for other appropriate purposes (e.g., maintenance of certification).
- A process that results in public reporting to consumers of user-friendly and actionable information about physician quality and efficiency.
- The collection of both public and private data so that physician performance can be assessed as comprehensively as possible.
- Standardized and uniform rules associated with measurement and data collection.
- Compliance with privacy, confidentiality, and other applicable rules, while ensuring that providers, plans, other data contributors, and consumers have necessary and appropriate access to useful information.
- Systems or processes to share, collect, aggregate, and report quality, efficiency, and patient experience data; systems should be designed to minimize cost and burden.
Regarding the third bullet, Kibbe noted that this was expanded to stress that data should be immediately useful to physicians. Regarding the fifth bullet, Kibbe said that it was designed to address the fact that physicians at the practice level see only slivers of data and not the total picture.
National Health Data Stewardship Board
Next, Kibbe discussed the workgroup’s working draft on establishment of a National Health Data Stewardship Board. The Board’s mission, he said, would be to address data aggregation and sharing issues in all health care settings. The problem today, he said, is that there are many disparate activities. The document addresses this issue:
Currently, many disparate organizations are trying to solve this problem. However, the proliferation of multiple regional efforts to aggregate and report data on quality and efficiency, while well-intentioned, is creating significant burdens for physicians as they are faced with multiple, uncoordinated demands for data on performance with little input into the process; doing little to help the consumer; and wasting limited resources that can be used more effectively if combined in a uniform effort. These individual initiatives also do not comprehensively assess provider performance since the data collected are often insufficient to reliably measure quality and efficiency performance.
Given the significant and urgent need to address data aggregation issues in the ambulatory care setting, the Board’s initial objective is to set policies, rules, and standards for the sharing and aggregation of public and private sector physician-level data on quality and efficiency. Once this goal is reached, the workgroup anticipates that there will be a need to link the Board’s activity with similar activities in other health care settings—such as hospital, home-care, long-term care, and hospice settings—to prevent the development of silos within the health care system. The workgroup and, once established, the Board should then reach out to the Hospital Quality Alliance and other key stakeholders to discuss ways to collaborate, the future role of the Board, and other related issues.
Kibbe stressed that it was not the intent of the workgroup that the Board itself collect data. He also pointed to the proposed precepts for the Board and the proposed scope of its work. The scope of work includes setting policies, rules, and standards for data aggregation, data collection, attribution, methodologies, data analysis, data validation (audits), uses of data, data access, and data sharing and reporting.
Finally, Kibbe discussed the proposed steps for implementing the Board. The workgroup proposes that the AQA recommend Board members by the end of 2005. The Board members, said Kibbe, would represent a broad range of stakeholder communities (including physicians, purchasers, consumers, health insurance plans, government, and quality experts).
Kibbe noted that the workgroup proposes that the Board, within a year, work with AQA and other key organizations to finalize its governing structure; obtain Federal authority; arrange short- and long-term funding; and develop a strategy to integrate aggregation activities in multiple health care settings. The initial strategy will be to address issues in ambulatory care settings and then to link ambulatory care activities with similar efforts in other health care settings.
Pilot Projects
George Isham discussed the pilot projects. The key elements of the proposed pilot projects are:
- To assess clinical quality, efficiency, and patient experience. (Isham noted that the inclusion of efficiency measures had generated considerable discussion in the workgroup; he also recognized the lack of a generally endorsed process for measuring efficiency.)
- To collect and aggregate Medicare claims data and private-sector data from multiple sources. (Isham said that collecting public and private data is necessary in order to get a full picture of what is going on.)
- To explore both existing and new methods for collecting, submitting, and sharing data from physicians’ medical practices. (Isham noted that current methods have limits on both the kind of data and the quality of data that can be collected. It is very important for the pilots to address this, he added.)
- To leverage the experience of existing aggregation efforts.
- To disseminate measurement information.
Isham also discussed some of the key questions that the workgroup felt that the pilots should address:
- What are the advantages of and most effective methods for linking measures of quality, efficiency, and patient experience?
- What is the most effective method for collecting and linking data from different sources?
- Due to the complexities of measures and aggregating physician-level performance, what methodological issues and questions should the pilots evaluate and address? (The workgroup recommended looking at questions related to provider IDs, sample size, attribution, composite measures, risk-adjustment, data analysis, and Health Insurance Portability and Accountability Act [HIPAA] and data access.)
- What type of information should be reported back to physicians to ensure quality improvement? What are the best ways to report this information back to physicians to produce physician behavior changes?
- What type of information is most useful to consumers, employers, and other stakeholders?
- Regarding the particular measures used in the pilots:
  - Are the measures used in the pilots valid (i.e., precise and reliable)?
  - Do the collection, aggregation, and reporting of the particular measures used in the pilots lead to specified goals and desired outcomes?
  - Which measures apply to primary versus specialty care? How do methodological rules differ based on this applicability?
- Based on the results of the pilots, what lessons/conclusions can be applied to future policy?
Discussion
The discussion opened with a question about coordination between the workgroup and other organizations. In response, David Kibbe stressed that the workgroup’s discussions had been broadly inclusive. He said that much of the background for the recommendations being put forth acknowledges the work of regional initiatives. There is no question, he said, that the new Board is being proposed with the intention of respecting this work. Kibbe added that it will be important to use super-aggregated data, and that he did not believe that it was possible to achieve this with the regional structure now in place.
One participant noted that many providers at the group level are already collecting data but lack interoperability. Regarding efficiency, asked another participant, are we counting deductibles and copayments? Another participant said that it was his understanding that while the focus is on the physician in the ambulatory care setting, the aggregation of data would capture the totality of what is happening to patients. We cannot measure efficiency if we are not capturing the totality of data, he said.
The more we dig, the more we need an organizational structure, said Kibbe. He said that this forum needed to persist, but that it needs funding and credibility as a place for these kinds of issues to be dealt with fairly.
Are we talking about physician performance or the entire health care realm? asked a participant. He added that some of the principles for data sharing and aggregation appear to deal with physician performance while others are broader. I think that it would make sense to move forward with a physician-based approach, he said, and then talk simultaneously about the need for a broader data aggregation approach.
The more we look at this, said another participant, the more it becomes clear that we will do a disservice if we set up two parallel processes that might result in two different approaches. If AQA endorses the idea of a board, he said, then I would recommend that we talk immediately to HQA to get buy-in as soon as possible.
It’s not just hospitals, said another participant, it is also the long-term health care community, the psychiatric community, and others. Calling it a complicated and challenging process, he expressed concern about adopting too broad an approach.
Kibbe weighed in on the discussion, saying that he didn’t believe it was realistic to look only at ambulatory care. Ambulatory care is the first step, however, he said, and this is acknowledged in the draft language on establishing the National Health Data Stewardship Board.
One participant encouraged the workgroup to look at existing activities and perhaps set up pilots in conjunction with these ongoing initiatives. Another wondered whether, as the AQA looked at launching pilot projects, it might be worth considering a parallel track. Should AQA road-test our standards for Board governance along with those for data aggregation? she asked. Two other participants also recommended pilot testing the governance structure before pushing for Federal authority for the Board.
Kibbe reiterated that the workgroup’s proposal was to form the governing entity and have it funded and federally authorized. We’re willing to consider alternatives, he said, but warned against overstretching the volunteer resources of workgroup members.
In response to a question about whether there might be a conflict between what the workgroup might endorse and ongoing Federal pilot projects, an official from CMS said that while the agency was not yet prepared to endorse the effort, it likely would be complementary.
In response to a question about whether the Board’s purview will extend to hospital data, Kibbe said it would ultimately address policies and procedures across all health care data. He stressed, however, that the starting point was ambulatory care data, as this appears to be the area in which there is the greatest urgency to act.
Turning specifically to the pilot projects, a workgroup participant said that one aim was to aggregate data from multiple sources, including Medicare data. Our aim was to design some projects to include Medicare data into the mix, she said. Several participants expressed interest in also including Medicaid data.
Another participant stressed that the workgroup has moved from initial conceptualization to proposing very rigid rules for data aggregation that will allow the AQA to build on what already exists. It is important to move very quickly to hospital data, he said, because we are already using them for resource use. He added that he opposed setting up two structures that might set different rules. Funding is the really critical issue, he concluded. His remarks were echoed by a second participant, who added that it was important that standardization occur without stifling innovation.
Kibbe emphasized that the workgroup recognizes the importance of fostering innovation—but doesn’t yet have a way to pull it all together. He also noted that the proposal should not be interpreted as setting the bar so high that no one can clear it.
Another participant stressed the need to act quickly. He also noted that the AQA must address the totality of care. He said he was puzzled by the apparent reluctance to move beyond a sequential approach. I think it is critical to talk about how we are going to impact treatment and measure effectively, he said. I don’t understand why we won’t do it.
Clancy stressed that the question was not one of whether to do this but when. She acknowledged the general frustration about the current state of affairs. We can look at only snippets of the system because we don’t have a mechanism for looking more broadly, she said. This isn’t about the ultimate goal—which we all agree on—but the specific, intermediate steps.
Isham added that the pilots offered an opportunity to learn more about the things that need to be studied.
One participant asked about including composite (index) measures in the Board’s mandate. Kibbe said it was a good idea and he would bring it back to the workgroup.
As we get later into the day, said one participant, we see the overlaps in all the issues being addressed by the AQA. As a result, he said, it is important to make sure that the pilot projects ask the right questions. He and a second participant called on the various stakeholders in the room to look carefully at the questions and make sure they cover all the bases.
Kibbe posed a final question: Who will measure the quality and efficiency of care for those who are uninsured? This is a question, he said, that needs to be put on the agenda at some point. In response, Clancy said there might be an opportunity at a later date to reach out to the community health centers.
Conclusion
Wrapping up the meeting, Clancy stressed that it was important for participants to look closely at the documents discussed today and to provide comments as soon as possible to guide each workgroup’s ongoing activities. She also noted that there appears to be a general consensus on the need to act sooner rather than later to open discussions between the AQA and the HQA. Finally, Clancy said that the AQA will continue to struggle with how to separate everyone’s broader vision from what can be accomplished concretely in the shorter term.