On the Ethics of Accountable Care Research

  • Is it ethical for health policy researchers to claim that a Medicare ACO reduced “spending” by 2 percent if the reduction was not statistically significant?
  • Is it ethical for them to do so if they made no effort to measure either the cost to the ACO of generating the alleged 2 percent savings or the cost to Medicare of giving half the savings to the ACO?
  • Does it matter that the researchers work for the flagship hospital within the ACO that was the subject of their study?
  • Does it matter that the ACO and the flagship hospital are part of a huge hospital-clinic chain that claims its numerous acquisitions over the last quarter-century constitute not mere empire-building but rather “clinical integration” that will lower costs, and the paper lends credence to that argument? 
  • Is it ethical for editors to publish such a paper? Is it ethical to do so with a title on the cover that shouts, “How one ACO bent the cost curve”?

These questions were raised by the publication of a paper by John Hsu et al. about the Pioneer ACO run by Partners HealthCare System, a large Boston hospital-clinic chain, in the May 2017 issue of Health Affairs. Of the eight authors of the paper, all but two teach at Harvard Medical School and all but two are employed by Massachusetts General Hospital (MGH), Partners’ flagship hospital and Harvard’s largest teaching hospital. [1]

Partners has been on a buying and merger binge since it was co-founded by MGH and Brigham and Women’s Hospital in 1994, the year after merger fever broke out across the American health care system following the endorsement of HMOs and “managed competition” by candidate Bill Clinton late in 1992. Partners’ empire-building has been so aggressive it has provoked resistance from antitrust authorities and has probably contributed to the high cost of health care in Massachusetts. [2] 

In this essay I explore the ethical questions raised by Hsu et al.’s article. I begin by reviewing the paper.

More bad news for ACOs, some good news for disease management

The paper I am examining is the third that Hsu et al. have published in Health Affairs about Partners’ Pioneer ACO, the second largest of the 32 ACOs that entered Medicare’s Pioneer ACO program in 2012. [3] I described Hsu et al.’s first two papers in an article I posted on THCB last May. Those papers were quite useful: they reported that approximately half of the ACO’s doctors and patients left the ACO over a three-year period.

Hsu et al.’s third paper reported expenditure data at two levels – at the level of Partners’ ACO and at the level of a disease management program within that ACO that enrolled only very sick people. The paper contained good news about the disease management program, a program the authors called the “care management program” (CMP), but bad news for the ACO.

The authors found that the CMP cut Medicare expenditures on “high risk” patients by a statistically significant 6 percent. Because they made no effort to determine what it cost Partners to run the CMP, they could draw no conclusions about the CMP’s net impact on total costs. This finding is consistent with many other studies, which found that it is possible to reduce covered medical costs (usually by reducing hospital utilization) by raising the cost of other services that insurance companies typically don’t cover, typically services provided by nurses to patients with chronic illnesses.

However, Hsu et al. reported more bad news for ACOs. They found that Partners’ ACO cut Medicare’s costs by a statistically insignificant 2 percent. This outcome is consistent with CMS’s data on the performance of Medicare ACOs, as well as the rare studies of total spending by private-sector ACOs (see my discussion of the Blue Cross Blue Shield of Massachusetts ACO here). The only papers that seem to contradict this bad news are two “studies” of simulated ACOs (see my discussion of studies by J. Michael McWilliams and David Nyweide et al. here). We may infer from the literature that if ACO start-up and operating costs, including the costs of disease management programs, are taken into account, ACOs are raising total health spending.

However, in this third paper, Hsu et al. did not convey to their readers the impression I have just conveyed: Despite their insignificant results, they claimed Partners’ ACO is cutting costs. “Our major overall finding is that participating in an ACO and a care management program lowered utilization and spending,” they concluded (somehow managing to write a sentence with no agent). (p. 881)

Hsu et al. employed two tactics that lulled readers into thinking their data supported their claim that Partners’ ACO “lowered spending.” The first was to treat the statistically insignificant reduction in Medicare spending as if it were statistically significant. The second was to ignore the overhead costs incurred by CMS and Partners’ ACO, a problem that occurs so frequently I have proposed giving it a name – the “free-lunch syndrome.” I ignore for now a third questionable tactic: Rather than use the actual savings data reported by CMS for the Partners ACO, Hsu et al. simulated the impact of Partners’ ACO on Medicare costs. [4]

I examine each of the first two tactics in the following sections.

Questionable tactic No. 1: Celebrating statistically insignificant results

Hsu et al. reported that Partners’ ACO cut Medicare spending on beneficiaries attributed to the ACO by CMS during 2012 and 2013 by a statistically insignificant 2 percent. As the authors put it, “this association was not significantly different from no change” (p. 880). Yet the authors treated this 2 percent difference as if it were significant. Throughout the paper they claimed Partners’ ACO had lowered “Medicare spending.” They did so in the title (“Bending the spending curve….”), the abstract (“ACO participation had a modest effect on spending”), and in the text (see the quote above, as well as, “There were modest overall ACO spending reductions….” and, “This study provides some evidence of how one large … ACO appears to have achieved its stated savings….”). [5]

On May 1, Partners’ flagship hospital, Massachusetts General Hospital (where six of the eight authors are employed) aggravated these sins by issuing a press release about the paper that stated, “Today, researchers at Partners HealthCare published a study showing that Partners Pioneer ACO not only reduces spending growth, but does this by reducing avoidable hospitalizations for patients with elevated but modifiable risks.… The entire ACO population … reduced health care spending $14 per participant per month, a 2 percent decline.” (Note again the confusion caused by the effort to avoid identifying an agent.) It is true that Hsu et al. found that Partners’ CMP program reduced hospital use by a statistically significant amount. It is not true that Hsu et al. found that Partners’ ACO “reduces spending growth.”

Questionable tactic No. 2: Ignoring program costs

Even if the 2-percent savings had been statistically significant, the authors should have subtracted from the claimed savings the cost of the interventions that led to the savings. These costs fall into two categories: Those CMS incurred to run the Pioneer program and those Partners’ ACO incurred attempting to achieve savings. Not reporting these offsetting costs made it easier for Hsu et al. to mislead readers into accepting their statement that Partners’ ACO “bent the cost curve.”

The most obvious cost to CMS that Hsu et al. should have subtracted was the share of the simulated savings CMS would have had to give to Partners had this been the real-world program. That share would have varied depending on how well Partners’ ACO scored on several dozen “quality measures,” but 50 percent is a reasonable estimate. Cutting CMS’s savings in half reduces the non-significant 2-percent savings to a non-significant 1 percent.
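The arithmetic can be sketched in a few lines of Python. This is only a rough check under the essay’s assumptions (2 percent gross savings, a 50 percent shared-savings rate); none of it comes from Hsu et al.’s own accounting:

```python
# Rough check of the shared-savings arithmetic described above.
# Assumptions (mine, not Hsu et al.'s): 2 percent gross savings and a
# 50 percent shared-savings rate for Partners.

gross_savings_pct = 2.0        # statistically insignificant gross savings
partners_share = 0.5           # rough estimate of Partners' share of savings

net_to_medicare_pct = gross_savings_pct * (1 - partners_share)
print(net_to_medicare_pct)     # 1.0 -- Medicare's savings fall to 1 percent
```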

A less obvious cost to CMS is the cost CMS incurred to administer the Pioneer ACO program. Analysts routinely ignore those costs. A complete accounting of the net impact of the Pioneer ACO program on health care spending should include them.

Hsu et al. also ignored the start-up and maintenance costs Partners incurred to run its ACO and its CMP program. (I discuss these costs in more detail below and in footnote 8.)

Hsu and his co-authors in fact warned readers that they intended to ignore all “program costs” incurred by the ACO, CMS, or any other entity – that is, all costs that didn’t require reimbursement by Medicare under Parts A, B or D. They didn’t say why. The only explanation they offered was, “To our knowledge, no other study of ACOs has included program costs in its analysis.” This is true. The vast majority of American health policy researchers think it’s entirely appropriate to ignore program costs when analyzing the impact of ACOs. Moreover, they think that if their limited analysis shows the ACO cut Medicare’s gross spending, it’s acceptable to state repeatedly that the ACO “bent the cost curve” or “lowered spending.”

How much does the free lunch really cost?

I won’t comment further here on questionable tactic No. 1 – treating non-significant results as significant. I’ll focus the remainder of this essay on a question raised by the second tactic: Is it possible that Partners’ ACO and CMP program costs were so inconsequential that Hsu et al. were justified in ignoring them?

We know woefully little about ACO start-up and operating costs even though the ACO fad is now entering its second decade. We have some ballpark estimates from the staff of the Medicare Payment Advisory Commission (MedPAC), and we have an evaluation of MGH’s CMP program done for CMS in 2010. Both suggest that the interventions Partners’ ACO deploys are very expensive relative to the meager savings Partners’ ACO and other Pioneer ACOs are achieving.

We know that Pioneer ACOs are cutting Medicare’s net spending by no more than a few tenths of a percent on average (and CMS’s MSSP ACOs are raising costs by a few tenths of a percent). According to MedPAC’s staff, ACOs incur operating costs equal to 1 to 2 percent of their attributed Medicare spending. [6] If we assume that is what Partners’ ACO spent to achieve the (statistically insignificant) 2-percent savings reported by Hsu et al., of which Partners would have kept half (1 percent), the ACO broke even at best and lost 1 percent at worst.
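Using MedPAC’s 1-to-2 percent overhead range, the break-even arithmetic looks like this. Again, a sketch only: the 50 percent shared-savings rate is the same rough assumption used earlier, and all figures are percentages of ACO Medicare spending:

```python
# Break-even sketch from the ACO's perspective, in percent of ACO
# Medicare spending. Assumptions: 2 percent gross savings, a 50 percent
# shared-savings rate, and MedPAC's 1-to-2 percent overhead estimate.

gross_savings = 2.0
share_kept = gross_savings * 0.5       # Partners keeps roughly 1 percent

for overhead in (1.0, 2.0):            # MedPAC staff's range for ACO overhead
    net_to_aco = share_kept - overhead
    print(f"overhead {overhead}% -> net {net_to_aco}%")
# overhead 1.0% -> net 0.0%   (break even)
# overhead 2.0% -> net -1.0%  (a 1 percent loss)
```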

This conclusion is reinforced by an examination of the CMP. It appears that the program costs at least as much to run as it saves Medicare.

According to Hsu et al., the CMP cut Medicare spending by a statistically significant 6 percent. They claimed in both their paper and in MGH’s May 1 press release that it was this 6-percent savings on a small fraction of the ACO’s total assigned population that explains the (non-significant) 2-percent ACO gross savings for Medicare (see the quote above from MGH’s press release).

The press release cited an evaluation of the CMP for CMS by RTI International published in 2010. (The CMP was the subject of a three-year CMS demonstration that began in August 2006.) In RTI’s evaluation, we discover that MGH told CMS its CMP program costs equaled 5 percent of Medicare spending on CMP enrollees. [7]

If CMP’s program costs were still 5 percent of Medicare spending during the 2012-2013 period examined by Hsu et al., that would mean the CMP cut net spending on its enrollees by only 1 percent (the 6 percent reduction reported by Hsu et al. minus the 5 percent program costs). What happens when this 1-percent savings on a few thousand CMP enrollees is spread out over the 50,000 or 60,000 Medicare beneficiaries assigned to Partners ACO? I suspect the savings disappear or turn into losses. In any event, I have made my point: The CMP program costs are not trivial relative to the savings the CMP achieves. Hsu et al. should have investigated those costs, and until they did they should have refrained from claiming that either the CMP or the ACO “lowered spending.”
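To make the dilution concrete, here is a rough Python sketch built from figures in this essay and its footnotes: a 1 percent net saving on roughly 4,000 CMP enrollees (footnote 8) who cost about three times the average beneficiary (see the discussion of footnote 8 below), spread over an assumed 55,000 attributed beneficiaries – my midpoint of the 50,000-to-60,000 range; the exact answer is not the point, only its small size:

```python
# Rough dilution arithmetic. All inputs are estimates from the essay:
# 6 percent gross CMP savings, 5 percent program costs, ~4,000 CMP
# enrollees costing ~3x the average beneficiary, and an assumed
# 55,000 attributed beneficiaries (my midpoint of 50,000-60,000).

net_cmp_savings_pct = 6.0 - 5.0              # percent of spending on CMP enrollees
cmp_enrollees, attributed = 4_000, 55_000
cost_multiple = 3.0                          # CMP enrollees vs. average beneficiary

# CMP enrollees' share of total ACO spending, weighting them by cost.
cmp_share = (cmp_enrollees * cost_multiple) / (
    cmp_enrollees * cost_multiple + (attributed - cmp_enrollees)
)
aco_wide_pct = net_cmp_savings_pct * cmp_share
print(round(aco_wide_pct, 2))                # about 0.19 percent of ACO spending
```

Even under these generous assumptions, the CMP’s net saving amounts to roughly two-tenths of a percent of ACO-wide spending – nowhere near the 2 percent Hsu et al. celebrated.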

But MGH’s statement to RTI that its CMP program costs equaled just 5 percent of Medicare spending may have been an underestimate. (RTI’s report contained no data supporting this claim.) In a 2012 report by the Congressional Budget Office on 34 disease management demonstrations conducted by CMS, including MGH’s CMP demo, the CBO concluded, “On average, the 34 … programs had little or no effect on hospital admissions or regular Medicare expenditures…. To offset the fees they charged CMS, the programs would have had to reduce regular Medicare expenditures by an average of 11 percent.” (pp. 11-12)

The CBO’s finding that fees averaged 11 percent of Medicare expenditures suggests that the 5-percent-of-expenditures fee MGH negotiated with CMS was not enough to cover MGH’s actual costs of operating its disease management program. Obviously, if the CMP’s real program costs were closer to the average for the other participants in CMS’s disease management demos, Partners’ CMP lost a lot of money during 2012-2013, the period Hsu et al. studied. If, for example, the CMP’s real program costs were 10 percent, the CMP would have lost 4 percent (10 percent in costs minus the 6 percent in savings Hsu et al. reported). [8]

Making sense of Hsu et al.’s and Partners’ behavior

The data in Hsu et al.’s paper are useful even if incomplete. They contribute to a growing body of evidence indicating that ACOs cannot cut total spending, in part because ACOs cannot focus: they are measured on their ability to cut the cost of an entire population by unspecified means rather than on their ability to cut the cost of a clearly defined slice of their sickest “attributees” by clearly defined methods.

My criticism of Hsu et al. concerns their misuse of their data. They implied that statistically insignificant results were significant, and by stating over and over that Partners’ ACO cut “spending” they misled readers into thinking they had measured total costs when they had not.

We badly need research on the cultural and financial incentives that induce health policy analysts to misuse data and to avoid studying issues (such as the start-up and maintenance cost of ACOs) that risk contradicting reigning managed care doctrine. This problem occurs at epidemic levels. Let me suggest two incentives worth further study.

First, Hsu et al. work for one of the nation’s pre-eminent hospital-clinic chains, one that has long acted like a cartel. Like all cartels, it stands to benefit from research that seems to prove the cartel is doing the Lord’s work (it is “clinically integrating” all parts of the cartel for the betterment of humanity, we are told) and that, therefore, antitrust authorities should not object to the cartel’s next acquisition. The Department of Justice, state attorneys general and other antitrust enforcers must weigh the benefits to society of mergers against the damage mergers may do to competition. Partners’ lawyers will no doubt brandish the paper by Hsu et al. the next time an acquisition by Partners is challenged on antitrust grounds.

The second incentive worth study falls into the category of incentives created by culture or the expectations of one’s peers. Hsu et al. work within a culture that arose in the 1970s, roughly simultaneously with the rise of HMOs and the establishment of health services research as a separate discipline. One of the norms of that culture is to treat the managed care diagnosis (overuse due to the fee-for-service method) and the managed care solution (shifting risk to doctors and micromanaging them) as articles of faith, not hypotheses to be tested. We need research on how this casual attitude toward a basic rule of science became so widespread among people with degrees in the medical and social sciences. [9]

[1] According to Massachusetts General Hospital’s website, “nearly all” of MGH’s physicians are on the Harvard Medical School faculty.

[2] Massachusetts had the nation’s second-highest per capita health care costs as of 2014, according to CMS’s latest report on state-level spending.

[3] The five-year Pioneer ACO program ended in 2016.

[4] I have criticized the conflation of simulated with real ACO results by the authors of two other papers – one by J. Michael McWilliams (also at Harvard) and the other by David Nyweide et al. (employed by CMS). I have chosen not to discuss Hsu et al.’s decision to study a simulated version of Partners’ ACO in this article because the version Hsu et al. simulated closely resembled the real version. Hsu et al. did not use a different experimental group from the one CMS used (which was the case in the McWilliams and Nyweide studies), and the method Hsu et al. used to create a control group was less vulnerable to distortion by differences in patient health and income than those used by McWilliams and Nyweide et al.

[5] Hsu et al. applied a double standard to statistically insignificant results. While they repeatedly celebrated the non-significant 2-percent reduction in gross Medicare costs achieved by Partners’ ACO, they did not celebrate a statistically insignificant 2- or 3-percent increase in hospitalizations among Medicare beneficiaries assigned to the ACO. In fact, Hsu et al. didn’t even report the percent by which hospitalization rates increased; I had to eyeball a graph in Exhibit 3 to make the 2-to-3-percent estimate. Instead, Hsu et al. merely noted, “There was no significant association between overall ACO participation and hospitalization rates” (p. 879). After that, they never came back to the subject.

[6] At the September 11, 2014 MedPAC meeting, commissioner David Nerenz asked MedPAC staffer Jeff Stensland if “we know anything about” ACO “overhead.” Stensland replied, “[P]eople we talk to and the data we have seen, it looks like maybe 1 to 2 percent of your spend, that that’s what they’re spending on their ACO to operate it….” (p. 133 of the transcript of the meeting). Stensland also reported, “[I]f you averaged everybody [that is, all ACOs] … the share of savings … that they get is going to be less than their administrative costs of being in it….” (p. 144)

[7] I calculated the CMP’s program cost to be 5 percent of Medicare expenditures on CMP participants based on data reported in the RTI evaluation of the CMP. RTI stated, “MGH negotiated a [per enrollee per month] management fee of $120 for the original and refresh intervention groups through the duration of the demonstration.” (p. 4). RTI also reported that the CMP cut Medicare’s costs by $288 “per beneficiary per month” (PBPM) and that this constituted 12.1 percent of PBPM Medicare spending. (p. 14) This allowed me to determine that the $120 monthly fee amounted to 5 percent of Medicare spending on the CMP patients. (The $120 fee paid by Medicare is 41.7 percent of $288, and 41.7 percent of 12.1 percent is 5.0 percent.)
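The footnote’s calculation can be reproduced directly from RTI’s two reported figures; this snippet simply retraces the arithmetic:

```python
# Reproducing the footnote's arithmetic from RTI's reported figures.

fee_pbpm = 120.0        # MGH's negotiated management fee, per enrollee per month
savings_pbpm = 288.0    # RTI's estimated Medicare savings, per beneficiary per month
savings_share = 0.121   # the $288 equaled 12.1 percent of PBPM Medicare spending

total_spending_pbpm = savings_pbpm / savings_share   # about $2,380 per month
fee_share_pct = fee_pbpm / total_spending_pbpm * 100
print(round(fee_share_pct, 1))                       # 5.0
```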

[8] I encourage readers to peruse RTI’s evaluation of the CMP to get a clearer view of the complexity and expense of Partners’ CMP program. To give you just a taste of the resources Partners is investing now for the 4,000 CMP enrollees examined by Hsu et al., consider these excerpts from RTI’s report describing elements of the program for the 2,000 CMP enrollees during 2006-2009:

  • “Eleven nurse case managers [each of whom worked with about 200 patients] who received guidance from the program leadership and support from the project manager, an administrative assistant, and a community resources specialist” (p. 7);
  • “a social worker to assess the mental health needs of CMP participants” (p. 6);
  • “a mental health team director, clinical social worker, two psychiatric social workers, and a forensic clinical specialist (M.D./J.D.), who follows highly complex patients with issues such as legal issues, guardianship and substance abuse” (p. 10);
  • “a pharmacist to review the appropriateness of medication regimens” (p. 6);
  • “home delivery of medications five days per week” (p. 7);
  • “a nurse who specialized in end-of-life-care issues” (p. 7);
  • “a patient financial counselor who provided support for all insurance related issues” (p. 7);
  • “The clinical team leader provided oversight and supervision of case managers” (p. 8);
  • “The medical director provided oversight and day to day management of MGH’s CMP….” (p. 8);
  • “MGH developed a series of clinical dashboards using data from the MGH electronic medical record …, claims data, and its enrollment tracking database” (p. 8);
  • “MGH provided [200] physicians with a $150 financial incentive per patient per year to help cover the cost of physician time for [CMP-related] activities” (p. 8);
  • “a designated case manager position to work specifically on post discharge assessments to enhance transitional care monitoring” (p. 9); and
  • “a data analytics team to develop and strengthen program’s reporting capabilities” (p. 10).

In addition to all these goods and services, a true accounting of the cost of the CMP would include numerous housing, transportation and other “support services” and “community services” (p. 6) that RTI described only vaguely. The cost of these additional “non-clinical” services obviously shows up on someone else’s books but might well have a positive impact on medical costs.

We must remember that all these goods and services were provided to patients who cost Medicare about three times what the average beneficiary costs. Nevertheless, this long list of goods and services clearly cost a pretty penny and should have received serious attention from Hsu et al. before they announced to the world that Partners’ ACO “bent the cost curve” and that its CMP was the reason.

[9] On July 7, 2017 I sent an email to Dr. John Hsu, the lead author of the Health Affairs paper, at the address listed in the paper. I asked him if I was correct in interpreting the 2-percent savings as non-significant and why he treated those results as significant. I also asked whether he knows what Partners’ ACO and CMP overhead costs are. I have not received a reply as of August 24, 2017.