Doctors Should Also Be Fighting “Fake News”

I see them every time I wait in the inescapably long lines at the grocery store. They’re offering me so much. Fat-melting foods that “work like gastric bypass.” Sleep masks that prevent breast cancer. One-day diets. And, of course, the perennial “medical miracles.” All these revelations can be mine with a simple magazine purchase.

It’s easy to dismiss the medical advice being propagated through the supermarket checkout aisle. Who would take health advice from a magazine sitting next to a box of Snickers and the National Enquirer? This visceral elitism, however, is causing doctors and scientists to miss out on a powerful avenue for improving people’s health.

Mainstream health advice was “fake news” before it had a name.

It has remained rampant and popular because doctors have refused to engage with the popular press, except when it serves their own profit. When we refuse to bring our ideas to the most unpretentious of media outlets, only mercenaries like Drs. Mehmet Oz and Andrew Weil adorn the covers of these rags. We cannot always stop quackery from being disseminated, but we can drown it out with accurate and nuanced information.

So here’s a challenge for my scientific and medical colleagues: publish your next article in Woman’s World. Or maybe in Family Circle, Real Simple, or Glamour. These magazines, and others like them, have circulations of over 1 million readers. If we insist on publishing medical knowledge only in obscure journals that are read by a few of our colleagues, we are ceding the public conversation to people without the appropriate experience and intentions. University press offices have started to aggressively “fill the gap” in health news. However, relying only on press offices to promote our work to the public allows these large organizations to prioritize their own success and aggrandizement above the public’s health. Effusive reports about preliminary trials and mouse studies contribute to “fake news” rather than counter it.

Academics are well aware of the flaws of modern peer review and journal publication. Except for a handful of high-impact journals, most articles are read only by the editors, the author, and their grad students. We publish in peer-reviewed journals because that’s what it takes to be taken seriously. Academic promotions are tied to journal publications. Scientific ideas are only considered “legitimate” if they appear in the academic press. The widely recognized importance of citation counts and impact factors makes it clear that journal articles are a fiat currency in academia as much as they are a vehicle for spreading scientific research. Predatory journals have exploded in the modern era to profit from manipulating this currency.

The value of peer review and specialized conversation should not be dismissed. Sure, only a dozen people will read your article, but sometimes they’re the “right” dozen people. The peer-review and journal system should be improved, not destroyed. However, by showing we are willing to publish outside of academic publishing’s cultural hegemony, maybe academics can take back some of their power from journals as arbiters of scientific and medical knowledge. This exercise also offers us valuable practice in communicating our ideas not just to the people who study them but to the patients and citizens who will be directly affected by them.

The Woman’s World challenge isn’t just for our benefit. It’s also for the benefit of everyday people who crave medical information and use the convenience of the popular media to receive it. Too many people today still lack the access and financial capacity to receive all the medical care and education they need. Too many health resources target wealthy, educated patients rather than reaching out to every community that needs this knowledge. I dare say that there may even be a hint of sexism in our dismissal of “women’s” magazines as an influential medium for the public good. The intelligentsia’s surprise at Teen Vogue’s quality reporting is emblematic of this mild chauvinism. Since academics, doctors, and other professionals still equate exclusivity with value, I am challenging us to try populism on for size.

I call this a challenge because I recognize it’s not an easy transition from journal to supermarket broadsheet. These magazines frequently promote sensational and unproven health ideas, and we don’t want our ideas associated with this stigma. We have a situation right out of a game theory textbook: we would all benefit from improving the information in the popular media, but no one wants to take the risk first.

The style and connections required for mass media publishing are separate from those needed for academic publishing. It can be scary to start from square one. There have been some admirable attempts to help doctors and academics break into this world. The OpEd Project supports academics who want to publish in the mainstream media. The Conversation publishes syndicated articles by academics. For years, Health News Review has been taking the mainstream media to task for poor health reporting. We need all the support we can get as we strive to communicate our ideas in an appealing, clear way.

Now let’s take another small step. We can improve the public’s health by asking every doctor and scientist to complete the challenge of submitting one article or idea to a mainstream publication. Patients should be encouraging their doctor to write for their local paper. Magazines should be reaching out to respected doctors and academics who remain “undiscovered.” We can have a tremendous impact if we start to think beyond impact factors.

Benjamin Mazer, MD is a resident in pathology at Yale-New Haven Hospital. His views are his own and don’t represent those of his employer.

Winning the Doctor Lottery

A poignant piece recently appeared in the journal Health Affairs and was rapidly devoured on social media by the health policy community. The story is a harrowing first-person account of a woman’s multiple interactions with doctors. The doctors in the story are either very good or very bad. One pediatrician dismisses the author’s sick son as having colic on three consecutive days, only for a more careful partner to sound the alarm and discover pyloric stenosis. The author then recounts the tale of her father’s death at age 42 at the hands of a surgeon who operated unnecessarily for diverticulitis.

The author writes:

My family and I haven’t always won The Doctor Lottery. My father’s surgeon, for instance, had pushed him to have the bowel resection to “cure” him of diverticulitis, a disease in which the colon’s lining becomes inflamed. He stitched up my father’s intestines with a suture known to dissolve in patients who’ve been on steroids and hadn’t read my father’s chart to see that his internist had recently had him on cortisone. Nor did he look at the list of medications my father had carefully written down on his patient intake forms. When the sutures dissolved, my father, who had a bleeding disorder, went into shock. His abdomen was distended and hard.

My mother asked the nurse to page the surgeon. “My husband is in so much pain!” she said. The surgeon, who was playing golf, told the nurse to tell my mother, “Pain after surgery is normal.” By the time my father developed a fever, and peritonitis, it was too late. He died of a heart attack.

It’s a moving anecdote with a tragic ending that has the requisite story elements – the arrogant, uncaring doctor ignoring patient and family concerns while on the golf course – that policy folks use to argue for remaking the current health care system into a more patient-centric world. Unfortunately, medicine is hard, and while there are certainly errors that are avoidable, many are not. The best surgeon, the best system, and the best medical care are at times no match for the randomness of life. A certain percentage of patients will have an infection after an abdominal surgery despite every safeguard currently known. The vast majority of patients with abdominal pain and distention after surgery do not need to be reoperated on. Deciding who to reoperate on is often challenging. Is a good surgeon one who takes every patient who has abdominal pain and distention back to the operating room? Is it feasible to have an attending surgeon on hand to evaluate every complaint of abdominal pain? Should we ban all surgeons from playing golf for 48 hours after they operate?

None of these questions have answers that don’t involve tradeoffs in the real world. In policy world, however, solutions are magical constructs that don’t involve robbing Peter to pay Paul. In this fantasy world Peter and Paul find a leprechaun with a pot of gold at the end of a rainbow. As a result, the solutions proposed for ending the “doctor lottery” involve fostering strong patient-doctor relationships that align incentives with the value of care delivered rather than the number of patients seen. Apparently, what promises to save us is a large order of payment models based on value, with teams composed of generous helpings of social workers, behavioral health experts, and cute puppies.

It is with this noble intent that our physician overlords at the Centers for Medicare and Medicaid Services, at the bidding of the public and Congress, have applied themselves to the small task of assigning value to physicians. There are many prongs to this worthy desire to measure the nation’s doctors, but a particularly sharp prong advanced by the Agency for Healthcare Research and Quality is the patient-centered Consumer Assessment of Healthcare Providers and Systems (CAHPS). The CAHPS tool is a standardized survey that has been in use since 1997 to measure and report on consumers’ experiences with the health care services they come into contact with. More recently, a sister to the CAHPS tool was born so that physicians in office settings could be measured by their patients – this was named the Clinician and Group CAHPS survey (CG-CAHPS). The stated goal of this tool is to publicly report survey results to allow patients to choose good doctors.

The assessments are performed by practices and health care systems. Patients receive surveys that seek to get to the heart of what everyone wants in a physician – Does your physician listen carefully? Did your physician spend enough time with you? It appears that practices have some latitude in how the questions are asked, which questions are asked, and how the results are interpreted.

Value-based care sounds good, and it enjoys widespread support among the non-clinicians who seem to matter in the world of health care policy. Physicians seem generally apathetic, though overconfident about the coming valuations – after all, it’s always the other guy who sucks. Not surprisingly, the worst physicians have the least insight into their own limitations, and suffer the most from delusions of grandeur. While physicians may be exceedingly poor at grading themselves, the physician community – which loves gossip about as much as the Real Housewives of Atlanta – is much less forgiving. Yes, that’s right: there is widespread agreement among physicians about who the worst among us are.

So, imagine my surprise when one of the good guys that I worked and trained with called to tell me that he had the lowest CG-CAHPS scores in his group and he may need ‘remediation.’ Value-based care takes on a whole new dimension when you’re the one that carries the 21st century version of the scarlet letter.

Unpacking the genesis of a bad CG-CAHPS score is an exercise in revealing the idiocy that results from the many good intentions in healthcare. In this particular case, the hospital sends out a survey to patients who have come into contact with its physicians. Those who respond to the survey select answers that range from ‘always’ to ‘most of the time’ to ‘never’. Only ‘always’ counts toward a ‘Top Box’ score, and this Top Box score is then compared to a national average to generate the provider’s percentile. For instance, if 9 out of 10 patients checked off ‘always’ to ‘Did the provider listen carefully to you?’, your Top Box score is 90% – but if the national or health system average for that category is much higher, that score may still put you in the 50th percentile.
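
To make the scoring arithmetic concrete, here is a minimal Python sketch of how a “Top Box” percentage and percentile might be computed. The responses, the benchmark distribution, and the percentile method are made-up assumptions for illustration, not the official CG-CAHPS methodology.

    # Illustrative "Top Box" scoring, roughly as described above. The responses,
    # benchmark distribution, and percentile method are made-up assumptions,
    # not the official CG-CAHPS specification.

    def top_box_score(responses):
        """Share of responses that are exactly 'always' (the 'Top Box')."""
        return sum(1 for r in responses if r == "always") / len(responses)

    # The example from the text: 9 of 10 patients answer "always" to
    # "Did the provider listen carefully to you?"
    responses = ["always"] * 9 + ["most of the time"]
    score = top_box_score(responses)  # 0.90

    # A hypothetical benchmark of other providers' Top Box scores. Because most
    # of them score above 90%, a 90% score lands near the middle of the pack.
    benchmark = [0.84, 0.86, 0.87, 0.88, 0.89, 0.91, 0.92, 0.93, 0.95, 0.96]

    def percentile(score, benchmark):
        """Percent of benchmark providers scoring strictly below this score."""
        return 100 * sum(1 for s in benchmark if s < score) / len(benchmark)

    print(f"Top Box score: {score:.0%}")                        # 90%
    print(f"Percentile: {percentile(score, benchmark):.0f}th")  # 50th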

The problems with all of this are legion. Of the 1,500 unique patients whom this physician saw in the prior year, 120 took the time to respond to the survey. I’m always surprised that anyone fills out any surveys – I fill out one every 2 years. Regardless, of the 120 patients who responded, six chose not to select the top box, resulting in this physician being labeled a problem in need of remediation. Beyond the problem of generalizing from 120 patients who have an unnatural affinity for filling out surveys in the mail, one wonders whether there is more to a physician’s worth than her ability to communicate. As a medical student, I recall a surly surgeon who minced few words in his communication with patients but was technically brilliant. Many a grateful patient or family was indebted to him for a life saved, but I recall a smattering of patients put off by an approach that had little time for the worried well.

In the name of transparency, CMS plans to publicly share this quality information via an online physician compare tool to allow patients to finally win the doctor lottery, and perhaps more importantly, tie reimbursement to value.

Health systems nervous about decreasing reimbursements related to their bad physicians need not worry, because riding furiously to their rescue are health care consultants who, for a pretty penny, promise a smooth transition to this new world. These words from the Studer-Huron health care consultancy appear designed to allay the health system executive’s fears:

“Plenty of evidence shows that patient experience and clinical quality are two sides of the same coin. You already want to provide the best possible care. And now that Clinician and Group Consumer Assessment of Healthcare Providers and Systems is here, there’s a new reason to focus on patient perception: CG CAHPS will impact ACOs, PQRSs, PCMHs, and many other programs, and survey results will link to payments in 2015.”

These same consultants lined up not long ago to help hospitals achieve pay-for-performance metrics. It surprises no practicing physician that pay-for-performance metrics and value-based payments as currently designed were an abject failure. While there are some, like Ashish Jha (Harvard School of Public Health), who have noticed and publicly called out the failure of value-based payment, the answer disappointingly appears to be ever-better patient-centered metrics. The latest idea bearing on my scant enthusiasm for basing value on patient surveys, unfortunately, comes from none other than Dr. Jha, who recently proposed in JAMA to query Medicare patients 30 to 60 days after discharge on the quality of care they received, and to tie up to 10% of a hospital’s reimbursement to these scores. I can almost feel the frisson of excitement travel through the offices of the Studer-Huron group at this latest opportunity to manage patient perception and save the day.

I fear a noble profession has lost the plot when it chooses to measure value based on patient satisfaction simply because it is the easiest and most politically correct metric to measure. It seems that the vision of measuring value is what’s important – it matters not that the value quantified by these wonderful tools is the health care policy equivalent of fake news. What matters is that surveys measure something that can be quantified, regressed, risk-adjusted, and published. The truth may be shrouded in darkness, but falsehoods found where the light happens to shine now come to masquerade as the truth.

When it comes to one’s health, the desire for an assurance of quality is an understandable one. We are supposed to assure quality through medical school admissions reserved for those who have demonstrated intellectual vigor, board certifications that test competency, and continuing education meant to demonstrate that competency is maintained. Unfortunately, we live in a time when acceptance into medical school relates more to virtue signaling than intellectual horsepower, and board certification is a mechanism to siphon dollars from physicians for tests that have little to do with the practice of medicine and certainly don’t weed out the bad.

The medical profession has done itself few favors by having a remarkably anemic approach to ferreting out physicians who fall egregiously below professional standards. There are no perfect solutions, but I would suggest, with much bias, that having a healthy pool of primary care physicians not employed by and beholden to health systems is vital to improving the chances patients have of winning the doctor lottery. I understand the public desire for guarantees when it comes to those we trust with our lives. The only thing the current approach guarantees is the health of the bank accounts of health care consultants. Protecting patients from bad doctors? What a joke.

Anish Koka is a cardiologist in private practice in Philadelphia. Most of the opinions he has aren’t put on surveys, but can be found on twitter @anish_koka

MD vs. DNP: Why 20,000 Hours of Training and Experience Matters

As southern states entertain legislation granting nurse practitioners independent practice rights, there are some finer details which deserve careful deliberation. While nurse practitioners are intelligent, capable, and contribute much to our healthcare system, they are not physicians and lack the same training and knowledge base. They should not identify themselves as “doctors” despite having a Doctor of Nursing Practice (DNP) degree. It is misleading to patients, as most do not realize the difference in education necessary for an MD or DO compared to a DNP. Furthermore, until they are required to pass the same rigorous board certification exams as physicians, they should refrain from asserting they are “doctors” in a society which equates that title with being a physician.

After residency, a physician has accrued a minimum of 20,000 hours of clinical experience, while a DNP needs only 1,000 patient contact hours to graduate. As healthcare reform focuses on cost containment, the notion that independent nurse practitioners will lower overall healthcare spending should be revisited. While mid-level providers cost less on the front end, the care they deliver may ultimately cost more when all is said and done.

Nurse practitioners already have independent practice rights in Washington State. In my community, one independent NP had 20 years of clinical experience working with a physician before going out on her own. Her knowledge is broad and she knows her limits (as should we all); she prominently displays her name and degree on her website. This level of transparency, honesty, and integrity is an essential requirement for working in healthcare. Below is a cautionary tale of an independent DNP elsewhere whose education, experience, and care leave much to be desired. I thank this courageous mother for coming forward with her story.

After a healthy pregnancy, a first-time mother delivered a beautiful baby girl. She was referred to “Dr. Jones,” who had owned and operated a pediatric practice focused on the “whole child” for about a year. This infant had difficulty feeding right from the start. She had not regained her birthweight by the standard 2 weeks of age, and mom observed sweating, increased respiratory rate, and fatigue with feedings. Mom instinctively felt something was wrong and sought advice from her pediatric provider, but he was not helpful. This mother said “basically I was playing doctor,” as she searched in vain for ways to help her child gain weight and grow.

By 2 months of age, the baby was admitted to the hospital for failure to thrive. A feeding tube was placed to increase caloric intake and improve growth. I have spent many hours talking with parents of children with special needs who struggle with this agonizing decision. It is never easy. A nurse from the insurance company called to collect information about the supplies, such as formula, required for supplemental nutrition. Mom was so distressed about her daughter’s condition that she could not coherently answer the nurse’s questions. As a result, the nurse mistakenly reported her to CPS for neglect, and a caseworker was assigned to the family.

Once the tube was in place, the baby grew and gained weight over the next three months. At 5 months of age, mom wanted to collaborate with a tube weaning program to assist her daughter with eating normally again. A 10% weight loss was considered acceptable because oral re-training can often be quite challenging. As this infant weaned off the tube, no weight loss occurred over the next two months, though little was gained. She continued to have sweating with feeds and associated fatigue. On three separate occasions mom specifically inquired if something might be wrong with her daughter’s heart, and all three times “Dr. Jones” reassured her “nothing was wrong with her heart.”

However, “Dr. Jones” grew concerned about the slowed pace of weight gain while weaning off the feeding tube. Not possessing adequate knowledge to recognize the signs and symptoms of congestive heart failure in infants, he mistakenly contacted CPS instead. After being reported for neglect a second time, this mother felt as if she “was doing something wrong because her child could not gain weight.” This ended up being a blessing in disguise, however, because the same CPS worker was assigned and recommended seeking a second opinion from a local pediatrician.

On the first visit to the pediatrician, mom felt she was “more knowledgeable, reassuring, and did not ignore my concerns.” The physician listened to the medical history and, upon examination, heard a heart murmur. A chest x-ray was ordered, revealing a right-shifted cardiac silhouette, a rather unusual finding. An echocardiogram discovered two septal defects and a condition known as Total Anomalous Pulmonary Venous Return (TAPVR), in which the blood vessels from the lungs return oxygenated blood to the wrong side of the heart, an abnormality in need of operative repair.

During surgery, the path of the abnormal vessels led to a definitive diagnosis of Scimitar Syndrome, which explained the abnormal growth, feeding difficulties, and failure to thrive. This particular diagnosis was a memorable test question from my rigorous 16-hour board certification exam, administered by the American Board of Pediatrics. If one is going to identify oneself as a specialist in pediatrics, one should be required to pass the same arduous test and to have spent an equivalent amount of time treating sick children as I did (15,000 hours, to be exact).

A second takeaway point is the importance of transparency. This mother was referred to a pediatric “doctor” for her newborn. His website identifies him as a “doctor” and his staff refers to him as “the doctor.” His DNP degree required three years of post-graduate education and 1,000 patient contact hours, not all of which were pediatric in focus. His claim to have expertise in the treatment of ill children is disingenuous; it is absolutely dishonest to identify as a pediatrician without actually having obtained a medical degree.

The practice of pediatrics can be deceptive, as the majority of children are healthy, yet this field is far from easy. Pediatricians are responsible for the care of not only the child we see before us, but also the adult they endeavor to become. Our clinical decision making affects our young patients for a lifetime; therefore it is our responsibility to have the best possible clinical training and knowledge base. Acquiring the aptitude to identify congenital cardiac abnormalities is essential for pediatricians, as delays in diagnosis may result in long-term sequelae such as pulmonary hypertension, which carries with it a shortened life expectancy.

Nurse practitioners have definite value in many clinical settings. However, they should be required to demonstrate clinical proficiency in their field of choice before being granted independent practice rights, whether through years of experience or formal testing. In addition, the educational background of the individual treating your sick child should be more transparent.

Raising our children is the most extraordinary undertaking of our entire lives. Parting advice from this resolute mother is to “trust your gut instinct, and no matter what, keep fighting for your child.” Choosing a pediatrician is one of the most significant decisions a parent will make. This child faced more obstacles than necessary as a result of the limited knowledge base of her mid-level provider. A newly practicing pediatrician has 15 times more hours of clinical experience treating children than a newly minted DNP. When something goes wrong, that stark contrast in knowledge, experience, and training really matters. There should be no ambiguity when identifying oneself as a “doctor” in a clinical setting; it could be the difference between life and death.

When it comes to the practice of medicine, the knowledge and experience required are so vast that even the very best in their field continue learning for a lifetime.

Some graduating nurse practitioners believe they are as well prepared as newly trained physicians to care for their patients. The numbers, however, in hours of hands-on training and experience, simply do not back up that assertion. Physicians have at least 11 years of education after high school. By the time we set off to practice independently, we have had a minimum of 20,000 supervised patient contact hours. Depending on the type of training and school attended, a nurse practitioner has had a minimum of 500-1,000 supervised patient contact hours.
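
As a quick check on the ratios quoted in this piece (the “15 times” figure above, for example), the cited hour totals can be compared directly. A small Python sketch, using only the numbers already given in the text:

    # The patient-contact-hour figures cited in this piece, compared directly.
    # This is simply arithmetic on the numbers quoted above, not independent data.
    physician_hours    = 20_000  # minimum cited for a physician after residency
    pediatrician_hours = 15_000  # pediatric residency hours cited by the author
    dnp_hours_low, dnp_hours_high = 500, 1_000  # range cited for DNP training

    print(f"Physician vs. DNP (1,000 hrs): {physician_hours / dnp_hours_high:.0f}x")      # 20x
    print(f"Pediatrician vs. DNP (1,000 hrs): {pediatrician_hours / dnp_hours_high:.0f}x")  # 15x
    print(f"Physician vs. DNP (500 hrs): {physician_hours / dnp_hours_low:.0f}x")         # 40x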

Niran Al-Agba, MD

 

Information Blocking Under Attack: The Challenges Facing EHR Developers and Vendors

In the March 2017 Milbank Quarterly, researchers Julia Adler-Milstein and Eric Pfeifer found that information blocking — which they define as a set of practices in which “providers or vendors knowingly and unreasonably interfere with the exchange or use of electronic health information in ways that harm policy goals” – occurs frequently, and is motivated by revenue gain and market-share protection.

Among the practices most often cited were deployment of products with limited interoperability (49%) and high fees for health information exchange unrelated to [actual] cost (47%).  Of note: This is the first empirical research identifying and quantifying the specific information blocking practices reported by a group of information exchange experts.

The authors concluded “Information blocking appears to be real and fairly widespread. Policymakers have some existing levers that can be used to curb information blocking and help information flow to where it is needed to improve patient care. However, because information blocking is largely legal today, a strong response will involve new legislation and associated enforcement actions.”

The legal situation regarding the controversial subject of information blocking may have already changed dramatically.  Two important events occurred since this research was undertaken.  First, the strongly bipartisan-backed 21st Century Cures Act was signed into law by President Obama late last year.  The health information technology (HIT) provisions of the law now make it illegal for a vendor or provider to engage in information blocking. Second, the new law provides the nation with a new and comprehensive statutory definition of information blocking:

A practice, except as required by law or allowed by the HHS secretary pursuant to rulemaking, that:

–Is likely to interfere with, prevent or materially discourage access, exchange or use of electronic health information.

–If conducted by an HIT developer, exchange or network, such entity knows or should know that such practice is likely to interfere with, prevent or materially discourage the access, exchange or use of electronic health information.

–If conducted by a health care provider, such provider knows that such practice is unreasonable and is likely to interfere with, prevent or materially discourage access, exchange or use of electronic health information.

(Emphasis added)

The law further directs that information blocking may include the following:

– Practices that restrict authorized access, exchange or use of such information for treatment and other permitted purposes under such applicable law, including transitions between certified HIT systems.

– Implementing HIT in nonstandard ways that are likely to substantially increase the complexity or burden of accessing, exchanging or using electronic health information.

– Implementing HIT in ways that are likely to restrict access, exchange or use of electronic health information with respect to exporting complete information sets or in transitioning between HIT systems.

– Practices that lead to fraud, waste or abuse, or that impede innovations and advancements in health information access, exchange and use, including care delivery enabled by HIT.

In case there is any doubt about the gravity placed on this issue by the authors of the law, consider that Cures requires the Secretary of HHS establish a process for collecting complaints of information blocking and investigating them, and designates the Inspector General as the party responsible for carrying out the investigations. It assigns a penalty of up to $1 million per information blocking episode and funds program operations with the proceeds from such fines and penalties after a $10 million start-up allocation is made.

The law also makes it mandatory for EHR and other HIT vendors seeking federal certification of their products to attest they are not engaging in any form of information blocking as described in the law, and that false attestation may be cause for removal of certification status.  Certification is virtually a requirement of doing business as an HIT developer or vendor in health care, since many federal Medicare and Medicaid payments to doctors and hospitals are tied to the use of products certified by the Office of the National Coordinator for HIT (ONC), and known as certified EHR technology (CEHRT).

No one knows for certain whether these provisions will trigger a large number of complaints from providers, patients and other interested parties who have been frustrated by problems encountered when they have tried to move, share, or exchange data and health information between organizations and across the boundaries of HIT systems.  But the widespread practices described in the Milbank Quarterly research do not augur well for EHR technology vendors and developers being able to avoid at least some negative attention.

At the very least, it is likely that the new Cures information blocking provisions will pose challenges to vendors and developers for the next couple of years. The law’s language is remarkably broad and, in my opinion, purposely comprehensive. For example, EHR vendors cannot use lack of knowledge about their conduct as an excuse if it is found that they should know about the information blocking effects of that conduct. They can be found in violation of the law for restricting access to, exchange of, or use of health information. This means software vendors and their customers will need to carefully monitor and audit information flows for data, both at rest and in transit, to assure potential authorized users don’t suffer restrictions. Monitoring these flows and processes will increase awareness and knowledge of any potential restrictions, thus obligating vendors and their customers to remove them. The law sets a very low bar by stipulating that merely making it “likely” that information is restricted constitutes information blocking, but it also creates new (and high) expectations for what must be exchanged when it references “exporting complete information sets” as the object of such restrictions.

The law’s list of practices considered information blocking includes “Implementing HIT in nonstandard ways that are likely to substantially increase the complexity or burden of accessing, exchanging or using electronic health information.”

I find this last provision to be a particularly worrisome challenge for vendors to attempt to avoid, and to defend against should they be accused of such behavior. The issue here will revolve around what are considered “standard” and “nonstandard” ways of implementing HIT, and what constitutes a “substantial increase in complexity or burden” of accessing, exchanging or using electronic health information. And, of course, who gets to decide these matters is equally important. If the Inspector General investigates a complaint that a vendor has implemented HIT in nonstandard ways that cause a “substantial increase in the complexity of health information exchange,” will the burden fall on the vendor to prove that it has used standardized ways? Or that there are no standards for the implementation in question? Or that the standards ought not to apply in a particular case? And in what situations? Will those standards be those of a local community, or those that are state-wide or national?

I fear that the “next patch” is likely to be difficult for many EHR vendors. Additional challenges include these considerations:

  1. They do not have the new meaningful use (MU) revenue coming in as they did for the past five years. With 80+% adoption of EHRs, there is little new business to be had beyond a limited amount of rip-and-replace activity. Vendors will be looking for new sources of revenue and profit, but there have been complaints that their products and service contracts are already over-priced. Every health care provider organization in the country is looking for ways to stabilize or reduce the ongoing costs of ownership of their health IT software and services, while at the same time improving security. Value-based care and risk-taking contracts, along with repeal of coverage under the replacement for Obamacare, may only increase the economic pressure.
  2. New certification criteria under the law require EHR developers and vendors to work harder on interoperability and to demonstrate usability in the field to maintain certification. As we all know, real-world testing of software can reveal a lot of warts. Contractual agreements recently imposed by EHR vendors that bar users from sharing screen shots and feature sets are prohibited under the law’s HIT provisions. The new certification criteria also require EHR developers and vendors to attest that they do not engage in any form of information blocking, as defined in the law, and the law levies penalties for both information blocking and false attestation. This new definition of information blocking is deep and wide.
  3. New measures for interoperability are one of the responsibilities of the new HIT Advisory Committee being formed to work with Secretary Price and National Coordinator Rucker at HHS/ONC, as is the establishment of a Trust Framework and Agreement. These activities will likely put new pressure on vendors to make their products more network-able and better able to share data with authorized providers and patients/consumers. In a statement released by ONC on May 23, 2017, quoted in Politico’s Morning eHealth on May 24, the agency indicated its interest in information blocking thus: “In FY 2018, ONC will continue to address and discourage information blocking by aggressively implementing ONC Certification Program rules, creating and promoting channels for reporting information blocking, and enforcing information blocking provisions required by the Cures Act.”

Working with physicians, hospitals, and interoperability standards groups like DirectTrust will be in the best interest of EHR developers and vendors as they seek to avoid problems of information blocking, real or imagined. There are legitimate reasons for restricting access to health care information in electronic formats, including security measures and identity assurances needed to protect the privacy of that information. The vendors need to articulate these carefully and with the interest of their customers and their customers’ patients – not their own profits – clearly in mind. Physicians and clinicians will tend to put the interests of patients and consumers of health care services first. They must also – as do DirectTrust and other like-minded communities – do what is reasonable to better knit together the fabric of the entire health care system so that care is better and costs are lower. That is our duty under the new laws. Quite simply, we cannot make good progress toward meeting these goals without the collaboration of the major EHR vendors, their customers, and the public.

How CMS Undermines Pioneer ACOs and What to do About It

In my first post  in this three-part series, I documented three problems with Pioneer ACOs: High churn rates among patients and doctors; assignment to ACOs of healthy patients; and assignment of so few ACO patients to each ACO doctor that ACO “attributees” constitute just 5 percent of each doctor’s panel. I noted that these problems could explain why Medicare ACOs have been so ineffective.

These problems are the direct result of CMS’s strange method of assigning patients to ACOs. Patients do not decide to enroll in ACOs. CMS assigns patients to ACOs based on a two-step process: (1) CMS first determines whether a doctor has a contract with an ACO; (2) CMS then determines which patients “belong” to that doctor, and assigns all patients “belonging” to that doctor to that doctor’s ACO. This method is invisible to patients; they don’t know they have been assigned to an ACO unless an ACO doctor tells them, which happens rarely, and when it does patients have no idea what the doctor is talking about. [1]

This raises an obvious question: If CMS’s method of assigning patients to ACOs is a significant reason why ACOs are not succeeding, why do it? There is no easy way to explain CMS’s answer to this question because it isn’t rational. The best way to explain why CMS adopted the two-step attribution method is to explain the method’s history.

I will do that in this essay. We will see that Congress, on the basis of folklore, decided that doctors in the traditional fee-for-service (FFS) Medicare program were ordering too many services and needed to be herded into “physician groups” that would resemble HMOs (see my comment here  on the obsession with overuse). But Congress also decided they didn’t want to force Medicare beneficiaries to enroll with the quasi-HMOs. That was a critical decision because it required that Congress figure out some method other than enrollment to determine which patients “belonged” to which physician groups. Congress, in its Infinite Wisdom, decided they would let someone else figure that out. They assigned that task to CMS.

The assignment was impossible. CMS should have told Congress they were nuts but, understandably, CMS eschewed that option. So CMS did the best they could. Based on some arbitrary assumptions, CMS devised the two-step assignment method that is now causing so much trouble for ACOs.

The first mistake: Demonizing fee-for-service

The label “accountable care organization” was invented by Elliott Fisher and members of the Medicare Payment Advisory Commission at MedPAC’s November 9, 2006 meeting. [2] At that meeting Fisher presented to MedPAC a version of the two-step process for assigning Medicare beneficiaries that CMS was already using for the Physician Group Practice (PGP) demonstration (which began in 2005) and that CMS would go on to use for its first two ACO programs – the Pioneer ACO program and the Medicare Shared Savings Program (MSSP) (both inaugurated in 2012). That fact, and Fisher’s aggressive promotion of ACOs after that meeting, earned Fisher the title of “father of the ACO.”

But the ACO concept was being discussed by CMS by the early 1990s (at that time CMS was known as the Health Care Financing Administration), and the two-step method of assigning patients to groups of doctors was being discussed within CMS by the early 2000s. The impetus for these discussions was a series of laws enacted by Congress in 1989, 2000, 2005, and 2010, all aimed at reducing inflation in the cost of Medicare’s traditional FFS program.

In the Omnibus Budget Reconciliation Act (OBRA) of 1989, Congress authorized a “volume performance standard” (VPS) for Part B, the first version of what would soon become the Sustainable Growth Rate formula (which would in turn be replaced by MACRA in 2015). The VPS was a limit on total Part B spending. Because Congress had some doubts about how well the VPS would work, and perhaps more importantly, because Congress had bought the conventional wisdom that FFS causes overuse and overuse was causing health care inflation, Congress included a provision within OBRA authorizing CMS/HCFA to develop an alternative to Part B’s FFS method. This alternative was supposed to employ managed care tactics, including shifting insurance risk to doctors.

This was the first mistake Congress would make on its way to endorsing ACOs and ultimately MACRA. By demonizing FFS and lionizing managed care, Congress got it totally backwards. Congress should have investigated how to make both the insurance industry and the privatized portion of Medicare (what we now call Medicare Advantage) look more like traditional Medicare, not the other way around. By the early 1990s Congress had been warned numerous times that there was little evidence for the claims being made on behalf of HMOs and the HMO wannabees that were rapidly taking over the insurance industry, and much evidence indicating that the portion of Medicare run by HMOs (today’s Medicare Advantage) was costing much more per insured beneficiary than the traditional FFS program. [3]

But Congress, under the spell of the managed care movement, didn’t grasp that it had it backwards. So rather than instruct CMS to look for ways to induce private-sector insurers to act more like traditional FFS Medicare, Congress endorsed the opposite policy. Provisions in OBRA instructed CMS to start looking for ways to make the traditional Medicare program look more like the managed care insurance companies that were taking over the private sector.

The sound of one hand clapping

Predictably enough, the VPS system authorized by OBRA didn’t work and Congress replaced it with the doomed Sustainable Growth Rate (SGR) formula in 1997. The SGR soon proved it wasn’t going to work either.

The problem with both the VPS and SGR (aside from the fact they were designed to address overuse, a problem that was minor compared to underuse and excessive prices and administrative costs) was that they both applied expenditure growth limits to the entire pool of 700,000 American doctors who treated Medicare patients. That pool was too large; there was no way individual doctors could perceive that it was in their self-interest to reduce their own contribution to the alleged overuse problem by cutting back services to their own patients.

By the late 1990s, Congress and the managed care movement were even more obsessed with overuse and, given the failure of the VPS and SGR mechanisms, even more determined to find a way to break the ocean of Medicare doctors into smaller pools to which mini-SGRs and managed-care tactics could be applied. The thinking was that if doctors were no longer in a pool of 700,000 doctors but were instead in much smaller pools (say 200 to 1,000 doctors), doctors would find it in their financial interest to stop ordering all those unnecessary services and, if they didn’t, they could be micromanaged by a third party.

The failure of the VPS and then the SGR led Congress to enact two laws that contained provisions that accelerated the search for the Holy Grail – quasi-HMOs that could serve as the holding pens for pools of doctors much smaller than the national pool. The first of these laws, the Medicare, Medicaid, and State Child Health Insurance Program Benefits Improvement and Protection Act (BIPA) of 2000, authorized CMS/HCFA (hereafter just CMS) to create the Physician Group Practice demo, and the other, the Deficit Reduction Act of 2005, instructed the Medicare Payment Advisory Commission (MedPAC) to dream up some other small-pool ideas. It was the 2005 instructions to MedPAC that caused MedPAC to hold that November 9, 2006 meeting with Elliott Fisher at which the “ACO” label was invented and endorsed. MedPAC’s endorsement in turn contributed significantly to the groupthink that induced Congress to include provisions in the Affordable Care Act authorizing CMS to start the Pioneer and MSSP ACO pilots.

The BIPA law of 2000 was the second in which Congress asked CMS to solve the Zen riddle they refused to solve – how to determine “belongingness” of patients to doctors without making patients enroll with a doctor or clinic. But this time CMS would have to do more than produce a study on how the impossible question might be answered. This time they would have to choose a method and use it in an actual demonstration – the PGP demo. There could be no more delay. CMS had to solve the Zen riddle presented to them by Congress – it had to devise a way to assign patients to groups of doctors so that those doctors could be punished if “their” patients got too many services even if many of those patients, um, weren’t “theirs.”

Solving the unsolvable

After Congress passed OBRA, CMS contracted with scholars at Brandeis to make recommendations on how to expose groups of doctors to financial incentives to reduce medical services. The Brandeis scholars delivered their first paper on “group specific volume performance standards” to CMS in 1991, and subsequent installments in 1992 and 1995 (see their 1995 paper here and a 2003 version of it here ).

These papers proposed the basic elements of what would later be called the ACO. They proposed “shared savings” programs under which groups of doctors allegedly large enough to bear some insurance risk would somehow cut medical costs and share the savings with CMS. CMS would measure savings (or, perish the thought, increased costs) by calculating total spending on all the patients seen by a “physician group” in a baseline year, and then compare that with total spending on the patients seen by that group in a subsequent (“performance”) year. The Brandeis papers regurgitated the folklore peddled by the dominant managed care movement, to wit:

  • FFS was responsible for “runaway” growth in Part B spending (whether growth was even more “runaway” in the private sector didn’t matter);
  • “managed care” was the solution;
  • under the lash of exposure to insurance risk, doctor groups would adopt managed care tactics;
  • those tactics would generate savings and improve quality, not the other way around;
  • CMS would find a way to measure physician cost and quality accurately; and
  • the savings would be shared between Medicare and the doctors.

The Brandeis papers offered no evidence for these claims. [4]

Although the Brandeis scholars acknowledged in their reports that Congress didn’t want Medicare recipients to be forced to enroll with physician groups, they dodged the question of how, short of forced enrollment, CMS would know which recipients belonged to which doctors. [5] It was not until the early 2000s, when CMS began planning the PGP demo, that CMS “solved” that problem. Sometime shortly before the 2005 inauguration of the PGP demo, CMS adopted the peculiar two-step method of assigning patients that they would use in the PGP, Pioneer, and MSSP programs.

When CMS began designing the PGP demo, they contracted with RTI International, not the Brandeis scholars. In a paper published in a 2007 edition of Medicare and Medicaid Research Review, RTI’s John Kautter and colleagues laid out the design of the PGP demo they had recommended and that, by then, CMS had adopted. Kautter et al. stated their recommendations “build on” the Brandeis papers, then went beyond the Brandeis studies and proposed an algorithm by which CMS could assign Medicare beneficiaries to the ten groups participating in the PGP demo.

Kautter et al. recommended that CMS first determine to which PGPs doctors belonged, and then assign patients to those doctors based on the plurality-of-primary-care-visits method. Under this method, patients would be assigned to the primary doctor they saw most often. Thus, if I see two primary care doctors during a baseline year (say 2017) a total of five times, and three of those visits were to Dr. Inside ACO and two were to Dr. Outside, I will be assigned to Dr. Inside during the performance year (say 2018). Even though I’m free to see Dr. Outside and other doctors in 2018, and even though I may never again visit Dr. Inside after 2017, Dr. Inside is still “accountable” for me in 2018.
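
To make the mechanics concrete, here is a minimal Python sketch of the two-step, plurality-of-visits assignment described in this paragraph. All names and data structures are hypothetical illustrations of the rule, not CMS’s actual implementation.

    # Hypothetical sketch of the two-step, plurality-of-visits assignment
    # described above. Names and data formats are illustrative only.
    from collections import Counter

    # Step 1: which doctors have a contract with which ACO (baseline year).
    doctor_to_aco = {
        "Dr. Inside": "Pioneer ACO A",
        # "Dr. Outside" has no ACO contract.
    }

    # Step 2: each beneficiary's primary care visits during the baseline year.
    visits = {
        "beneficiary_1": ["Dr. Inside", "Dr. Inside", "Dr. Inside",
                          "Dr. Outside", "Dr. Outside"],  # the example in the text
    }

    def assign(visit_list):
        """Assign the beneficiary to the ACO, if any, of the doctor seen most often."""
        if not visit_list:
            return None
        plurality_doctor, _ = Counter(visit_list).most_common(1)[0]
        return doctor_to_aco.get(plurality_doctor)  # None if that doctor is in no ACO

    for beneficiary, visit_list in visits.items():
        print(beneficiary, "->", assign(visit_list))
    # beneficiary_1 -> Pioneer ACO A
    # (assigned to Dr. Inside's ACO for the performance year, even though the
    # beneficiary remains free to see Dr. Outside or anyone else in that year)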

Kautter et al. did not comment on the irrationality of the riddle posed to CMS by Congress. They merely declared, “Because the PGP demonstration is a Medicare FFS innovation, there is no enrollment process whereby beneficiaries accept or reject involvement. Therefore, we developed a methodology to assign beneficiaries to participating PGPs based on utilization of Medicare-covered services.” Having thus delicately skirted the issue of congressional sanity (don’t you just love the unctuous phrase “FFS innovation”?), they went on to say they used two criteria to determine the best assignment method:

We evaluated the alternative assignment methodologies on two criteria: Provider responsibility and sample size. First, providers must believe that the numbers and types of services they provide mean that they have primary responsibility for the health care of beneficiaries assigned to them. Otherwise, PGPs may have difficulty responding effectively to the demonstration incentives…. Second, sample size is critically important for the statistical reliability of performance measurement. If the number of beneficiaries assigned to a participating PGP is too low, then cost and quality performance measurement may be unstable.

Note the phrase “providers must believe.” How did Kautter et al. determine what doctors “must believe” about how turbulent the pool of patients assigned to them should be? Answer: They interviewed a few doctors. What did the doctors tell them? Kautter et al. didn’t say. How did Kautter et al. determine what constitutes accurate “performance measurement” and how big the pool of patients must be to achieve that? They didn’t say. They simply concluded that when they balanced the two criteria in their own minds – “responsibility” and sample size – they came up with the plurality-of-visits method.

Kautter et al. simulated their plurality-of-visits method and discovered it would cause great churn among patients. “PGPs generally retained approximately two-thirds of their assigned beneficiaries from one year to the next,” they reported. How did Kautter et al. justify such a high churn rate? Other than to say they interviewed some doctors, they didn’t. They also had no comment on the possibility that the two-step process would assign few patients to doctors and that those patients might be healthier than average.

Sometime shortly before CMS implemented the PGP demo in 2005, CMS adopted Kautter et al.’s two-step assignment method for that demo. In November 2006, the ineffable phrase “accountable care organization” was concocted by Fisher and MedPAC. And sometime between the enactment of the Affordable Care Act in March 2010 and the 2012 start date of the Pioneer and MSSP ACO programs (probably 2011), CMS decided to use the same two-step method they had adopted for the PGP demo for those ACO programs.

We have seen the consequences. PGPs/ACOs can’t cut costs and, at best, make modest improvements on a tiny handful of quality measures, an improvement which may have been accompanied by a decline in the quality of unmeasured care.

No exit

If you followed my discussion of how Kautter et al. struck a balance (at least in their own minds) between their two criteria – “belongingness” and adequate sample size – then you already know that the problems created by CMS’s two-step assignment method are not fixable.

Consider CMS’s only option to reduce patient churn. If CMS abandons the plurality-of-visits rule in favor of, for example, an 80-percent-of-visits rule, that would greatly increase the odds that the patients assigned to a doctor really do “belong” to that doctor and will continue to see that doctor in the performance year. But that would also assign even healthier patients to ACOs, and it would greatly reduce the number of patients that could be assigned to ACOs. The reduction in the number of patients assigned to ACOs would in turn make CMS’s measurements of cost and quality even cruder, and it would push the percent of ACO patients in a doctor’s panel even lower than the 5 percent level I discussed in my previous post.
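
To see why a stricter rule shrinks the assigned population, consider a tiny synthetic example in Python. The visit patterns are invented purely to illustrate the tradeoff; they are not drawn from any real data.

    # Synthetic illustration of the tradeoff described above: raising the bar
    # from a simple plurality to 80% of visits keeps only the most "loyal"
    # beneficiaries, shrinking the assigned population.
    from collections import Counter

    beneficiaries = {
        "loyal":     ["Dr. A"] * 9 + ["Dr. B"],      # 90% of visits to Dr. A
        "split":     ["Dr. A"] * 3 + ["Dr. B"] * 2,  # 60% of visits to Dr. A
        "scattered": ["Dr. A", "Dr. B", "Dr. C"],    # one visit to each
    }

    def assigned_doctor(visit_list, min_share=0.0):
        """Return the plurality doctor if their share of visits meets min_share."""
        doctor, count = Counter(visit_list).most_common(1)[0]
        return doctor if count / len(visit_list) >= min_share else None

    for rule, min_share in [("plurality rule", 0.0), ("80%-of-visits rule", 0.8)]:
        n = sum(1 for v in beneficiaries.values() if assigned_doctor(v, min_share))
        print(f"{rule}: {n} of {len(beneficiaries)} beneficiaries assigned")
    # plurality rule: 3 of 3 beneficiaries assigned
    # 80%-of-visits rule: 1 of 3 beneficiaries assigned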

ACO proponents have only two options: Explicitly prohibit patients from visiting doctors outside their ACO, which would be tantamount to admitting ACOs really were HMOs in drag all along; or redefine ACOs so that they are no longer responsible for entire “populations” but instead focus on the chronically ill. I will discuss these options, the impact these options would have on MACRA, and the final Pioneer ACO evaluation in my next post.

[1] Evidence on the near-total lack of awareness among patients of their assignment to an ACO appears in the final evaluation of the Pioneer ACO program, released last December. The author of that evaluation, L&M Policy Research, held focus groups with Medicare recipients who had been assigned to a Pioneer ACO. “[W]e learned that beneficiaries were generally unaware of the ACO organization and the term ‘ACO,’” L&M reported. “In the few cases where the beneficiaries reported hearing the term ACO, they were not able to describe what an ACO is and its relationship to them as recipients of health care services. Since beneficiaries were not even aware of the term ‘ACO,’ they also were unaware that their care was being provided or coordinated by an ACO.” (p. 51)

Patient ignorance of their status as an ACO member may have been aggravated by physician ignorance. According to L&M, “In several respects, physicians were not particularly knowledgeable about the ACO. When asked if they knew which of their patients were aligned with the Medicare ACO, just over a third of Pioneer physicians reported knowing which beneficiaries were aligned and a similar proportion reported not knowing their aligned beneficiaries at all. When asked about the elements of their compensation, almost half of physicians participating in the Pioneer model reported not knowing whether they were eligible to receive shared savings from the ACO if the ACO achieved shared savings.” (p. 43)

[2] As Kelly Devers and Robert Berenson put it, “Together, the Medicare Payment Advisory Commission … and [Elliott] Fisher provided the impetus for the current concept and interest in ACOs.” (p. 2)

[3] It should have been obvious to Congress why traditional FFS Medicare was beating the pants off the insurance industry. First, the traditional Medicare program paid doctors and hospitals substantially less than private-sector insurers did. Second, the traditional program devoted a much smaller percent of its expenditures to overhead (2 percent since the early 1990s) compared with the 20-percent overhead of the insurance industry. (For evidence that the insurance industry’s overhead is 20 percent, see this graphic  published by America’s Health Insurance Plans. For evidence that traditional Medicare’s overhead is 2 percent, see citations to reports by the Medicare trustees, the Congressional Budget Office and others in my paper on this subject in the Journal of Health Politics, Policy and Law.) The insurance industry has never figured out how to overcome those two advantages – lower payment to providers and lower overhead – and it never will.

Moreover, as of the early 1990s the traditional Medicare FFS program did not micromanage Part B doctors as the insurance industry did and as MACRA is forcing CMS to do today. That in turn meant traditional Medicare was not driving up physician overhead costs, and was not burning doctors out, anywhere near as much as the insurance industry did and does now.

[4] Here is just one example of numerous evidence-free paeans to HMOs and managed care strewn throughout the Brandeis papers produced under contract with CMS (then known as HCFA): “The efficiencies … should be achieved through effectively managed care…. Presence of utilization review and quality assurance programs and other features associated with managed care may also be prerequisite.” (“Models for Medicare payment system reform based on group-specific volume performance standards (GVPS),” unnumbered page, Appendix B )

[5] The authors of the Brandeis papers might object to my statement that they “dodged” the issue of how to assign patients to physician groups. They could argue, correctly, that they did mention the issue and decided they didn’t need to assign patients to each doctor. I won’t try to explain here the strange logic they used to justify that position. Suffice it to say their primary argument, presented without a shred of evidence, was that risk adjustment could accurately detect changes in the average health status of an ever-changing pool of patients.

Open Season on Health Privacy in Washington DC

Open Season on Health Privacy in Washington DC

With Senate bill S.3530, data brokers would remove the last shreds of transparency and control that patients still have over our health data and drive healthcare costs even higher in the process. Will hospitals and the pharmaceutical industry go along?

It’s been 17 years since patients lost control over how our hospitals and insurance companies use our personal health data, with neither consent nor a convenient accounting of disclosures. HIPAA allows so-called Covered Entities to use our data without consent and, separately, to sell it, often under the pretense of de-identification, through a $100 billion network of hidden data brokers that we don’t know about, choose, or oversee. Our data is worth $100 billion because it helps health businesses maximize profits, and it contributes to an unknown extent to the uniquely high cost of healthcare in the US.

The lack of health data access and transparency under current HIPAA is evident to anyone who wants to understand how much a health service will cost, who wishes there were a rational way to choose a health plan, or who would like some idea of the quality of a hospital or the cost-effectiveness of a drug. From a privacy perspective, HIPAA has not served patients particularly well.

It could get worse.

With the cynically named “Ensuring Patient Access to Healthcare Records Act of 2016” (S.3530), a coalition of data brokers is asking Washington to remove the little control over privacy that we have left by giving data brokers the same HIPAA lack-of-consent treatment that our hospitals and insurance companies already have. Along the way, the data brokers are asking for various safe harbors and for elimination of HIPAA’s state preemption provisions. (Those provisions allow states like California to treat HIPAA as a floor by adding privacy protections such as a patient right of action.) One well-known privacy consultant characterized S.3530 as a “sinister plot”.

At first look, extending Covered Entity status to data brokers seems like a quantitative shift and possibly a benefit to patients. But the deceptive part is that unlike today’s Covered Entities (hospitals, pharmacies, and insurance companies), data brokers do not have to compete for the patient’s business. They’re infrastructure, common to whatever healthcare service we might choose. By giving the infrastructure business the right to use and sell our data without consent or even transparency, we are enabling a true panopticon – an inescapable surveillance system for our most valuable personal data.

Open season on privacy in Washington, DC is not limited to healthcare. Congress is about to make your Web browsing history a matter for surveillance at the infrastructure level as well. A recent article by Bruce Schneier explains:

“Unlike service providers like Google and Facebook, telecom companies are infrastructure that requires government involvement and regulation. The practical impossibility of consumers learning the extent of surveillance by their internet service providers, combined with the difficulty of switching them, means that the decision about whether to be spied on should be with the consumer and not a telecom giant. That this new bill reverses that is both wrong and harmful.”

There are too many other frightening aspects of S.3530 to go into detail here. One of them (paragraph 3-D-(2)), however, stands out for its sheer cynicism: it would allow a data broker to sell our own data back to us after purchasing it from other data brokers.

17 years into HIPAA, computers and networks are now effectively free relative to the value of the personal health data being managed. Clearinghouses and other vestiges of the paper age should be irrelevant, not a $100 billion hidden surveillance business. From a privacy and patient rights perspective, S.3530 is a disaster. It will be interesting to see how our healthcare providers, pharmaceutical and device manufacturers, and other principals who legitimately need and should have consented access to our private data react to S.3530.

The Delta of Discomfort and the Agony of Despair

The Delta of Discomfort and the Agony of Despair

I’m a radiologist. I spend my day looking at CT scans and MRI scans. When it’s a good day, I have interesting scans to review, but much of my work is not too dissimilar from a TSA screener’s. One normal scan after the next; it’s akin to trying to stay alert so as not to miss the gun someone is trying to sneak through in their luggage. In my case, of course, the “gun” is a cancer or other unexpected abnormality.

Computer-Assisted Diagnosis

Most of my day is spent at a computer workstation that presents the exams. In my dream, my workstation does more than simply display the exam: it assists in reading the case. If there’s an abnormality, I can click on the area, and the workstation takes the image and compares it to millions of other cases in the cloud. It tells me, based on that patient’s age, sex, and other information, how likely the finding is to be a tumor and, if so, maybe even what kind of tumor. At other times the workstation tells me whether a study is normal or not, freeing me for other activities. However, this dream is not shared by all of my colleagues.
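For readers who like to see the idea in concrete terms, here is a minimal sketch of the kind of lookup I’m dreaming about. It assumes every prior case has already been reduced to a numeric feature vector and labeled; the data, feature size, and function name are invented for illustration and do not describe any real product:

```python
import numpy as np

# Stand-in for millions of labeled prior cases "in the cloud": each row is a
# feature vector for one case, labeled 1 (tumor) or 0 (not a tumor).
rng = np.random.default_rng(0)
reference_features = rng.normal(size=(100_000, 64))
reference_labels = rng.integers(0, 2, size=100_000)

def estimate_tumor_probability(query_features, k=50):
    """Crude estimate: the fraction of the k most similar prior cases
    that were labeled as tumors."""
    distances = np.linalg.norm(reference_features - query_features, axis=1)
    nearest = np.argsort(distances)[:k]
    return reference_labels[nearest].mean()

# Feature vector for the region the radiologist clicked on (made up here).
clicked_region = rng.normal(size=64)
print(f"Estimated probability of tumor: {estimate_tumor_probability(clicked_region):.0%}")
```

A real system would also fold in the patient’s age, sex, and history, and would be trained and validated far more carefully, but the basic “compare this finding to a large library of prior cases” idea is the same.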

When I mentioned this work-flow scenario to one of my residents—this idea of computer-assisted diagnosis with constant improvement, or what is known as artificial intelligence, or AI—he said it sounded awesome, but he really didn’t want to have it if it was available to everyone. His comment is something of a mixed message: Yes, I see value but no, I don’t want it.

That reluctance is largely based on fear of being replaced by computers. And it’s a song I’ve heard before.

The Transition from Film to Digital

Back in the late 90s, I replaced my camera with a digital camera. I’ve included an early digital picture of my then 4-year-old tearing up, thinking that his Christmas gift was a book. His older brother and sister of course realized that the book was a guidebook to Disneyland and the gift was a trip. To provide some perspective, my son turned 21 last month.

At the same time I was replacing my film-based camera, radiology—my field—was transitioning from film to computers. I started my residency in the same way that my father started his residency in the 1960s. Radiology looked a lot like what we see on TV: lots and lots of films everywhere hung on view boxes to be reviewed. However, by the end of my fellowship, about 6 years later, much of my work was on computers.

I remember at the time of this transition many radiologists had to be dragged into this change. A mentor of mine, Dr. Evan Fram, told me about giving a talk at a university program on how the digital revolution for radiology was really a game-changer. When one of the radiologists in the audience realized that computers would allow fewer radiologists to read the same number of cases, he stood up and said, well, I don’t want this technology then; you’re basically going to eliminate me or somebody else in this room. And Dr. Fram said calmly, look, I’m not the only game in town. This technology’s coming, and we really have to figure out how you’re going to integrate it into your practice. There’s no stopping this now.

Now, 20 years later, I couldn’t practice without computers. The case numbers have exploded in terms of volume and complexity. An average CT that was 40 pictures at the start of my residency might be 2000 images now. And we can do things manipulating those images we couldn’t dream of back in the 90s.

Community and Value in the Digital Age of Radiology

With AI I expect the same level of change over the next 10-20 years. Where the transition from film to digital has allowed radiologists to read more complex cases and a higher number of cases/day, the new technology will increase our value with better diagnoses. The focus will be on the difficult cases rather than on the high volume of normal cases. If AI can help me make a diagnosis that I can’t see with the naked eye—say, for instance, how the brain volume’s changing over months, or how a patient with MS is changing, or insights, perhaps, into a particular tumor that will improve my differential—I’m all for it.

However, there has been a dark side to this innovation. Before, when we were all on film, I used to see my colleagues. They would come to the department; doctors would discuss the patient together; and I would get an opportunity to hear about all sorts of information that today I’m not privy to. Today, I’m much more of a replaceable commodity. My job is not the same job I started with in the 1990s. And frankly, the radiology community is struggling with this change – with how to work with colleagues and how to add value in the digital age.

The future of artificial intelligence will be the same—a mix of improved tools but a disruption in who we are as professionals. In his book, Thank You for Being Late, Thomas Friedman has outlined this mismatch, where the speed of technical innovation has outstripped our ability to change with it. There are folks like my resident who fear the future and see only the need for fewer radiologists. The fact of the matter is, however, that the business of healthcare—better diagnosis at a lower cost—will drive this transition. And for me, as with the previous transition, I—or more likely my daughter’s generation—can choose to resist the change and essentially get annihilated by the wave, or try and ride on top of it, directing how the change is applied to our profession and our patients.

Ultimately, healthcare is not for the benefit of the physician but rather for the benefit of the patient. Freed from many of the normal cases I read today, the future radiologist will have to find other ways to add value, likely helping to develop new computer algorithms, further adding to care. They might even have more time to talk to other doctors and patients about what’s really wrong with the patient.

I still think we’re a ways from HAL, the computer in the movie 2001, delivering bad news or even giving a treatment plan to patients. In the words of Mark Twain, “The reports of my death”—or in this case, the death of radiology—”have been greatly exaggerated.” However, this is another song I know. Radiologists in the future will, like the generation before them in adapting to digital reading, have to adopt the new tools of assisted diagnosis. This is in the interest of their patients as well as their profession.

Compete for 2017’s Startup Spotlight at Health 2.0’s Traction Competition!

Compete for 2017’s Startup Spotlight at Health 2.0’s Traction Competition!

 Pitch and Get Funded!

With a new political climate, exponential growth in tech, and an increasing awareness of key issues, the health care industry is ever-changing, and now’s the best time for your startup to break through in the digital health community.

Demonstrate your company’s potential to prominent investors by entering your startup in Traction, Health 2.0’s startup pitch competition, and give your company the perfect opportunity to pitch to a room full of attendees looking to get involved with your startup. You’ll work with industry experts to perfect your pitch and ensure it is stage-ready by the time of the conference. Investors will be so impressed they’ll be left with no choice but to invest in your company!

Traction will kick off the Health 2.0 11th Annual Fall Conference on Sunday, October 1, 2017 at 3 PM. This competition recruits companies ready for Series A in the $2-12M range. Teams will compete in two tracks: consumer-facing and professional-facing technologies.

The application deadline is Tuesday, July 25th at 11:59PM EST.

Six teams will be selected as finalists in mid-August for the two different tracks. These finalists will then be paired with exceptional mentors to help them prepare for the stage at the Fall Conference. Once the teams have prepared, they will hit the stage and TWO startups (one from the consumer facing track and the other from the professional facing track) will claim the title of 2017’s Startup Champs.

Don’t wait: enter your company NOW, before July 25th, to be selected as one of the six finalists to pitch live to venture capitalists, angel investors, government officials, and health care industry experts at Health 2.0’s 11th Annual Fall Conference.

Deepa Mistry is an associate producer at Health 2.0.

On Teaching Hospitals and Conflict of Interest and Other Politically Charged Topics

On Teaching Hospitals and Conflict of Interest and Other Politically Charged Topics

How much does it matter which hospital you go to? Of course, it matters a lot – hospitals vary enormously on quality of care, and choosing the right hospital can mean the difference between life and death. The problem is that it’s hard for most people to know how to choose. Useful data on patient outcomes remain hard to find, and even though Medicare provides data on patient mortality for select conditions on their Hospital Compare website, those mortality rates are calculated and reported in ways that make nearly every hospital look average.

Some people choose to receive their care at teaching hospitals. Studies in the 1990s and early 2000s found that teaching hospitals performed better, but there was also evidence that they were more expensive. As “quality” metrics exploded, teaching hospitals often found themselves on the wrong end of the performance stick, with more hospital-acquired conditions and more readmissions. In nearly every national pay-for-performance scheme, they seemed to be doing worse than average, not better. In an era focused on high-value care, the narrative has increasingly become that teaching hospitals are not any better – just more expensive.

But is this true? On the one measure that matters most to patients when it comes to hospital care – whether you live or die – are teaching hospitals truly no better or possibly worse? About a year ago, that was the conversation I had with a brilliant junior colleague, Laura Burke. When we scoured the literature, we found that there had been no recent, broad-based examination of patient outcomes at teaching versus non-teaching hospitals. So we decided to take this on.

As we plotted how we might do this, we realized that to do it well, we would need funding. But who would fund a study examining outcomes at teaching versus non-teaching hospitals? We thought about NIH but knew that was not a realistic possibility – they are unlikely to fund such a study, and even if they did, it would take years to get the funding. There are also some excellent foundations, but they are small and therefore focus on specific areas. Next, we considered asking the Association of American Medical Colleges (AAMC). We know these colleagues well and knew they would be interested in the question. But we also knew that for some people – those who see the world through the “conflict of interest” lens – any finding funded by AAMC would be quickly dismissed, especially if we found that teaching hospitals were better.

Setting up the rules of the road

As we discussed funding with AAMC, we set up some basic rules of the road.  Actually, Harvard requires these rules if we receive a grant from any agency. As with all our research, we would maintain complete editorial independence. We would decide on the analytic plan and make decisions about modeling, presentation, and writing of the manuscript. We offered to share our findings with AAMC (as we do with all funders), but we were clear that if we found that teaching hospitals were in fact no better (or worse), we would publish those results. AAMC took a leap of faith knowing that they might be funding a study that casts teaching hospitals in a bad light. The AAMC leadership told me that if teaching hospitals are not providing better care, they wanted to know – they wanted an independent assessment of their performance using meaningful metrics.

Our approach

Our approach was simple. We examined 30-day mortality (the most important measure of hospital quality) and extended our analysis to also examine 90 days (to see if differences between teaching and non-teaching hospitals persisted over time). We built our main models, but in the back of my mind, I knew that no matter which choices we made, some people would question them as biased. Thus, we ran a lot of sensitivity analyses, looking at shorter-term outcomes (7 days), models with and without transferred patients, various hospital size categories, and various specifications of how one even defines teaching status. Finally, we included volume in our models to see whether the volume of patients seen was driving differences in outcomes.

The one result that we found consistently across every model and using nearly every approach was that teaching hospitals were doing better. They had lower mortality rates overall, across medical and surgical conditions, and across nearly every single individual condition. And the findings held true all the way out to 90 days.

What our findings mean

This is the first broad, post-ACA study examining outcomes at teaching hospitals, and for the fans of teaching hospitals, this is good news. The mortality difference between teaching and non-teaching hospitals is clinically substantial: for every 67 to 84 patients who go to a major teaching hospital (as opposed to a non-teaching hospital), you save one life. That is a big effect.
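The “67 to 84 patients” figure is in effect a number-needed-to-treat calculation, derived from the absolute difference in mortality rates. As a rough illustration (the percentages below are back-calculated from the stated range, not quoted from the paper):

$$\text{NNT} = \frac{1}{\text{absolute risk reduction}}, \qquad \frac{1}{0.015} \approx 67, \qquad \frac{1}{0.012} \approx 83$$

In other words, the stated range corresponds to a mortality difference on the order of 1.2 to 1.5 percentage points between major teaching and non-teaching hospitals.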

Should patients only go to teaching hospitals, though? That is wholly unrealistic, and these are only average effects. Many community hospitals are excellent and provide care that is as good as, if not superior to, that at teaching institutions. But lacking other information when deciding where to receive care, patients do better on average at teaching institutions.

Way forward

There are several lessons from our work that can help us move forward in a constructive way. First, given that most hospitals in the U.S. are non-teaching institutions, we need to think about how to help those hospitals improve. The follow-up work needs to delve into why teaching hospitals are doing better and how we can replicate and spread that to other hospitals. This strikes me as an important next step. Second, can we work on our transparency and public reporting programs so that hospital differences are distinguishable to patients? As I have written, we are doing transparency wrong, and one of the casualties is that it is hard for a community hospital that performs very well to stand out. Finally, we need to fix our pay-for-performance programs to emphasize what matters to patients. And for most patients, avoiding death remains near the top of the list.

Final thoughts on conflict of interest

For some people, these findings will not matter because the study was funded by “industry.” That is unfortunate. The easiest and laziest way to dismiss a study is to invoke conflict of interest. This is part of the broader trend of deciding what is real versus fake news, based on the messenger (as opposed to the message). And while conflicts of interest are real, they are also complicated. I often disagree with AAMC and have publicly battled with them. Despite that, they were bold enough to support this work, and while I will continue to disagree with them on some key policy issues, I am grateful that they took a chance on us. For those who can’t see past the funders, I would ask them to go one step further – point to the flaws in our work. Explain how one might have, untainted by funding, done the work differently. And most importantly – try to replicate the study. Because beyond the “COI,” we all want the truth on whether teaching hospitals have better outcomes or not. Ultimately, the truth does not care what motivated the study or who funded it.

Only Alternative Facts Can Support the Protecting Access to Care Act

Only Alternative Facts Can Support the Protecting Access to Care Act

In late March of this year, JAMA Internal Medicine published a study finding that “the overall rate of [malpractice] claims paid on behalf of physicians decreased by 55.7% from 1992 to 2014.” The finding wasn’t new. In 2013, the Journal of Empirical Legal Studies published a study co-authored by one of us (Hyman) which found that “the per-physician rate of paid med mal claims has been dropping for 20 years and in 2012 was less than half the 1992 level.” In fact, peer-reviewed journals in law and medicine have published lots of studies with similar results. It is (or should be) common knowledge that claims of an ongoing liability crisis are phony.

But inconvenient facts have never stopped interest groups or politicians from making false claims about med mal litigation.  Since 1991, when Dan Quayle struck gold by asserting that the U.S. had too many lawyers, Americans have heard non-stop about “jackpot justice” in which patients who weren’t even injured win millions; about the flood of frivolous lawsuits in which doctors are sued even though they didn’t make any mistakes; about jury verdicts skyrocketing out of control; and about doctors working all their lives only to have their savings wiped out by a single malpractice suit.  All of these charges are false—you can find the evidence here, here, here, and here.  But in politics, it’s staying on message that counts; it doesn’t seem to matter whether the message is true.

Kellyanne Conway brought “alternative facts” into the political lexicon, but tort reform advocates have been mouthing alternative facts for decades — and researchers who study the civil justice system empirically have been debunking them for just as long.

Politicians don’t take kindly to being called out.  When Texas’ tort reformers sought to limit lawsuits in 2003, they promised that caps on damages would save money by reducing the practice of defensive medicine.  Then, we co-authored a study showing that health care spending rose at the same pace after 2003 as it had before.  How did then-Governor Rick Perry respond? He said that the real goal of reform was to lure doctors to Texas, not to save money.  And, he claimed the real goal had been accomplished, because thousands of new doctors had flooded into the state.  Perry’s fallback claim was also based on alternative facts.  As we showed in another co-authored study, there was no evidence of an increase in the number of direct patient care physicians in Texas during the post-reform period. Perry’s career did not suffer because of these statements – he now heads the U.S. Department of Energy.

Perry has a kindred spirit in Dr. Tom Price, the Secretary of the U.S. Department of Health and Human Services. In 2010, Price asserted that defensive medicine accounted for $650 billion every year, or 26% of health care spending. Price’s “evidence” for this astonishing claim was a survey of physicians, asking them to estimate what percentage of health care spending was attributable to defensive medicine. This figure is implausible on its face, and is roughly 15 times the estimate of $45.6 billion per year published in 2010 in Health Affairs. Some studies have found even lower estimates. And, with 30 states having already enacted tort reform, many of the potential savings (if any) have already been realized.
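As a quick arithmetic check on that “roughly 15 times” comparison (both dollar figures come from the sources cited above):

$$\frac{\$650 \text{ billion}}{\$45.6 \text{ billion}} \approx 14.3$$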

Republicans are still pushing for tort reform at the federal level. H.R. 1215, the Protecting Access to Care Act, would impose a $250,000 cap on recoveries for non-economic losses. The FY 2018 budget for HHS includes a similar proposal to “modernize” the medical liability system by capping non-economic losses at $250,000, but indexing the cap for inflation. We doubt that these caps will have much of an impact on health care spending or physician supply. They are also a remarkable intrusion into an area traditionally regulated by the states. And they are aimed at a peculiar target, since there is no evidence that med mal victims are routinely over-compensated. To the contrary, it is well established that most receive amounts that are too small to cover the economic losses they incurred, or, in the case of plaintiffs who win at trial, too small to cover the losses that juries think they incurred.

H.R. 1215 also includes a sliding scale cap on plaintiffs’ lawyers’ contingent fees.  The FY 2018 budget for HHS includes a similar provision allowing courts to modify fee arrangements. These provisions are sold as protections for vulnerable patients who can’t protect themselves from greedy attorneys, but they are really just price controls that prevent many victims with meritorious claims from obtaining the legal services they need.  We have studied the market in which clients hire med mal lawyers.  Lots of firms compete for their business, and “it is hard to make an economically plausible argument for capping contingency fees.”  The only explanation for the GOP’s desire to cap their fees is that plaintiffs’ attorneys tend to support Democrats.

To support H.R. 1215, then, one needs a host of alternative facts—a med mal liability crisis that doesn’t exist, hundreds of billions of dollars in imaginary health care savings, fictional damage awards, and imaginary overcharges by plaintiffs’ attorneys.  Could even Kellyanne Conway deliver all that?

Charles Silver is a professor at the University of Texas School of Law and David A. Hyman is a professor at Georgetown University School of Law.  They are co-authors of After Obamacare: Making American Health Care Better and Cheaper (forthcoming 2018).