This Digital Health Tool is Proven to Improve Cognitive Fitness | Jean Castonguay, NeuroTracker

By JESSICA DAMASSA, WTF HEALTH

Neuroscience startup NeuroTracker has a virtual training tool with a proven ability to help improve “cognitive fitness.” Jean Castonguay, co-founder, board member, and Head of Global Strategic Partnerships at NeuroTracker, explains the science and clinical validation behind the tech and drops some big-name users in the process — Manchester United, German and French soccer teams, US special forces, as well as some of the world’s leading sports concussion rehabilitation clinics. What sets the startup apart from other companies in the mental performance space? How did they shore up their science after the Federal Trade Commission’s suit against Lumosity over false claims about brain health outcomes? That case shook up the industry, and NeuroTracker feels it actually strengthened their business and their value proposition.

Filmed at Bayer G4A Signing Day in Berlin, Germany, October 2019.

Lower Health Insurance Premiums Sound Like Great News – But It’s Only Part Of the Story

By A. MARK FENDRICK, MD

It’s great news to read headlines that the average health-insurance premium will drop by 4% next year in the 38 states using federal Obamacare exchanges. As millions of Americans entered open enrollment this year to choose their health insurance plans, it is important to remember that premiums are only one of the ways that we pay for our medical coverage. 

In many plans, lower premiums (paid by everyone) often mean a higher deductible — or paying more out-of-pocket before insurance coverage kicks in. This burden is paid only by those who use medical care services.

Deductibles are rising, and so is the number of Americans enrolled in so-called high-deductible health plans (HDHPs). Thus, more people with health insurance are being asked to pay full price for all their care, regardless of its clinical value. Although it may be better for many people with significant medical needs (and less disposable income) to avoid plans with high deductibles, more and more people who receive health insurance through their employer no longer have any option other than a plan with hefty costs on top of premiums.

The Blunt Instrument 

Nearly half of adults — 43 percent — who get health insurance through their jobs have a high-deductible plan, which requires them to spend at least $1,300 (for an individual) or $2,600 (for a family) before their insurance starts covering their care.

These numbers should raise concern, considering 40% of Americans would struggle with an unexpected bill of $400. Given this simple math, it’s easy to see why Americans with health insurance increasingly forgo the care they need.

One of the goals of deductibles is to encourage consumers to become engaged in their health care purchases. Unfortunately, in their current ‘blunt’ form, they make no actual distinction—in terms of what members are asked to pay out-of-pocket—between high-value medical care and lower-value care, which includes unnecessary tests and the use of expensive, branded pharmaceuticals when generics would suffice.

That’s right: while in the deductible phase, many HDHP members pay full price for critically important health interventions (e.g., insulin) as well as for services they don’t actually need.

This is why it’s important for consumers to look beyond premiums when they choose their health plan, and not automatically equate lower premiums with lower total out-of-pocket costs.

But there is good news: there are new policies and technologies available to make these plans less blunt.

Thanks to a rule change being implemented by the IRS, some 14 crucial health services, including services used to treat common chronic conditions like diabetes and asthma, can now be covered prior to meeting the plan deductible for members of HDHPs with health savings accounts. This is in addition to some preventive care already covered on a pre-deductible basis by the ACA. Additionally, some health plans are adopting technologies and programs to help their members afford the medications, screenings and other healthcare services they need.

The Trend Toward Higher-Value Care

With open enrollment coming to a close, there is still time to comparison shop and see which plans improve access to high-value care without draining your savings. This is true whether you’re an HDHP member or not. Some plans offer mechanisms to affordably steer people toward high-quality care, including:

  • Dynamic pricing for branded pharmaceuticals to treat chronic health conditions, which means members can lower their copays simply by refilling their prescriptions on time.
  • Insulin coverage with a $0 copay at the pharmacy. Insulin is the very definition of high-value care, so focusing incentives there shows the tide is shifting away from lower-value products and services.
  • For people whose health insurance is not provided by an employer, options like the California Individual Market feature lower costs than other plans and cover many services with a $0 copay.

While not every insurer offers innovative benefit structures like these, many do. Here are several questions you can ask to determine whether a plan meets your clinical and financial needs:

  • Will I be able to afford the out-of-pocket expenses to reach my plan deductible?
  • If the plan has a deductible, are there services that are covered before I meet the deductible?
  • Does this plan offer tools and programs to lower my out-of-pocket payments for medications and other services used to manage chronic conditions?

Fortunately, the new IRS rule change is an important step forward to help more people afford the high-quality care they need.

As these value-based insurance designs are implemented, it remains important for us to be engaged consumers when choosing a health plan. The goal of open enrollment should be to choose a plan that covers the care we need at a total price – not just a premium – that we can afford.


Learn more about Value-Based Insurance Design here.

Dr. Mark Fendrick is a Professor at the University of Michigan Medical School and Director of the Center for Value-Based Insurance Design.

Health in 2 Point 00, Episode 102 | Proteus, Health 2.0 Asia/Japan, and…a Jewel Heist?

Today on Health in 2 Point 00, Jess is in Las Vegas while I’m all the way in Tokyo for Health 2.0 Japan. In Episode 102, Proteus Digital (finally) announces that they’re running out of money. Does this put the whole category of digital therapeutics at risk? In other news, Seema Verma’s jewelry was stolen and she wants taxpayers to pay her back! How is she going to survive this? And find out what’s going on in Tokyo at Health 2.0 Japan—a whopping 50 startups pitched in a contest yesterday and we’re really seeing the coming of age for this market. —Matthew Holt

RSNA 2019 AI Round-Up

By HUGH HARVEY, MBBS and SHAH ISLAM, MBBS

AI in medical imaging entered the consciousness of radiologists just a few years ago, notably peaking in 2016 when Geoffrey Hinton declared radiologists’ time was up, swiftly followed by the first AI startups booking exhibition booths at RSNA. Three years on, the sheer number and scale of AI-focussed offerings has gathered significant pace, so much so that this year the RSNA organising committee decided to move the ever-growing AI showcase to a new space in the lower level of the North Hall. In some ways it made sense to offer a larger, dedicated show hall to this expanding field; in others, not so much. With so many startups, wiggle room for booths was always going to be an issue; however, integration of AI into the workflow was supposed to be a key theme this year, an aim made distinctly futile by this purposeful and needless segregation.

By moving the location, the show hall for AI startups was made more difficult to find, with many vendors verbalising how their natural booth footfall was not as substantial as last year when AI was upstairs next to the big-boy OEM players. One witty critic quipped that the only way to find it was to ‘follow the smell of burning VC money, down to the basement’. Indeed, at a conference where the average step count for the week can easily hit 30 miles or over, adding in an extra few minutes walk may well have put some of the less fleet-of-foot off. Several startup CEOs told us that the clientele arriving at their booths were the dedicated few, firming up existing deals, rather than new potential customers seeking a glimpse of a utopian future. At a time when startups are desperate for traction, this could have a disastrous knock-on effect on this as-yet nascent industry.

It wasn’t just the added distance that caused concern, however. By placing the entire startup ecosystem in an underground bunker there was an overwhelming feeling that the RSNA conference had somehow buried the AI startups alive in an open grave. There were certainly a couple of tombstones on the show floor — wide open gaps where larger booths should have been, scaled back by companies double-checking their diminishing VC-funded runway. Zombie copycat booths from South Korea and China had also appeared, and to top it off, the very first booth you came across was none other than Deep Radiology, a company so ineptly marketed and indescribably mysterious, that entering the show hall felt like you’d entered some sort of twilight zone for AI, rather than the sparky, buzzing and upbeat showcase it was last year. It should now be clear to everyone who attended that Gartner’s hype curve has well and truly been swung, and we are swiftly heading into deep disillusionment.

Still, the venue was well decorated — the shiny booths and showcase were surrounded by numerous well-coiffed hedges — an unintentional reflection on the profession’s ability to beat around the bush with a hedge when it comes to forming a diagnostic conclusion. Speaking of beating and bushes, let’s dive straight into the main themes that emerged in the AI showcase this year:

We have reached market saturation for diagnostic AI startups

On the Thursday during the conference, Dr Harvey gave a talk on ‘How to build an AI company’. The title of the talk had already been chosen a year in advance by the moderator (the pre-eminent Dr Saurabh Jha). There was only one possible conclusion — DON’T. It’s no longer worth it. The lowest branches of the diagnostic tree are bare, with every low hanging fruit having been picked, often multiple times. The greatest challenge, rather like Indiana Jones in The Last Crusade, is picking the right chalice. This becomes difficult when there are so many Chest X-ray, CT head and lung solutions to choose from. 

The melange of startups was made even denser this year by the arrival of a new Asian cohort. A team from Chinese tech giant Tencent won the RSNA Kaggle competition, and Lunit and Vuno were back again proclaiming traction in their local markets with their broad but overlapping suites of AI products. One company, JLK Inspection, not previously on the radar of anyone we spoke to, spent almost $1 million on their flashy, spacious booth. Not content with one or two algorithms, they showcased ALL the algorithms the other vendors were showing, and more. Retinal scanning, stroke, mammo, pneumothorax, bone age — you name it, they got it (37 solutions for 14 body parts) — or so they claimed. Backed by the Samsung Medical Hospital, and with over 50 employees, they certainly looked like the real deal. Their marketing was slick, but it was their stats that gave the game away. No-one takes an AUC of 0.99 seriously anymore, especially if all your algorithms curiously have the same accuracy. Nevertheless, the Korean FDA has let them all through, so the proof will be in the pudding.

Positive signs for the sector were still evident, however — Aidence announced a well-deserved deal with Affidea (Europe’s largest private radiology service provider) for their lung screening solution, joining Icometrix, whose comprehensive neurodegenerative suite was snapped up by Prof Illing and his team last year (Affidea also announced they are rolling out GE hardware across Europe). Other diagnostic fruit are less bitten — automated ultrasound diagnostics, for instance, where Koios certainly seems to be leading in the breast and thyroid lump-measuring realm, with a strong machine-embedded partnership with GE and prospective studies in the pipeline. AIdoc continue to forge deals with their head triage solutions, their new relationship with Philips being a key driver of credibility, alongside MaxQ AI, Quibim, Riverain and Zebra.

Several startups focussed on biomarker-driven insights. Quibim are one to watch in this space, with a long history of publications and partnerships. Similar competitors such as CorTechs Labs, Perspectum Diagnostics and Healthmyne offer suites of imaging biomarkers across different modalities and diseases. However, these tools are not what your average radiologist is looking for — adding more data and charts to look at during a day’s work is not going to drive productivity. We struggle to see a solid market for algorithms that provide longer-term predictions, such as 5-year risk of Alzheimer’s or 10-year risk of cancer recurrence — what are we supposed to do with this information, and how much is a prediction worth paying for? The market for these solutions is in pharma and clinical trials, and immunotherapy or oncology research, so perhaps RSNA (being largely a payer-driven conference) is the wrong place to be?

Serious rumours abounded that some of the bigger AI startup names were in financial trouble, backed up by reports of large layoffs. There’s no smoke without fire, so at a guess, this is likely due to a lack of investor willingness to prop up companies any longer without demonstrable traction and annualised recurring revenue (ARR). VCs don’t want incremental gains, they want 10x gains. And this leads us on to our next point…

Show us the money

While it may be true that radiology is a multi-billion dollar global industry, so far not one of those billions has made it into the coffers of startups through revenue-driven services. Yes, some are gaining traction, but the uptake is painfully slow. This is not your usual consumer tech market – this is medicine, where procurement cycles take eons, and without evidence of downstream cost efficiencies or productivity boons you stand little chance of making a quick buck. The vast majority of startups are backed by Silicon Valley-style money, tranches of which come in the tens of millions at a time. Unfortunately for them, these are not the kind of figures hospitals pay out for unproven promises of automated AI nirvana, even if it can potentially augment large-volume, mundane tasks.

One startup with happy customers is Viz.ai, a company focussed on large vessel occlusive stroke. The clever trick here is that the customers aren’t radiologists at all (neither are the company founders) – the consumers are neurointerventional/stroke teams who get a shiny app which pings them whenever a possible clot is detected, complete with pictures straight from the CT scanner. ‘Time is brain’ has now become a marketing slogan. Which only raises the question – is the only way to make money with AI to remove radiologists from the loop? The efficiency is clear – surgically bypassing the radiology read shortens time-to-needle and improves patient outcomes. The radiologist has not been replaced per se, they are just no longer central to this critical care pathway, their report essentially meaningless post hoc. It doesn’t even matter how accurate the clot-detection algorithm is – the neurointerventionalists can make their own minds up using the images on their iPhone. Heck, even if the scan is called a false positive, they can always pick up a juicy fee for an easy consult in the middle of the night, if they’re so inclined.

Other use-cases do not provide such a clear return on investment, because ‘Time is brain’ unfortunately doesn’t translate to many other parts of the body. Pneumothorax perhaps, but only if the AI is embedded in the scanner for immediate flagging – again, not to the radiologist, but to the radiographer taking the picture (aka the GE Critical Care Suite approach). It is now apparent that prioritising non-urgent findings to a radiologist doesn’t really add much, especially if their turnaround times are sub-30 minutes already. In fact, prioritising non-urgent general imaging may even have a negative effect — radiologists may suffer in accuracy the further down the priority list they go, as their expectation of actually finding pathology decreases.

Some startups theorise that their AI can prune out thousands of normal CTs, CXRs or mammograms from radiologists’ reading lists by performing ‘hard triage’ at specificity thresholds nearing 100%, but the reality is that no-one is yet prepared to let the machines take over, even for a little bit. One reason may simply be that American radiologists make money per scan read, and reducing this number by 10% or more isn’t likely to go down too well. The opposite is true in state-funded health systems where radiologists aren’t paid per case, which is why ‘hard triage’ may be more palatable in the EU for instance, presuming the evidence base catches up. Many of these technologies are pitching to wealthy economies, where they are being compared to radiologists, but are better served in the emerging markets where there are none. You can’t replace what is missing, and AI might well fill the gap there.
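To make the ‘hard triage’ idea concrete, here is a minimal sketch on entirely synthetic scores of how a rule-out operating point might be chosen; the miss-rate target, threshold and data are hypothetical and not taken from any vendor mentioned above. Framed with ‘abnormal’ as the positive class, the safety constraint is that almost no abnormal study may be auto-reported as normal.

```python
import numpy as np

# Entirely synthetic scores from a hypothetical abnormality detector.
# 1 = truly abnormal study, 0 = truly normal study.
rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=10_000)
scores = np.where(y_true == 1,
                  rng.beta(5, 2, size=10_000),   # abnormal studies tend to score high
                  rng.beta(2, 5, size=10_000))   # normal studies tend to score low

# 'Hard triage' rule-out: auto-report any study scoring below t_low as normal.
# Safety constraint: at most 0.1% of truly abnormal studies may fall below t_low.
target_miss_rate = 0.001
t_low = np.quantile(scores[y_true == 1], target_miss_rate)

pruned = scores < t_low
workload_saved = pruned.mean()                                 # share of all studies removed
missed = (pruned & (y_true == 1)).sum() / (y_true == 1).sum()  # abnormals wrongly ruled out

print(f"threshold={t_low:.3f}  studies pruned={workload_saved:.1%}  abnormals missed={missed:.2%}")
```

The economics described above then hinge on whether the "studies pruned" figure is treated as saved radiologist time or as lost per-scan revenue.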

The largest benefit of AI will need to be demonstrated by improved overall diagnostic accuracy of non-time-critical pathologies. Consistently finding pathologies that have a significant downstream cost when not detected on initial imaging (non-critical findings such as breast cancers and lung nodules) is the most common startup offering. Of course, to sell this effectively, we need to see the evidence, and this requires prospective studies on broad, generalisable data to convince payers of the value. Unfortunately for startups, prospective studies take time, and some may not have the financial runway to wait for the results. The whole AI industry is still riding on retrospective studies — allowing for FDA clearances only, and not approvals, akin to letting drugs on the market after Phase II trials only. This crucial difference means that there is a significant gap to be jumped in terms of both clinical and economic evidence to show that any of this stuff makes a dent in healthcare at all. Very few vendors showcased the potential economic advantages of their software, and none that we saw had published prospective randomised control trials. On top of the need for proper studies, there’s the small matter of creating new CPT codes for the US billing machine — and until these are formalised, no-one is clear how much to pay for an adjunctive AI read yet…

For our money, the well-published (and non-VC-funded) startups stand the best chance of survival. Indian company Qure.ai certainly fits this bill (32 peer-reviewed papers to date helps cut through the hype), signalling an impressive dedication to the scientific process. They have a world-class CXR TB detection algorithm, having recently proven themselves worthy equals of South Korean competitor Lunit in an open independent test published in Nature. Also, with their qCXR and qER suites, Qure offer the most comprehensive pathology coverage for CXR and CT head analysis, the latter appearing in The Lancet, a first for deep learning.

In comparison, no-one even blinked an eye during the week as Google revealed the results of its work on the ChestX-ray14 set with Apollo Hospitals. This wasn’t surprising, as those in the know have already seen better — covering only four pathologies is vastly inferior to many of the existing CXR start-ups and the academic literature. (A deeper dive into the paper unearths some poor methodology, including using the same data superset for training, testing and validation. Tsk tsk.) Still, they relabelled a significant portion of the ChestX-ray14 dataset and made it available for free, which is nice.

The future ain’t so ‘appy for the app stores

For those radiologists and hospitals wondering how to access the smorgasbord of algorithms on offer, there are a few marketplaces or ‘app stores’ available. You can take your pick from Nuance, Incepto, Envoy, Wingspan and Blackford, which all offer integration platforms featuring many of the startups’ algorithms. Arterys has also recently pivoted into this space, showcasing someone else’s algorithm for the first time (a fracture detection tool from French unknowns Milvue). If you want to know about the different features of each of the app stores we recommend the well-researched selection guide from Signify Research. Some come with their own viewers, most come in cloud form and, if you ask nicely, offer free time-limited trials.

While it might make sense for early adopters to go for an AI app store instead of choosing from hundreds of startups to partner with, RSNA attendees who visited the OEM and major PACS providers’ booths upstairs will have seen plenty of slick AI integration demos, making us wonder why there are even independent third-party app stores at all. GE has the well-marketed, all-singing-and-dancing Edison, Fuji has REiLI, and Siemens, Philips and Nuance all have digital marketplaces too. Not only do these offer many of the algorithms available directly from startups or via the independent stores, they also come bundled within recognisable branded PACS and viewers. Intelerad impressed us most with the streamlined integrations of their new AI hub (complete with confusion matrices for the stats geeks out there), which includes offerings from Envoy, Blackford and AIdoc as well as a tempting 12-month free trial of all of Zebra’s algorithms through their AI1 platform. So, if you are looking to procure a new PACS in the next year or so, you’ll be offered AI marketplaces anyway, and we see no reason to go direct to a third-party store, unless you are the type of person who likes to be first in line at the Apple store whenever a new iPhone is released. Where the AI platforms can make a dent is in white-labelling their software to the big fish.

Of course, GPU giants NVIDIA were at the show too, offering forward-thinking radiology departments the opportunity to build their own AI and deploy it locally through their Clara suite (notably with no mention of the medical device regulatory hurdles involved. Who needs the FDA anyway?). The thing is, you don’t need GPUs to run diagnostic inference, you only need them for training, so unless you have a team of data scientists and radiologists with spare time to label vast quantities of data, this is likely a pointless endeavour, designed only to bring in continued revenue to the GPU manufacturer as their sales to bitcoin miners continue to shrink. Or it could be an attempt to make you think twice about sending all your data and processing needs into the hands of the giants in the cloud, where NVIDIA don’t have a controlling stake. And don’t get us started on the pitfalls of federated learning (training on data from multiple sites) — the chances of two hospitals labelling their data in the same way locally are minuscule, let alone several hospitals. The AI startups naturally all use their DGX hardware already — we are yet to come across a startup that applied to the NVIDIA Inception program and wasn’t accepted — so the market here provides diminishing returns for the GPU sales people.

The time has come for structured reporting

Prior to the conference we predicted several things, including that structured reporting would see a surge in interest, particularly because AI development and testing largely rests on the accuracy of labels, and structured reporting is the only way to create these labels prospectively at source. 
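As a rough illustration of why labelling at source matters, here is a minimal sketch of a codified report being mapped straight to training labels without any NLP; the field names and tiny template are hypothetical, not any vendor’s actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical codified chest X-ray report: each finding is a named field with a
# constrained value, so AI training labels fall directly out of the report.
@dataclass
class ChestXrayReport:
    study_id: str
    findings: Dict[str, bool] = field(default_factory=dict)  # e.g. {"pneumothorax": False}
    impression: str = ""

def to_training_labels(report: ChestXrayReport, classes: List[str]) -> List[int]:
    """Turn a structured report into a multi-label vector, no free-text parsing required."""
    return [int(report.findings.get(c, False)) for c in classes]

CLASSES = ["pneumothorax", "consolidation", "pleural_effusion", "cardiomegaly"]

report = ChestXrayReport(
    study_id="CXR-0001",
    findings={"pneumothorax": False, "consolidation": True, "pleural_effusion": True},
    impression="Right lower zone consolidation with a small effusion.",
)

print(to_training_labels(report, CLASSES))  # [0, 1, 1, 0]
```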

This prediction was proven correct, at least twice — firstly, the winner of the prestigious Fast5 session was a talk by Dr. Martin-Carreras on how encoding patient-friendly translations into radiology reports provides a whole new level of insight and reassurance for patients who struggle with complex radiological terminology. Secondly, there was huge interest in German startup Smart Reporting, led by Prof Sommer, the only startup at the show that appeared to have solved the perennial problem of getting radiologists to stick to a didactic reporting structure with their flexible universal reporting engine. Their tiny booth was the antithesis of hype (well, technically they aren’t an AI company). Prominently featured partner showcases in both the Intelerad and GE Edison demos upstairs helped get their vision of VR-driven visually rich structured reports across, which also enable seamless integration of AI outputs from multiple vendors. One-click interlanguage translation, data-minable codified reports, templates based on expert guidelines (including the RSNA RadReports), with the option to create bespoke reports — all bases are covered. With no other startup competitors, things certainly look exciting for this company.

Scientific support for structured reporting was also a theme in the academic sessions. An excellent presentation by Dr Vosshenrich from Basel showed that structured reporting decreased reporting variation and led to a 50% decrease in character count (which is what referring clinicians want, not lengthy prose). Structured reporting is also great for producing labelled data for AI training, as recently published by Dr Pinto dos Santos. Let’s hope the RSNA structured reporting subcommittee, led by Dr Heilbrun, was taking note!

If you need to label retrospective data there were startups for that too. Segmed and MD.ai offer data labelling services and were seeking trainees looking to earn on the side via a web-based platform, while Israeli-based Agamon aimed to please administrators by using NLP to drive insights that improve operational, business and clinical performance.

AI-enhanced image acquisition shows promise

While clinical decision support is an oversaturated market, there is massive room for growth in the image acquisition sector. Post-processing of low-dose or sparsely acquired medical images has been a field of research since before the advent of deep learning, but newer AI techniques have now made it to the marketplace. Hospitals and imaging centres looking to increase patient throughput will be interested in the potential to significantly reduce scan time, and radiologists and medical physicists who care about radiation dose should definitely want to take a peek.

Two companies in this sector at RSNA caught our attention. Subtle Medical won the first FDA clearance for AI-augmented PET/CT studies just in time for the conference, and they demonstrated a clear economic and workflow benefit in terms of a fourfold decrease in scanner bed time and up to a 100x decrease in dose without compromising image quality. They also claim it takes less than 24 hours to deploy, which certainly beats many other AI applications. Algomedica showed off their PixelShine algorithm for denoising CT studies, with clear image quality maintained at less than a standard CXR dose for a CT lung scan. This seems well placed for the upcoming onslaught of lung cancer screening programmes across the globe. Algorithms such as these should eventually make their way directly into OEM hardware; it’s only a matter of time.

These offerings come with a small caveat — you may want to test locally for quality by running a pre/post comparison, and ensure the software is compatible with your fleet of scanners before purchase. 

So what is the best route to market for radiology AI?

With such a variety of AI algorithms on offer, it’s not surprising that many radiologists feel overwhelmed, and hospitals unsure about how to engage with the growing sector. Big name hospitals across the US are both signing up to several vendors and embarking on ambitious internal IT projects to build their own versions in-house. But where does this leave the vast majority of smaller hospitals across the globe, and those with less of a budget for shiny new tech (i.e. the majority of the global market)?

Our prediction is that, for diagnostic algorithms at least, they won’t have to buy. Yes, you heard that correctly. General hospitals don’t need to buy any radiology AI at all. The overheads in purchasing, deploying, implementing and maintaining a suite of algorithms will be too much for most radiology departments, and most simply do not have the time or inclination to work through multiple business cases, procurement exercises and contracts, even with a third-party app store. Any AI they do want will arrive in the next couple of years within their PACS anyway, and even then, the jury’s out on just how well it will integrate with their legacy systems.

Enter remote reporting services, where the onus is on the teleradiology provider to ensure rapid turnaround with highly accurate, meaningful reports at a competitive price. We believe the future is decentralised, with images being reported remotely, 24/7, in a human-machine hybrid model. All of the teleradiology providers we spoke to at RSNA were actively engaging with the AI vendors, scoping out who to utilise in their enterprise, looking to improve their productivity and increase their revenue through workflow improvements while maintaining or bettering their accuracy. Teleradiology companies already have significant IT infrastructure and cloud-based remote services designed for fast image transfer and reporting, a set-up which is far more scalable and amenable to AI deployment than multiple local installs. Additionally, they handle a vast array of images from greater geographic regions compared to single-site hospitals, meaning that AI companies get to work with much more clinically useful and statistically powered data, creating a powerful feedback loop that blows away any current validation and post-market auditing models. Teleradiology companies who design their IT infrastructure and reporting pathways around AI, and actively engage their radiologist workforce in working alongside it, are going to leap far ahead. They also have the capital and incentive to do so, which is more than we can say for the average small-town radiology department.

If you’re not convinced, and the moves from Affidea (mentioned above) don’t intrigue you, then we point to new teleradiology outfit Nines from Silicon Valley, who are building a remote reporting service from scratch with AI at its heart. The advisory team are AI luminaries (Prof Langlotz and Dr Lungren from Stanford, Keith Bigelow formerly of GE). Clearly they also share the same vision of a remote human/machine service, where AI sits at the centre, not as a peripheral adjunct bolted on in an attempt to keep up with trends. Hospitals struggling with workload and burnout across the world will be able to send their images remotely to services such as this, not just for AI analysis by well-generalised algorithms, but also for an expert human read alongside it (human + AI is better than just AI, remember). This seems eminently more sensible than attempting to deploy countless diagnostic algorithms locally, fiddling around in app stores or attempting to build your own, especially for remote and rural locations.

Despite the expected downturn in hype, the promise of a more automated future still remains an alluring possibility — and, if AI finally starts to make significant headway into human-level decision making, it is the teleradiology providers who have laid a solid foundation for delivery of AI into healthcare that will reap the benefits as we enter the anticipated plateau of productivity.

The future of diagnostic radiology is remote AI-augmented reporting… if the startups can survive long enough to generate sufficient prospective evidence.

You heard it here first, folks!

Dr Harvey is a board certified radiologist and clinical academic, trained in the NHS and Europe’s leading cancer research institute, the ICR, where he was twice awarded Science Writer of the Year.

Dr Islam is a board certified academic radiologist sub-specialising in imaging of the brain and spine, and is currently completing his PhD at Imperial College London, applying deep learning to advanced brain imaging to help characterise and prognosticate brain tumours.

This post originally appeared on Hardian Health here.

The Liability of Outside Provider Orders and What Could be Done About It

By HANS DUVEFELT, MD

As a family doctor I receive a lot of reports from emergency room visits, consultations and hospitalizations. Many such reports include a dozen or more blood tests, several x-rays and several prescriptions.

Ideally I would read all these reports in some detail and be more than casually familiar with what happens to my patients.

But how possible is it really to do a good job with that task?

How much time would I need to spend on this to do it well?

Is there any time at all set aside in the typical primary care provider’s schedule for this task?

I think the answers to these questions are obvious and discouraging, if not at least a little bit frightening.

10 years ago I wrote a post titled “If You Find It, You Own It” and that phrase constantly echoes in my mind. You would hope that an emergency room doctor who sees an incidental abnormal finding during a physical exam or in a lab or imaging report would either deal with it or reach out to someone else, like the primary care provider, to pass the baton – making sure the patient doesn’t get lost to followup.

But emergency room medicine is shift work, just like hospital medicine; providers may not be around when the abnormal result comes in, and the next shift worker perhaps can’t see what is in the first doctor’s inbox.

As I click through the “orders to sign off”, I end up prioritizing “my” orders, because I “own” them. The “Outside Provider” orders are in my inbox as a double check, but nobody double checks my results. I have to make them my priority if my time is limited and time, by definition, is always limited.

There is more and more data in medicine, and while I hope technology will make it easier to sort, view and prioritize data, I don’t believe artificial intelligence will do that well for frontline medical providers anytime soon.

I keep thinking that we really need to have a serious debate or examination of what we need primary care providers to do. The Patient Centered Medical Home movement (see my personal take on that here) held a promise of better care coordination by people like me in clinics like mine, but the way we do things hasn’t changed nearly as much as many of us had hoped.

I seriously believe that it would be a worthwhile investment for our whole healthcare “system” to structure and reimburse the care coordination work we primary care providers could do for our patients.

We can certainly use the help and collaboration of other professionals like nurses, but ultimately we need to know what’s going on with our patients. Otherwise their care will continue to suffer from more and more fragmentation as subspecialization brings more different doctors into many patients’ care “teams”, as hospital stays grow shorter with more loose ends at discharge, as options for urgent care walk-in and virtual visits increase and as more and more patients become afflicted with multiple chronic illnesses because of the declining health of people in this country.

When I started my residency in Lewiston, Maine back in 1981, family doctors were enthusiastic and idealistic. Much has dampened that enthusiasm since then, but I still believe we have a crucial role we could fill for the health of our nation.

If the “system” would only let us.

Hans Duvefelt is a Swedish-born rural Family Physician in Maine. This post originally appeared on his blog, A Country Doctor Writes, here.

Rebuilding Trust in our Doctors: An Option for our Broken System

By AMITA NATHWANI, MA

This week’s impeachment hearings show what a crisis of trust we live in today. 69% of Americans believe the government withholds information from the public, according to recent findings by Pew Research Center. Just 41% of Americans trust news organizations. We even distrust our own health care providers: only 34% of Americans say they deeply trust their doctor.

One important way doctors can regrow that trust is to become educated about the types of medicine their patients want, including alternative therapies. 

People are seeking new ways to care for their health. For instance, the percentage of U.S. adults doing yoga and meditating—while still a minority—rose dramatically between 2012 and 2017, according to the CDC’s National Center for Health Statistics. Likewise, the number of Americans taking dietary supplements, including vitamins, minerals and natural therapies like turmeric, increased ten percentage points over the past decade, to 75%, according to the Council for Responsible Nutrition. As Americans increasingly seek out non-pharmaceutical ways to address wellness, they need doctors who can talk to them about such alternatives.

Unfortunately, this is rare. As a provider of a holistic approach to health called Ayurvedic Medicine, I often see people who tell me their physician dismissed them when they asked about treatments they’d read about on the internet. In many cases, clients tell me their doctor has actually chastised them for entertaining an alternative approach to their existing illness. This leaves them disempowered: they wanted to make choices to improve their own health, but found they were not acknowledged, supported or even understood by the doctor.

Furthermore, when doctors aren’t familiar with alternative treatments, they can’t advise their patients about interactions with conventional medicines.  With 75% of Americans taking some sort of supplement, all doctors should at least be able to offer guidance on contraindicated medications. 

With all this, it’s not surprising that only 30% of my patients report having a primary care physician they are happy with.

It doesn’t have to be this way. Physicians should receive mandatory training in integrative medicine, allowing them to consider both prevention and intervention in promoting health and wellbeing. I’m not saying M.D.s must become experts in alternative forms of medicine; that would require years of rigorous training and hundreds of hours of clinical experience. I certainly do not want to see doctors prescribing an Ayurvedic remedy without understanding the underlying cause and pathology. That could simply replace pharmaceuticals with herbs, rather than working with the individual as a whole.

Instead, I am proposing that conventionally trained physicians do what I do.  When I believe my clients would benefit from conventional medical treatment protocols instead of, or in addition to natural therapies, I offer an educated and empirically based explanation as to why, and respectfully refer out to the appropriate professional.   But conventional physicians rarely do the same. 

To give patients a full, informed range of options, doctors should be required to demonstrate familiarity with the various non-conventional systems of medicine, like traditional Chinese medicine, ayurvedic medicine and naturopathy, so they can have an educated conversation with patients who ask.  At that point, doctors could refer those cases elsewhere, or further their education to encompass a deeper understanding of the discipline. Either way, the patient has been acknowledged and supported by the system.

A few leading medical schools are already doing this. In Arizona, the Dr. Andrew Weil Center for Integrative Medicine, where I have taught, paved the way in 1994, followed by George Washington University and the University of Wisconsin, to name a few.

This divide between conventional and alternative medicine is particularly important at this moment in time. The current healthcare system has turned patients into healthcare consumers who must make their own choices, and who are  more and more empowered to take control of their own health. In many ways, this is a good thing.  As we do so, patients deserve doctors who understand the full range of choices that consumers face. 

If more physicians are trained and educated in medical school to think with an integrative approach to medicine, we can change the broken system, and docs will benefit too. The system is not set up to support them either. By creating overwhelming demand, ordinary people can advocate for large health care institutions, hospitals and clinics to provide compassion and implement integrative models for better quality of care. It is in our hands to rebuild the trust of the now unhappy and skeptical patient. Because without trust in the healer, how can the healing begin?

Amita Nathwani is a professor of Ayurvedic Medicine and an adjunct faculty member with the Family Medicine Integrative Medicine Fellowship at Banner Health, University of Arizona.

The Dilemma of the Black Patient

By YOLONDA Y. WILSON, PhD

Last week a nurse posted a video of herself on Twitter mocking patients with the caption “We know when y’all are faking” followed by laughing emojis. Twitter responded with the hashtag #patientsarenotfaking, created by Imani Barbarin, and a slew of testimonials of negligent medical care. While the nurse’s video was not explicitly racialized, plenty in the black community felt a particular sting: there is clear evidence that this attitude contributes to the problem of black patients receiving substandard care, and that negative behavioral traits like faking or exaggerating symptoms are more likely to be attributed to black patients. The problem is so bad that it turns out racial bias is built right into an algorithm widely used by hospitals to determine patient need.

Since we can’t rely on the system or algorithms, many health organizations and the popular media encourage patients to advocate for themselves and their loved ones by, for example, asking questions, asking for second (or more) opinions, “trusting [their] guts,” and not being afraid to speak up for themselves or their loved ones. But this ubiquitous advice to “be your own advocate” doesn’t take into account that not all “advocacy” is interpreted in the same way—especially when the advocacy comes from a black person. Sometimes a patient’s self-advocacy is dismissed as “faking;” sometimes it is regarded as anger or hostility.

Black male faces showing neutral expressions are more likely than white faces to be interpreted as angry, violent, or hostile, while black women are often perceived as ill-tempered and angry. These stereotypes can have a chilling effect on a person’s decision to advocate for themselves, or they can prompt a violent reaction.

This past August, LeeAnn Bienaime delivered her firstborn child, with the assistance of her husband, in the couple’s bathtub. No, the couple had not planned a home birth. Instead, they were turned away from Naval Medical Center in Portsmouth, VA even though Bienaime was in active labor. Thankfully, she and her baby were healthy. In discussing her ordeal, Ms. Bienaime said, “In hindsight I would have stood my ground and not left.” 

Consider what happened to Barbara Dawson when she stood her ground. Ms. Dawson was having trouble breathing and went to Calhoun Liberty Hospital in Blountstown, Florida. The emergency room docs determined that she was stable and discharged her. However, Ms. Dawson, knowing that something was not right with her body, refused to leave and pled to be examined further. Hospital staff responded by calling the police, who promptly arrested her for trespassing and disorderly conduct. Even after she collapsed outside of the arresting officer’s patrol vehicle, the officer assumed she was faking and can be heard on the dashcam video telling an unresponsive Dawson, “Falling down like this, laying down, that’s not going to stop you from going to jail.” Within hours, Ms. Dawson was dead from a pulmonary embolism, a blood clot in her lungs.

It’s an open secret in US hospitals that some patients and families are “good” and others are labeled “difficult.” “Good” patients and families are (or are perceived to be) compliant: they refrain from complaining or pushing back against medical advice or evaluations and abide by social norms of manners and politeness. “Difficult” patients and families challenge hospital staff. They may not easily acquiesce to hospital directives, they may ask questions, or they may have feelings.

But many patients and families who are regarded as “difficult” are merely trying to understand and advocate for themselves or their loved ones the best way they know how. Patients who speak up tend to be more satisfied with their medical encounter and gain better information about their medical conditions. Additionally, patient self-advocacy is thought to be one element in the prevention of medical mistakes. As Dr. Louise Aronson writes in defense of difficult patients in The New England Journal of Medicine, “There will always be patients and families who are considered high maintenance, challenging, or both by health care providers. Among them are a few with evident mental illness, but most are simply trying their best to understand and manage their own or their loved ones’ illness.” Dr. Aronson found herself reluctant to speak up for her father, who was a hospital patient, out of worry of being labeled “difficult” by the hospital staff. She spoke up anyway and likely saved her father’s life.

For black patients, the consequence of being “difficult” can be as deadly as any disease, injury, or illness, while the consequence of not standing firmly for oneself can also be dangerous. It has been well documented that black patients don’t get adequate pain relief: a 2016 study of 418 medical students and residents found that approximately 50 percent believed that black patients have “thicker skin” and are, therefore, unable to feel pain to the extent that white patients do. Black women are three times more likely to die during and shortly after pregnancy than white women—research has connected this disparity directly to institutional racism. Even wealthy, high-profile pregnant black women, like Beyoncé Knowles-Carter and Serena Williams, had their symptoms minimized or ignored, leading to critical complications.

So what is a black patient to do?  Despite medical personnel’s insistence that she was simply “confused” as a result of her pain medications, Serena Williams could afford to not back down. Not everyone can. And the consequences can linger long past the medical encounter. Black patients who find themselves with biased providers tend to have shorter medical encounters. And those who pick up on a physician’s bias tend to have greater difficulty recalling the treatment plan, further contributing to worsened health outcomes.         

Medical personnel do not leave their biases at the door when they enter healthcare spaces and don their scrubs. In fact, data show that medical professionals exhibit similar levels of implicit bias as the general population, and that these biases seem to have at least some effect on treatment and care decisions.

There is some recognition that it is not black patients’ responsibility to effectively respond to bias. In September, the California State Legislature passed a bill that would require implicit bias training for healthcare workers. Ideally, such training would make healthcare workers cognizant of the racialized dynamics that can shape the medical encounter, including whether patients advocate for themselves and how their advocacy is perceived. While such training is not a panacea and at minimum requires a long-term commitment to change, more states should take this first step. It could save lives.

Yolonda Y. Wilson, PhD, is currently a fellow at the National Humanities Center and an Encore Public Voices fellow with the OpEd Project.

Artificial Intelligence vs. Tuberculosis, Part 1

By SAURABH JHA, MD

Slumdog TB

No one knows who gave Rahul Roy tuberculosis. Roy’s charmed life as a successful trader involved traveling in his Mercedes C class between his apartment on the plush Nepean Sea Road in South Mumbai and his offices at the Bombay Stock Exchange. He cared little for Mumbai’s weather. He seldom rolled down his car windows – his ambient atmosphere, optimized for his comfort, rarely changed.

Historically TB, or “consumption” as it was known, was a Bohemian malady; the chronic suffering produced a rhapsody which produced fine art. TB was fashionable in Victorian Britain, in part, because consumption, like aristocracy, was thought to be hereditary. Even after Robert Koch discovered that the cause of TB was a rod-shaped bacterium – Mycobacterium tuberculosis (MTB) – TB had a special status denied to its immoral peer, syphilis, and its unaesthetic cousin, leprosy.

TB became egalitarian in the early twentieth century but retained an aristocratic noblesse oblige. George Orwell may have contracted TB when he voluntarily lived with miners in crowded squalor to understand poverty. Unlike Orwell, Roy had no pretensions of solidarity with poor people. For Roy, there was nothing heroic about getting TB. He was embarrassed, not because of TB’s infectivity – TB sanitariums are a thing of the past – but because TB signaled a decline in social class. He believed rickshawallahs, not traders, got TB.

“In India, many believe TB affects only poor people, which is a dangerous misconception,” said Rhea Lobo – filmmaker and TB survivor.

Tuberculosis is the new leprosy. The stigma has consequences, not least that it’s difficult diagnosing a disease that you don’t want diagnosed. TB, particularly extra-pulmonary TB, mimics many diseases.

“TB can cause anything except pregnancy,” quips Dr. Justy – a veteran chest physician. “If doctors don’t routinely think about TB they’ll routinely miss TB.”

In Lobo, the mycobacteria domiciled in the bones of her feet, giving her heel pain, which was variously ascribed to a bone bruise, bone cancer, and staphylococcal infection. Only when a lost biopsy report resurfaced, and after she had received the wrong antibiotics, was TB diagnosed, by which time the settlers had moved to her neck, creating multiple pockets of pus. After multiple surgeries and a protracted course of antibiotics, she was free of TB.

“If I revealed I had TB no one would marry me, I was advised,” laughed Lobo. “So, I made a documentary on TB and started ‘Bolo Didi’ (speak sister), a support group for women with TB. Also, I got married!”

Mycobacterium tuberculosis is an astute colonialist which lets the body retain control of its affairs. The mycobacteria arrive in droplets, legitimately, through the airways and settle in the breezy climate of the upper lobes and the superior segments of the lower lobes of the lungs. If they sense weakness they attack and, if successful, cause primary TB. Occasionally they so overpower the body that an avalanche of small, discrete snowballs, called miliary TB, spreads. More often, they live silently in calcified lymph nodes as latent TB. When the time is right, they reappear, causing secondary TB. The clues to their presence are calcified mediastinal nodes or a skin rash after injection of mycobacterial protein.

MTB divides every 20 hours. In the bacterial world that’s Monk-like libido; E. coli, in comparison, divides every 20 minutes. Their sexual ennui makes them frustratingly difficult to culture. Their tempered fecundity also means they don’t overwhelm their hosts with their presence, permitting their hosts to write fiction and live long enough to allow the mycobacteria to jump ship.

TB has been around for a while. The World Health Organization (WHO) wants TB eradicated, but the mycobacteria have no immediate plans for retirement. Deaths from TB are declining at a tortoise pace of 2% a year. TB affects 10 million people and kills 1.6 million every year – it is still the number one infectious cause of death.

The oldest disease’s nonchalance to the medical juggernaut is not for the lack of a juggernaut effort. Mass screening for TB using chest radiographs started before World War 2, and still happens in Japan. The search became fatigued by the low detection rate of TB. The challenge wasn’t just looking for needles in haystacks, but getting to the haystacks which, in developing countries, are dispersed like needles.

The battleground for TB eradication is India, which has the highest burden of TB – a testament not just to its large population. Because TB avoids epidemics, it never scares the crap out of people. Its distribution and spread match society’s wealth distribution and aspirations. And in that regard India is most propitious for its durability.

A few miles north of Nepean Sea Road is Dharavi – Asia’s largest slum, made famous by the Oscar-winning film, Slumdog Millionaire. From atop, Dharavi looks like a thousand squashed Coke cans beside a thousand crumpled cardboard boxes. On the ground, it’s a hotbed of economic activity. No one wants to stay in Dharavi forever; its people want to become Bollywood stars, or gangsters, or just very rich. Dharavi is a reservoir of hope.

Dharavi is also a reservoir of active TB. In slums, which are full of houses packed like sardines in which live people packed like sardines, where cholera spreads like wildfire and wildfire spreads like cholera, mycobacteria travel much further. Familiarity breeds TB. One person with active TB can infect nine others – and none are any the wiser of the infection, because unlike cholera, which is wildfire, TB is a slow burn and its symptoms are indistinguishable from the maladies of living in a slum.

Slum dwellers with active TB often continue working – there’s no safety net in India to cushion the illness – and often travel afar to work. They could be selling chai and samosas outside the Bombay Stock Exchange. With the habit of expectoration – in India, spitting on the streets isn’t considered bad manners – there is sputum aplenty, and mycobacteria-laden droplets from Dharavi can easily reach Roy’s lungs. TB, the great leveler, bridges India’s wealth divide. Mycobacteria unite Nepean Sea Road with Dharavi.

Rat in Matrix Algebra

The major challenges in fighting tuberculosis are finding infected people and ensuring they take the treatment for the prescribed duration, often several months. The two obstacles compound each other – if patients don’t take their treatment, what’s the point of finding TB? If TB can’t be found, what good is the treatment?

The two twists in the battle against TB – drug-resistant TB and concurrent TB and HIV – favor the mycobacteria. But TB detection is making a resurgence with the reemergence of the old warrior – the chest radiograph, which now has a new ally – artificial intelligence (AI). Artificial intelligence is the chest radiograph’s Sancho Panza.

Ten miles north of Dharavi, in slick offices in Goregaon, Mumbai’s leafy suburb, data scientists training algorithms to read chest radiographs are puzzled by AI’s leap in performance.

“The algorithm we developed,” says Preetham Sreenivas incredulously, “has an AUC of 1 on the new set of radiographs!”

AUC, or area under the receiver operating characteristic curve, measures diagnostic accuracy. The two types of diagnostic error are false negatives – mistaking abnormal for normal – and false positives – mistaking normal for abnormal. In general, fewer false negatives (FNs) mean more false positives (FPs): a trade-off of errors. A higher AUC implies fewer errors of both kinds; an AUC of 1 is perfect accuracy – no false positives, no false negatives.
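
To make the metric concrete, here is a minimal sketch of the AUC calculation using scikit-learn’s roc_auc_score; the labels and scores below are invented purely for illustration and are not Qure.ai’s data.

# A minimal, illustrative AUC calculation. Labels and scores are invented.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 0, 1, 1, 0, 1, 1, 0, 1]   # 1 = abnormal radiograph, 0 = normal
y_score = [0.10, 0.30, 0.20, 0.80, 0.70, 0.40, 0.90, 0.35, 0.20, 0.95]  # model's probability of abnormality

print(round(roc_auc_score(y_true, y_score), 2))   # 0.96 for these made-up numbers

An AUC of 1 would mean every abnormal radiograph scored higher than every normal one – which is exactly what made Sreenivas suspicious.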

Chest radiographs are two-dimensional images onto which three-dimensional structures, such as the lungs, are collapsed, and which, like Houdini, hide stuff in plain sight. Pathology literally hides behind normal structures. It’s nearly impossible for radiologists to have an AUC of 1. Not even God knows what’s going on in certain parts of the lung, such as the posterior segment of the left lower lobe.

Here, AI seemed better than God at interpreting chest radiographs. But Sreenivas, who leads the chest radiograph team at Qure.ai – a start-up in Mumbai which solves healthcare problems using artificial intelligence – refused to open the champagne.

“Algorithms can’t jump from an AUC
of 0.84 to 1. It should be the other way round – their performance should drop when
they see data (radiographs) from a new hospital,” explains Sreenivas.

Algorithms mature in three stages. First, training – data (x-rays) labelled with ground truth are fed to a deep neural network (the brain). Labels, such as pleural effusion, pulmonary edema, pneumonia, or no abnormality, teach the AI. After seeing enough cases, the AI is ready for the second stage, validation – in which it is tested on different cases taken from the same source as the training set, such as the same hospital. If the AI performs respectably, it is ready for the third stage – the test.

Training radiology residents is
like training AI. First, residents see cases knowing the answer. Then they see
cases on call from the institution they’re training at, without knowing the
answer. Finally, released into the world, they see cases from different
institutions and give an answer.

The test and training cases come from different sources. The algorithm invariably performs worse on the test set than the training set because of “overfitting” – a phenomenon in which the algorithm fits too tightly to the local culture. It thinks the rest of the world is exactly like the place it trained in, and can’t adapt to subtle differences in images caused by different manufacturers, different acquisition parameters, or acquisition in different patient populations. To reduce overfitting, AI is regularly fed cases from new institutions.
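
The three stages – and the drop expected on data from a new hospital – can be seen in a toy sketch; the synthetic “image features” below merely stand in for radiographs, and the noisier external set mimics a different scanner and patient population. All the numbers are invented.

# Toy sketch of train / validate / test, with synthetic data standing in for radiographs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cases(n, noise):
    """Fake 'image features' for normal (0) and abnormal (1) studies."""
    y = rng.integers(0, 2, n)
    X = rng.normal(loc=y[:, None], scale=noise, size=(n, 10))
    return X, y

X_train, y_train = make_cases(2000, noise=1.5)   # stage 1: the training hospital
X_val, y_val = make_cases(500, noise=1.5)        # stage 2: unseen cases, same source
X_new, y_new = make_cases(500, noise=2.5)        # stage 3: a "new hospital" (noisier images)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation AUC:", round(roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]), 2))
print("new-hospital AUC:", round(roc_auc_score(y_new, model.predict_proba(X_new)[:, 1]), 2))
# The second number is ordinarily the lower one; a new-hospital AUC that jumps
# to 1 is a reason to suspect the data rather than celebrate the model.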

When AI’s performance on
radiographs from a new hospital mysteriously improved, Sreenivas smelt a rat.

“AI is matrix algebra. It’s not corrupt
like humans – it doesn’t cheat. The problem must
be the data,” Sreenivas pondered.

Birth of a Company

“I wish I could say we founded this
company to fight TB,” says Pooja Rao, co-founder of Qure.ai, apologetically.
“But I’d be lying. The truth is that we saw in an international public health
problem a business case for AI.”

Qure.ai was founded by Prashant Warier and Pooja Rao. After graduating from the Indian Institute of Technology (IIT), Warier, a natural-born mathematician, did his PhD at Georgia Tech. He had no plans to return to India until he faced the immigration department’s bureaucratic incompetence. Someone had tried entering the US illegally on his wife’s stolen passport. The bureaucracy, unable to distinguish the robber from the robbed, denied her a work visa. Warier reluctantly left the US.

In India, Warier founded a company which used big data to find the preferences of niche customers. His company was bought by Fractal, a data analytics giant – the purchase motivated largely by the desire to recruit Warier.

Warier wanted to develop an AI-enabled
solution for healthcare. In India, data-driven decisions are common in retail but
sparse in healthcare. In a move unusual in industry and uncommon even in
academia, Fractal granted him freedom to tinker, with no strings attached.
Qure.ai was incubated by Fractal.

Warier discovered Rao, a
physician-scientist and bioinformatician, on LinkedIn and invited her to lead
the research and development. Rao became a doctor to become a scientist because
she believed that deep knowledge of medicine helps join the dots in the biomedical
sciences. After her internship, she did a PhD at the Max Planck Institute in
Germany. For her thesis, she applied deep learning to predict Alzheimer’s
disease from RNA. Though frustrated by Alzheimer’s, which seemed uncannily
difficult to predict, she fell in love with deep learning.

Rao and Warier were initially uncertain what their start-up should focus on. There were many possible applications of AI in healthcare – genomic analysis, electronic medical records, insurance claims data. Rao recalled two lessons from her PhD.

“Diseases such as Alzheimer’s
are heterogeneous, so the ground truth, the simple question – is there
Alzheimer’s – is messy. The most important thing I realized is that without the ground
truth AI is useless.”

Rao echoed the sentiments of Lady Lovelace, the first computer programmer, from the nineteenth century. When Lovelace saw the Analytical Engine, the general-purpose computing machine designed by Charles Babbage, she said: “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.”

The second lesson Rao learnt was that the ground truth must be available immediately, not in the future – i.e. AI must be trained on diseases of the present, not on outcomes, which are nebulous and take time to reveal themselves. The need for an answer now, right away, reduced their choices to two – radiology and pathology. Pathology had yet to be digitized en masse.

“The obvious choice for AI was radiology”,
revealed Warier.

Why “Qure” with a Q, not “Cure”
with a C, I asked. Was it a tribute to Arabic medicine?

“We’re not that erudite,” laughed
Warier. “The internet domain for ‘cure’ had already been taken.”

Qure.ai was founded in 2016 during peak AI euphoria. In those days deep learning seemed magical to those who understood it, and to those who didn’t. Geoffrey Hinton, deep learning’s titan, famously predicted radiologists’ extinction – he advised that we should stop training radiologists because AI would soon interpret the images just as well.

Ezekiel Emanuel, bioethicist and architect of Obamacare, told radiologists that their profession faced an existential threat from AI. The UK’s health secretary, Jeremy Hunt, drunk on the Silicon Valley Kool-Aid, prophesied that algorithms would outperform general practitioners. Venture capitalist Vinod Khosla predicted, modestly, that algorithms would replace 80% of doctors.

Amidst the metastasizing hype,
Warier and Rao remained circumspect. Both understood AI’s limitations. Rao was
aware that radiologists hedged in their reports – which often made the ground
truth a coin toss. They concluded that AI would be an incremental technology.
AI would help radiologists become better radiologists.

“We were firing arrows in the dark.
Radiology is vast. We didn’t know where to start,” recalls Rao.

Had Qure.ai been funded by venture capitalists, they’d have had a deadline to deliver a product. But Fractal prescribed no fixed timeline. This gave the founders an opportunity to explore radiology. The exploration was instructive.

They spoke to several radiologists to better understand radiology, find the profession’s pain points, see what could be automated, and what might be better handled by AI. The advice ranged from the flippant to the esoteric. One radiologist recommended using AI to quantify lung fibrosis in interstitial pulmonary fibrosis; another, knee cartilage for precision anti-rheumatoid therapy. Qure.ai has a stockpile of unused, highly niche, esoteric algorithms.

Every radiologist’s idea of augmentation was unique. Importantly, few of their ideas reflected mainstream practice. Augmentation seemed a way of expanding radiologists’ possibilities rather than dealing with radiology’s exigencies – no radiologist, for instance, suggested that AI should look for TB on chest radiographs.

Augmentation doesn’t excite venture
capitalists as much as replacement, transformation, or disruption. And
augmentation didn’t excite Rao and Warier, either. When you have your skin in
the commercial game, relevance is the only currency.

“Working for start-ups is different from being a scientist in an academic medical center. We do science, too. But before we take on a project, we think about the return on investment. Just because an endeavor is academically challenging doesn’t mean that it’s commercially useful. If products don’t sell, start-ups have to close shop,” said Rao.

The small size of start-ups means they don’t have to run decisions through bulky corporate governance. It doesn’t take weeks of convening meetings through Doodle polls. Like free climbers unencumbered by climbing equipment, they can reach their goal sooner. Because a small start-up is nimble it can fail fast, fail without faltering, fail a few times. But it can’t fail forever. Qure needed a product it could democratize. Then came an epiphany.

In World War 2, after Allied aircraft sustained bullet hits from enemy fire, some returned to the airbase and others crashed. Engineers wanted the aircraft reinforced at their weakest points to increase their chances of surviving enemy fire. A renowned statistician of the time, Abraham Wald, analyzed the distribution of the bullet holes and advised that reinforcements be placed where the returning planes hadn’t been shot. Wald realized that the planes which didn’t return were likely shot at their weakest points. On the planes which returned, the bullets marked their strongest points.

Warier and Rao realized that they needed to think about scenarios where radiologists were absent, not where radiologists were abundant. They had asked the wrong people the wrong question. The imminent need wasn’t replacing or even augmenting radiologists, but supplying near-radiologist expertise where not a radiologist was in sight. The epiphany changed their strategy.

“It’s funny – when I’m asked whether
I see AI replacing radiologists, I point out that in most of the rest of the
world there aren’t any radiologists to replace,” said Rao.

The choice of modality – chest
radiographs – followed logically because chest radiographs are the most commonly
ordered imaging test worldwide. They’re useful for a number of clinical
problems and seem deceptively easy to interpret. Their abundance also meant
that AI would have a large sample size to learn from.

“There just weren’t enough radiologists to read the daily chest radiograph volume at Christian Medical College, Vellore, where I worked. I can read chest x-rays because I’m a chest physician, but reading radiographs takes away time I could be spending with my patients, and I just couldn’t keep up with the volumes,” recalls Dr. Justy. Several radiographs remained unread for weeks; many hid life-threatening conditions such as pneumothorax or lung cancer. The hospital was helpless – its budget was constrained, and as important as radiologists were, other physicians and services were more important. Furthermore, even if they had wanted to, they couldn’t recruit radiologists, because the supply of radiologists in India is small.

Justy believes AI can offer two levels of service. For expert physicians like her, it can take away the normal radiographs, leaving her to read the abnormal ones – which reduces the workload, because the majority of radiographs are normal. For novice physicians and non-physicians, AI could provide an interpretation – a diagnosis, or differential diagnoses, or simply point out abnormalities on the radiograph.

The Qure.ai team imagined those
scenarios, too. First they needed the ingredients, the data, i.e. the chest
radiographs. But the start-up comprised only a few data scientists, none of
whom had any hospital affiliations.

“I was literally on the road for
two years asking hospitals for chest radiographs. I barely saw my family,”
recalls Warier. “Getting the hospitals to share data was the most difficult
part of building Qure.ai.”

Warier became a traveling salesman and met with the leadership of over a hundred healthcare facilities of varying sizes, resources, locations, and patient populations. He explained what Qure.ai wanted to achieve and why they needed radiographs. There were long waits outside leadership offices, last-minute meeting cancellations, unanswered e-mails, lukewarm receptions, and enthusiasm followed by silence. But he made progress, and many places agreed to give him the chest radiographs. The data came with stipulations. Some wanted to share revenue. Some wanted research collaborations. Some had unrealistic demands, such as a share of the company. It was trial and error for Warier, as he had done nothing of this nature before.

Actually, it was Warier’s IIT alumni network which opened doors. IITians (graduates of the Indian Institutes of Technology) practically run India’s business, commerce, and healthcare. The heads of the private equity firms which fund corporate hospitals are often IITians, as are the CEOs of those hospitals.

“Without my IIT alumni network, I
don’t think we could have pulled it off. Once an IITian introduces an IITian to
an IITian, it’s an unwritten rule that they must help,” said Warier.

Warier’s efforts paid off. Qure has now acquired over 2.5 million chest radiographs from over 100 sites for training, validating, and testing the chest radiograph algorithm.

“As a data scientist my ethos is
that there’s no such thing as ‘too much data.’ More the merrier,” smiled
Warier.

“The mobile phone reached many
parts of India before the landline could get there,” explains Warier.
“Similarly, AI will reach parts of India before radiologists.”

Soon, a few others, including Sreenivas, joined the team. Whilst the data scientists were educating the AI, Rao and Warier were figuring out their customer base. It was evident that radiologists would not be their customers. Radiologists didn’t need AI. Their customers were those who needed radiologists but were prepared to settle for AI.

“The secret to commercialization in healthcare is need – real need, not induced demand. But it’s tricky, because the neediest are least likely to generate revenues,” said Warier in a pragmatic tone. Unless, that is, the product can be scaled at low marginal cost. An opportunity for Qure.ai arose in the public health space – the detection of tuberculosis on chest radiographs in the global fight against TB. It was an indication that radiologists in the developing world didn’t mind conceding – they had plenty on their plates already.

“It was serendipity,” recalls Rao.
“A consultant suggested that we use our algorithm to detect TB. We then met
people working in the TB space – advocates, activists, social workers,
physicians, and epidemiologists. We were inspired particularly by Dr. Madhu Pai,
Professor of Epidemiology at McGill University. His passion to eradicate TB
made us believe that the fight against TB was personal.”

Qure.ai started with four people.
Today 35 people work for it. They even have a person dedicated to regulatory
affairs. Rao remembers the early days. “We were lucky to have been supported by
Fractal. Had we been operating out of a garage, we might not have survived. Building
algorithms isn’t easy.”

Finding Tuberculosis

Hamlet’s soliloquy, modified to “TB or not TB, that is the question”, captures the dilemma facing TB detection, which is a choice between fewer false positives and fewer false negatives. Ideally, one wants neither. The treatment for tuberculosis – quadruple therapy – exacts a commitment of several months. It’s not a walk in the park. Patients have to be monitored to confirm they are compliant with treatment, and though directly observed therapy, medicine’s big brother, has become less intrusive, it still consumes resources. Taking TB treatment when one doesn’t have TB is unfortunate. But not taking TB treatment when one has TB can be tragic; it defeats the purpose of detection and perpetuates the reservoir of TB.

Hamlet’s soliloquy can be broken into two parts – screening and confirmation. When screening for TB, “not TB is the question”. The screening test must be sensitive – capable of finding TB in those with TB – so that it has a high negative predictive value (NPV): when it says “no TB”, we’re (nearly) certain the person doesn’t have TB.

Those who screen positive comprise two groups – true positives (TB) and false positives (not TB). We don’t want antibiotics given frivolously, so the soliloquy reverses; it is now “TB, that is the question.” The confirmatory test must be specific – highly capable of finding “not TB” in those without TB – so that it has a high positive predictive value (PPV): when it says “TB”, we’re (nearly) certain that the person has TB. Confirmatory tests should not be used to screen, and vice versa.
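
For readers who want the arithmetic behind the two soliloquies, here is a minimal sketch of how predictive values fall out of sensitivity, specificity, and prevalence; the test characteristics and the prevalence are assumptions chosen only for illustration.

# Predictive values from sensitivity, specificity, and prevalence. All numbers are assumed.
def predictive_values(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence                # true positives
    fn = (1 - sensitivity) * prevalence          # false negatives: TB called "no TB"
    fp = (1 - specificity) * (1 - prevalence)    # false positives: "TB" without TB
    tn = specificity * (1 - prevalence)          # true negatives
    return tp / (tp + fp), tn / (tn + fn)        # (PPV, NPV)

# A sensitive but unspecific screening test: its "no TB" is trustworthy
ppv, npv = predictive_values(sensitivity=0.95, specificity=0.75, prevalence=0.10)
print(round(ppv, 2), round(npv, 2))   # PPV ~0.30, NPV ~0.99 - trust the negative

# A specific confirmatory test at the same prevalence: its "TB" is far more trustworthy
ppv, npv = predictive_values(sensitivity=0.90, specificity=0.98, prevalence=0.10)
print(round(ppv, 2), round(npv, 2))   # PPV ~0.83, NPV ~0.99

Predictive values move with prevalence, which is why the confirmatory test does its best work after the screen has enriched the pool of people being tested.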

Tuberculosis can be inferred on chest radiographs, or Mycobacterium tuberculosis (MTB) can be seen on microscopy. Seeing is believing, and seeing the bacteria by microscopy was once the highest level of proof of infection. In one method, a slide containing sputum is stained with carbol fuchsin, rendering it red. MTB retains the red stain even after the slide is washed with acid alcohol, a property responsible for its other name – acid-fast bacilli.

Sputum microscopy, once heavily endorsed
by the WHO for the detection of TB, is cheap but complicated. The sputum
specimen must contain sputum, not saliva, which is easily mistaken for sputum. Patients
have to be taught how to bring up the sputum from deep inside their chest. The
best time to collect sputum is early morning, so the collection needs
discipline, which means that the yield of sputum depends on the motivation of
the patient. Inspiring patients to provide sputum is hard because even those
who regularly cough phlegm can find its sight displeasing.

Which is to say nothing of the analysis, which requires attention to detail. It’s easier to see mycobacteria when they’re abundant. Sputum microscopy is best at detecting the most infectious of the most active of the active TB sufferers. Its accuracy depends on the spectrum of disease. If you see MTB, the patient has TB. If you don’t see MTB, the patient could still have TB. Sputum microscopy alone is too insensitive and cumbersome for mass screening – yet, in many parts of the world, that’s all they have.

The gold standard test for TB – the unfailing truth that the patient has TB, independent of the spectrum of disease – is culture of the mycobacteria, which was long deemed impractical because on the Löwenstein–Jensen medium, made specially for MTB, it took six weeks to grow MTB – too long for treatment decisions. Culture has made a comeback, in order to detect drug-resistant mycobacteria. On newer media, such as MGIT, the mycobacteria grow much faster.

The detection of TB was revolutionized by molecular diagnostics, notably the nucleic acid amplification test known as GeneXpert MTB/RIF, shortened to Xpert, which simultaneously detects mycobacterial DNA and assesses whether the mycobacteria are resistant to rifampicin – one of the first-line anti-tuberculosis drugs.

Xpert boasts a specificity of 98%, and with a sensitivity of 90% it is nearly gold-standard material, or at least good enough for confirmation of TB. It gives an answer in 2 hours – a dramatically reduced turnaround time compared to culture. Xpert can detect 131 colony-forming units of MTB per ml of specimen – a marked improvement on microscopy, which needs about 10,000 colony-forming units of MTB per ml of specimen for reliable detection. However, Xpert can’t be used on everyone – not just because its sensitivity isn’t high enough (90% is a B plus, and screening demands an A plus), but also because its price, which ranges from $10 to $20 per cartridge, is too high for mass screening in developing countries.
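
Some back-of-the-envelope arithmetic makes the point; the population size and prevalence in this sketch are assumptions chosen purely for illustration, while the sensitivity and cartridge price are the figures quoted above.

# Why Xpert alone makes a poor mass screen: an illustrative back-of-the-envelope sketch.
population = 100_000               # assumed size of the screened population
prevalence = 0.003                 # assumed: 0.3% have active TB
sensitivity = 0.90                 # Xpert's sensitivity, quoted above
price_per_cartridge = 10           # low end of the $10-$20 range quoted above

true_cases = round(population * prevalence)        # 300 people with TB
missed = round(true_cases * (1 - sensitivity))     # 30 of them sent home as "no TB"
cost = population * price_per_cartridge            # $1,000,000 for a single screening round

print(f"missed cases: {missed}, cartridge cost: ${cost:,}")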

This brings us back to the veteran warrior, the chest radiograph, which has a long history. Shortly after Wilhelm Röntgen’s discovery, x-rays were used to see the lungs. The lungs were a natural choice because there was natural contrast between the air, through which the rays passed, and the bones, which stopped the rays. Pathology in the lungs stopped the rays, too – so the ‘stopping of rays’ became a marker for lung disease, chief of which was tuberculosis.

X-rays were soon conscripted to the battlefield in the Great War to locate bullets in wounded soldiers, making them war heroes. But it was the writer Thomas Mann who elevated the radiograph to literary fame in The Magic Mountain – a story set in a TB sanatorium. The chest radiograph and tuberculosis became intertwined in people’s imagination. By World War 2, chest radiographs were used for national TB screening in the US.

The findings of TB on chest radiographs include consolidation (whiteness), big lymph nodes in the mediastinum, cavitation (destruction of lung), nodules, shrunken lung, and pleural effusion. These findings, though sensitive for TB – if the chest radiograph is normal, active TB is practically excluded – aren’t terribly specific, as they’re shared by other diseases, such as sarcoid.

Chest radiographs became popular with immigration authorities in Britain and Australia for screening immigrants from high-TB-burden countries at the port of entry. But the WHO remained unimpressed by chest radiographs, preferring sputum analysis instead. The inter- and intra-observer variation in the interpretation of the radiograph didn’t inspire confidence. Radiologists would often disagree with each other, and sometimes disagree with themselves. The WHO had other concerns, too.

“One reason that the WHO is wary of chest radiographs is that they fear that if radiographs alone are used for decision making, TB will be overtreated. This is common practice in the private medical sector in India,” explains Professor Madhu Pai.

Nonetheless, Pai advocates using radiographs to triage for TB – to select patients for Xpert – which is cost-effective because radiographs, presently, are cheaper than molecular tests. Using Xpert only on patients with abnormal chest radiographs would increase its diagnostic yield – i.e. the percentage of cases which test positive. The chest radiograph’s high sensitivity complements Xpert’s high specificity. But this combination isn’t 100% – nothing in diagnostic medicine is. The highly infective endobronchial TB can’t be seen on a chest radiograph, because the mycobacteria never make it to the lungs and remain stranded in the airway.
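
A short sketch makes the triage arithmetic concrete. The radiograph’s sensitivity and specificity, the prevalence, and the population size are assumptions for illustration; Xpert’s sensitivity is the figure quoted earlier, and “yield” here simply means true TB cases found per cartridge used (Xpert’s small false-positive rate is ignored to keep the sketch short).

# Diagnostic yield of Xpert with and without chest-radiograph triage (illustrative numbers).
population = 100_000
prevalence = 0.003                     # assumed
cxr_sens, cxr_spec = 0.95, 0.75        # assumed triage characteristics of the radiograph
xpert_sens = 0.90                      # quoted earlier

tb = population * prevalence
not_tb = population - tb

# Without triage: every person gets a cartridge
yield_all = tb * xpert_sens / population

# With triage: only those with an abnormal radiograph get a cartridge
abnormal = tb * cxr_sens + not_tb * (1 - cxr_spec)
yield_triage = tb * cxr_sens * xpert_sens / abnormal

print(f"no triage:   {population:,.0f} cartridges, yield {yield_all:.1%}")
print(f"with triage: {abnormal:,.0f} cartridges, yield {yield_triage:.1%}")
# Triage cuts the cartridge count roughly four-fold and raises the yield,
# at the price of the few cases the radiograph itself misses.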

“Symptoms such as cough are even more non-specific than chest radiographs for TB. Cough means shit in New Delhi, because of the air pollution, which gives everyone a cough,” explains Pai, basically emphasizing that neither the chest radiograph nor clinical acumen can be removed from the diagnostic pathway for TB.

A test can’t be judged just by its AUC. How likely people – doctors and patients – are to adopt a test is also important, and here the radiograph outshines sputum microscopy: despite its limitations, well known to radiologists, the radiograph still carries a certain aura, particularly in India. In the Bollywood movie Anand, an oncologist played by Amitabh Bachchan diagnosed terminal cancer by glancing at the patient’s radiograph for a couple of seconds. Not CT, not PET, but a humble old radiograph. Bollywood has set a very high bar for artificial intelligence.

Saurabh Jha (aka @RogueRad) is a contributing editor for THCB. This is part 1 of a two-part story.

The post Artificial Intelligence vs. Tuberculosis, Part 1 appeared first on The Health Care Blog.

The Definition of Health Data has Changed—and HHS is All Over It | Dr. Mona Siddiqui, HHS

The Definition of Health Data has Changed—and HHS is All Over It | Dr. Mona Siddiqui, HHS

By JESSICA DAMASSA, WTF HEALTH

Dr. Mona Siddiqui, Chief Data Officer at the US Department of Health & Human Services (HHS), says the definition of health data has changed. Health data is no longer just about what kind of data it is or where it came from; now, she says, health data is more or less defined by its intent. (Think for just a minute about how social media data is being used in healthcare these days.) Mona led a meeting with over 70 stakeholders across the healthcare industry this summer to talk about next steps for this new era of health data: assessing risks and benefits, talking transparency, and looking at issuing recommendations for actions HHS can take. What’s next as the industry continues to look to HHS for guidance around data policy? Tune in to find out.

Filmed at the HIMSS Health 2.0 Conference in Santa Clara, CA in September 2019.

Jessica DaMassa is the host of the WTF Health show & stars in Health in 2 Point 00 with Matthew Holt. Get a glimpse of the future of healthcare by meeting the people who are going to change it. Find more WTF Health interviews here or check out www.wtf.health.

The post The Definition of Health Data has Changed—and HHS is All Over It | Dr. Mona Siddiqui, HHS appeared first on The Health Care Blog.

The Lynne Chou O’Keefe Fallacy

The Lynne Chou O’Keefe Fallacy

By MATTHEW HOLT

Rob Coppedge and Bryony Winn wrote an interesting article in Xconomy yesterday. I told Rob (& the world) on Twitter yesterday that it was good but wrong. Why was it wrong? Well it encompasses something I’m going to call the Lynne Chou O’Keefe Fallacy. And yes, I’ll get to that in a minute. But first. What did Rob and Bryony say?

Having walked the halls and corridors and been deafened by the DJs at HLTH, Rob & Bryony determined why many digital health companies have failed (or will fail) and a few have succeeded. They’ve dubbed the winners “Digital Health Survivors.” And they go on to say that many of the failures have been backed by VCs who don’t know health care while the companies they’ve invested in have “product-market fit problems, sales traction hiccups, or lack of credible proof points.”

What did the “Survivors” do? They have:

“hired health care experts, partnered effectively, and have even co-developed their models alongside legacy players. Many raised venture capital from strategic corporate investors who have helped them refine their product, accelerate channel access, and get past the risk of “death by pilot.”

Now it won’t totally shock you to discover that Rob heads Echo Health Ventures, the joint VC fund from Cambia Health Solutions (Blues of Oregon) & BCBS of N. Carolina, and Bryony runs innovation at BCBS of N. Carolina. So they may be a tad biased towards the strategic venture = success model. But they do have a point. Many, but not all, of their portfolio companies are selling tools and services to the incumbents in health care, which mostly means health plans, hospitals and pharma.

And now we get to the Lynne Chou O’Keefe fallacy. (You might argue that fallacy is the wrong term, but bear with me).

Lynne is another super smart VC and, having decamped from the break-up of Kleiner Perkins, has just started her own fund, Define Ventures. About two years ago she gave a talk at a Health 2.0 Chapter meeting in San Francisco which was a wonderful roadmap for what a tech company needed to do to “partner with” (i.e. sell to) hospitals, plans and pharma. As you might imagine, it included a bunch of getting to know your customers’ problems, doing a whole lot of data analytics, getting your clinical process correct, etc, etc. But in the end, Lynne’s assessment was that the best bet for a health tech startup’s success was to improve the life of an incumbent.

I don’t have a problem with that advice per se, and frankly I send many of the startups I advise to incumbents, hoping that they’ll become clients or investors. But here’s the mistake made by Rob & Bryony. They say:

“Despite being attacked on all sides from innovative startups, well-capitalized tech companies, big retail brands, and government regulators, traditional health care services companies simply don’t seem disrupted yet. In fact, driven by consolidation and strong financial performance, many are healthier and appear more confident than ever. And some of the more successful ones even seem downright innovative themselves, having learned from innovators to build, buy, and partner their way to new capabilities.” “These Innovative Incumbents are differentiated by their commitment to becoming good partners with the “Digital Health Survivors.” They have realized that winning in the future means bringing together better solutions and consumer experiences than their competitors.”

There’s an illusion out there that these incumbents are doing well financially because they are able to take the best of the digital health tech and ideas and change what they are doing.

No one who looks at the US health care system can possibly believe that the incumbents have changed their behavior to adopt the consumer friendly ethos of the digital health tech crowd. They are making money the old fashioned way, by creating monopolies (including buying up physician networks to feed the inpatient beast), ramping up drug prices, (something Rob’s boss Mark Ganz has been very explicit about) and aggressively pursuing patients for collections. And when they get caught in the act, they settle to keep it out of the media. Now, given that the Obama administration was set on HITECH and trying to roll out the ACA, and the Trump administration can’t manage to create a coherent policy on anything, regulators have been more or less absent so it’s no surprise that we’ve had a decade plus of incumbents running rampant. That’s why we are in the mess we are in today and why 35% of the country wants single payer and another 35% want a massive expansion of the ACA.

I’m not sure if Rob & Bryony caught the talk from one of the incumbents at HLTH, but Pam Kehaly, CEO of Blues of Arizona, said something very revealing: “we pat ourselves on the back about value based care but 90% of America is just doing fee-for-service.” The incumbents know how to play that game and they are winning. Just look at the profit totals for those top provider systems.

(Stolen from Modern Healthcare via Eric Topol)

Which brings me to what’s wrong with all of this and it’s the question I asked Lynne at that talk, mindful of her Kleiner Perkins heritage and the alleged $1bn profit Kleiner made on their Amazon investment.

When Jeff Bezos came to pitch John Doerr to invest in Amazon, did he explain how he was going to build a system to help Barnes & Noble sell more books or help Sears sell more clothes? No, he came to put those companies out of business.

Where are the VCs who want to invest in today’s health care equivalent? Because that’s what we need.

Matthew Holt is the Founder and publisher of THCB

The post The Lynne Chou O’Keefe Fallacy appeared first on The Health Care Blog.