The Human Side of AI in Healthcare

Since an artificial intelligence (AI) medical consultant made its debut more than 50 years ago, the use of AI in hospitals and other clinical settings has advanced and spread to many facets of healthcare. Today, physicians and administrators use machine learning and other AI systems for everything from mundane scheduling tasks to assistance with complex surgeries, creating efficiencies in hopes of decreasing healthcare costs and improving patient outcomes.

It’s an alluring promise that most people can get behind. Who doesn’t want cheaper, more effective medical treatments? But when University of Minnesota School of Public Health (SPH) faculty Hannah Neprash and Paige Nong peeled back the hype, they found that in many instances, the use of AI in healthcare has actually led to worse results on metrics like patient trust and total cost of care. 

“The biggest problem I have with AI in healthcare right now is that it’s like we’re walking around with a hammer looking for a nail, when it should be the other way around,” says Nong, an SPH assistant professor. “We should be asking, ‘What is our biggest challenge? Does it need AI? And if so, why?’—which would be a big step forward in ensuring that we avoid the negative consequences of overhyping a tool that may not deliver.” 

A Shortfall of Trust

In February, Nong published the results of a survey of 2,000 Americans that showed nearly two-thirds of respondents don’t trust their healthcare systems to use AI responsibly. Even more concerning, the majority of respondents said they lacked confidence that their health systems would protect them from potential AI-related harm. 

But as Nong found in her research, patients’ mistrust of healthcare AI isn’t necessarily driven by technological illiteracy. Instead, the key predictor of trust in AI was overall trust in healthcare systems, which has been declining for years.

This trust deficit matters because effective healthcare fundamentally depends on patient trust. When patients don’t have confidence in the tools their providers use, it can affect everything from their willingness to seek care to their adherence to treatment.

And patients may have reason to be suspicious of AI models. Many AI systems are not fully evaluated for bias before they are deployed, and bias can creep in when a model is trained on data that is incomplete or unrepresentative of the patient population. This can lead to worse outcomes for patients from under-represented communities.

For example, a recent study examined four leading AI models with the highest success rates in diagnosing liver disease from blood test results. Although these algorithms performed well overall, they performed significantly worse for women than for men. Other studies have found similar race-related biases in algorithms widely used by health systems, such as one used to predict the success of vaginal birth after cesarean (VBAC), which led to Black and Hispanic women receiving more C-sections than white women.

Despite the importance of identifying bias, Nong found that fewer than half of the nearly 2,500 hospitals she surveyed had evaluated their AI systems for bias.
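
In practice, a basic bias evaluation involves computing the same performance metric separately for each demographic subgroup and flagging large gaps. The Python sketch below is a minimal illustration of the idea; the synthetic data, the choice of AUC as the metric, and the 0.05 gap threshold are illustrative assumptions, not a clinical standard or the method used in the studies above.

    # Minimal sketch of a subgroup bias check: compare a model's
    # performance (here, AUC) across demographic groups.
    # All data is synthetic; the 0.05 threshold is illustrative only.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Hypothetical inputs: true outcomes, model risk scores, and a
    # demographic attribute (e.g., sex) for 1,000 patients.
    y_true = rng.integers(0, 2, size=1000)
    scores = np.clip(y_true * 0.3 + rng.normal(0.5, 0.25, size=1000), 0, 1)
    group = rng.choice(["female", "male"], size=1000)

    # Compute the same metric separately within each subgroup.
    results = {}
    for g in np.unique(group):
        mask = group == g
        results[g] = roc_auc_score(y_true[mask], scores[mask])
        print(f"{g}: AUC = {results[g]:.3f} (n = {mask.sum()})")

    # Flag a performance gap larger than the chosen tolerance.
    gap = max(results.values()) - min(results.values())
    if gap > 0.05:
        print(f"Subgroup AUC gap of {gap:.3f} warrants review.")

Even a check this simple requires labeled outcomes, recorded demographic attributes, and staff to run and interpret it.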

Part of the problem is resource gaps among providers. “A lot of hospitals don’t have data science staff to pull a model apart to do these bias evaluations,” says Nong. “They’re getting multiple calls a day from software vendors that are promising to save millions of dollars, but it’s really hard to assess those promises if you don’t have a data science team.”

Following the Money

As Nong points out, one of the biggest selling points for healthcare AI systems is the return on investment for care providers, which are under increasing financial pressure and facing severe labor shortages from the back office to the operating room. 

“If you listen to the hype, AI stands to be the silver bullet for healthcare spending by targeting the right treatment to the right patient at the right time,” says Hannah Neprash, an associate professor at SPH. “Unfortunately, it looks like AI may increase healthcare spending rather than decrease it.”

Neprash’s research focuses on how clinical AI adoption affects spending in Medicare, the second-largest program in the federal budget. Today, Medicare covers only a few dozen AI-enabled services. While cost reduction from AI use in healthcare is possible, Neprash says that overall it has yet to drive down the cost of care delivery.

“To save money in healthcare, you basically have to either do less of something or pay less for it,” says Neprash. “We’re definitely not paying less for it.”

However, AI may still be worth the cost if it improves patient outcomes, reduces clinician burnout, and enables more patient-centered care. In some areas of medicine, such as image-based diagnostics, AI systems may reduce the need for invasive testing while catching potentially deadly diseases earlier than standard diagnostics. Emerging clinical AI tools, like ambient scribe technology, which captures patient-physician conversations, may also significantly reduce the documentation burden that drives clinician burnout and cuts into clinicians’ face time with patients.

“I don’t think we’re going to be saving a lot of money because of AI in healthcare just yet,” Neprash says. “That doesn’t mean it doesn’t improve the quality of care or isn’t something we should spend money on—the verdict is still very much out on whether the AI tools will be worth the cost to improve patient outcomes.”

Critical Thinking in the Age of AI

Both Nong and Neprash are the first to acknowledge that when it comes to the use of AI in healthcare, there are still far more questions than satisfying answers. The important thing, they say, is that the next generation of healthcare professionals is equipped to ask the right questions and drive meaningful change in a rapidly evolving AI landscape.

To this end, the duo launched a new graduate course in the spring focused on AI in healthcare, which gave students the opportunity to engage directly with industry representatives from major health tech companies in the Twin Cities who are working on the front lines of AI adoption.

“I think it’s important for everyone to have a certain baseline of AI literacy, but these are students who are going to have careers in healthcare administration, so our goal was to help them develop critical thinking about these tools,” says Nong. “What questions do you need to ask these vendors? What kind of ethical and equity concerns are there?”

In a field where the technology evolves faster than the evidence, that skeptical, informed perspective may be the most valuable tool of all.
