Patients Are Becoming Less Open to AI in Healthcare

By Crystal Lindell

Many of us use artificial intelligence (AI) in everyday life, including to learn more about our medical conditions and symptoms. But when it comes to AI actually being used in their own healthcare, patients are growing less open to it.

That’s according to a new poll by The Ohio State University Wexner Medical Center, which surveyed 1,007 adults across the country about their opinions on AI in healthcare.

They found that just 42% of adults are open to AI being used as part of their healthcare in 2026. That’s down from the 52% who supported it in 2024. The belief that AI can make healthcare more efficient also fell, from 64% to 55%.

However, the survey found that over half of adults (51%) still used AI to help them make important health decisions, without consulting a medical professional.

Participants said they use AI for a variety of reasons:

  • 62% use AI to help understand symptoms

  • 44% use AI to help explain test results or a medical diagnosis

  • 25% use AI to compare treatments and help make a treatment decision

  • 20% use AI to prepare for an upcoming medical appointment

The drop in patient trust in AI is on par with the natural “hype cycle” of any new technology, according to Ravi Tripathi, MD, chief health informatics officer at Ohio State Wexner Medical Center.

“When we first see something new and shiny, we think it's going to fix the world and replace health care and solve all of our medical problems,” Tripathi said. “People are learning that there are pros and cons of artificial intelligence, where it has actual use and where it really doesn't have a place.”

Tripathi predicts that over the next two to five years, trust in AI will increase, as people become more familiar with artificial intelligence and it becomes more common in healthcare technology.

But he warned patients against relying too heavily on AI for their own medical research.

“We know that 2% of the time AI is going to be inaccurate or it will potentially hallucinate,” Tripathi said. “Physicians are not using AI 100%. We're not trusting it 100%. I would be really concerned about a patient who is following AI. The artificial intelligence doesn't understand your story.” 

Tripathi suggests using AI in partnership with your doctor. AI can help patients compile their health data, explain test results and diagnoses, and help identify questions to ask providers.

“There's a strong value for using artificial intelligence as augmented intelligence,” Tripathi said. “Patients should have oversight of what the technology is doing but consult with their health care team for the final plan.”

While patients have mixed feelings about AI, doctors appear to be more open to it. 

According to a recent survey by the American Medical Association, 81% of physicians use AI to stay current on medical research and to help them with record keeping. That’s about double the rate in 2023, when the AMA first polled doctors on their AI use.

While 76% of physicians say AI technology can help with patient care, about 40% said they are both excited and worried about it – citing concerns about patient privacy and the integrity of the patient-physician relationship.

The global AI healthcare market is projected to reach $868 billion by 2030, with its influence on the overall healthcare market more than doubling from roughly 15% today to over 30% by 2030.

AI in Healthcare: Designed for Progress or Profit?

By Crystal Lindell

As a pain patient, I take a controlled substance medication, which means every single time I need a refill I have to contact my doctor. 

It doesn’t matter that this refill comes every 28 days and that I have been getting it refilled every 28 days for years. It doesn’t matter that my condition has no cure, and that I will most likely need this medication refilled every 28 days for the foreseeable future.

No. I have to make sure to contact my doctor and specifically ask for it, every single time.  

There are ways to automate this process. They could give me a set number of automatic refills and have them sent to the pharmacy every 28 days. If we were even more practical, they could just give me 60 to 90 days’ worth of pills at a time, and save me from multiple trips to the pharmacy. 

But because of insurance rules, hospital policies and opioid-phobia legislation, all of those options are impossible. In fact, they actively turn a process that could be automated into one that has to be done manually. 

Which is why I’m so skeptical of Artificial Intelligence (AI) in healthcare. 

The promise of AI is that it can automate away the mundane tasks so many of us hate doing. Many health-related tasks could easily be automated. They just purposefully are not. 

The hospital I go to for my medical care, University of Wisconsin-Madison, recently released a report filled with recommendations for how AI should be integrated into healthcare. It was based on a recent roundtable discussion that included healthcare professionals from across the country. 

But while the participant list included doctors, IT staff, policy experts, and academics, there was one very glaring absence – the list of participants included exactly zero patients. 

UW Health was one of the organizers for the panel, along with Epic, a healthcare software developer. Their report includes some seemingly good recommendations. 

They ask that AI be used to supplement the work that doctors, nurses and other healthcare staff perform, as opposed to replacing the staff altogether. They say AI could be a great tool to help reduce staff burnout. 

They also recommend that the technology be set up in such a way that it also helps those living in rural areas, in addition to those in more metropolitan ones. The report also emphasizes that healthcare systems should prioritize “weaving the technology into existing systems rather than using it as a standalone tool.”

Additionally, the report stressed the need for federal regulations to “balance space for innovation with safeguarding patient data and ensuring robust cybersecurity measures.”

I don’t disagree with any of that. But it’s a little frustrating to see those recommendations, when some of those problems could already be solved if we wanted them to be. 

And while the panel’s report is new, UW Health’s use of AI is not. 

In April, UW Health announced that they were participating in a new partnership program with Microsoft and Epic to develop and integrate AI into healthcare. 

At the time they said the innovation would be focused on “delivering a comprehensive array of generative AI-powered solutions… to increase productivity, enhance patient care and improve financial integrity of health systems globally.”

That’s the real motivation to bring AI into healthcare: make more money by improving “financial integrity.” Something tells me that AI won’t be used to lower patients’ bills though. 

UW Health also recently shared that its nurses were using AI to generate responses to patients. Over 75 nurses were using generative AI, which assisted them in creating more than 3,000 messages across more than 30 departments.

“This has been a fascinating process, and one I’ve been glad to be part of,” said Amanda Weber, registered nurse clinic supervisor, UW Health. “I have found having a draft to start from helpful, and I’m glad I could provide feedback on improvements and features to ensure this can be a good tool for nurses and have a positive impact on our patients.”

Before I even knew about this program, I had a feeling that AI was involved. 

Recently, when I messaged my doctor about my upcoming refill, I received an overly formal, odd response that felt very much like generative AI writing to me. Which is fine. I honestly don’t mind if my doctor saves time by using AI to respond to patient emails. Heck, I myself have used AI to write first drafts of some emails. 

But my doctor and his staff wouldn’t even need to reply to my emails if he was allowed to set up automatic refills of my long-time medication instead. 

There are many ways to improve health care, and tools like generative AI are likely among them. But AI can’t solve problems that exist on purpose. 

Unless patients are at the forefront of the conversations about these tools, I fear they’ll only be used to solve the sole problem hospital administrators actually care about: how to make more money.