
Patient Trust in AI Impacted by Demographics, Clinician Involvement

New research shows that nearly 53 percent of patients initially prefer a human clinician over artificial intelligence for diagnosis, but clinician support of the tech can improve trust.

By Shania Kennedy

Researchers from the University of Arizona (UArizona) Health Sciences found that patients are almost evenly split about whether they would prefer a human clinician or an artificial intelligence (AI)-driven diagnostic tool, with preferences varying based on patient demographics and clinician support of the technology.

The results of the study, which was published last week in PLOS Digital Health, demonstrated that many patients do not believe that the diagnoses provided by AI are as trustworthy as those given by human healthcare providers. However, patients’ trust in their clinicians supported one of the study’s additional findings: that patients were more likely to trust AI if a healthcare provider supported its use.

These insights came from interviews and surveys in which study participants evaluated whether they would prefer AI- or clinician-guided diagnosis and treatment, along with the circumstances influencing those preferences.

In the study’s first phase, the research team conducted qualitative, semi-structured interviews to test 24 patients’ reactions to current and future AI technologies.

In the second phase, researchers gave 2,472 participants from various socioeconomic, racial, and ethnic groups a blinded, randomized survey that used clinical vignettes to capture patient perceptions through eight variables: illness severity, AI accuracy, whether the AI is personalized or tailored to the patient and their treatment plan, whether the AI is racially and financially unbiased, how the primary care physician may or may not incorporate the AI’s advice, and whether the provider nudged the patient toward the AI.

Overall, the research team discovered that patients were almost evenly split when initially asked whether they would prefer a human clinician or an AI, with 52.9 percent choosing the human and 47.1 percent choosing the AI.

However, when asked again after being told that their provider supported the use of the AI tool and felt that it was helpful, participants were significantly more likely to accept it. AI uptake also increased substantially when study participants were told that the AI was accurate and personalized, or if the primary care provider nudged the patient toward the AI option.

Illness severity, whether the AI is racially and financially unbiased, how the primary care physician may or may not incorporate the AI’s advice, and whether the AI tailored the treatment plan did not have significant impacts on AI acceptance.

Demographic factors and patient attitudes also affected AI uptake.

Older study participants, those who identified as politically conservative, and those who viewed religion as important were less likely to prefer AI. When compared to white patients, Native American participants were more likely to choose AI, while Black patients were less likely to do so.

The researchers also found that clinicians may be able to increase trust in AI and leverage it within their practices by focusing on the human element of patient-provider interactions.

“While many patients appear resistant to the use of AI, [factors like] accuracy of information, nudges and a listening patient experience may help increase acceptance,” explained Marvin J. Slepian, MD, JD, Regents Professor of Medicine at the UArizona College of Medicine – Tucson, in the press release. “To ensure that the benefits of AI are secured in clinical practice, future research on best methods of physician incorporation and patient decision making is required.”

The researchers further emphasized that clinician-driven approaches will be necessary to promote AI use across various groups before widespread adoption can be achieved.

“I really feel this study has the import for national reach. It will guide many future studies and clinical translational decisions even now,” Slepian said. “The onus will be on physicians and others in health care to ensure that information that resides in AI systems is accurate, and to continue to maintain and enhance the accuracy of AI systems as they will play an increasing role in the future of health care.”