Arguing the Pros and Cons of Artificial Intelligence in Healthcare

Artificial intelligence is a hot topic in healthcare, sparking ongoing debate about the ethical, clinical, and financial pros and cons of relying on algorithms for patient care.

By Editorial Staff

In what seems like the blink of an eye, mentions of artificial intelligence have become ubiquitous in the healthcare industry.

From deep learning algorithms that can read CT scans faster than humans to natural language processing (NLP) that can comb through unstructured data in electronic health records (EHRs), the applications for AI in healthcare seem endless.

But like any technology at the peak of its hype curve, artificial intelligence faces criticism from its skeptics alongside enthusiasm from die-hard evangelists.

Despite its potential to unlock new insights and streamline the way providers and patients interact with healthcare data, AI may bring considerable threats of privacy problems, ethical concerns, and medical errors.

Balancing the risks and rewards of AI in healthcare will require a collaborative effort from technology developers, regulators, end-users, and consumers.

The first step will be addressing the highly divisive discussion points commonly raised when considering the adoption of some of the most complex technologies the healthcare world has to offer.

AI WILL CHALLENGE THE STATUS QUO IN HEALTHCARE

AI in healthcare will challenge the status quo as the industry adapts to new technologies. Patient-provider relationships will change as a result, and so, to some extent, will the role of human workers.

The idea that artificial intelligence will replace human workers has been around since the very first automata appeared in ancient myth. 

From Hephaestus’ forge workers to the Jewish legend of the Golem, humans have always been fascinated with the notion of imbuing inanimate objects with intelligence and spirit — and have equally feared these objects’ abilities to be better, stronger, faster, and smarter than their creators.

Seventy-one percent of Americans surveyed by Gallup in 2018 (the most recent data available) believed AI will eliminate more healthcare jobs than it creates.

Just under a quarter reported believing that the healthcare industry will be among the first to see pink slips handed out widely due to the rise of machine learning tools.

Their fears may not be entirely unfounded.

AI tools that consistently exceed human performance thresholds are constantly in the headlines, and the pace of innovation is only accelerating.

Radiologists and pathologists may be especially vulnerable, as many of the most impressive breakthroughs are happening in imaging analytics and diagnostics.

In a follow-up to their 2016 report, Stanford University researchers assessed in 2021 how AI technologies and perceptions of them had changed over the intervening five years. The researchers found evidence of growing AI use in robotics, gaming, and finance.

The technologies supporting these breakthrough capabilities are also finding a home in healthcare, and some physicians are beginning to worry that AI is about to evict them from their offices and clinics.

“Recent years have seen AI-based imaging technologies move from an academic pursuit to commercial projects. Tools now exist for identifying a variety of eye and skin disorders, detecting cancers, and supporting measurements needed for clinical diagnosis,” the report stated.

“Some of these systems rival the diagnostic abilities of expert pathologists and radiologists, and can help alleviate tedious tasks (for example, counting the number of cells dividing in cancer tissue). In other domains, however, the use of automated systems raises significant ethical concerns.”

At the same time, however, one could argue that there simply aren’t enough radiologists and pathologists — or surgeons, or primary care providers, or intensivists — to begin with. 

The US is facing a dangerous physician shortage, especially in rural regions, and the drought is even worse in developing countries around the world.

A little help from an AI colleague that doesn’t need that fifteenth cup of coffee during the graveyard shift might not be the worst idea for physicians struggling to meet the demands of high caseloads and complex patients, especially as the aging Baby Boomer generation starts to require significant investments in care.

AI may also help alleviate the burnout that drives physicians out of practice. Burnout affects the majority of physicians, not to mention nurses and other care providers, many of whom are likely to cut their hours or take early retirement rather than keep powering through paperwork that leaves them unfulfilled.

Automating some of the routine tasks that take up a physician’s time, such as EHR documentation, administrative reporting, or even triaging CT scans, can free up humans to focus on the complicated challenges of patients with rare or serious conditions.

Most AI experts believe that this blend of human experience and digital augmentation will be the natural settling point for AI in healthcare. Each type of intelligence will bring something to the table, and both will work together to improve the delivery of care.

For example, artificial intelligence is ideally equipped to handle challenges that humans are naturally not able to tackle, such as synthesizing gigabytes of raw data from multiple sources into a single, color-coded risk score for a hospitalized patient with sepsis. 

But no one is expecting a robot to talk to that patient’s family about treatment options or comfort them if the disease claims the patient’s life. Yet.
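
To make that kind of synthesis concrete, the toy sketch below rolls a few vital signs, lab values, and note-derived flags into a single color-coded score. Every feature name, weight, and threshold here is an invented placeholder for illustration, not a validated sepsis algorithm.

```python
# Hypothetical sketch: combining multi-source data into one color-coded risk score.
# Feature names, weights, and thresholds are illustrative assumptions only.
def sepsis_risk_score(vitals: dict, labs: dict, notes_flags: dict) -> tuple[float, str]:
    score = 0.0
    score += 0.3 * (vitals.get("heart_rate", 0) > 100)        # tachycardia
    score += 0.3 * (vitals.get("temp_c", 37.0) > 38.3)        # fever
    score += 0.2 * (labs.get("lactate_mmol_l", 0) > 2.0)      # elevated lactate
    score += 0.2 * notes_flags.get("suspected_infection", False)
    color = "red" if score >= 0.6 else "yellow" if score >= 0.3 else "green"
    return score, color

print(sepsis_risk_score(
    {"heart_rate": 112, "temp_c": 38.6},
    {"lactate_mmol_l": 2.4},
    {"suspected_infection": True},
))
```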

AI PRIVACY AND SECURITY CHALLENGES

Artificial intelligence presents a whole new set of challenges around data privacy and security – challenges that are compounded by the fact that most algorithms need access to massive datasets for training and validation.

Shuffling gigabytes of data between disparate systems is uncharted territory for most healthcare organizations, and stakeholders are no longer underestimating the financial and reputational perils of a high-profile data breach.

Most organizations are advised to keep their data assets closely guarded in highly secure, HIPAA-compliant systems. In light of an epidemic of ransomware and knock-out punches from cyberattacks of all kinds, CISOs have every right to be reluctant to lower their drawbridges and allow data to move freely into and out of their organizations.

Storing large datasets in a single location makes that repository a very attractive target for hackers. Beyond making AI systems an enticing target for threat actors, this concentration of data underscores the pressing need for regulations governing AI and how patient data is protected when these technologies are used.

“Ensuring privacy for all data will require data privacy laws and regulation be updated to include data used in AI and ML systems,” the Cloud Security Alliance (CSA) maintained.

"Privacy laws need to be consistent and flexible to account for innovations in AI and ML. Current regulations have not kept up with changes in technology. HIPAA calls for deidentification of data; however, technology today can link deidentified data resulting in identification."

AI falls into a regulatory gray area, making it difficult to ensure that every user is bound to protect patient privacy and will face consequences for failing to do so.

In addition to more traditional cyberattacks and patient privacy concerns, a 2021 study by University of Pittsburgh researchers published in Nature Communications found that cyberattacks using falsified medical images could fool AI models.

The study shed light on the concept of “adversarial attacks,” in which bad actors aim to alter images or other data points to make AI models draw incorrect conclusions. The researchers began by training a deep learning algorithm to identify cancerous and benign cases with more than 80 percent accuracy.

Then, the researchers developed a “generative adversarial network” (GAN), a computer program that generates false images by inserting cancerous regions into negative images or removing them from positive ones in order to confuse the model.

The AI model was fooled by 69.1 percent of the falsified images. Of the 44 positive images made to look negative, the model identified 42 as negative. Of the 319 negative images doctored to look positive, the AI model classified 209 as positive.

“What we want to show with this study is that this type of attack is possible, and it could lead AI models to make the wrong diagnosis — which is a big patient safety issue,” Shandong Wu, PhD, the study’s senior author and associate professor of radiology, biomedical informatics, and bioengineering at the University of Pittsburgh, explained in an accompanying press release.

“By understanding how AI models behave under adversarial attacks in medical contexts, we can start thinking about ways to make these models safer and more robust.”
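
The Pittsburgh team used a GAN to doctor the images, but the underlying vulnerability is easier to see with a simpler, generic attack. The sketch below applies the classic fast gradient sign method (FGSM) to nudge an input just enough to push a classifier toward the wrong label; the tiny untrained network, random “scan,” and epsilon value are all placeholder assumptions, not the study’s model or data.

```python
# Minimal FGSM-style adversarial perturbation, illustrating the general idea
# behind adversarial attacks (the study itself used a GAN-based method).
# The tiny CNN and random "scan" below are placeholders, not a real medical model.
import torch
import torch.nn as nn

model = nn.Sequential(            # stand-in for a trained diagnostic classifier
    nn.Conv2d(1, 8, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 2),              # two classes: benign vs. cancerous
)
model.eval()

image = torch.rand(1, 1, 64, 64, requires_grad=True)   # placeholder "scan"
true_label = torch.tensor([1])                          # assume it is cancerous

loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()                                         # gradient w.r.t. the input

epsilon = 0.05                                          # perturbation budget (assumed)
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```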

Security and privacy will always be paramount. But as stakeholders grow more familiar with the challenges and opportunities of data sharing, this shift in perspective is vital for allowing AI to flourish in a health IT ecosystem where data remains siloed and access to quality information is one of the industry’s biggest obstacles.

ETHICS, RESPONSIBILITY, AND OVERSIGHT

The thorniest issues in the debate about artificial intelligence are the philosophical ones. In addition to the theoretical quandaries about who gets the ultimate blame for a life-threatening mistake, there are tangible legal and financial consequences when the word “malpractice” enters the equation.

Artificial intelligence algorithms are complex by their very nature. The more advanced the technology gets, the harder it will be for the average human to dissect the decision-making processes of these tools.

Organizations are already struggling with the issue of trust when it comes to heeding recommendations flashing on a computer screen, and providers are caught in the difficult situation of having access to large volumes of data but not feeling confident in the tools that are available to help them parse through it.

While artificial intelligence is often presumed to be free of the social and experience-based biases entrenched in the human brain, these algorithms may be even more susceptible than people to making skewed assumptions if the data they are trained on favors one perspective or another.

There are currently few reliable mechanisms to flag such biases. “Black box” artificial intelligence tools that give little rationale for their decisions only complicate the problem – and make it more difficult to assign responsibility to an individual when something goes awry.

When providers are legally responsible for any negative consequences that could have been identified from data they have in their possession, they need to be absolutely certain that the algorithms they use are presenting all of the relevant information in a way that enables optimal decision-making.

An artificial intelligence algorithm used for diagnostics and predictive analytics should, in principle, be more equitable and objective than its human counterparts. Unfortunately, cognitive bias and a lack of comprehensive data can worm their way into AI and machine learning algorithms.

In a 2021 report, the CSA suggested that the rule of thumb should be to assume that AI algorithms contain bias and to work to identify and mitigate those biases.

“The proliferation of modeling and predictive approaches based on data-driven and ML techniques has helped to expose various social biases baked into real-world systems, and there is increasing evidence that the general public has concerns about the societal risks of AI,” the report stated.

“Identifying and addressing biases early in the problem formulation process is an important step to improving the process.”

Developers may unknowingly introduce biases to AI algorithms or train the algorithms using incomplete datasets. Regardless of how it happens, users must be aware of the potential biases and work to manage them.
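
One practical way to act on that advice is to audit a model’s error rates across patient subgroups before and after deployment. The minimal sketch below compares false negative rates between two hypothetical cohorts; the labels, predictions, and group assignments are invented for illustration rather than drawn from any real model.

```python
# Minimal sketch of a subgroup bias audit: compare false negative rates
# across patient groups. Data and group labels below are made up for illustration.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])                 # ground-truth diagnoses
y_pred = np.array([1, 0, 0, 1, 0, 0, 0, 1])                 # model predictions
group  = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])  # hypothetical cohorts

for g in np.unique(group):
    mask = (group == g) & (y_true == 1)                     # positives in this group
    fnr = np.mean(y_pred[mask] == 0) if mask.any() else float("nan")
    print(f"group {g}: false negative rate = {fnr:.2f}")
```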

In 2021, the World Health Organization (WHO) released the first global report on the ethics and governance of AI in healthcare. WHO emphasized the potential health disparities that could emerge as a result of AI, particularly because many AI systems are trained on data collected from patients in high-income care settings.

WHO suggested that ethical considerations should be taken into account during the design, development, and deployment of AI technology.

Specifically, WHO recommended that individuals working with AI operate under the following ethical principles:

  1. Protecting human autonomy
  2. Promoting human well-being and safety and the public interest
  3. Ensuring transparency, explainability, and intelligibility
  4. Fostering responsibility and accountability
  5. Ensuring inclusiveness and equity
  6. Promoting AI that is responsive and sustainable

Bias in AI is a significant concern, but one that developers, clinicians, and regulators are actively working to address.

Ensuring that artificial intelligence develops ethically, safely, and meaningfully in healthcare will be the responsibility of all stakeholders: providers, patients, payers, developers, and everyone in between.

There are more questions to answer than anyone can even fathom. But unanswered questions are the reason to keep exploring — not to hang back. 

The healthcare ecosystem has to start somewhere, and “from scratch” is as good a place as any. 

Defining the industry’s approaches to AI is an unfathomable responsibility and a golden opportunity to avoid some of the past mistakes and chart a better path for the future.

It’s an exciting, confusing, frustrating, optimistic time to be in healthcare, and the continuing maturity of artificial intelligence will only add to the mixed emotions of these ongoing debates. There may not be any clear answers to these fundamental challenges at the moment, but humans still have the opportunity to take the reins, make the hard choices, and shape the future of patient care.