Generative AI in Healthcare: Worth the Risk?

In an overburdened healthcare system, health care providers are being asked to do more with less time. An overwhelming administrative burden can diminish both equity and efficiency within a practice. We now have a new ecosystem of tools at our disposal, generative AI being one of these emerging technologies. Generative AI tools like ChatGPT have proven groundbreaking for content creation: the production of audio, text, images, and simulations has been accelerated across a wide range of public U.S. systems. But organizing and compartmentalizing information brings a downfall that we ourselves recognize as perhaps inevitable: bias. At what point does generative AI become another bias-encoding organism, in which forms of discrimination taint patient diagnoses or treatment plans?

Many are wary of valuing efficiency over equity. Others encourage the use of AI in healthcare, reasoning that practice makes perfect. As we take our first steps in AI development, generative AI has found its way into the medical world. ChatGPT’s release in 2022 expanded generative AI usage across a multitude of public sectors, healthcare being one of many. Clinicians must navigate complex webs of patient information while balancing ethical concerns. Little concern has been raised about using AI for objective, administrative tasks. Healthcare is data- and text-rich, and scanning medical imaging or drafting pre-authorization requests for insurers are among the lower-risk possibilities. Saurabh Johri, chief scientific officer at Babylon Health, highlights the benefits of generative AI for administrative work, specifically for telemedicine visits: “we have also developed generative AI models optimized for telemedicine consultations to automatically summarize patient-clinician consultations in near-real time, reducing the administrative burden placed on clinicians” [1]. Furthermore, many patients feel lost about their diagnosis or care plan because of communication gaps with physicians, so another application is using generative AI to bridge the gap between physician and patient vocabulary. Generative AI can “support clinical decision-making [and] enhance patient literacy with educational tools that reduce jargon,” said Jacqueline Shreibati, M.D., senior clinical lead at Google [2].

But eventually we reach the aspects of medicine that are not so black and white, where evaluating a person’s condition becomes less objective. Personalized medicine is unique to every patient, with diagnoses and treatment plans crafted at the individual level. ChatGPT’s use for patient diagnosis has been contentious: a patient’s symptoms act as input, and the algorithm produces a diagnosis as output.
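
A minimal sketch of that input-output pattern is below; the query_llm helper and its canned reply are hypothetical placeholders standing in for whatever chat model might be used, not any real API or clinical tool.

```python
# Hypothetical stand-in for a chat-model client; it returns a canned reply
# so the sketch runs offline. Not a real API, and not a clinical workflow.
def query_llm(prompt: str) -> str:
    return "Possible diagnoses: appendicitis, gastroenteritis, ovarian torsion"

# The contentious pattern: free-text symptoms in, a diagnosis out, with no
# physical exam, history-taking, or clinician judgment anywhere in between.
symptoms = "fever for 3 days, right lower quadrant pain, nausea"
prompt = f"A patient reports: {symptoms}. List the most likely diagnoses."
print(query_llm(prompt))
```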

Is this too good to be true? Some highlight that generative AI cannot stand in a physician’s shoes to make a diagnosis. Racial bias in particular has hindered the use of generative AI in healthcare; Shreibati emphasizes this point: “A lot of [health] data has structural racism baked into the code” [2]. The National Institute for Health Care Management (NIHCM) describes current AI use in healthcare as a major risk: “embedding race into health care data and decisions can unintentionally advance racial disparities in health” [3].

How can this be? Generative AI in healthcare is often used to assess a patient’s risk for a condition or to identify a patient’s general health needs. If an algorithm learns from trends in which a health condition correlates with racial background, its outputs can be skewed. NIHCM highlighted this with a 2019 study in which an algorithm replaced a physician’s judgment by assigning health risk scores to Black and White patients. Black patients who were significantly sicker than White patients received the same risk scores. The cause was a trend embedded in the algorithm: Black patients incur lower health care spending than White patients for a given level of health, likely because of disparities in the health care system, and the algorithm used spending as its proxy for medical need. Without this element of bias, Black patients would have received around 30% more care [3].
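
A minimal sketch of that proxy mechanism, using entirely hypothetical figures rather than the study’s actual model or data:

```python
# Toy risk model in the spirit of the proxy the 2019 study flagged: it ranks
# patients by past spending, treating dollars spent as a stand-in for need.
def risk_score(annual_spending_usd: float) -> float:
    return annual_spending_usd / 1000.0

# Two hypothetical patients with the same underlying illness burden.
white_patient_spending = 10_000  # illustrative figure only
black_patient_spending = 7_000   # lower spending for the same level of health

print(f"White patient score: {risk_score(white_patient_spending):.1f}")  # 10.0
print(f"Black patient score: {risk_score(black_patient_spending):.1f}")  # 7.0
# Equal sickness, unequal scores: the spending proxy flags the Black patient
# for less follow-up care, reproducing the disparity it learned from.
```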

Other limitations of generative AI stem simply from the nature of the technology. Some point out that a patient’s narrative and personal condition cannot be reduced to patterns and facts. Although ChatGPT scores well on national medical exams, a patient’s pain is often multifaceted. ER doctor Joshua Tamayo-Sarver highlighted this when he experimentally tested ChatGPT’s diagnostic abilities, presenting medical narratives and symptoms from 40 of his patients to the program. Only 50% of ChatGPT’s diagnoses were correct. He concluded that “the art of medicine is extracting all the necessary information required to create the right narrative…we must be very careful to avoid inflated expectations with programs like ChatGPT, because in the context of human health, they can literally be life-threatening” [4].

ChatGPT has sparked a broader conversation about generative AI. We face a crowded healthcare system built by hardworking clinicians. Many are tempted to implement AI across all aspects of health care while working to eliminate algorithmic bias. Others highlight the fragility of human lives, arguing that algorithms are unfit to evaluate deeply personal human conditions. As AI continues its exponential growth, we fall into a cost-benefit analysis of unfamiliar territory. Will human progress in medicine be supported or jeopardized by AI implementation?

Works Cited

  1. Siwicki, Bill. “A Primer on Generative AI – and What It Could Mean for Healthcare.” Healthcare IT News, 9 Mar. 2023, https://www.healthcareitnews.com/news/primer-generative-ai-and-what-it-could-mean-healthcare.

  2. King, Robert. “Google, Microsoft Execs Share How Racial Bias Can Hinder Expansion of Health AI.” Fierce Healthcare, Questex, 23 Feb. 2023, https://www.fiercehealthcare.com/health-tech/google-microsoft-execs-share-how-racial-bias-can-hinder-expansion-health-ai.

  3. Jones, David S. “Racial Bias in Health Care Artificial Intelligence.” NIHCM, https://nihcm.org/publications/artificial-intelligences-racial-bias-in-health-care.

  4. Tamayo-Sarver, Joshua. “ChatGPT in the Emergency Room? The AI Software Doesn’t Stack Up.” Fast Company, https://www.fastcompany.com/90863983/chatgpt-medical-diagnosis-emergency-room.

Physicians Have a Duty to Treat Patients in Times of Personal Risk

Covid-19 has led to renewed interest and discussion regarding the duties of physicians in a high-risk environment. The pandemic produced severe shortages of emergency and critical care providers, and those who remained were overworked and contending with shortages of key equipment such as personal protective equipment. In one study during the pandemic, about 25% of physicians and nurses thought it was ethical for health care providers to abstain from treating patients given the personal risk to themselves and their families [1]. Those who choose healthcare as a career assume certain risks: psychological stress, long hours, and personal dangers such as exposure to harmful and potentially deadly infections. When these challenges increase dramatically, which moral and ethical duties are inherent in the job of a physician? I argue that physicians have an ethical duty to treat patients despite the personal risk involved during events such as pandemics.

An important consideration in discussing physicians’ duties and responsibilities is implied consent. Since it is commonly accepted that some patients are infected and contagious, risk is reasonably understood to be inherent in the field of medicine, and those who enter the field implicitly accept it. Although this does not by itself establish a duty to treat, it establishes an accepted reality of physician practice.

In 1847, the American Medical Association published its first Code of Ethics, stating, “When pestilence prevails, it is their [physicians’] duty to face the danger, and continue their labors for the alleviation of suffering even at the jeopardy of their own lives” [2, p. 3]. While this wording no longer appears in the AMA code, it points to a long tradition of self-sacrifice in medicine, a concept that draws many to the profession. In addition, pledges made by physicians, such as the Hippocratic Oath, reference the special nature of physicians’ duties. The World Medical Association has a similar pledge, although, like the Hippocratic Oath, it does not address risk to physicians [2].

When discussing the difficult concept of a physician’s duty to treat, it is also useful to consider the ethical concept of beneficence. In their work entitled Contemporary Bioethics: Islamic Perspective, Al-Bar and Chamsi-Pasha argue that the principle of beneficence has special meaning for health care workers and implies unique moral obligations. They state, “beneficence is a continuum…professionally things which are considered as supererogatory for the public become obligatory for the professional, e.g., a physician or nurse in a hospital where he is tending patients with highly infectious diseases” [3, p. 132]. This perspective goes a step further in assigning unique moral duties to physicians.

Making broad arguments about moral and ethical duties is particularly challenging because physicians may face health risks that extend to others; family members at home may be especially vulnerable to an infectious disease. Although contentious, the concept of beneficence still applies: physicians have a unique commitment to provide care. In a future epidemic, perhaps one more lethal than Covid-19, a physician who pauses their labors to protect family members at home may indirectly cause the deaths of others. Risks that extend into a physician’s personal life should not override professional priorities when the need for care is dire. These duties should be clearly outlined for prospective health care providers, as ambiguities in a code of ethics will not guarantee care from physicians.



References

  1. McConnell D. (2020), “Balancing the Duty to Treat with the Duty to Family in the Context of the COVID-19 Pandemic”, Journal of Medical Ethics, April 24, 2020.

  2. Kirsch T. D. (2022), “Heroism Is Not a Plan: From ‘Duty to Treat’ to ‘Risk and Rewards’”, The American Journal of Bioethics, March 4, 2022.

  3. Al-Bar MA, Chamsi-Pasha H. (2015), “Contemporary Bioethics: Islamic Perspective”, Cham (CH): Springer, May 28, 2015.
