Don’t Let AI Advancements Outpace Quality Patient Care

Author: Janet Fan


        What if, the next time you walk in for a colonoscopy, AI is helping your doctor diagnose you? AI is increasingly used in diagnostic tools for patients. While modern advancements in data collection make machine learning easier than ever before, they also come with serious risks that are not always considered. One way that Big Data and machine learning are being applied in healthcare is through diagnostic imaging. In machine learning, the computer is trained on a large set of images, for example MRI scans with and without cancerous tumors. Gradually, the computer learns to recognize new images based on patterns from the previous ones. Champions of AI in healthcare argue that this could benefit patients and help health providers increase quality of care. "It works like a second pair of eyes, like a shoulder-to-shoulder partner," argues Dr. Po-Hao Chen, a diagnostic radiologist. "The combined team of human plus AI is when you get the best performance." However, a significant advancement in technology, especially in a sector as sensitive and vital as healthcare, should be approached with appropriate caution. For example, studies have shown discrepancies in the effectiveness of AI diagnostics across racial lines when the training data is not diverse. One such study concluded that these tools, originally intended to streamline care for patients, could actually end up hurting marginalized communities that already have limited access to healthcare. Of course, this is not an AI error; rather, it is a human error, an example of humans passing their subconscious biases on to the AI. It remains a valid concern, however, if AI continues to be adopted on a mass scale.
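To make the idea of pattern-based diagnosis concrete, here is a deliberately toy sketch of the simplest possible "learn from labeled examples" approach, nearest-neighbor classification. All of the data is invented for illustration; real diagnostic AI uses deep neural networks trained on millions of actual scans, not four-pixel toy images.

```python
# Toy illustration of learning from labeled examples (nearest-neighbor).
# Each "scan" is a flattened list of pixel brightness values; the labels
# and pixel values below are entirely made up for demonstration.

def distance(a, b):
    # Sum of squared pixel differences between two flattened images.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(image, training_set):
    # Label the new image the same way as its closest training example.
    return min(training_set, key=lambda ex: distance(image, ex[0]))[1]

# Labeled training "scans": (pixels, label)
training = [
    ([0.9, 0.8, 0.9, 0.7], "tumor"),
    ([0.1, 0.2, 0.1, 0.3], "no tumor"),
]

# A new scan whose pixel pattern resembles the "tumor" example
print(classify([0.8, 0.9, 0.7, 0.8], training))  # prints "tumor"
```

The same mechanism also illustrates the bias problem the studies describe: if the training set contains examples drawn from only one population, the classifier has no pattern to match for everyone else.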


        In addition, there is a crucial question about the role of patient consent in how AI should be used in care. For example, what amount of AI error is comparable to human error and therefore acceptable? Should patients be allowed to opt out of the use of AI in their care? If so, there needs to be more education about the risks and benefits of AI in healthcare for the general population, to ensure that information is not asymmetric. In the United States, the New York Court of Appeals case Schloendorff v. Society of New York Hospital set a precedent that "every human being of adult years and sound mind has the right to determine what shall be done with his own body." This established the role of consent in providing appropriate care to patients. However, now that AI is increasingly being incorporated into healthcare, this question needs to be revisited in the context of hospital policies. Should patients be told when their physician is using generative AI for charting? Or is this within the general discretion of the physician to do what they see as best for patients? These are crucial questions that need answers before we fully embrace AI in healthcare.


        In sum, this is not an attack on AI. AI is a tool that can do great good if used appropriately, but its adoption should never be allowed to outpace quality patient care.


Works Cited


Binkley, Charles E. “Is Informed Consent Necessary When Artificial Intelligence is Used for Patient Care: Applying the Ethics from Justice Cardozo’s Opinion in Schloendorff v. Society of New York Hospital.” Justia, 19 July 2024, verdict.justia.com/2024/07/19/is-informed-consent-necessary-when-artificial-intelligence-is-used-for-patient-care. Accessed 18 June 2025.

“How AI Is Being Used to Benefit Your Healthcare.” Cleveland Clinic, 5 Sept. 2024, health.clevelandclinic.org/ai-in-healthcare. Accessed 18 June 2025.

Marko, John Gabriel O., et al. “Examining Inclusivity: The Use of AI and Diverse Populations in Health and Social Care: A Systematic Review.” BMC Medical Informatics and Decision Making, vol. 25, no. 1, 5 Feb. 2025, https://doi.org/10.1186/s12911-025-02884-1.
