Following the news, it's evident that healthcare professionals are keenly engaged in discussions surrounding artificial intelligence (AI). The rapid development of AI applications is transforming patient-facing services, including diagnostics and predictive analytics that enhance prevention and treatment strategies. As the surge of new applications continues, risk management professionals are increasingly taking notice.
Taking a moment to reflect, AI is not a recent invention. In 1950, mathematician Alan Turing published "Computing Machinery and Intelligence," widely regarded as the first paper to seriously explore machine intelligence. While the concept isn't new, the intrigue lies in the evolving applications for patients and providers, and the potential applications of AI in various facets of life have grown exponentially.
Keeping up with the terminology for computer and robotic applications has become somewhat frenetic as well. For the purposes of this article, let's review the language of AI.
Definitions of AI vary depending on who you ask, but generally speaking, AI is a broad branch of computer science concerned with creating systems, applications, and machines capable of performing tasks that typically require human intelligence. It achieves this by processing and analyzing data, learning from past data points through specially designed algorithms.
Despite the hype and fear surrounding the term "artificial intelligence," it is already employed globally for tasks ranging from the mundane to the incredible. Common examples of AI include smart assistants like Alexa and Siri, social media monitoring, facial recognition, smartphones, search engines like Google, and much more.
Predictive analytics, a common tool in data science, interprets historical data to make informed predictions about the future. It employs techniques such as data mining, modeling, machine learning and statistics, aiding in identifying upcoming risks and opportunities for organizations. Examples of predictive analytics in action include weather forecasting, Amazon's recommendations for purchases and similar items, modeling of flu trends and insurance risk assessments.
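To make the concept concrete, here is a minimal sketch of predictive analytics in Python using the open-source scikit-learn library. The data, feature names and readmission scenario are invented for illustration; a real model would require far larger datasets, rigorous validation and clinical oversight.

```python
# Minimal predictive-analytics sketch: fit a statistical model to
# historical data, then score a new case. All data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical records: [age, prior admissions, chronic conditions]
X_history = np.array([
    [45, 0, 1],
    [67, 2, 3],
    [52, 1, 0],
    [78, 3, 4],
    [36, 0, 0],
    [61, 1, 2],
])
# Observed outcomes: 1 = readmitted within 30 days, 0 = not
y_history = np.array([0, 1, 0, 1, 0, 1])

# Learn the relationship between past features and outcomes
model = LogisticRegression().fit(X_history, y_history)

# Score a new patient: the output is a probability, not a decision
new_patient = np.array([[70, 2, 2]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated 30-day readmission risk: {risk:.0%}")
```

The point of the sketch is the workflow rather than the model: historical data in, a learned pattern, and a forward-looking estimate out, with a human still deciding what to do with it.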
While AI and predictive analytics overlap, the most significant difference lies in autonomy. AI can be autonomous and learn independently, whereas predictive analytics often requires human interaction to help query data, identify trends, and test assumptions.
AI and machine learning (ML) overlap considerably, with ML being a subset of AI. Key differences exist beyond the fact that AI is the broader term. AI aims to create computer systems imitating the human brain and tends to focus on solving broad and complex problems. In contrast, ML is more task-focused, training machines to perform a specific task, and learn in the process, in order to streamline that task and maximize performance.
Unlike predictive analytics, ML can be autonomous and has broader applications beyond predicting data, including error detection, pattern recognition, and more. Predictive analytics can use ML to achieve its goal of predicting data, but that’s not the only technique it uses.
Predictive analytics is a core function of AI, rapidly analyzing large volumes of data and classifying data points. Machines can learn from experience and can be trained to accomplish specific tasks. Examples of AI applications include predictive analytics, clinical pathway decision-making, wearable tech, and off-site or home-based patient monitoring. AI can enhance efficiency, help offset workforce shortages, produce accurate work product and reduce training costs.
Radiology departments increase productivity with healthcare technology that leverages AI to enable faster scan times with higher resolution in imaging modalities like MR, even with patients who are in pain or struggle to hold their breath during an exam. As a result, radiology departments can scan more patients in a day while supporting diagnostic confidence and improving the patient experience at the same time.
The use of AI in clinical diagnostics and monitoring carries both risks and benefits. Predictive analytics can assist in identifying potential health issues, yet unknown risks and untested assumptions can surface as new technology is adopted over time. Many remain optimistic that generative AI large language models will improve healthcare. However, this isn't the first time that using algorithms to make recommendations has gone awry.
AI can manage repetitive tasks, increase efficiency while reducing error rates for work processes and potentially reduce caregiver burnout by handling administrative tasks that burden providers in current healthcare settings. The American Medical Association (AMA) reports that providers spend 20 hours or more per week on administrative tasks, a burden AI could help alleviate.
Sacramento, Calif.-based Sutter Health, for instance, uses augmented intelligence to redirect 20% of patient portal messages away from physicians to a more appropriate care-team member and to give personalized, anticipatory advice to expectant mothers, leading to a 15% drop in in-basket messages.
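As a rough illustration of how such message triage might work, below is a purely hypothetical sketch in Python; it is not Sutter Health's system, and the keywords, roles and routing rules are invented. Production systems would typically use trained language models rather than keyword rules, along with clinical safety review.

```python
# Hypothetical patient-portal message triage. Messages matching a
# non-clinical keyword are routed to the appropriate care-team role;
# anything ambiguous defaults to the physician as the safe fallback.
ROUTING_RULES = [
    ("refill", "pharmacist"),
    ("prescription", "pharmacist"),
    ("appointment", "scheduler"),
    ("reschedule", "scheduler"),
    ("billing", "billing office"),
    ("insurance", "billing office"),
]

def route_message(text: str) -> str:
    """Return the care-team role that should handle this message."""
    lowered = text.lower()
    for keyword, role in ROUTING_RULES:
        if keyword in lowered:
            return role
    return "physician"  # clinical or unclear messages stay with the doctor

if __name__ == "__main__":
    print(route_message("Can I get a refill on my metformin?"))   # pharmacist
    print(route_message("I need to reschedule my appointment."))  # scheduler
    print(route_message("My chest pain is back."))                # physician
```

The safe default matters: when the system cannot confidently classify a message, it should land in front of a clinician rather than be misrouted.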
From the staff perspective, digital and AI solutions may offer a way to alleviate some of the load on the care team while also providing cutting-edge training tools.
In a demonstration conducted by UNC Health, another AI format, ChatGPT, was used to offer medical advice. ChatGPT was given information that included a patient's health history and then asked how the patient should be treated. While the results closely matched how the human doctor proposed to treat the patient, the use of ChatGPT for medical diagnosis has created some controversy, and concerns linger regarding inaccuracies and "made-up" information in similar scenarios.
There are limitations and risks to these delivery processes, and it's imperative that healthcare professionals understand what AI can, and cannot, do. Due diligence is needed in several areas, including the following:
Currently, comprehensive AI regulation is absent in the United States, but changes are anticipated. The White House's "Blueprint for an AI Bill of Rights" outlines principles guiding the design and implementation of automated systems, emphasizing safety, non-discrimination, data privacy, user notice, and human alternatives.
The evolving nature of AI necessitates vigilance from healthcare leaders regarding regulatory changes. Efficiencies created by AI may not eliminate risks entirely. Errors resulting in injury may still occur, and liability considerations may extend from the provider to new stakeholders such as software developers, raising product liability concerns.
Physicians, health systems, and algorithm designers are subject to different, yet overlapping, theories of liability for AI/ML systems. A hospital could be sued over the actions of its physician employees for unsafe deployment of an AI/ML algorithm. Additional risks arise when AI applications serve as part of the diagnostic care team, because an algorithm may reach a recommendation without being able to explain the underlying reasons for its decision.
Given the differing policy triggers, multiple insurers could argue over the proximate cause of any loss: was it a healthcare error, a provider technique mishap, a technology error or a cyber incident? The use of artificial intelligence can also affect a company's business income valuation and the coverage limit chosen.
In a recently published article on AI insurance, the authors state that, “despite enthusiasm about the potential to apply artificial intelligence (AI) to medicine and health care delivery, adoption remains tepid, even for the most compelling technologies. Well-designed AI liability insurance can mitigate predictable liability risks and uncertainties in a way that is aligned with the interests of health care’s main stakeholders, including patients, physicians, and health care organization leadership. A market for AI insurance will encourage the use of high-quality AI, because insurers will be most keen to underwrite those products that are demonstrably safe and effective. As such, well-designed AI insurance products are likely to reduce the uncertainty associated with liability risk for both manufacturers — including developers of software as a medical device — and clinician users and thereby increase innovation, competition, adoption, and trust in beneficial technological advances.”
Organizations considering AI implementation should:

- Implement robust security measures
- Vet and validate AI systems
- Ensure healthcare providers remain integral to patient care

While AI has the potential to revolutionize healthcare, understanding and mitigating risks is critical. Taking these steps will contribute to the ethical and safe use of AI in the healthcare industry.
Willis Towers Watson hopes you found the general information provided in this publication informative and helpful. The information contained herein is not intended to constitute legal or other professional advice and should not be relied upon in lieu of consultation with your own legal advisors. In the event you would like more information regarding your insurance coverage, please do not hesitate to reach out to us. In North America, Willis Towers Watson offers insurance products through licensed entities, including Willis Towers Watson Northeast, Inc. (in the United States) and Willis Canada Inc. (in Canada).