
Artificial and augmented intelligence: Risk management considerations for healthcare

By Joan M. Porcaro | October 11, 2023

The rapid development of AI applications is transforming patient-facing services, including diagnostics and predictive analytics for enhancing prevention and treatment strategies.

Following the news, it's evident that healthcare professionals are keenly interested and engaged in discussions surrounding artificial intelligence (AI). AI applications are developing rapidly, transforming patient-facing services such as diagnostics and predictive analytics that enhance prevention and treatment strategies. As the surge of new applications continues, risk management professionals are increasingly taking notice.

Background: Navigating the evolution of AI

Taking a moment to reflect, AI is not a recent invention. Mathematician Alan Turing published his landmark paper on machine intelligence, "Computing Machinery and Intelligence," in 1950. While the concept isn't new, the intrigue lies in the evolving applications for patients and providers, and the potential applications of AI in various facets of life have grown exponentially.

Keeping up with the definitions for computer and robotic applications has become somewhat frenetic as well. For the purposes of this article, let’s review the language of AI.

What is artificial intelligence (AI)?

Definitions of AI vary depending on who you ask, but generally speaking, AI is a broad branch of computer science concerned with creating systems, applications, and machines capable of performing tasks that typically require human intelligence. It achieves this by processing and analyzing data, enabling it to understand and learn from past data points through specially designed AI algorithms.

Despite the hype and fear surrounding the term "artificial intelligence," it is already employed globally for tasks ranging from the mundane to the remarkable. Common examples of AI include smart assistants like Alexa and Siri, social media monitoring, facial recognition, smartphone features, search engines like Google, and much more.

What is predictive analytics?

Predictive analytics, a common tool in data science, interprets historical data to make informed predictions about the future. It employs techniques such as data mining, modeling, machine learning and statistics, helping organizations identify upcoming risks and opportunities. Examples of predictive analytics in action include weather forecasting, Amazon's recommendations of similar items for purchase, modeling of flu trends and insurance risk assessments.

While AI and predictive analytics overlap, the most significant difference lies in autonomy. AI can be autonomous and learn independently, whereas predictive analytics often requires human interaction to help query data, identify trends, and test assumptions.
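
To make this concrete, below is a minimal sketch of predictive analytics in Python. The scenario and numbers are invented for illustration, and the use of scikit-learn's LinearRegression is simply one convenient choice of modeling technique, not a prescription.

```python
# Illustrative predictive analytics: fit a simple model to historical
# observations, then project forward. All numbers are made up.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical historical data: weekly counts of flu-like-illness cases.
weeks = np.arange(1, 11).reshape(-1, 1)                      # weeks 1..10
cases = np.array([12, 15, 14, 20, 24, 30, 33, 41, 47, 55])   # observed counts

# "Interpret historical data": learn the trend from past observations.
model = LinearRegression().fit(weeks, cases)

# "Make informed predictions about the future": project weeks 11 and 12.
forecast = model.predict(np.array([[11], [12]]))
print(f"Forecast for weeks 11-12: {forecast.round(1)}")
```

Note that a human analyst still frames the question, selects the model and validates its assumptions, which is exactly the human interaction described above.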

Machine learning (ML)

AI and ML overlap considerably, with ML being a subset of AI. However, key differences exist beyond the fact that AI is the broader term. AI aims to create computer systems imitating the human brain, focusing on solving broad and complex problems. In contrast, ML is more task-focused, training machines to perform, and learn from, a specific task in order to maximize performance.

Unlike predictive analytics, ML can be autonomous and has broader applications beyond predicting data, including error detection, pattern recognition, and more. Predictive analytics can use ML to achieve its goal of predicting data, but that’s not the only technique it uses.
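
As an illustration of that task focus, the sketch below trains a model for one narrow job, flagging aberrant readings, akin to the error detection mentioned above. The data is synthetic, and the choice of scikit-learn's IsolationForest is an assumption made for this example, not a recommendation.

```python
# Illustrative task-focused machine learning: an anomaly detector that
# learns what "normal" looks like and flags everything else. Synthetic data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical training data: 200 vital-sign-like readings
# (e.g., heart rate, systolic blood pressure) around typical values.
normal_readings = rng.normal(loc=[75, 120], scale=[8, 10], size=(200, 2))

# The detector is trained for this one task only.
detector = IsolationForest(random_state=0).fit(normal_readings)

# New readings: two plausible, one clearly aberrant (a possible sensor error).
new_readings = np.array([[78, 118], [70, 125], [190, 40]])
print(detector.predict(new_readings))  # 1 = looks normal, -1 = flagged
```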

Benefits: What can AI do?

Predictive analytics is a core function of AI, rapidly analyzing large volumes of data and classifying data points. Machines can learn from experience and can be trained to accomplish specific tasks. Examples of AI applications include predictive analytics, clinical pathway decision-making, wearable tech, and off-site or home-based patient monitoring. AI can enhance efficiency, help offset workforce shortages, improve the accuracy of work product and reduce training costs.

Addressing workforce shortages with workflow automation and AI

Radiology departments are increasing productivity with healthcare technology that leverages AI to enable faster scan times and higher-resolution imaging in modalities like MR, even for patients who are in pain or struggle to hold their breath during an exam. As a result, radiology departments can scan more patients in a day while supporting diagnostic confidence and improving the patient experience.

Risks: A cautionary tale

The use of AI in clinical diagnostics and monitoring carries both risks and benefits. Predictive analytics can assist in identifying potential health issues, yet unknown risks and untested assumptions can surface as new technology is adopted over time. Many remain optimistic that generative AI built on large language models will improve healthcare. However, this isn't the first time that using algorithms to make recommendations has gone awry.

Good news

AI can manage repetitive tasks, increasing efficiency and reducing error rates in work processes, and it can potentially reduce caregiver burnout by handling the administrative tasks that burden providers in current healthcare settings. The American Medical Association (AMA) reports that physicians spend 20 hours or more per week on administrative tasks, a burden AI could help alleviate.

Sacramento, Calif.-based Sutter Health, for instance, uses augmented intelligence to redirect 20% of patient portal messages away from physicians to a more appropriate care-team member and to give personalized, anticipatory advice to expectant mothers, resulting in a 15% drop in in-basket messages.

From the staff perspective, digital and AI solutions may offer a way to alleviate some of the load on the care team while also providing cutting-edge training tools.

Not ready for prime time

In a demonstration conducted by UNC Health, another AI tool, ChatGPT, was used to offer medical advice. ChatGPT was given information that included a patient's health history and then asked how the patient should be treated. While the results closely matched how the human physician proposed to treat the patient, the use of ChatGPT for medical diagnosis has generated controversy, and concerns linger regarding inaccuracies and "made-up" information in similar scenarios.

There are limitations and risks to these delivery processes, and it's imperative that healthcare professionals understand what AI can, and cannot, do. Due diligence is needed in areas including the following:

  • Biased decision making: Although there are many opportunities for AI, a report from the World Health Organization points out associated challenges and risks, including unethical collection and use of health data, biases encoded in algorithms, and risks to patient safety, cybersecurity and the environment.
  • Socioeconomic inequality: Algorithms may create opportunities for abuse (think deepfakes) and raise concerns about job losses driven by automation.
  • Privacy violations: Privacy may not be assured for virtual care or AI, and personal health information may be at risk of breach.
  • Available historical data: AI requires massive data sets in order to "learn," meaning large volumes of patient health information must be fed in for machine learning to take place, and the resulting outputs must then be validated. With the continued advancement of this technology, some groundbreaking applications offer a new way to monitor risks through the so-called "black box" algorithm: a model that constantly updates and learns from new data inputs, but whose exact variables and weightings cannot be determined because they continually evolve (see the sketch after this list).
  • Learning curve: Ensure that staff understand the limitations of AI.
  • Human factors: AI may not be able to interpret all human nuances, which could result in biases, lapses and unintended consequences in care. While AI can certainly enhance the capabilities of healthcare providers, it should never replace them entirely.
  • Ethical issues and dilemmas: Ethical concerns are justified when false information is pushed out. Another issue to consider: Do people want early warning of incipient disease?
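
To make the "black box" concern concrete, here is a minimal, hypothetical sketch of a continuously updating model in Python (synthetic data, invented drift; scikit-learn's SGDRegressor is used only as a stand-in for an online-learning algorithm). Each new batch of data shifts the learned weightings, so there is no fixed formula for an auditor to inspect.

```python
# Illustrative "black box" behavior: a model that keeps learning from new
# data, so its internal weightings are a moving target. Synthetic data only.
import numpy as np
from sklearn.linear_model import SGDRegressor

rng = np.random.default_rng(seed=0)
model = SGDRegressor(random_state=0)

# Simulate three successive batches of data in which the true relationship
# between two input variables and the outcome drifts over time.
for batch, true_weights in enumerate([(2.0, 0.5), (1.5, 1.0), (0.8, 1.6)]):
    X = rng.normal(size=(100, 2))
    y = X @ np.array(true_weights) + rng.normal(scale=0.1, size=100)
    model.partial_fit(X, y)  # the model updates in place on each batch
    print(f"After batch {batch + 1}, learned weights: {model.coef_.round(2)}")

# Yesterday's explanation of how the model weighs its inputs may no longer
# describe today's model, which is what makes auditing difficult.
```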

Regulations

The United States currently lacks comprehensive AI regulation, but changes are anticipated. The White House's "Blueprint for an AI Bill of Rights" outlines principles guiding the design and implementation of automated systems, emphasizing safety, non-discrimination, data privacy, user notice, and human alternatives.

Insurance implications

The evolving nature of AI necessitates vigilance from healthcare leaders regarding regulatory changes. Efficiencies created by AI may not eliminate risks entirely. Errors resulting in injury may still occur and liability considerations may extend from the provider to new stakeholders such as software developers, bringing forth product liability concerns.

Physicians, health systems, and algorithm designers are subject to different, yet overlapping, theories of liability for AI/ML systems. A hospital could be sued over the actions of its physician employees for unsafe deployment of an AI/ML algorithm. Additional risks arise when AI applications serve as part of the diagnostic care team, since an algorithm may generate recommendations without the decision-making skills to explain the underlying reasons for them.

Given the differing policy triggers, multiple insurers could argue over the proximate cause of any loss: Was it a healthcare error, a provider technique mishap, a technology error or a cyber incident? The use of artificial intelligence can also affect a company's business income valuation and the coverage limit chosen.

In a recently published article on AI insurance, the authors state that, “despite enthusiasm about the potential to apply artificial intelligence (AI) to medicine and health care delivery, adoption remains tepid, even for the most compelling technologies. Well-designed AI liability insurance can mitigate predictable liability risks and uncertainties in a way that is aligned with the interests of health care’s main stakeholders, including patients, physicians, and health care organization leadership. A market for AI insurance will encourage the use of high-quality AI, because insurers will be most keen to underwrite those products that are demonstrably safe and effective. As such, well-designed AI insurance products are likely to reduce the uncertainty associated with liability risk for both manufacturers — including developers of software as a medical device — and clinician users and thereby increase innovation, competition, adoption, and trust in beneficial technological advances.”

Risk mitigation

Organizations considering AI implementation should:

  1. Create policies and procedures for AI-based applications, devices, and wearables.
  2. Establish a multidisciplinary team (including the end user) to review new products, services, or devices being brought into the organization before implementation, to guard against unexpected outcomes.
  3. To ensure safety, test the effectiveness of AI processes through the use of Failure Mode and Effects Analysis (FMEA).
  4. Develop training checklists for the care team using AI devices. Educate the care team on escalation strategies, should there be a question regarding the device integrity or when injury occurs.
  5. Involve insurance carriers and brokers to review potential insurance implications.
  6. Track and trend all device incidents. Ensure the care team knows the process for reporting such incidents, and build into organizational device-management policies the requirements for reporting to regulators any issues that could have, or did, result in harm.
  7. Be cautious with AI vendor contracting and insert privacy requirements into the agreement.
  8. Consistently monitor AI systems after deployment, with checks and balances in place to ensure the safety and accuracy of the plan of care.
  9. Utilize tools like the Technology Assessment Checklist for evaluating acquisitions.

Conclusion

While AI has the potential to revolutionize healthcare, understanding and mitigating risks is critical. Implementing robust security measures, vetting and validating AI systems, and ensuring healthcare providers remain integral to patient care will contribute to the ethical and safe use of AI in the healthcare industry.

Disclaimer

Willis Towers Watson hopes you found the general information provided in this publication informative and helpful. The information contained herein is not intended to constitute legal or other professional advice and should not be relied upon in lieu of consultation with your own legal advisors. In the event you would like more information regarding your insurance coverage, please do not hesitate to reach out to us. In North America, Willis Towers Watson offers insurance products through licensed entities, including Willis Towers Watson Northeast, Inc. (in the United States) and Willis Canada Inc. (in Canada).

Author


Joan M. Porcaro
RN, BSN, MM, CPHRM, FASHRM
Director, Operational & Risk Management Consulting
