Applying AI to areas of healthcare such as diagnostics and predictive analytics offers significant opportunities for improving outcomes. But while AI-enabled change is rapid and potentially transformative, these advances are generating new risk management considerations.
To help your healthcare organisation navigate emerging AI risks, this insight examines:
Back in 2020, the World Economic Forum (WEF) predicted that AI would draw on multiple sources of data to reveal patterns in disease and aid treatment and care. WEF also forecast that healthcare systems would be able to predict an individual’s risk of certain diseases and suggest preventative measures, and highlighted how AI would help reduce waiting times for patients and improve efficiency in hospitals and health systems.
We can expect to see wider and increasingly advanced applications of AI in healthcare in line with WEF’s predictions, supported by private and public sector investment. For example, this year the U.K. Health and Social Care Secretary announced a £21m AI Diagnostic Fund, which aims to accelerate the deployment of the most promising AI imaging and decision-support tools so that conditions such as cancers, strokes, and heart conditions can be diagnosed more quickly.
In addition, in June the U.K. government announced its intention to host the first global AI summit, which took place last week.
By rapidly analysing vast amounts of data, AI can classify data points, accomplish specific tasks and learn from experience by harnessing machine learning. Examples of AI applications in healthcare include clinical pathway decision-making, wearable tech and off-site or home-based patient monitoring.
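As a simplified illustration of what ‘learning from experience’ can mean in practice, the sketch below trains a basic classifier on hypothetical home-monitoring readings and uses it to flag a new observation for clinical review. The features, labels, and library choice (scikit-learn) are assumptions for illustration only, not a description of any specific product.

```python
# Illustrative sketch only: training a simple classifier on hypothetical
# home-monitoring readings to flag observations for clinical review.
# The features, labels, and threshold of concern are all made up.
from sklearn.linear_model import LogisticRegression

# Each row: [resting heart rate (bpm), overnight movement events, hours of sleep]
readings = [
    [62, 3, 7.5], [58, 2, 8.0], [65, 4, 7.0],    # previously reviewed: no concern
    [88, 14, 4.5], [92, 11, 5.0], [85, 16, 4.0], # previously reviewed: flagged
]
labels = [0, 0, 0, 1, 1, 1]  # 1 = clinician flagged the night for follow-up

model = LogisticRegression().fit(readings, labels)

new_night = [[90, 12, 4.8]]
print(model.predict(new_night))        # e.g. [1] -> suggest clinical review
print(model.predict_proba(new_night))  # class probabilities for context
```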
AI can also support increased efficiency, helping to address workforce shortages and reduce training costs. For example, radiology departments have increased productivity using AI-enabled imaging technology that delivers faster scan times with higher image resolution, allowing them to scan more patients in a day with diagnostic confidence while improving the patient experience through shorter scans.
We have also seen AI developers working on modelling that can predict heart failure-related health outcomes for veterans, a project launched by a collaboration between regulators and healthcare organisations in the U.K. and U.S.
We’re also seeing AI being used within domiciliary care to analyse data from care workers’ visit reports and produce risk assessments of individuals, predicting the likelihood of falls and hospital admission. Where an alert is triggered, regional service managers compare the automated assessment with the care worker’s written report to make informed decisions about the individual’s care needs, potential intervention measures, and escalation to other agencies.
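To make this concrete, the sketch below shows, in very simplified form, how such an automated assessment might combine features drawn from visit reports into a risk score and flag cases for a manager’s review. It is an illustration only, not the system described above: the feature names, weights, and alert threshold are all hypothetical.

```python
# Illustrative sketch only: a simplified fall-risk score computed from
# features that might be extracted from care workers' visit reports.
# Feature names, weights, and the alert threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class VisitReport:
    client_id: str
    mobility_score: int        # 0 (independent) to 3 (needs full assistance)
    missed_medication: bool    # medication missed since last visit
    near_falls_reported: int   # near-miss falls mentioned in the report
    nights_alone: int          # nights without a carer present this week

def fall_risk_score(report: VisitReport) -> float:
    """Combine report features into a 0-1 risk score (hypothetical weights)."""
    score = (
        0.25 * (report.mobility_score / 3)
        + 0.20 * (1.0 if report.missed_medication else 0.0)
        + 0.35 * min(report.near_falls_reported, 3) / 3
        + 0.20 * min(report.nights_alone, 7) / 7
    )
    return round(score, 2)

ALERT_THRESHOLD = 0.6  # hypothetical cut-off for escalation

def triage(reports: list[VisitReport]) -> list[tuple[str, float]]:
    """Return clients whose automated score warrants manager review."""
    flagged = []
    for report in reports:
        score = fall_risk_score(report)
        if score >= ALERT_THRESHOLD:
            # In practice the manager compares this score with the
            # care worker's written report before any escalation.
            flagged.append((report.client_id, score))
    return flagged

if __name__ == "__main__":
    reports = [
        VisitReport("client-001", mobility_score=2, missed_medication=True,
                    near_falls_reported=2, nights_alone=5),
        VisitReport("client-002", mobility_score=0, missed_medication=False,
                    near_falls_reported=0, nights_alone=1),
    ]
    print(triage(reports))  # [('client-001', 0.74)] -> manager review
```

The key design point is that the automated score is a prompt for human judgement, not a decision: the alert surfaces a case, and the manager weighs it against the care worker’s written report.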
Whether for clinical tasks or administrative work, AI can manage repetitive tasks, increase efficiency, and reduce error rates in work processes. AI may also help reduce caregiver burnout by taking over some routine tasks.
Earlier this year, for example, care home review site carehome.co.uk reported that more than half of care home staff think homes should use AI, such as smart devices, to help care for residents. Carehome.co.uk says AI can help people with limited mobility regain some of their autonomy by using their voice to control their environment, such as lighting and temperature, as well as enabling them to call friends and family.
Ultimately, healthcare providers should first identify which area of their business requires support or a new approach, and whether AI is the right answer.
Regulatory frameworks and guidelines can play a crucial role in ensuring your healthcare organisation uses and governs AI responsibly. Governments and organisations worldwide are actively working on standards and frameworks for implementing AI ethically and safely. The U.K. government's approach emphasises voluntary compliance using existing regulators and laws, while the EU's proposed AI Act takes a risk-based approach and introduces stringent standards for high-risk AI systems.
In late 2022, the Medicines and Healthcare products Regulatory Agency (MHRA) updated its ‘Software and AI as a Medical Device Change Programme’ to help ensure regulatory requirements for software and AI are clear and that patients are protected.
Meanwhile, the multi-agency AI and Digital Regulations Service was launched in June 2023 to advise the NHS and wider care system on using digital and AI technologies.
While AI holds great promise for healthcare organisations, there are risks and challenges you should be ready to respond to, including ethical issues. These can arise when false information is propagated, as well as from the inability of AI to interpret human nuance, which can result in biases, lapses, and unintended consequences in care.
Some assumptions are ‘baked into’ technological programmes, and their effects may only surface after longer periods of use. For example, in May 2023, the U.S. National Eating Disorders Association (NEDA) replaced its volunteer-run helpline with an AI chatbot. A study suggested that at times the chatbot unexpectedly reinforced harmful behaviours.
To address decisions made by AI that risk worsening healthcare outcomes for patients based on their profile and background, the NHS is trialling a programme designed to identify algorithmic biases in systems used to administer healthcare.
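As a simplified illustration of the kind of check a bias audit might run, the sketch below compares a system’s positive-decision rate across demographic groups and flags large disparities for human review. It is not the NHS programme itself: the field names and the tolerance threshold are hypothetical.

```python
# Illustrative sketch only: a simple disparity check comparing a model's
# positive-decision rate across demographic groups. Field names and the
# tolerance are hypothetical, not a regulatory standard.
from collections import defaultdict

def selection_rates(decisions: list[dict]) -> dict[str, float]:
    """Positive-decision rate per group from records like
    {"group": "A", "approved": True}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in decisions:
        totals[record["group"]] += 1
        positives[record["group"]] += int(record["approved"])
    return {g: positives[g] / totals[g] for g in totals}

def disparity_flags(rates: dict[str, float], tolerance: float = 0.8) -> list[str]:
    """Flag groups whose rate falls below `tolerance` x the highest rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if best > 0 and r / best < tolerance]

if __name__ == "__main__":
    decisions = [
        {"group": "A", "approved": True}, {"group": "A", "approved": True},
        {"group": "A", "approved": False}, {"group": "B", "approved": True},
        {"group": "B", "approved": False}, {"group": "B", "approved": False},
    ]
    rates = selection_rates(decisions)   # {'A': ~0.67, 'B': ~0.33}
    print(disparity_flags(rates))        # ['B'] -> warrants human review
```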
Healthcare organisations will need to clearly understand what AI can and cannot do, and you will need to perform due diligence in key areas, specifically:
There are also challenges around attitudes to AI in healthcare, including educating your people on what AI can and cannot do and the role of AI in the future of your healthcare organisation.
Some employees may distrust AI or have concerns as to how it may impact them and their job security, which can impact their mental wellbeing and may further compound workforce challenges.
Patients can also be dubious. A Harvard Business Review report, ‘AI Can Outperform Doctors, so Why Don’t Patients Trust It?’, found that patients are reluctant to use healthcare provided by medical AI even when it outperforms human doctors. This is because they see their medical needs as unique and believe that ‘AI does not take into account one’s idiosyncratic characteristics and circumstances’. In other words, some people don’t believe their health can be adequately looked after by algorithms.
While AI can help reduce some risks, it does not eliminate the possibility of errors resulting in injury, which means healthcare providers, software developers, and algorithm designers need to consider the complex challenges and exposures to potential liabilities. For example, with the introduction of AI, who makes the ultimate decision on patient care: the healthcare provider or the technology itself?
To avoid disputes over responsibility and liability, we may expect to see historically separate lines of insurance, such as medical malpractice, cyber insurance, and technology errors and omissions, increasingly combined into one policy underwritten by a single insurer, addressing concerns over the proximate cause of loss and avoiding arguments over which insurer is liable.
Healthcare providers seeking to implement AI should adopt risk mitigation strategies to ensure patient safety and regulatory compliance. These strategies include:
Artificial intelligence and augmented intelligence have the potential to revolutionise healthcare, but you will need to manage the risks carefully to implement these technologies successfully and avoid harm to patients and your organisation.
By understanding the evolving landscape of AI and addressing the associated risks and challenges, healthcare organisations can leverage emerging technologies to improve efficiency, patient care, and overall outcomes.
To discover smarter ways to understand and mitigate the risks around AI in healthcare, get in touch.