
Navigating AI risks in Professional Liability

By Dr. Joanne Cracknell and Roberto Felipe | October 12, 2023

With Artificial Intelligence (AI) usage increasing, questions remain about coverage and emerging risks. Our Professional Indemnity (PI) Insurance team assesses AI's impact on the PI landscape and how to manage the associated risks.

Since ChatGPT’s launch in November 2022, AI has attracted growing attention from the media, in everyday conversation and, of course, in the insurance community. AI is now a permanent feature of our lives, offering greater efficiency, automation and autonomy for businesses and their clients. Insureds are increasingly raising questions about insurance coverage and possible future restrictions, and underwriters are concerned about potential new exposures. Prompted by these questions and concerns from clients and underwriters, Joanne Cracknell and Roberto Felipe have joined forces to offer some thoughts on how AI is affecting the Professional Liability universe.

Although AI is a growing underwriting concern, it is also recognised that it may reduce risk for insureds and insurers alike. AI takes many forms, including machine learning, deep learning, narrow AI and artificial general intelligence. It can process vast amounts of data to identify potential problems and anomalies at a speed no human can match, enabling predictive analysis and fast, automated decision-making with positive results. Where an insured's use of AI is underpinned by a robust corporate policy, it should offer underwriters additional comfort in terms of risk prevention and management.

Navigating the risks

Nevertheless, new technology used without adequate supervision poses fresh challenges and risks. The current key concerns around using AI centre on:

  • potential for copyright infringement
  • data privacy and security
  • accuracy and liability for any errors

First, care needs to be taken over the use of AI-generated content: not all responses are unique, and some may be derived from existing works, increasing the risk of copyright infringement. Content must be correctly cited to avoid accusations of plagiarism.

In addition, the use of AI and chatbots, such as ChatGPT, has raised concerns around data privacy and security, which is of particular importance for professionals handling sensitive confidential information. PI insureds must ensure that any AI service complies with the requisite data protection legislation and regulators’ Codes of Conduct, and they must implement adequate security measures.

It is vital to acknowledge that OpenAI, the owner and developer of ChatGPT, has itself recognised accuracy as an ongoing concern; the onus for verifying the accuracy of the content therefore falls on the user. OpenAI's terms of use also disclaim liability for any damages arising from use of the chatbot.

Within the context of Legal Services Professional Liability, AI proves helpful for carrying out legal research, contract comparison, due diligence and FAQs, and for increasing access to legal services. However, there is no control over accuracy, and outputs are generated algorithmically from the underlying data. The Limitations on Liability clause in OpenAI's terms is very clear about disclaiming liability, suggesting there is very little recourse against OpenAI if an error in its output, relied upon by a law firm, results in a claim against the firm by a client. Any work produced by AI should therefore undergo the same or greater scrutiny as work conducted by a trainee solicitor or junior lawyer. The use of AI does not remove the need for supervision or for checking the quality and accuracy of the work.

As client-facing executives, our immediate concern is whether our clients are already using AI and have informed their insurers. Clients, in turn, ask whether their insurance policies contemplate this exposure. In general terms, PI insurance aims to indemnify third-party losses arising from negligent acts or omissions. On that basis, provided AI is used in the course of professional business and a claim arises in connection with those services, the policy should respond.

Nevertheless, it is crucial to initiate a dialogue and inform underwriters about how AI is used in rendering professional services. AI is being adopted across industries, including Construction, where it enhances efficiency and quality at every stage: from feasibility studies at Pre-Design, to validating building code compliance during Permitting and Approvals, to optimising Project Management, AI streamlines tasks, making them more efficient and cost-effective.

Hence, whatever the underlying business, it is imperative to discuss with underwriters how AI is employed, including the insured's policy, protocols and controls. This not only raises awareness of potential exposure but also demonstrates how risk is mitigated through disciplined and effective use of the technology.


Underwriters' concerns about AI usage centre on professionals recklessly using chatbots, which can result in inaccurate advice and wrongful professional acts. An illustrative example emerged earlier this year, when a US lawyer[1] sought to establish legal precedent using ChatGPT without adhering to established protocols.

Similar situations could arise if accountants relied exclusively on chatbots for tax solutions, or if engineers employed AI for project specifications, bypassing corporate, market-tested and approved tools.

Creating a risk aware culture

Without a consistent risk management policy covering AI (addressing scope, limitations, supervision, approval processes, copyright, data protection and client confidentiality), the risk of reckless use by individuals is likely to increase. Underwriters will request evidence of a risk-aware culture and a diligent corporate policy that clearly outlines how and when AI is used, monitored and controlled. Insureds must demonstrate that they have assessed and addressed the potential risks arising from their use of AI. While the landscape of AI exposure in Professional Liability Insurance continues to evolve, it is imperative to initiate these discussions now; we are merely at the beginning of a broader conversation.

Proactive steps to manage AI risks to your organisation

AI is a powerful tool, offering immense benefits when harnessed responsibly, but it can be a double-edged sword, posing significant risks when handled recklessly. As professionals in the insurance community, we understand the importance of helping clients identify and take proactive steps to manage AI-related risks within their organisations, and we would encourage you to:

  • Assess Your AI Practices: Evaluate how AI is currently integrated into your operations. Are there areas where it can be more effectively utilised, or where risks need further mitigation?
  • Review and Update Policies: Ensure that your corporate policies encompass AI usage, addressing scope, limitations, supervision, approval processes, copyright, data protection, and client confidentiality.
  • Engage in Dialogue: Initiate discussions with your underwriters about your AI usage. Make them aware of your potential exposures and how you're mitigating risks through disciplined and effective technology utilisation.
  • Establish a Risk-Aware Culture: Foster a culture within your organisation that is aware of AI risks and actively seeks to mitigate them. Train your teams and instil a sense of responsibility regarding AI usage.
  • Stay Informed: AI's landscape is ever evolving. Keep abreast of new developments, regulations, and best practices in AI risk management.
  • Collaborate: Share your insights and experiences with peers in the industry. By learning from each other, we can collectively enhance our understanding of AI risks.

The journey to effectively navigate AI risks is ongoing. We encourage you to actively engage with your insurance broker to proactively manage AI-related risks within your organisation. Together, we can harness the power of AI while safeguarding against its potential pitfalls.

Footnote

  1. ChatGPT: US lawyer admits using AI for case research and New York lawyers sanctioned for using fake ChatGPT cases in legal brief | Reuters.

Authors

Dr. Joanne Cracknell
Director - PI FINEX Legal Services

Roberto Felipe
Director, FINEX Global
