AI and communication - is it risky business?

By Rebecca Atkinson | February 1, 2022

What actually is AI? Why would firms adopt it and how should they mitigate the associated risks?

Rebecca Atkinson, Director of Risk & Compliance at Howard Kennedy LLP, considers in this article whether AI and communication apps pose a risk to businesses.

This article has been written by a Third Party contributor and the views expressed are not necessarily those of WTW.

In the last decade legal tech has come on in leaps and bounds and the number of tech companies has grown exponentially. COVID-19 appears to have accelerated the adoption of tech that enables working from home (in particular communication applications), but what about AI tech? What really is AI, why would firms adopt it, and what needs to be considered from the risk side? And what are the risks in using communication applications?

What is AI really?

When some people think of AI they might picture a humanoid from a movie: a piece of technology masquerading as a human, doing what a human does. Alas (or not, depending on your viewpoint), AI in the legal world is not that exciting.

In terms of AI in the legal world, we are often talking about using natural language processing (NLP) software to perform a sort of ‘word search on steroids’ to find information contained in contracts and court records at speed and at little cost.

It’s called AI or ‘legal AI’ because the NLP is trained using machine learning (this is the ‘artificial intelligence’ part), whether assisted by a lawyer while using it, by the software vendor beforehand, or, in many cases, both.
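The ‘word search on steroids’ idea can be illustrated, very crudely, with a plain pattern search over contract text. This is a hypothetical sketch only: real legal AI products use trained NLP models rather than hand-written patterns, and the clause names and wording variants below are invented for illustration.

```python
import re

# Hypothetical clause patterns allowing for some variant wording.
# A real legal AI tool would learn these signals from training data
# rather than rely on fixed regular expressions.
CLAUSE_PATTERNS = {
    "governing_law": r"governed by the laws? of",
    "limitation_of_liability": r"liability .{0,40}shall not exceed",
    "termination": r"terminat(e|ion) (this|the) agreement",
}

def flag_clauses(contract_text: str) -> dict:
    """Report which clause types appear to be present in the text."""
    text = contract_text.lower()
    return {
        name: bool(re.search(pattern, text))
        for name, pattern in CLAUSE_PATTERNS.items()
    }

sample = (
    "This Agreement shall be governed by the laws of England and Wales. "
    "Either party may terminate this agreement on 30 days' notice."
)
print(flag_clauses(sample))
```

Even this toy version shows why the output must be checked: a clause drafted in unexpected wording would simply be missed, which is precisely the kind of inaccuracy discussed below.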

AI in law firms can relate to due diligence reviews, eDiscovery, legal research, knowledge management tools, contract review and red-lining, and more.

It may be that in years to come AI develops to be more of the kind found in movies, a sort of ‘artificial intelligence lawyer’ if you will, but we aren’t there yet.

What are the risks in using AI?

There are undoubtedly many benefits to using AI tech in law firms (speed and cost being the main ones), and failure to embrace technological change is to stand still, which is itself a risk to law firms. However, AI tech should not be implemented blindly: the risks of each piece of kit need to be considered.

One of the biggest risks is the output of the tech and whether it will be checked by a lawyer or other suitable person at the firm before being utilised or sent to the client. It is important to check a firm’s insurance position on this, as many insurers will not cover losses caused by inaccuracies produced by the legal AI. Often, the legal AI provider will also not indemnify the firm against such losses and will not warrant the output.

This leaves firms very vulnerable to a lack of insurance cover, which is not only a risk but also a regulatory issue where no minimum cover is in place. It is vital that there is a good relationship between the IT department, which will undoubtedly seek to innovate, and the Risk department, so that the risk of each new piece of AI tech can be assessed. Indeed, a risk assessment should take place on each occasion.

Firms also need to be aware of the Law Society’s view on the use of AI, as set out in its lawtech and ethics principles report1 published on 28 July 2021.

We can only hope the Law Society is referring to unusual and novel new AI tech and not everyday AI tech such as knowledge management tools or eDiscovery platforms.

The rise of communication apps and associated risks

As we live in an increasingly virtual world, the breadth of communication apps keeps growing and client demand for their use shows no sign of waning.

Instant messaging platforms have become mainstream in law firms, court rooms and mediations and of course there is good old SMS which now seems like a dinosaur in comparison.

However, what risks does the use of such apps pose and what can firms do to mitigate those risks?

Firms need to consider:

  1. What applications are being used in the firm? It might be time for full disclosure by fee earners with an amnesty promised so that the firm can fully understand and appreciate the risks it faces.
  2. Do firm mobile devices allow users to download and use any app they want? Should certain applications be locked down to minimise the risk?
  3. If certain applications are being used for client work, is this in breach of the application licence (e.g. a licence that allows personal but not business use)?
  4. How secure is the application? Is it vulnerable in any way?
  5. How will data subject access requests be handled when data is spread across applications? How will data be extracted?
  6. Is it really a good idea to be conversing with a client in an informal, chatty style? Could messages be misinterpreted?

Firms may determine that they cannot get away from the use of applications that do not automatically file into the client file. If this is the case, then firms must set out rules for their use and consider only allowing applications whose messages can be downloaded manually and saved into the file. Some applications do not allow downloads and, worse still, delete messages after a certain length of time.

Is the use of AI and communication apps risky business? If the risks are not considered and mitigated properly, yes, it can be!

Footnote

1 https://www.lawsociety.org.uk/topics/research/lawtech-and-ethics-principles-report-2021


Risk Management Matters Winter 2022

This article is a part of our 2022 Winter edition of Risk Management Matters.