The presence of Artificial Intelligence (AI) in law firms is increasing as the legal sector embraces this technology to enhance the provision of legal services and gain a competitive edge.
The Solicitors Regulation Authority’s (SRA) latest risk outlook report[1] explores the use of AI within the legal sector, which is rising across law firms of all sizes, with over 50% of lawyers using some form of AI[2].
The key concern around AI is how to use it safely and effectively. As with any type of risk, it is important that law firms are able to identify, understand and manage the consequences for their business. There is some uncertainty around the use of AI systems, whether it be identifying the most appropriate system to use, ensuring compliance with regulatory and legislative obligations, or broader concerns about AI’s association with criminal activity. It is this uncertainty that may be holding some firms back from adopting AI in their businesses.
AI is arguably the most important technological development of recent years, but it is not a new phenomenon and has been used by law firms for many years. In 2018, law firms were using AI tools for e-disclosure, legal research, digital dictation, automating practice management systems and chatbot-style tools offering basic assistance to clients[3]. Research undertaken by the University of Oxford[4], which explored the use of AI within the legal profession, was published as a white paper. The figure below shows how AI-assisted technology was being used by the profession at that time.
[Figure: Use of AI-assisted technology by the legal profession. 'Grand total' includes all complete responses, including respondents from ABS and legal technology solutions providers. Source: Sako, Armour and Parnham (2020).]
Today, the most common use of AI in the legal sector is to automate routine risk and compliance tasks, such as onboarding clients and conducting anti-money laundering checks. It is also being used for administrative tasks such as engaging with clients via chatbots, which can expedite response times to client enquiries. Other areas where AI is commonly used are text generation for document drafting, analysing contracts, and predicting the outcome of a client’s case, which can prove helpful when assessing the risks of litigation and conducting cost-benefit analysis.
One of the benefits of adopting AI is increased accessibility of legal services, allowing law firms to reach a wider range of clients whilst providing services more affordably, effectively and efficiently. AI can help firms deal with administrative tasks more efficiently, freeing up fee earners’ valuable time and enabling them to apply their technical expertise to the more complex and challenging aspects of a client’s matter.
Automating administrative tasks can also reduce costs, which benefits both the law firm and its clients. Using AI case prediction can help law firms demonstrate to clients the pros and cons of their matters, which may assist with resolving disputes more swiftly, again reducing costs for all parties concerned.
Once law firms and their clients become more comfortable and confident with AI systems, the use of AI is expected to increase, as the practical and commercial benefits of adopting these tools are clear.
The SRA’s Chief Executive, Paul Philip, recognises the opportunities AI can offer but also acknowledges the risks and challenges it can bring, stating that:
The key risks facing the profession arise from concerns about:
AI must operate effectively, accurately and within the law. The UK Government has established a national strategy for managing AI and a framework to guide and inform the responsible development of AI, which is underpinned by the following five principles[6]:
01: Safety, security and robustness
Select systems that meet the needs of the firm and ensure that they are tested prior to deployment. Staff must be trained on the system and be aware of what is deemed acceptable use. In addition, clear rules around the use of systems such as ChatGPT are needed, including supervision and guidance on the use of confidential information and keeping it safe.
02: Appropriate transparency and explainability
The design and running of chosen systems need to be documented so that law firms can explain how they are used. Clients need to be informed when AI will be used to handle their matter, how it works, and the impact it has on them.
03: Fairness
Any personal data processed using AI systems must be handled in accordance with data protection legislation[7], including the use of personal data to train, test or deploy an AI system. Ensure client confidentiality is protected at all times, particularly when training a system. Any data output must be monitored to ensure that outcomes are not biased or inaccurate, and be aware that bias can develop as systems evolve.
04: Accountability and governance
AI does not currently have a concept of truth, so systems and the staff using them need to be supervised to ensure that they are working appropriately, and the accuracy of the information produced must be checked and verified to limit the risk of hallucinated output. All law firms are responsible for their activities, and accountability cannot be delegated to the IT department or an outsourced external IT provider.
05: Contestability and redress
Ensure that there is a mechanism for clients to contest any AI-made decision involving the use of their personal data that they disagree with. This may form part of a law firm’s complaints handling procedure; make sure the procedure is updated so that it can deal with any questions raised by clients about the use of AI in their transactions.
The aim of AI is to offer greater efficiency, automation and autonomy for law firms and their clients. Regardless of how we view AI, it is very much present in our everyday lives, and it is here to stay. Law firms have become more accepting of this and the use of AI systems in the provision of legal services is increasing. As long as the profession continues to understand and monitor the risks to their business, law firms and their clients should stand to benefit from the use of AI.