Podcast

AI regulation: how will this impact insurers now and in the future?

June 04, 2024

Insurance Consulting and Technology

In this episode, Chris Halliday is joined by James Clark, a partner at DLA Piper, to discuss AI regulation and the impact it will have, and is already having, on insurers.

Transcript:

(Re)thinking Insurance Podcast Season 4, Episode 8: AI regulation: how will this impact insurers now and in the future?

JAMES CLARK: The government is keen to emphasize it's as much about fostering and encouraging innovation in the UK, creating the environment to enable businesses to feel free to develop AI, invest in AI, whilst at the same time, protecting the public, businesses, consumers from the potential harms and maybe some of the worst excesses.

SPEAKER: You're listening to (Re)thinking Insurance, a podcast series from WTW, where we discuss the issues facing P&C, life, and composite insurers around the globe, as well as exploring the latest tools, techniques, and innovations that will help you rethink insurance.

CHRIS HALLIDAY: Hello, and welcome to the (Re)thinking Insurance podcast series. I'm your host, Chris Halliday from WTW. And in this episode, we're discussing AI regulation and the impact it will have and is already having on insurers. Before we start, I'd like to welcome our guest, James Clark, who is a partner at DLA Piper. Thanks for joining us, James.

JAMES CLARK: Thanks, Chris. Great to be here.

CHRIS HALLIDAY: Fantastic. Well, James, so you're a partner at the law firm DLA Piper. And you specialize in data privacy law, cybersecurity, and now the regulation of new technologies, such as AI. And I suppose to get started, I wanted to understand, how did you move into AI? What brought you here?

JAMES CLARK: Yeah. So as you say, I'm first and foremost a data protection and privacy lawyer. And as we might come on to discuss and as you know, Chris, AI systems are fundamentally creatures of data. So they rely on large amounts of data initially to train the models that underpin the AI systems. And then throughout their life cycle, they're processing data, producing outputs that constitute data.

And given the very broad scope of personal data under data protection laws, particularly in the UK and the EU, data protection law is very often relevant when AI systems are both being developed and then used in practice. And so it's really that angle that I first approached AI from. And then a couple of years ago, we did a very interesting project, working with the UK government, looking at how the UK should approach the regulation of AI.

And that opened my eyes more broadly to the regulatory challenge around AI, which is as broad as the potential applications for AI, which are, as the listeners may be aware, almost limitless. So it's a huge challenge, this question of how we best regulate AI, but it's a very exciting one. And, yeah, I'm very much enjoying being at the forefront of advising clients on that.

CHRIS HALLIDAY: It's really exciting. I mean, from my perspective, we see the use cases of AI are just, as you say, almost-- it feels almost limitless at the moment. I'm sure we'll start touching on those limits at some point, but it's so exciting to be there and helping it happen. And I know from talking to our clients that often the legal challenges and the question marks come up quite early in the process. And it can sometimes feel like they're restricting the innovation.

So it's exciting that you're there talking to governments, as well as insurers and other clients, about how to free themselves so they can do that innovation.

JAMES CLARK: Yeah. And I think that's exactly the balance that the UK is trying to strike. So the title of the white paper that was published last year is A Pro-Innovation Approach to AI in the UK. And the government is keen to emphasize it's as much about fostering and encouraging innovation in the UK, creating the environment to enable businesses to feel free to develop AI, invest in AI, whilst at the same time, protecting the public, businesses, consumers from the potential harms and maybe some of the worst excesses of the technology.

So sort of getting that balance right is definitely something that the government here is focused on and governments more broadly around the world, as everyone wants to be the leader in innovation, but at the same time realizes they need to protect the public from some of the very clear harms.

CHRIS HALLIDAY: Yeah, that makes a lot of sense. I wonder if we could maybe start by stepping back a little bit and understanding how AI is regulated today. I know we're going to get into some of the new regulations, the white papers, the EU AI Act, but let's talk about today's regulatory frameworks and the areas that insurers would be concerned about when implementing these new technologies right now.

JAMES CLARK: Yeah, of course. So what we have today is what I sometimes describe as the indirect regulation of AI. So we don't really yet have any laws that were specifically designed with AI in mind or were focused on AI. But what we do have is a range of legal frameworks that were not necessarily designed with AI in mind, but which nevertheless are relevant to the development and use of AI.

And these frameworks will be engaged, creating obligations and rights for individuals, where AI is deployed. So maybe to pick up on a few examples of what those are, one would be my field, data protection law. We've already spoken about the fact that personal data is often used both to develop and operate AI systems.

And where that is the case, individuals whose personal data is being used will have all of the normal rights that they would have in other contexts under laws like the GDPR: rights of transparency, rights to access the data, and perhaps most importantly, the right not to be subject to automated decision making in respect of significant decisions, other than in certain circumstances.

So, for example, a decision about how to settle someone's claim or indeed an underwriting decision might be a significant decision. And data protection law already regulates the circumstances in which a decision like that can be taken and the rights that individuals have. And hopefully, it's quite apparent that it will increasingly be what we might call AI systems that are being used to take those automated decisions.

So data protection law is one important field. Another would be intellectual property law. And we've seen a lot of disputes in this field already, for example, from rights holders whose data is used to train generative AI systems, litigation from people like Getty Images, who are concerned that the material they hold rights in is being scraped and used by big tech to train and develop AI systems, but also questions about the extent to which IP rights continue to exist in the outputs of AI systems.

So to what extent do I own something that is created by an AI system? Can that still generate copyright for me as the person who's using that system to create a piece of creative output, if you like? And then probably the final one to touch on, particularly relevant for the insurance industry, is financial services regulation and the role that regulators like the FCA and the PRA have to play.

And as we'll see as we move through the podcast, the approach that is being taken in the UK, at least, is to try and rely to the greatest extent possible on existing regulators and existing regulatory frameworks. And what that means in the world of insurance is primarily looking to the FCA and the PRA to use their existing powers to monitor and enforce against the misuse of AI using existing principles, for example, consumer protection principles and the concept of treating customers fairly, and how that might be impacted if you are relying on an AI system and predictive models to take pricing decisions.

Is there a risk that certain customers might be discriminated against, or may even be priced out of insurance markets, through the increased use of AI? And so all of those existing principles that the FCA applies when regulating firms, they are now thinking very carefully about how to translate those to the use and the misuse of AI.
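
To make the automated decision-making point concrete, here is a minimal sketch in Python of the kind of routing that GDPR-style rights can require; the names and the routing rule are invented for illustration, not taken from the episode or from any specific insurer's process.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    outcome: str       # e.g. "reject_claim", "quote_price"
    model_score: float

def route(decision: Decision, significant_effect: bool) -> tuple[str, Decision]:
    """Hypothetical rule: a decision with a legal or similarly significant
    effect on an individual (the GDPR automated decision-making territory
    James mentions) goes to a human reviewer rather than being fully
    automated."""
    if significant_effect:
        return ("human_review", decision)  # a person takes the final decision
    return ("automated", decision)         # low-impact, may stay automated

# An underwriting rejection is plausibly a "significant" decision.
print(route(Decision("A-42", "reject_claim", 0.91), significant_effect=True))
```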

CHRIS HALLIDAY: Yeah, that makes a lot of sense. And it was one of the themes-- I think the Bank of England did a request for information around the topic of regulation of AI, and one of the themes from the respondents was that there's a huge amount of regulation already around consumer protection and financial regulation.

And a lot of this is outcomes focused, so it doesn't matter whether you're doing it with AI, or doing it manually, or you've got a statistical model or some data. The outcome is the key thing. But as we'll go on to talk about, there are some harms and some things that are maybe exacerbated or increased in probability by AI. And you mentioned intellectual property. It's interesting, actually, to ask whether there's an opportunity for the insurance industry around some of these new risks and the litigation risks. Do you see new insurance products coming out that might protect firms against these sorts of litigation?

JAMES CLARK: Yeah, I mean, I think that's certainly something that insurers will be thinking about. The stakes are so high in this space, which is why we're already seeing such a huge amount of litigation, a lot of it happening at the moment in the US, but it is starting to come across the Atlantic as well. And if you are a business where a substantial amount of the value of your business depends on rights that you hold in creative property and the ability to continue creating intellectual property and monetizing it, then something like generative AI is a massively disruptive technology that potentially upends your entire business model.

So when companies like Getty Images, but also lots of big media companies, are bringing these claims against the technology companies who sit behind some of the big generative AI applications, it is, to a large extent, bet-the-company stuff for a lot of those businesses. That's the way they see it.

And, yeah, we'll see how it plays out, but at a high level, my bet would be that you can't hold back the tide of technology, and that what we'll probably start to see over time is some of these big media companies working with the technology and trying to work out how they can bring it in-house and apply it to their existing business model, rather than just fighting against the tide.

CHRIS HALLIDAY: Absolutely. I think it's a big moment in a lot of creative industries. And there'll be this transition period to whatever the new position is, where you have AI working together with creatives, so that we still get the value the creative industries bring to the economy, but with the added benefit of AI. And maybe a place where insurers can provide support is through that transition period, because you wouldn't want to lose a big chunk of the creative industries while we're finding out how they should and could adapt to the new world.

So, I mean, that's regulation today, and I think that's really helpful to see what the frameworks are. But let's talk a bit about regulation tomorrow, so obviously, specifically, the EU AI Act, which it feels like it's been talked about for a long time. And I'm sure that's how these regulations work, but I remember reading drafts and drafts about it and trying to figure out, well, what exactly is the latest wording?

But it's all a lot clearer now, because earlier this year it was effectively finalized, and it's going through the final stages with the EU regulatory bodies. For the listeners that don't know the act, could you briefly outline its scope and why it is impactful for insurers?

JAMES CLARK: Yeah, of course. So I think if the listeners have heard of one AI law, it's probably the AI Act, which is the world's first comprehensive, horizontal law designed specifically to regulate AI systems. It is, as we say, an EU law, but importantly also one that has extraterritorial effects. So businesses that are selling into the EU, or even businesses that are using AI systems where the outputs of those systems then have an impact on individuals in the EU, are potentially caught by the law.

As you say, we now have a political agreement on the final version of the text, so we know what it's going to say. We're just waiting for it to be published in the Official Journal, which is likely to happen by the end of this month, May 2024. We'll then enter a two- to three-year transition period, depending on which part of the act you're talking about, which will really be the tunnel that businesses go into to actually prepare to comply with the act.

It's comprehensive, as I say, so it applies to all parts of the economy, all sectors. But very importantly, it's a risk-based law, so it doesn't regulate all AI systems, and it doesn't regulate them all in the same way. And the most important category of AI system under the act is what's referred to as a high-risk AI system, which is where the vast majority of obligations lie.

And most of those obligations will apply to the provider or the developer of the AI system, so whoever's building the AI system, but there are also obligations that apply to the user or deployer of the AI system, where those are different people. And then specifically in relation to the insurance industry, one of the categories of high-risk AI system is an AI system that is used in the context of health or life insurance, both for pricing and underwriting, but also in the context of claims decisioning.

And so, yeah, any insurers who have health or life business will need to consider it from that angle. But there are also other categories of high-risk AI systems that will apply to a much broader range of businesses. So, for example, if you use systems in the context of managing your workforce, monitoring employees, again, that's a high-risk use case.

And, yeah, so a lot for businesses to think about. And also important to remember that, as I say, the majority of the obligations will sit with the provider of the system, but that's not just big tech, right? So lots of businesses will be building their own AI tools, even if those tools depend on existing models. So you may take an existing large language model, for example, and use it to develop your own AI system in-house. If you do that, you are then a provider of that AI system under the act.

So this is definitely the time for businesses to be getting to grips with the act, working out, are they caught? In which areas are they caught as a provider or a deployer or both? And then beginning to plan and build the compliance framework that will be needed to address their relevant obligations.
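
As a rough illustration of that "are we caught, and in what role?" triage, here is a simplified sketch; the category list is abridged to the examples mentioned in this episode and is in no way a complete or authoritative reading of the AI Act.

```python
# Abridged, illustrative list of high-risk use cases mentioned in this
# episode -- not a complete or authoritative reading of the AI Act.
HIGH_RISK_USE_CASES = {
    "life_health_pricing_underwriting",
    "life_health_claims_decisioning",
    "employee_monitoring",
}

def triage(use_case: str, built_in_house: bool) -> dict:
    """Rough first-pass triage: is the system high-risk, and is the
    business acting as a provider (it built the system, even on top of
    an existing model) and/or a deployer (it uses the system)?"""
    return {
        "high_risk": use_case in HIGH_RISK_USE_CASES,
        "provider": built_in_house,  # building on an existing LLM still counts
        "deployer": True,            # assume the business also uses the system
    }

print(triage("life_health_claims_decisioning", built_in_house=True))
```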

CHRIS HALLIDAY: It's interesting to me, the focus on life and health insurance in particular. I don't know if you know any of the background to that, and maybe why it doesn't cover other insurances, like motor insurance, which is a compulsory product across the EU. Often, the first insurance product people buy is when they buy their first car. And it's one of the most expensive products as well, especially for those on a low income who maybe don't have life and health insurance.

JAMES CLARK: Yeah. I think with these things, there's always horse trading to some extent. So if you followed the development of the act and the various drafts, you'd see things come in and then get taken out. Generally, the Parliament wants to expand the scope and add in more high-risk categories, but then the Council will want to narrow the scope. And so things get traded in and out.

And, yeah, that often means, as I think you're suggesting, that you sometimes get gaps that feel not completely logical: why is this thing high risk, but this thing over here is not? But ultimately, that's the way these laws get put together. And I suppose the thing to note there is that they've retained the ability to change the list of high-risk systems over time. So the idea is that they will keep it under review, add in new things, and potentially take things out over time.

CHRIS HALLIDAY: Yeah, that makes sense. And so we've talked about the areas of business, and you briefly touched on claims and pricing and underwriting. Are there any other specific use cases that we're seeing insurers using that might be captured?

JAMES CLARK: Yeah. So, I mean, I think those are the big ones, and the ones where insurance businesses have been using mathematical models for a very long time, and then machine learning. For an insurance business, this is a continuum rather than a new technology that has just arrived out of the ether in the last year or two.

I suppose what is a bit newer for some businesses is the generative AI side of things, the natural language processing. And so there are other use cases, for example, things like more sophisticated chatbots for communicating with customers, things like sentiment analysis that you see being implemented in call centers to help manage interactions with customers in a more sophisticated way. So that sort of thing is newer for some businesses and is also potentially subject to obligations under the act.

So, for example, if you use AI systems to interact with a human being, there are labeling requirements. You have to make it clear to the individual that they're interacting with a bot, which today might be pretty obvious for most people, but over time will probably become less and less obvious as those systems become more sophisticated.
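
A deliberately tiny sketch of that labeling idea, assuming a hypothetical chat handler; the wording and mechanism are invented for illustration, not a statement of what the act requires.

```python
BOT_DISCLOSURE = "You are chatting with an automated assistant."

def send_bot_message(text: str) -> str:
    # Prepend a clear disclosure so the customer knows they are
    # interacting with an AI system rather than a person.
    return f"{BOT_DISCLOSURE}\n{text}"

print(send_bot_message("How can I help with your policy today?"))
```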

CHRIS HALLIDAY: Yeah. I mean, in my personal experience, it's not always easy to understand when you're interacting with a chatbot or a chat function which has a person behind it. So I think that will become very important. And obviously, you mentioned it's an EU act, and it does have these extraterritorial scopes for firms operating in the EU or with the EU. But thinking beyond the EU, and maybe we can talk about the UK, but also globally, are there other reasons why this would be interesting for insurance companies that don't have activities in the EU?

JAMES CLARK: Yeah, so I think we often talk about the Brussels effect in EU regulation: they are often the first to regulate in a given area, or if they're not the very first, then they're the first significant player to do so. And obviously, it's a large market, and that means that when they do regulate, it often has ripple effects and leads to copycat regulations in other parts of the world. So that will be a really interesting one to watch, whether there are other jurisdictions that take the AI Act as a template for their own AI law. But also--

CHRIS HALLIDAY: Yeah, as we've seen with GDPR. I mean--

JAMES CLARK: Exactly.

CHRIS HALLIDAY: --there are many countries around the world that have got something that looks very similar to GDPR. And I do agree with you. It'll be interesting to see what happens with AI.

JAMES CLARK: Yeah. And even if that doesn't happen-- my bet would be it doesn't happen to quite the same extent as the GDPR. But the other thing that businesses sometimes do is take it as a bit of a benchmark. So even in areas where they might not be directly subject to the AI Act because it's non-EU business, there may be helpful principles in the AI Act in terms of what we talk about with the governance framework for managing AI risks.

And there is stuff in the AI Act that would be sensible good practice anyway, so, for example, the requirements around data quality when you're developing an AI system, ensuring that your data is accurate, fit for purpose, free from bias. I mean, those are all things that you would want to be doing just to ensure the quality of your products, but also to manage broader legal risks.

And so, yeah, what I do expect is that we'll see businesses rely on the AI Act as a bit of a framework, taking the bits that they think are useful and leaving the stuff that isn't, in areas where they're not actually directly subject to the act.
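
To make the data-quality point tangible, here is a minimal sketch of the kind of pre-training checks a team might run; the column names, data, and scope are invented for illustration, and real bias testing goes far beyond this.

```python
import pandas as pd

def basic_data_quality_report(df: pd.DataFrame, protected_col: str) -> dict:
    """Illustrative checks only: completeness, duplicates, and a crude
    look at representation across a protected characteristic."""
    return {
        "worst_missing_rate": float(df.isna().mean().max()),
        "duplicate_rows": int(df.duplicated().sum()),
        "group_balance": df[protected_col].value_counts(normalize=True).to_dict(),
    }

# Toy training extract with a gap and an imbalance to surface.
df = pd.DataFrame({"age": [34, 51, None, 29], "sex": ["F", "F", "F", "M"]})
print(basic_data_quality_report(df, protected_col="sex"))
```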

CHRIS HALLIDAY: And I think we've seen that. I mean, even the definition of AI itself-- years ago, when I was presenting on AI, my definition of AI was all the things that computers cannot yet do. So once computers can play chess, well, that's not AI anymore. And once they can play Go, well, that's no longer AI.

But it does seem that we're finally coming around to some sort of consensus in the international regulatory community on the definition of AI. So I think that's helpful. And then there are other things; reading the ABI guidance for UK insurers on the use of AI, for example, there are a large number of themes which overlap between that and the AI Act. So I think we can see, certainly in the UK, this starting to come true: we're learning from the EU AI Act and other regulations.

JAMES CLARK: Yeah.

CHRIS HALLIDAY: Is there any--

JAMES CLARK: No, I was just going to say, I mean, I think one of the reassuring things is that there is a lot of alignment globally on what are sometimes referred to as safe or ethical principles for the development and use of AI. And those have been adopted by bodies like the OECD and the UN, so at a supranational level, as well as in individual laws, including in the UK white paper.

And those are just common-sense things like principles of safety, security, transparency and explainability, fairness, accountability and governance, all things that you'd hope broadly most people agree on. And those principles run a bit of a golden thread through most of the law and most of the guidance around AI, and will therefore probably form the bedrock of a lot of the governance and compliance programs that businesses put in place.

CHRIS HALLIDAY: Makes sense. So you mentioned, when we started talking about the AI Act, the two to three years of implementation time, if it does get finalized at the end of this month. But what should insurers and businesses be doing to make the most out of AI in general, and do so safely?

JAMES CLARK: Yeah, it's a great question. I mean, I think a lot of this starts with just mapping where the business is already using AI systems, right? You can only control what you understand. And in the same way that, for example, when businesses began their GDPR compliance programs, the starting point was often: let's actually map out where within the business we're using personal data, and in particular, where we're using more sensitive data or higher volumes of data.

Where are the heightened areas of risk? In the same way, what I'm now seeing companies do is map out their use of AI systems, and in particular, doing it in a risk-based way: thinking, where are we using AI systems that are high risk, either because of the sensitivity of the data they're processing or the criticality of the decision they're being used to take? So, where are we using systems to take a really important decision, or to support a business process that's really critical?

So, yeah, they should start with that mapping, and then begin to build what I keep referring to as a governance and risk management framework for AI, which is going to have a number of pillars. You can divide it in a number of different ways, but it will include things like, from a governance perspective, ensuring that there is leadership for AI within the business, typically some sort of cross-functional steering or oversight group that has representation from different functions, articulating what the business's overall approach to AI adoption is going to be.

How aggressive or how risk averse does the business want to be in terms of how it uses AI? Then going on to articulate some key dos and don'ts for the business, some key policy statements that help to clarify what the business is prepared to do and not do and some of the most fundamental controls that need to be in place, probably by reference to those safe AI principles that I spoke about earlier.

And then from there, developing more granular risk assessment tools that, for specific proposed use cases that the business has, can assess and do a deeper dive on the legal risks associated with such a use case, recommend controls for how to build the AI system in a compliant way, and then obviously, audit and monitor that those controls are being complied with in practice.

And then probably the final one to mention would be the third-party side of it. A lot of this will depend on either buying in AI systems from technology vendors, or even just buying in the data that you need for an AI system, and then, at the other end of the pipeline, potentially selling AI to customers. So if you're an insurance broker, you might be selling services to an insurer that depend on the use of AI. And you need to think about how you manage your risk at both ends of that pipeline: in terms of the contracts, in terms of the assurances that you get from your vendors, and in terms of what you're prepared to give to your customers.

So all of those different pillars will be underpinning this governance, risk management framework. And obviously, building that takes some time, but there are some things that you can get started with pretty quickly in terms of the high-level governance side, the high-level policy, starting the mapping, and then over time, developing the more granular tools and processes.
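
As a concrete rendering of that mapping step, here is a minimal sketch of an AI system inventory ranked in a risk-based way; the record fields and the scoring scheme are invented for illustration rather than drawn from any framework mentioned in the episode.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    owner: str                 # accountable function or person
    data_sensitivity: int      # 1 (low) to 3 (high), invented scale
    decision_criticality: int  # 1 (low) to 3 (high), invented scale

    @property
    def risk_score(self) -> int:
        # Crude illustrative score: sensitivity of the data times the
        # criticality of the decision the system supports.
        return self.data_sensitivity * self.decision_criticality

inventory = [
    AISystemRecord("pricing-model", "Underwriting", 3, 3),
    AISystemRecord("marketing-copy-assistant", "Marketing", 1, 1),
]

# Review the highest-risk systems first.
for rec in sorted(inventory, key=lambda r: r.risk_score, reverse=True):
    print(rec.name, rec.risk_score)
```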

CHRIS HALLIDAY: Yeah. And as you mentioned, this is best practice, whether or not there's a regulatory imperative to do it. And I think we've seen insurers investing in this for their machine learning and data science teams and their data science implementations. So even though it's probably not AI, or not always AI under the definition of the act, we're seeing insurers very interested just from a risk management perspective, and because of those existing regulations, the consumer regulations, et cetera, which are just as applicable to machine learning and statistical models as they are to AI.

So that framework, and I think it's actually a very similar framework for machine learning and for AI, is valuable. And what would you say to an insurer that wants to balance this and maybe says, well, it sounds like a huge amount of work to build these frameworks? How do we balance that with actually doing something and getting some value out of it?

JAMES CLARK: [CHUCKLES] Yeah. So I guess I'd say two things to that. I mean, one is the importance of this principle of doing it in a risk-based way and a proportionate way. So particularly over time, as AI does become ubiquitous, if it hasn't already, you're not going to be able to get your arms around all of it, or certainly, not to the same extent. So you have to keep to this principle of focusing on, where are the higher risk systems to the business?

But also, hopefully, if this framework is done right, and it's not done in an overbearing way but in a proportionate way, then it's something that can give confidence to the business, because there is a lot of scaremongering about AI at the moment. A lot of what I see with clients is the fear factor: not being confident to deploy these solutions, or to greenlight their development, because they're aware that there's a lot of risk and they don't know how to address it.

And so, yeah, if you build the framework correctly, then it gives people the toolkit to do that and the confidence to know that we can do this. We just have to comply with these controls, most of which are, hopefully, common sense and a number of which, as I touched on earlier, would be things you would want to do anyway in order to ensure the quality of the product and ensure that it's useful to you as a business.

So I think that's the key to unlocking this, is doing it in a proportionate way and building something that actually gives confidence to the business to go out and take on the AI challenge and make the most of it.

CHRIS HALLIDAY: Yeah. And I think that absolutely goes hand in hand with understanding the use cases, understanding where the value is. And we're seeing more and more actual live use cases coming through in the insurance industry now. And so we're starting to get a picture of where the value lies and maybe where it doesn't lie as well in some of the false starts people have had.

So if insurers can go and say, well, look, we understand the benefits, and we can see the use cases coming up, we can see what the market's doing, and also we can understand the risks and we've got a framework, then they can be confident from both aspects that what they're doing is the right thing.

JAMES CLARK: Agreed.

CHRIS HALLIDAY: Well, thank you so much, James, for joining me today. I think it's been a really interesting conversation from my perspective.

I'm excited about AI. I'm now excited that we can also manage the risks of AI.

JAMES CLARK: Good.

CHRIS HALLIDAY: So that's great. So thank you very much for joining me.

JAMES CLARK: Great. Thanks, Chris.

CHRIS HALLIDAY: All right. And thank you to listeners as well for joining us in this episode. If you found this interesting, then please make sure you join us for future episodes of (Re)thinking Insurance.

SPEAKER: Thank you for joining us for this WTW podcast featuring the latest perspectives on the intersection of people, capital, and risk. For more information, visit the Insights section of wtwco.com. This podcast is for general discussion and/or information only. It's not intended to be relied upon, and action based on or in connection with anything contained herein should not be taken without first obtaining specific advice from a suitably qualified professional.

Podcast host

Chris Halliday
Global Proposition Leader, Insurance Consulting and Technology

Chris is a Global Proposition Leader, leading personal lines pricing, product, claims and underwriting at WTW. In this role he steers the direction of WTW’s software and consultancy offerings, guiding insurers to harness the full potential of innovation and technology. As former head of innovation in Europe, Chris was a founder of WTW’s global data science team and remains close to innovation in AI and data science. He has diverse consulting expertise in insurance covering pricing, technology, claims, strategy, and strategic acquisitions.


Podcast guest

James Clark
Partner, DLA Piper

James is a partner at the law firm DLA Piper, where he specialises in data privacy law, cybersecurity and the regulation of new technologies, including AI. James has a particular interest in the insurance sector, where his clients range from large global insurers, to brokers and other intermediaries, as well as start-up insurtech businesses. James has been advising on the regulation of artificial intelligence for several years and his experience includes advising the UK government on its evolving regulatory framework for AI, as well as working with private sector businesses looking to establish AI governance frameworks or to obtain compliance advice on emerging laws such as the AI Act.

