In each episode of the podcast, I'm joined by experts who can help us better understand geopolitically-related issues of the day. In recent episodes, we have discussed attacks on global shipping, risks facing the oil and gas industry, threats to sea-based infrastructure, the US election, and much else. Now, artificial intelligence is already all around us. In fact, just as I typed an email, the email program volunteered to finish it for me.
But in the past months, the discussion around it has focused on the risks it poses, not just to individuals, but to countries, to the environment, and in fact, to the entire planet. Jonathan Kewley is co-head of Clifford Chance's Tech Group, which now comprises around 1,000 lawyers. And I got to know him through his work tackling the enormous growth of geopolitically-linked cyber aggression. And for a long time, Jonathan has also focused on the ramifications, good and bad, of AI. But first of all, Jonathan, welcome.
JONATHAN KEWLEY: Thank you so much, Elisabeth. It's great to be here with you today.
ELISABETH BRAW: And I should add, congratulations on just having been voted Partner of the Year at the British Legal Awards.
JONATHAN KEWLEY: That's very kind. Thank you. It was a nice way to start the year.
ELISABETH BRAW: Nice way, and a well-deserved recognition of your work in the legal space. Now, the most aggressively growing AI companies are led by young entrepreneurs who have growth as their highest priority. Sam Altman of ChatGPT, or OpenAI, I should say, in fact saw his board rebel against this growth-first strategy a few months ago, because they felt that safety should come first.
Like them, many AI veterans are concerned about this relentless march forward. The "godfather of AI," Geoffrey Hinton, recently quit Google for exactly this reason, and even Sam Altman has said that our world may not be, quote, "that far away from potentially scary," end quote, AI tools. Now, Jonathan, where do you stand? What is your position? Is AI good or bad, or a little bit of both?
JONATHAN KEWLEY: I think the reality is that AI is this extraordinary, transformational force for our society and for our economies, particularly as economies flatline. We're seeing issues with growth and productivity in the UK and many other European countries currently. So AI offers huge opportunities for businesses to drive efficiency, to provide new product lines, and to reduce costs in terms of how they oversee their employees and how they run their business.
And of course, in Europe, it provides a great hope that we can, at last, rival the US in terms of tech dominance, which, for so long, has just been a West Coast tech-bro discussion. This could be the thing that brings Europe out of the tech doldrums, if you like. But the reality is, if you have this discussion, you can't just focus on the positives.
And indeed, that most controversial of characters at the moment, Elon Musk, was a signatory to a letter within the last 24 months requesting an ethical pause on the use of AI. He and 30,000 other people, including Steve Wozniak, the co-founder of Apple, signed that letter. They requested a pause because AI is moving too quickly. And the reality is, it does have a dark side.
I've spoken to Sam Altman about this. I've been in the room where he's described the need for an equivalent to an atomic energy authority to oversee it, and yet the financial imperative is very strong. So as a society, and also as individuals and as companies, we have to get this balance right. Like nuclear power, it's a brilliant thing. But if it falls into the wrong hands, in terms of state actors, or if we don't handle it properly, it will have severe societal impacts.
And what do we mean when we talk about that? It could mean cyber attacks on steroids, made much more powerful and much worse. It means that our political systems, and the way that we engage, particularly online, could be manipulated through the use of deepfakes. And that the risk of fraud, in the way that we run our daily lives and engage with financial services, could again be amplified.
So what we need to build is trust. And today, the UK government has said that it wants to mainline this into the veins of the UK, which I fully welcome. But we need to do that in the context of building this safely. Because just as the first planes and cars were unsafe and killed many people, the first AI tools are also quite unsafe. And if we're not alert to that, we're going to be building technologies which are going to have catastrophic impacts. That means catastrophic not just for companies, but also for individuals. And that's where we need to get very real about this.
ELISABETH BRAW: Jonathan, you advise companies. As an ordinary citizen, and as somebody who focuses on threats to Western societies, I worry most about how AI can be used in deepfakes, and generally in the public domain, to distort the discussion and potentially harm our elections, which is a major threat to liberal democracies. But from your perspective, since you have a slightly different focus, which aspects should we worry about most?
JONATHAN KEWLEY: The things that I deal with day to day that are going wrong tend to be in the employment context. So AI feels like a very new thing to us all. It's this shiny, new opportunity. But actually, AI has been around for 50 years. It's accelerated recently with the advent of ChatGPT and the prospect of artificial general intelligence, which is this idea that you can have a machine which is 10,000 times cleverer than a human being.
But the reality is, day to day, this technology is already embedded in our lives. And in the HR context, with hiring decisions, with recruitment, and with supervising people once they're employed, AI is used very commonly, and it is going wrong. So the bias that we see in CV screening, for example, preferring candidates because they are white, middle class, or have gone to certain universities, is a very AI-driven thing.
We see really worrying use of it in the appraisal and supervision of people. So decisions are being made about people's lives and livelihoods by what's called agentic AI: AI agents which make decisions abstracted from humans. Some examples include people being sacked from their positions because an AI tool has decided that they've submitted incorrect or fraudulent expenses, and no one is checking the outcome of that. And actually, when you look into it, the expense claims have been completely legitimate, and yet the letters are sent.
So I think in the HR and employment context, the risk is that because so much money can be saved in this space, and because eight tenths of what these tools do is really effective, you lose sight of the two tenths which is very bad. Which means that people aren't treated properly. They're not treated fairly. They're subject to prejudice and bias. The things that we as human beings have spent many years trying to dial out of our companies, this AI can reintroduce through the back door.
And of course, the reality is that these tools have been trained on data, and built by people, that can be quite biased. And the AI reflects those views. And certainly, some of the recent announcements from Meta around dialing back their DEI programs are, I think, going to be worrying in the context of what they mean for these tools. Because if you have a more diverse group of people building them, then the end result, when these tools are unleashed onto people, particularly in the employment context, is going to be better.
ELISABETH BRAW: And one of the challenges, I think, when tech leaders, AI leaders, talk about the need for regulation is that we don't know yet what AI will be. How can we then know how it should be regulated, or what it is that should be regulated? And secondly, legislators, by definition, are bound to be further behind on understanding tech than tech leaders themselves. So they will always be struggling to understand exactly what technology is being developed and needs to be regulated. What gives?
JONATHAN KEWLEY: Well, you say that. But actually, I think Europe and, in fact, China are being quite bold on this. And that's one reason why Mark Zuckerberg is picking fights with the EU. Because on AI, which is the next big bastion of growth for Meta, the EU is being pretty bold. So there is law which came into force late last year, with the first deadline in February, which bans the use of certain prohibited AI tools in Europe.
And when we say that, we mean AI tools that link into European individuals. So you could be a US company, or a company in China, using these tools to serve or make decisions about individuals in Europe, and your use of AI there will be regulated. So it's extraterritorial. It's the long arm of the law in Europe. And these prohibited AI tools are things like practices which could impact negatively on individuals in vulnerable groups, or anything which could link into social scoring, of the sort that we have seen in China.
So categorizing people because of certain data points and denying them products or services: in Europe, from next month, that's going to be banned. And also the use of sentiment analysis, emotion recognition, in the workplace. So if I'm talking to you here, Elisabeth, and you're using AI to look at my face and decide whether I'm happy or sad or motivated or depressed, then you won't be able to do that. So the EU is getting very firm on this. And I think that's why there's this sense that it's anti-innovation.
The reality is, many of the big businesses I speak to see this as equivalent to aircraft safety or automotive safety. They're not scared of regulation. They want to be sophisticated in the way they build these tools, and they want to ensure that their customers aren't negatively impacted. What we are going to see in coming months and years is that reputations are going to be destroyed by unsafe use of AI. So people shouldn't be afraid of the law.
But the laws are strong in Europe. And also, I mentioned China. China has been first out of the blocks on this. It has had AI regulation since 2018, particularly in the context of cybersecurity and of impacts on vulnerable groups. And anything which can generate deepfakes has to be registered with the government. They've been very focused on this. So it's quite easy to see this as a binary, good versus bad, China versus the rest of the world. But actually, the European legislation is pretty similar to what China has had in place for quite a long time.
ELISABETH BRAW: And in a sense, one could say that the EU, although people may perceive it as annoying because it focuses so much on regulation, can actually, by being such an early pioneer of regulation in different areas, including AI, essentially make it easier for other countries and jurisdictions to pass similar legislation.
JONATHAN KEWLEY: What we're going to see, in the same way that we saw with data law, which, again, some companies have not liked but have got used to, is Europe setting the standard for the world. There are something like 150 different standards now for safe AI use, and most of them replicate what Europe is doing. So Europe is setting the standard here.
And one may think that in some areas Europe goes too deep or is too conservative. But as you said at the top of the program, the risk here is that this could pose an existential threat to human life if it goes wrong: if it gets into weaponry, for example, and isn't properly controlled, or if we give it too much power. I mean, the way that Geoffrey Hinton and others have spoken about this is that if we develop a tool which is 10,000 times cleverer than us, we become the toddlers, and they become the parents.
And he has said, and this is completely right, that in human history there are no examples of a much more intelligent being being controlled by a being which is much less intelligent. So that power dynamic is set. Now, if we do this without control and awareness, then we are on a very negative course. And Europe has realized that. Why? Because Europe knows about human rights. And why does it know about human rights? Because it's gone through two world wars. It's seen the Holocaust, and it's seen the catastrophic impact of inhuman behavior.
Now, like it or not, AI does have the ability to be hugely inhuman and hugely negative, and it could amplify the horrors that we've seen previously in Europe. So we need to get real on this. Let's not be jingoistic or negative about Europe. They're seeing that something really awful could happen, and they want to act now. And they're seeing this in the context of being pro-innovation, but safe innovation. And frankly, who would want to fly in a plane with engines that weren't regulated, or with wings that weren't looked at carefully from a safety perspective? It's exactly the same with AI.
ELISABETH BRAW: And on that note, Jonathan, Rishi Sunak, when he was prime minister, convened a very high-level global AI summit here in the UK. That was an ambitious attempt to get some sort of globe-spanning agreement on AI rules, but it didn't get very far, did it? We are still discussing how the EU can essentially set some sort of bar that others can then reach. What happened? Have we reached a stage in global geopolitical tensions where not even agreement on safe AI is possible?
JONATHAN KEWLEY: We are living now in an extraordinary period of growth, which will be driven by AI. It will be like the industrial revolution, but something even bigger than that. And who would want to hold that back? So there's a temptation just to go for the profit imperative. But you're right that the safety summit looked at frontier risk, at what could cause significant harm. And that does need multilateral action.
Again, the reality is, I think the UK said it was a leader in this space. Was that summit a kind of watershed moment? Not really. Because regardless of what the UK said, Europe, in terms of the way it regulates, is always going to be more powerful. And if you're a company doing business in the UK, you're effectively going to comply with European laws.
So the summit was not, I would say, a huge success in getting global support, but Europe is not slowing down. There is a regulatory agenda now which is going to roll out over the next two years. As a company, if you're not doing anything about it, you need to start very quickly: the fines are up to 7% of your turnover if you get it wrong. So Europe is being very strong on this, and China is being very strong on this.
And I think what we're going to see is an evolution of the discussion, which is: we want this to be safe, but also for safety to support innovation. And of course the UK, and you mentioned the announcement today, can do that. Because we've got the Alan Turing Institute, and huge academic centers of excellence in safety at Oxford, Cambridge, and the London universities in particular. We can leverage that and be a leader in this space.
But if we go all out and say it's just about profit and not about safety, we're not going to look back fondly on what happens. So we have a fork in the road now, and the best companies are going to embrace safety while also rolling out AI, as we are doing at Clifford Chance. We've got 9,000 people using it now. We're not scared of this technology, but that's because we look at it very carefully, and we're putting guardrails around it.
ELISABETH BRAW: Quick question. You were at the AI summit. I was not. Was the atmosphere collaborative or more adversarial?
JONATHAN KEWLEY: I think highly collaborative, because it was focused on something which should focus all of our minds, which is the types of AI that could destroy us. Do we want to go to war with adversaries who have killer drones with no sense of right or wrong? Do we want nuclear warheads which are AI-enabled, or planes or fighter jets which are AI-enabled without a human being overseeing them? These are things that really worry people. Or indeed artificial general intelligence, which could turn on humanity. So these are the questions that are being asked.
And I think you'll struggle to find anyone in the world who would support the idea of killer robots destroying humanity. Geoffrey Hinton recently shortened his odds of AI destroying humanity within the next 30 years. I don't think he was being inflammatory; I think he really believes it. And so the safety summit was very focused on that.
The point is, though, we're now further down the line. These technologies are much more developed, and people have seen that they can save money and drive profit with them. But I don't think the profit imperative and the safety imperative have to be at war. I think they can go together.
ELISABETH BRAW: Indeed. And since you mentioned nuclear weapons several times: the big difference with the agreements and safety regulations that exist around nuclear weapons is that the only actors involved in the development of nuclear weapons are governments. Whereas with AI, it's a plethora of actors: governments, but especially companies. So how do you establish the rules and limitations that we need in order to ensure safety?
So that's a debate that, I think, will continue to be carried out in a sometimes turbulent manner. But we have made a start. And as you said, Jonathan, the EU and China in particular are ahead of others in creating their own rules for AI. Just to finish off on a somewhat more positive note than nuclear weapons: are there any aspects of AI that you consider undisputedly positive?
JONATHAN KEWLEY: I think the use of AI in healthcare is going to be absolutely transformational. There are tools now that can predict the occurrence of cancer four years earlier than a human being can. I mean, who's going to argue with that? And there's a hospital study at the University of Surrey which showed that AI scanning of chest X-rays, looking for irregularities, is 99.8% accurate, compared with 80% for human beings.
So the level of predictiveness that this is going to bring into our medicine is, I think, going to be extraordinary. But we're seeing it now at an early stage. In many countries we've got straining public health services, particularly with aging populations. Our NHS, and others like it around the world, are really struggling. So if you get these tools that can aid doctors to do their jobs better, speed up decision making, and dial in more accuracy, it's going to lighten the load on medical professionals.
And it means, I think, that the way in which we care for people is going to be much less reactive and much more predictive. Again, few can argue with that. And then if we look at public services, our justice systems, the way in which AI is starting to be used there to process administrative tasks and unblock the judiciary, which in our country and many others around the world is really struggling, that is going to be transformational too.
And let's also realize that human beings can be highly biased and highly prejudiced. So, ending on a positive note, if you can design AI to be less prejudiced and less biased than human beings, you may actually get better outcomes in terms of diversity and inclusion. We're not there yet, but it doesn't mean we can't design it to be better than we are, because we come with our own inbuilt problems. Human beings are not perfect.
So I think if AI is used as a tool to supplement human behavior and positive human action, then it's going to be transformational. Who can argue with that? But again, we have to go into this with our eyes wide open. And like any magical thing, whether it's magic itself or magical technology, if you let it out of the box without sufficient controls, that magic is going to turn dark. And that's the message which people need to take away today.
ELISABETH BRAW: The magic will indeed turn dark. And it's extraordinary to think that a totally banal activity like typing one's own name into ChatGPT can go all the way to killer robots, and that's the space in which our governments, multilateral institutions, and companies need to regulate and think responsibly. Jonathan Kewley, thank you so much for joining Geopolcast, and thank you for shedding light on all these aspects.
JONATHAN KEWLEY: It's been great to be with you today, Elisabeth, as ever.
ELISABETH BRAW: To get Geopolcast episodes as soon as they are released, make sure to subscribe. And you can find us via your usual podcast players. And please recommend us to your friends and colleagues. See you next time.
SPEAKER 1: Thank you for joining us for this WTW podcast featuring the latest perspectives on the intersection of people, capital, and risk. For more information, visit the insight section of wtwco.com. Willis Towers Watson offers insurance-related services through its appropriately licensed and authorized companies in each country in which Willis Towers Watson operates. For further authorization and regulatory details about our Willis Towers Watson legal entities operating in your country, please refer to our Willis Towers Watson website. It is a regulatory requirement for us to consider our local licensing requirements.
The information given in this podcast is believed to be accurate at the date of publication. It may have subsequently changed or been superseded and should not be relied upon to be accurate or suitable after that date. This podcast offers a general overview of its subject matter. It does not necessarily address every aspect of its subject or every product available in the market, and we disclaim all liability to the fullest extent permitted by law.
It is not intended to be, and should not be used to replace, specific advice relating to individual situations, and we do not offer, and this should not be seen as, legal, accounting, or tax advice. If you intend to take any action or make any decision on the basis of the content of this podcast, you should first seek specific advice from an appropriate professional. Some of the information in this podcast may be compiled from third-party sources we consider to be reliable; however, we do not guarantee and are not responsible for its accuracy. The views expressed are not necessarily those of Willis Towers Watson. Copyright 2025, Willis Towers Watson. All rights reserved.