How focused are boards on AI as a possible risk?
AI has received almost constant press coverage in the few years since ChatGPT and its competitors were launched. Many commentators have highlighted AI as a potential risk area, suggesting that AI-washing could generate claims and regulatory investigations in much the same way that climate change has.
Notwithstanding the commentary, AI is not highly ranked by respondents to the Global Directors and Officers Survey: it sits 21st out of 30 risks for directors and officers. It has been suggested that respondents distinguish between the risk to directors and officers personally and the risk to the business. In our most recent survey, however, we put some new questions to respondents, asking about skills and knowledge at board level, where the board should be spending more time, and how material to the business the various matters competing for the board’s time are[1].
AI ranks as:
So AI is neither ranked as a significant risk for directors and officers, nor seen as an area material to the business, nor as an area where the board should be spending more time.
Given that data loss and cyber-attack rank as the number two and three risks overall for directors and officers, it is perhaps surprising to see AI ranked so low by comparison.
So why isn’t AI ranked more highly by the survey respondents? Our clients are increasingly being asked, “What are you doing about AI?” Setting aside the somewhat oversimplified nature of the question, we frequently hear the response that it is still early days. In our experience, most companies still see AI technologies as an emerging field, with the focus on understanding the potential opportunities rather than on AI as an immediate risk.
AI technologies are undoubtedly recognized for the vast opportunities they can create, but how they are used, and consequently the risks they present, can vary significantly depending on factors such as company size and industry sector. The AI use cases we have seen can be quite different, ranging from generating operational efficiencies at one end to new product development and innovation at the other.
From a risk perspective, it remains to be seen at this stage whether AI will create significant new risks for directors and officers. What we have seen so far are examples of AI amplifying existing risks. A good example is the increased cyber security threat from AI-generated phishing attacks and deepfake videos. What used to be a sophisticated and complex capability has increasingly become commoditized and available to a broad range of threat actors, enabling attacks at scale. We have already seen examples where this has led to significant financial losses for companies whose boards and senior executives were specifically targeted.
In addition, the regulatory environment around AI is starting to take shape. The EU’s Artificial Intelligence Act, which came into force in 2024, is the world’s first comprehensive AI law, and the recent US executive order on AI safety will undoubtedly bring further focus to the potential risks and exposures.
Developments in this area continue to move fast, and as companies further integrate AI into their operations, we anticipate that concerns about the associated risks will grow. Whilst companies may not yet have fully formulated their AI strategies, it’s essential that boards educate themselves to stay ahead, balancing enthusiasm for AI’s benefits with prudent oversight of its evolving risks.