Webcast | Managing Risk

AI: What principles and best practices do risk managers need to know now?

By John Merkovsky, Lisa Lipuma and Lauren Finnis | June 25, 2024

AI is opening up new opportunities and risks. How can risk managers protect and grow their organizations as AI becomes increasingly embedded in business operations?

Navigating the risks and opportunities of artificial intelligence (AI) is becoming crucial to many organizations’ efficiency and strategic capabilities. As your organization explores integrating AI – particularly generative AI (genAI) – into your business processes, your role as a risk manager is to understand where and how it’s being used and to apply the right risk governance around AI developments.

To help you stay ahead of the implications of AI for your organization and your role, this insight, based on an expert Outsmarting Uncertainty webinar, explores a range of need-to-know AI areas:

How genAI works – a quick reminder

Generative AI operates by analyzing vast amounts of data to create new, original content. This capability spans various media, including text and audio, building upon earlier AI technologies like machine learning and natural language processing.

GenAI opens up possibilities to boost productivity, innovate product offerings and personalize customer interactions. For instance, at WTW, we’re using genAI tools to review complex legal contracts more efficiently and to create tailored content that resonates with targeted audiences.

However, deploying genAI successfully can present significant challenges, including the quality and quantity of data you need to effectively train AI models. If your organization is long-established, it could have access to extensive historical data, but may struggle with legacy systems not optimized for AI. If you work in a newer company, you may have modern, adaptable platforms, but lack substantial data. Using genAI effectively requires both high-quality data and a robust infrastructure to support AI applications.

Regardless of where you are in your AI journey, as a risk professional, you’ll need to understand the full range of risks that using AI poses.

What are the key AI risks you need to manage?

  • Data privacy and intellectual property (IP): AI systems process vast amounts of data, raising significant concerns about data privacy and IP rights. If mismanaged, this can lead to breaches or unauthorized data usage, violating privacy laws and damaging trust.
  • Bias, explainability and quality: AI systems can inadvertently perpetuate existing biases if they're trained on biased data. A lack of transparency can make it difficult to explain decisions made by AI, complicating compliance with regulations demanding accountability. And when AI generates outputs based on flawed, incomplete or biased data, it can produce errors or suboptimal decisions that affect your operational quality and reliability.
  • Content moderation: To identify and stop the spread of harmful content via your AI uses, you may need to invest in specific content moderation capabilities.
  • Over-reliance: Excessive dependence on AI technologies can lead to vulnerabilities, particularly if these systems face downtime or other operational issues, potentially crippling your critical business processes.
  • Workforce disruption: The capabilities of AI to automate complex tasks can lead to job displacements, requiring you to adopt a strategic approach to workforce management and re-skilling.
  • Hallucination: AI might generate false or misleading information, which can lead you to make incorrect decisions based on this data.
  • Environmental concerns: Using AI can increase your greenhouse gas emissions due to the energy-intensive demands of the data centers needed to deploy AI.
  • Regulatory non-compliance: Failing to comply with emerging AI regulations can lead to fines and reputational damage.
  • Third-party reliance: Depending on third-party services for your AI applications can introduce risks related to service continuity and data security.
  • Crime and deepfakes: Criminals can use AI for fraudulent activities or to enhance social engineering attacks, leading to financial and reputational losses.
  • Competitive landscape: You risk losing market share if your competitors leverage AI successfully and outpace you.

How to use ERM to address AI risks and opportunities

Enterprise risk management (ERM) is a systematic approach to managing risk and offers several key principles and practices that can help you navigate the complexities of AI integration more effectively:

Risk identification and assessment is a continuous ERM process you can integrate into strategic planning around AI implementations. This involves understanding the specific risks associated with AI, including data privacy issues, bias and explainability concerns and the potential for AI to ‘hallucinate’ or generate false outputs. By defining these risks clearly, your organization can better prepare for and mitigate the potential adverse impacts on your operations and reputation.

Integration into business processes is a crucial part of ERM and an essential part of aligning AI initiatives with your organization's overall strategy. For AI, this integration will be about ensuring implementations don't operate in silos but are part of a holistic approach to managing enterprise risks. Integration will likely involve collaboration across departments — such as IT, legal, operations and finance — to foster consensus on how you’ll approach identifying, assessing and mitigating complex risks.

Role clarity and stakeholder involvement are all part of an effective ERM process. For AI, this role clarification could include defining who's responsible for monitoring performance, who handles data governance, who's accountable for compliance with regulations and how these roles interact within the broader ERM framework. Stakeholder involvement is critical to ensure you consider different perspectives when assessing the risks and thoroughly evaluating all the potential impacts.

Monitoring and reporting are essential components of an effective ERM framework. This could involve regular reviews of AI systems to ensure they function as intended and don’t deviate from expected behaviors. AI risk-monitoring might also include mechanisms to detect and respond to AI-generated errors or failures promptly. Reporting, meanwhile, can ensure you keep all stakeholders, including leadership, informed about AI performance and any risks that may arise, facilitating timely risk management decisions.

Mitigation and continuous improvement is about developing strategies to mitigate your specific risks. For AI, this might include implementing controls to prevent data breaches, adjusting AI models to reduce bias, or establishing protocols to handle AI malfunctions. ERM best practice encourages you to view risk mitigation as an ongoing improvement process, where feedback and new information lead to continual refinement of your risk strategies. This approach will help you address both existing and emerging AI applications.

Scenario planning and testing will help you predict potential outcomes from AI risks and test how your organization handles these situations. This proactive approach allows you to assess the effectiveness of your risk mitigation strategies and make the necessary adjustments before any real issues occur.

Strong leadership that endorses a risk-aware culture is an important part of effective ERM and, in the context of AI, helps ensure your organization treats managing the risks as a critical component of all business activities.

Overall, ERM’s strategic approach to risk management can provide the appropriate risk governance that allows AI technologies to contribute positively to business objectives while safeguarding against the potential threats.

To discover best practice principles for organizations considering the risks and opportunities of AI, watch the full webinar on demand by filling out the form on this page.

For expert support on finding a smarter way to manage AI risks and opportunities, get in touch.

Authors

John Merkovsky
Head of Risk & Analytics and Global Large Account Strategy, WTW

Lisa Lipuma
Director, Risk and Analytics

Lauren Finnis
Head of Commercial Lines, North America, Insurance Consulting and Technology
