Technology and AI governance remains a top concern for corporate directors and executives in 2025, particularly around safeguarding data, managing new technologies and ensuring the necessary skills exist in the boardroom and across the organization. Effective board members understand both the importance of technology for their businesses and their role in governing it.
According to the National Association of Corporate Directors 2025 Trends and Priorities Survey, three of the 10 Directors' Top Trends for 2025 involve technology governance. Cybersecurity threats and AI remain at the center of director concerns around technology. In WTW’s November 2024 Emerging and Interconnected Risks Survey, executives worldwide ranked AI and cyber risk as the top two of 752 emerging risks. Additionally, WTW’s 2025 Directors' and Officers' Risk Survey reports that data loss and cyberattacks both rank among the top three risks.
As covered in “AI requires dynamic governance to seize opportunities and manage risks,” effective leaders have shifted from traditional risk management protocols to more dynamic and responsible governance models for managing AI’s growth across industries and applications while adhering to their values.
Classical rules-based governance structures and processes often fail to address AI’s unique challenges and opportunities and cannot keep pace with rapid advancements. Effective leaders adopt guiding principle-based governance practices that allow their organizations to benefit from AI technologies while reducing risks and increasing trust and accountability.
Effective leaders employ responsible AI – the process of developing and operating AI systems that align with organizational purpose and values while achieving desired business impact. Responsible AI governance models are flexible and responsive and include mechanisms for regular updates, feedback loops and continuous improvement. These models enable leaders to design governance practices specifically addressing AI, adapt to internal and external changes and remain effective and relevant in both the short and long term.
Recently, at an AI roundtable in London hosted by TWIN Global, professor and corporate director Dr. Helmuth Ludwig shared insights from his research conducted in partnership with professor Dr. Benjamin van Giffen regarding board effectiveness in governing AI.
Ludwig and van Giffen reported that although AI is top of mind even for nontechnical business executives and board members, most boards struggle to understand both the implications of AI for their businesses and their role in governing it. The authors identified four categories of board-level AI governance issues and examples of effective practices for each.
01
Effective boards recognize AI as a strategic enabler and differentiator that influences an organization’s competitive position and business model.
These boards adopt two key practices: First, they ensure that AI is reflected in business strategy, for example, addressing how AI is impacting business decisions and priorities, as well as the execution of those priorities. Second, they incorporate AI into the board’s annual strategy meeting, including internal and external views, ensuring systematic, board-level discussions about AI.
02
Effective boards govern capital allocation decisions related to AI by promoting experimentation with AI, securing investment for platforms and tools, and enhancing AI capabilities through external partnerships and potential mergers and acquisitions. They use the annual budgeting process to fund both foundational and differentiated AI capabilities across the company and to support external AI partnerships and M&A.
03
Effective boards treat AI risk oversight as a key board duty. They stay informed about AI technology developments and applications relevant to their business strategy and its execution. They review publicly available cases of AI bias and discrimination, which occur even at highly advanced technology companies. They recognize how AI introduces new risks and adds complexity to risk management protocols.
Effective boards actively ensure the implementation of processes that manage AI risks. They focus on ethical and reputational risks arising from the use of AI, data risks, and legal and regulatory risks. They invite AI risk experts to present at board or audit/risk committee meetings and integrate AI into enterprise risk management activities.
04
Effective boards focus on the technology competence of both the board and the executive management team.
To engage competently in critical AI governance decisions, effective directors build at least a foundational level of AI competency. This may mean expanding the role of board committees (such as setting up technology and innovation committees). Effective boards also make certain that the CEO and executive management are ready to execute the company’s AI agenda, and they assess AI competencies in succession planning and CEO search criteria.
Effective boards assess whether their directors are ready for AI, and they “seize a moment of impact” to transition from merely acknowledging AI’s importance to actively discussing it and initiating board-level AI governance.
A version of this article originally appeared on Forbes on March 28, 2025.