AI in the spotlight: What directors and officers need to know

By Elizabeth Hepburn Miller | February 14, 2025

Key considerations for directors and officers and their financial and executive liability insurance regarding artificial intelligence (AI).

Is AI a flash-in-the-pan buzzword, or has the technology’s potential for sweeping transformation only just peeked over the headline horizon? Though AI has risen quickly to dominate headlines, it’s clear the buzz is here to stay. From the perspective of directors and officers liability risk, heads are already turning to watch how AI-related risks, including securities class actions, unfold. To date, there have been 42 AI-related securities class action filings recorded since 2020, a number that is only expected to grow. In tandem, companies are staring down an ever-evolving patchwork of legislation as regulators, domestically and globally, blow hot and cold on AI’s path.

This rapidly evolving technological environment has captured the attention of directors and officers as they steer their companies’ investment in the tech revolution. Yet they should proceed with caution as regulators and shareholders fixate on disclosure compliance and the fulfillment of legal duties to their organizations and stakeholders. It is vital that boards navigate this complex environment with a watchful eye on surfacing trends.

Through the lens of directors and officers liability risk, the following commentary explores AI’s evolving regulatory landscape, trends in AI-related securities class actions, and predictions for the tech-forward future that warrant a close watch.

Evolving AI regulation: Rulemaking at foreign, federal, state and local levels

As is typical for emerging tech, regulation is both forthcoming and lagging behind AI’s trailblazing path.

Foreign regulatory bodies have been the frontrunners, with the European Union adopting what has been coined the world’s first sweeping AI law.[1] In March 2024, the European Parliament approved the adoption of the EU Artificial Intelligence Act, which will affect all companies deploying or using AI in the EU.[2] The Act, applying to companies as both suppliers and users of AI, seeks to classify and regulate AI applications based on their risk of causing harm, with the highest-risk uses banned entirely and other high-risk uses subject to security, transparency and quality requirements.

In addition, the EU has created an “AI Office,” which will oversee the Act’s implementation and enforcement as its provisions stage in over the next two years. And, as you may be wondering, what are the consequences for noncompliance? Penalties may be significant, reaching up to 35 million euros or 7% of global revenue, depending on the violation and the size of the company.[1] The risk does not stop there. As could be expected with the potential for such large penalties, there is an increased risk of follow-on civil actions brought by investors alleging that management misrepresented the company’s compliance with the Act or violated its duty of care in implementing the Act.

As AI technology rapidly advances across the globe, companies deploying or using AI in the U.S. must stay informed about both federal and state regulations to ensure compliance across the country. Federal AI legislation took center stage after Inauguration Day, as the Trump administration issued sweeping executive orders to reverse the previous administration’s cautionary tone. However, various regulations, proposed laws and frameworks at the federal and state levels already address AI’s impact. At the federal level, regulations focused on privacy, fairness and transparency will still apply to AI applications. At the state level, various efforts to regulate AI are already in motion. It is important, and will be instructive, for directors and officers to watch how these efforts fare.

On the state level, regulations already in place affect use of the technology even where they are not AI-specific. For example, the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA), and similar laws in Virginia, Washington and other states stress the need for transparency in how algorithms process privacy-sensitive data, especially when it impacts consumers and employees. If AI tools process personally identifiable information (PII), companies must ensure compliance with the CCPA/CPRA regarding data access, transparency and consent.

In May 2024, Colorado became the first state to adopt comprehensive AI-specific legislation, which will go into effect in 2026. The Colorado Artificial Intelligence Act (CAIA) requires both AI developers and deployers to adhere to a standard of “reasonable care” to avoid algorithmic discrimination in their AI systems, and it applies to companies doing business in Colorado that develop or use “high-risk artificial intelligence systems.”[3]

Not far behind was California’s flurry of AI legislation at the end of September 2024, with over a dozen bills signed into law. The bills address concerns ranging from intellectual property to safety and security standards for large AI models. Covering the deployment and regulation of GenAI technology, California’s legislative package is the most comprehensive to date on this emerging technology’s risks, aiming to crack down on deepfakes, require AI watermarking, protect children and workers, and combat AI-generated misinformation. Companies doing business in California or interacting with California residents must ensure compliance with these laws. As other states are likely to follow California’s lead, even companies not doing business in California should take note and consider evaluating their compliance measures against the laws’ general themes.[4]

When it comes to using AI as a decision-making tool, we may begin to see laws similar to New York City’s Local Law 144 on automated employment decision tools, which requires companies to undergo an independent audit of AI systems used in hiring, promotions or other employment decisions to ensure the algorithms are not discriminatory. Under NYC’s law, companies must also disclose when AI is used for employment decisions and provide transparency into how those AI systems operate.

It is important to note that a federal version of NYC’s law has been proposed: the Algorithmic Accountability Act, introduced in Congress in 2022, aims to regulate automated decision-making systems by requiring companies to conduct impact assessments for certain AI systems with high potential risks (e.g., discrimination, privacy breaches). It would further require organizations to assess and mitigate bias, discrimination and privacy risks in AI systems. Whether or not it becomes federal law, appropriate due diligence will mitigate the risk of a company becoming a target of litigation or a bad news headline.

On the nationwide stage, regulatory focus noticeably increased over the past two years, though under the Biden administration no sweeping EU-style laws were implemented. Still, it would be remiss not to mention federal laws such as the Civil Rights Act, the Americans with Disabilities Act (ADA) and the Age Discrimination in Employment Act (ADEA), which have already made AI headlines as they build important regulatory boundaries for AI usage in hiring, employment and other decision-making processes. Similarly, in healthcare, AI systems used to process protected health information (PHI) must comply with HIPAA’s privacy and security requirements.

From a securities law standpoint, the SEC under President Biden had a heightened regulatory focus on AI-related risks and disclosures.[5] Though AI will continue to dominate headlines, U.S. federal regulation is likely to unfold much as it has for other evolving public concerns, such as ESG and data privacy protection, with foreign regulation imposing a higher and narrower standard of compliance.

In his first batch of executive orders, President Trump rescinded the Biden administration’s sweeping 2023 executive order on “Safe, Secure, and Trustworthy Artificial Intelligence.” Expectations of a lighter-handed approach to AI regulation under the Trump administration have already started to come to fruition: President Trump has promised to keep the AI industry free of regulation and has made American dominance of the technology a priority, including the announcement of the $500 billion Stargate AI venture by OpenAI, SoftBank and Oracle during his first week in office.[6] However, as showcased by California and Colorado, individual states are likely to continue enacting their own AI legislation.[7] Companies will need to actively monitor and adapt to the regulatory landscape, as AI-related laws and enforcement actions will continue to evolve at both the state and federal levels.

What’s the D&O litigation landscape now? Current AI-related securities class actions

In addition to the focus on AI by domestic and global regulators, AI has been top of mind for shareholder plaintiffs’ lawyers. While the technology’s transformation is still underway, AI-related securities litigation has already taken the stage.

As of this writing, there have been 42 securities class actions filed against organizations and their directors and officers relating to the use of AI.[8] The filings involve a variety of organizations, ranging from companies whose operations focus particularly on AI to those using AI, at least in part, to advance their business pursuits. Almost all AI-related litigation to date has involved allegations that the defendant company misrepresented its AI capabilities or prospects. At the end of January 2025, however, a different kind of AI-related securities case emerged: in the securities class action involving Telus International (CDA) Inc., the plaintiffs alleged the company did not adequately disclose that its adoption of AI would result in the company “cannibalizing” its own business.[9]

Of the 42 cases filed, five have settled and eight have been dismissed, leaving 29 cases pending.

To date, the largest settlement of an AI-related securities class action involved TuSimple Holdings, Inc., a trucking company engaged in the testing of autonomous driving technology.[10] Plaintiffs in the litigation generally alleged the defendants made false and misleading statements in violation of the federal securities laws relating to the efficacy and safety of the technology. The litigation settled in August 2024 for $189 million.

Going forward, it will be interesting to see how AI-related securities claims are resolved as more are filed, as well as how regulatory agencies address concerns surrounding AI, which could pose opportunity alongside risk.[11] Assuming liability can be established, it is reasonable to forecast that organizations may face greater risk of larger settlements where, as in the TuSimple litigation, cases involve the wider use of AI technology that can adversely impact public health and safety.

What do we expect for the future?

We can expect AI-related regulation to continue to be driven by the evolving uses, or rather misuses, of the technology that spur regulators’ attention. However, as foreign regulation of headline-dominating issues outpaces that of the U.S., it is reasonable to predict that companies will need to direct efforts at satisfying the foreign and state regulations in effect while simultaneously navigating changes at the federal level.

Further, AI-related securities claims are expected to continue and could well be exacerbated by event-driven litigation over AI-related consequences as they arise. Included in this prediction is the continued rise of misrepresentation allegations: directors and officers must sign off on company statements, including representations about the use of AI. Though the plaintiffs’ bar may be scouting for missteps, exposure is not limited to potential misrepresentation claims. There is also the potential for allegations of breach of fiduciary duty for failing to manage the company’s exposure to AI, or for failing to sufficiently realize the goals of investments carrying high AI-related startup and implementation costs.

Regulators have warned companies against AI-related misrepresentations. Former SEC Chair Gary Gensler set a cautionary tone in warnings to reporting companies against so-called “AI-washing,” the overstating of the use of AI in products or services, a term coined to echo concerns about climate-change-related “greenwashing.”[12] Backlash from innovators prompted additional statements from the SEC and FTC defending that their false-claims warnings are meant not to stifle innovation but rather to address attempts to capitalize on the strong wave of investor interest in the AI buzzword without real, related opportunities.[13] These warnings were followed by at least three cases brought by the SEC over alleged AI-washing, including an enforcement action involving a publicly traded company. In the waning days of the Biden administration, the SEC filed an AI-washing suit against a restaurant technology company, alleging that between November 2021 and May 2023 the company made misrepresentations about its use of, and the capabilities of, its AI-based product in Commission filings and public statements.[14]

It remains to be seen to what extent alleged AI-related misrepresentations will be a priority for the new administration’s SEC; we may see a mixed bag in just how closely it monitors and enforces companies’ AI-related disclosures. This will likely pose a continued “chicken or the egg” disclosure challenge for companies driven not to be left behind in the AI race.[15]

Liability concerns are not exclusive to the company in the hot seat, as tagalong derivative claims have become an increasingly expensive exposure for directors and officers. Though derivative actions continue to be resolved largely through corporate therapeutics, as they generally have been in the past, monetary derivative settlements continue to rise. The last five years have seen derivative settlements in the eight- to nine-figure range, with multiple such settlements in the past year alone. It would not come as a surprise to see tagalong AI-related derivative claims, or to see them carry expensive price tags to settle.

What are some of the key takeaways? What can companies, their directors and officers, and risk managers learn from these developments? These issues underscore the need for directors and officers to weigh risk and reward surrounding AI-related statements and to stay focused on mitigating outsized claims that could mislead investors, harm consumers or violate securities laws.

While the full range of potential AI-related D&O liabilities is not yet known, plaintiffs and regulators will no doubt identify other types of claims. Claims could emerge from an organization’s alleged over- or under-use of AI, from failures caused by hallucinations (which occur when AI systems generate inaccurate outputs) or from other more or less foreseeable technology issues. It is incumbent on directors and officers to better understand how their AI works and how their organizations are using it, and to assess the risks to their business.

D&O liability insurance considerations

We would anticipate most claims relating to AI-washing or breach of fiduciary duty to fall within the scope of most policies’ coverage, subject to the policies’ terms and conditions. For now, we are not seeing AI-specific exclusions in D&O policies. However, we recommend that companies confer with their broker and counsel to assess whether any cyber exclusions in private company policies could restrict coverage for AI claims or losses.

An increasingly popular coverage inquiry is the extension of entity investigation costs coverage to public company policies. This enhancement is available in the marketplace and can offer meaningful protection given the growing cost of regulatory investigations. However, businesses looking to buy this cover typically find the following:

  • First, available coverage is often narrowed to investigations that involve a concurrent securities claim; and
  • Second, the enhancement is customarily made available for additional premium, which can vary on a case-by-case basis.

At all times, we encourage companies to review their policy terms and conditions and consult with their broker and counsel to more fully understand the scope of coverage.

In addition to policy wording and coverage considerations, AI is becoming more of a focus for D&O underwriters. Companies, particularly public companies, should be prepared to address AI-specific inquiries at renewal, including into areas such as risk/reward assessments around the use of AI, controls to monitor its use, and board-level expertise and oversight, among others.

Concluding thoughts

As companies attempt to navigate the rapidly evolving AI landscape, directors and officers should be mindful of corporate and securities litigation-related risks, as well as an evolving maze of regulatory activity. Of notable interest, of course, will be the actions of the new presidential administration and their impact on AI-focused regulation and enforcement activity.

If AI is not already in focus, companies must make it a top board agenda item. As steps are taken toward research, investment, adoption and integration, due diligence in the purpose and use of AI will be as imperative as substantiating material representations to stakeholders about its usefulness. In consultation with their corporate counsel, companies will need to consider the materiality of AI-related disclosures, such as the extent to which they use AI systems internally, or the extent to which their operations could be affected by competitors’, customers’, suppliers’ or vendors’ use of AI.

Further, boards will need to stay vigilant in mitigating the risks of AI use. For example, many companies have included AI-related references in their risk disclosures, with some noting that the company could be harmed if its applications produce faulty analyses or recommendations, along with the liability those unintended outcomes could bring. On the other hand, some companies have noted that the greater long-term risk may be competitive disadvantage if competitors deploy AI faster than they do.[16] Overall, litigation trends, regulatory actions and continued disclosure considerations will be a must-watch for boards in 2025.

Footnotes

  1. Artificial Intelligence Act: MEPs adopt landmark law.
  2. The EU Artificial Intelligence Act.
  3. Concerning consumer protections in interactions with artificial intelligence systems.
  4. Governor Newsom announces new initiatives to advance safe and responsible AI, protect Californians; Decoding California’s Recent Flurry of AI Laws.
  5. Securities and Derivative Litigation: Quarterly Update.
  6. Trump rescinds Biden’s executive order on “safe, secure, and trustworthy” AI.
  7. US Federal Regulation of AI Is Likely To Be Lighter, but States May Fill the Void.
  8. Securities Class Action Clearinghouse, a collaboration with Cornerstone Research, “Artificial Intelligence” prompt, accessed January 24, 2025.
  9. Class action complaint for violations of the federal securities laws.
  10. Self-driving truck company TuSimple settles fraud lawsuit for $189 million.
  11. Securities and Derivative Litigation: Quarterly Update.
  12. SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence.
  13. Regulators Say AI Enforcement Sweeps Are Reining in Hucksters, Not Innovation.
  14. SEC Charges Restaurant-Technology Company Presto Automation for Misleading Statements About AI Product.
  15. AI, Risk, and Public Company Disclosures.
  16. Id.

Disclaimer

WTW hopes you found the general information provided in this publication informative and helpful. The information contained herein is not intended to constitute legal or other professional advice and should not be relied upon in lieu of consultation with your own legal advisors. In the event you would like more information regarding your insurance coverage, please do not hesitate to reach out to us. In North America, WTW offers insurance products through licensed entities, including Willis Towers Watson Northeast, Inc. (in the United States) and Willis Canada Inc. (in Canada).

Author


Elizabeth Hepburn Miller
Senior Associate, FINEX Commercial