Article | FINEX Observer

Emergent AI and potential implications for cyber, tech & media coverages

By Joe Quinn | November 9, 2023

In 2023, AI is a significant topic carrying broad legal and regulatory risks. Traditional insurance may not fully cover AI risks, underscoring the need for tailored coverage solutions.

The exponential growth of AI

Artificial intelligence has been the most talked-about topic of 2023, despite being one of the least understood. The rapid development of such high-profile online platforms as OpenAI’s generative AI chatbot ChatGPT, and of the GPT-4 model released in March 2023 that powers its most capable version, has ignited debate over not only the future of industry and commerce, but of humanity itself. This article will not attempt to address the broad-ranging, existential discussions surrounding AI. Instead, it focuses on the potential implications of emerging AI technologies for traditional cyber, media and tech insurance coverages, and provides some practical tips for businesses incorporating AI into their operations, whether through AI vendors or, less frequently, through home-grown AI.

Companies are already investing heavily in AI, and this investment will likely accelerate over the next several years, with spending forecast to exceed $200 billion by 2025. In 2023, investors flocked to companies such as Nvidia (NASDAQ: NVDA), which produce the specialized chips required to run robust generative AI engines and which boosted markets despite rising interest rates and other inflationary pressures that many expected would slow the economy.

Despite the ubiquitous commentary surrounding ChatGPT, many other flavors of AI are being developed and applied to virtually every industry in some way or another. These include machine learning, deep learning, large language modeling, anomaly detection, computer vision, reinforcement learning, operational technology, robotics, and cognitive computing, just to name a few. You are not alone if you do not know what these terms mean. But be wary of technological gatekeeping, and dig into the definitions of these terms with your AI vendor, if you have one, or with your own data scientists and software engineers where possible. Push your AI vendors to explain any terms you do not understand in plain English so that you can better understand whatever AI your business may be adopting. Given the multitude of applications for AI, virtually every industry will adopt it in some form or fashion, so the more you can learn, the better the discernment and business acumen you will be able to bring to the discussion.

Inherent risks associated with emergent AI

While the power and potential for AI to bring great productivity and prosperity is undeniable, it is not without risks. Following is a list of just some of the leading legal and regulatory risks associated with this AI revolution:

  • Privacy and data protection: AI often relies on vast amounts of data. Collecting, processing, and storing this data must comply with privacy and data protection regulations such as the GDPR in Europe, the CCPA in California, and the similar laws being proposed and enacted in a growing number of other states. It is likely that the EU will establish some sort of AI-specific regulatory framework before any similar framework materializes in the US. As of now, leaders of both parties in the US have merely held discussions about a national framework, and with the upcoming presidential election in 2024, it is unlikely that much will happen in terms of federal legislation until at least 2025.
  • False advertising/deceptive trade practices: If an AI-generated product is marketed as human-made, that misrepresentation may run afoul of federal and state laws prohibiting unfair and deceptive trade practices. The Federal Trade Commission (FTC) has released guidance stating that Section 5 of the FTC Act, which prohibits “unfair and deceptive” practices, gives it jurisdiction over the use of data and algorithms to make decisions about consumers and over chatbots that impersonate humans. Companies should also be transparent when collecting sensitive data to feed into an algorithm powering an AI tool, explain how an AI’s decisions affect a consumer, and ensure that those decisions are fair.
  • Antitrust and competition law: The use of AI in business can raise concerns about anti-competitive behavior, particularly when AI systems are used to optimize pricing, marketing, and supply chain management.
  • Contractual issues: Contracts involving AI, such as licensing agreements, service level agreements, and terms of use, need to address issues specific to AI technology, including data usage and performance guarantees.
  • Intellectual property infringement: AI systems that use copyrighted content may inadvertently infringe on the intellectual property rights of others. Several lawsuits against OpenAI are pending, along with similar suits against other companies that have trained their AI models on copyrighted content or that generate their own creative content through the use of generative AI. And in the first decision of its kind, a federal court judge upheld a decision of the US Copyright Office to deny copyright protection to AI-generated content, stating that “human authorship is an essential part of a valid copyright claim.” Courts have not, however, provided guidance on what level of human involvement would meet that “authorship” threshold.
  • Bias and discrimination: AI algorithms can perpetuate and even exacerbate biases present in their training data. This can lead to discrimination in decision-making processes, which may violate anti-discrimination laws. In one recent case, the EEOC settled its first-ever AI discrimination lawsuit against a tutoring company after the agency alleged that the company’s AI system for reviewing job candidates had automatically rejected female applicants aged 55 or older and male applicants aged 60 or older. This litigation and agency enforcement trend is expected to increase as more models are used in the hiring space.
  • Data drift: Data drift refers to the phenomenon where the statistical properties of the data a machine learning model encounters in production shift away from those of the data it was trained on, degrading the model’s performance over time. It can occur for various reasons and can significantly impact the accuracy and reliability of AI models, leading to increased errors and, ultimately, a loss of trust in AI systems. Detecting and addressing data drift is therefore crucial for maintaining the performance of AI models in production. To mitigate it, organizations monitor data quality, retrain models periodically, and implement data preprocessing strategies that adapt to changing data distributions. Data drift detection tools and practices are essential components of a robust AI model deployment and maintenance process (a simple illustration of one detection approach follows this list).
  • AI model “black box” performance issues: AI algorithms can be complex and difficult to interpret. This lack of transparency and accountability can make it challenging to assign responsibility when something goes wrong with the model itself, such as the generation of “toxic content” or “hallucinations,” which can occur even absent any human error or fault. Many experts can tell you what an AI model’s output should be, based on the inputs to that model, but they may not be able to articulate exactly how the model arrives at it. In other words, it is possible to quantify a model’s accuracy rate based on available data, including inputs and outputs, but what happens within the model can be a mystery.
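
To make the data drift discussion above more concrete, the following is a minimal, illustrative sketch of one common detection approach: comparing the distribution of a model input in recent production data against the distribution seen at training time, using a two-sample Kolmogorov-Smirnov test. The function name, thresholds and synthetic data below are hypothetical assumptions for illustration only, not any particular vendor’s tooling.

# Illustrative data drift check (Python). Compares the distribution of a
# numeric feature in recent production data against the distribution seen
# at training time. All names, thresholds and data here are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(training_values, production_values, alpha=0.01):
    """Flag drift when a two-sample Kolmogorov-Smirnov test finds the
    production distribution significantly different from training."""
    result = ks_2samp(training_values, production_values)
    return result.pvalue < alpha, result.statistic, result.pvalue

if __name__ == "__main__":
    rng = np.random.default_rng(seed=42)
    # Reference data resembling what the model was trained on
    training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
    # Recent production data whose mean has quietly shifted upward
    production_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)

    drifted, stat, p = detect_drift(training_feature, production_feature)
    print(f"KS statistic={stat:.3f}, p-value={p:.3g}, drift detected: {drifted}")
    # In a real deployment, detected drift would typically trigger an alert,
    # a data-quality review and, where warranted, model retraining.

In practice, a check along these lines would run on a schedule for each key model input, with detected drift feeding the monitoring, retraining and preprocessing practices described above.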

Do traditional cyber, media and tech insurance coverages fully address all the risks of AI?

The short answer to this question is no, not yet. Traditional coverages address some, but not all, of the risks associated with AI. As any good lawyer would say, “it depends.” How is the AI being employed? Is it being provided by vendors or is it a home-grown AI model? Is it being used for internal purposes, or is it interacting with third parties, especially clients and consumers, daily? Is it being used in operational technology that can cause immediate and significant disruption to a business if it underperforms? All these questions impact the risk level, and the relative insurability of those risks.

Cyber: Data privacy

One of the most common risks addressed by cyber insurance is data privacy. The Samsung data leak from April of this year is a cautionary tale. The leak occurred when a well-intentioned employee seeking a solution to an operational problem decided to enter proprietary source code into ChatGPT to seek fixes. Unfortunately, once that source code was entered into the OpenAI platform, it was potentially discoverable by anyone else using the platform. OpenAI has had the advantage in generative AI precisely because it was the first to launch into the “wild.” In other words, the continuous input of data from users has been the driving accelerant of the technology, giving it a major advantage over competitors being developed by Google, Microsoft, Amazon and others.

But as demonstrated in the case of Samsung, this open platform gives hackers a new playground in which to create malware of their own, often using any available source code that has been accidentally or maliciously released into the wild as prompts to exploit that code and gain unauthorized access. Hackers are also using their own devious generative AI models, through applications such as WormGPT, to create more targeted and believable phishing, smishing and vishing schemes on steroids. Leaking data into OpenAI or elsewhere only gives these bad actors more ammunition for their already sophisticated arsenal.

In response to the leak, Samsung banned employees from using ChatGPT to assist with job functions. Many companies have similarly banned employees from entering sensitive information into the model. In response, OpenAI has introduced enterprise versions of its model that it promotes as having more stringent data protections for businesses using this separate platform.

Having a written policy outlining what uses, if any, of ChatGPT or other generative AI are permitted, and which are not, is an important step most businesses should consider. It will likely be a question on cyber insurance applications as well. Such a written policy could even come into play should a rogue employee release information into the “wild.” Many policies define a rogue employee as one who is “acting outside of the scope of their employment.” By putting that scope in writing, it becomes easier to trigger coverage when someone, especially a rogue employee, acts outside of it.

In the case of the Samsung leak, a traditional cyber policy would provide third-party data privacy coverage, providing for a defense and indemnity in the event someone (likely a customer) sues the company for the unauthorized disclosure of personally identifiable information (PII). However, a cyber policy does not typically provide any first-party coverage for the unauthorized disclosure of an insured’s own proprietary data, trade secrets or other confidential corporate information, such as source code.

Therefore, if your company engages with an AI vendor, detail what sensitive data the vendor may use, how they may use it, how long they may use it, what they must do to protect it from disclosure, and what they need to do to return or destroy the data when the contract ends. To back up those contractual promises, especially when many AI vendors are start-ups and less than optimally capitalized, it is wise to require that vendor to maintain its own cyber policy (even if at lower limits) to provide some coverage for any third-party data privacy breaches.

Because AI models have such unprecedented power to collect and analyze large swaths of data, be sure to also review your cyber policy’s language regarding unauthorized or wrongful collection, including any references to biometric data (or BIPA), pixel tracking, and the recently resurrected zombie from a VHS-era horror movie known as the Video Privacy Protection Act (VPPA), which dates back to … you guessed it, the Eighties. In response to an increasingly aggressive plaintiffs’ class action bar, some of whom are adding RICO counts to their lawsuits for unauthorized collection, several insurers have inserted exclusions, sub-limits, or defense-only provisions. In the spirit of Halloween, that’s a scary scene. Be sure to review your policy with your broker.

One final note: if you are concerned about your own employees either accidentally (as in the Samsung matter) or maliciously leaking your corporate, proprietary and protected data into a platform like ChatGPT, it is worth considering products such as WTW’s Intangible Asset Protection Policy. These policies are specifically designed to insure the scheduled intangible assets themselves, such as source code, against both accidental and malicious insider events. Talk to your broker to learn more.

Cyber: Ransomware and business interruption

If you engage AI vendors that furnish models essential to your operation, it is wise to require that they carry cyber insurance coverage of their own, including ransomware coverage. Because AI vendors are often start-ups, they may not be able to obtain limits that match your own, especially if you are a large enterprise. Despite this potential impediment, any coverage is better than none.

In addition, if you are a buyer of AI vendor services critical to your operation, you should carry your own dependent/contingent business interruption coverage under your policy and make sure your AI vendor is considered a covered provider. Doing so will provide an immediate response mechanism for your own business to manage the crisis and any fallout that results from an attack on the AI vendor.

Media

If your company engages with an AI vendor that needs to train its model on data, it is recommended that the vendor carry some form of media liability coverage, which generally provides some defense and indemnity for copyright infringement. As with cyber liability, your company should also consider some form of multimedia liability coverage for this potential exposure, especially if your AI model is home-grown and you are training it on potentially copyrighted works. Many carriers will bundle media liability as part of the cyber policy, while others may add it as a separate endorsement. Discuss your options with your broker.

Tech

A traditional technology liability policy is designed to indemnify and defend an insured for third-party claims for negligence in the provision of technology services and technology products, as defined by the policy. While this coverage is generally applicable to the services of an AI vendor, it can be a crude instrument when it comes to the nuances of AI risks.

For one, it is unclear whether an AI model would meet the criteria of being a technology product or a technology service. In most cases, the AI vendor contract will cover all phases of the training, implementation, and maintenance of the AI system, and will therefore encompass both services and products. Most AI models are only as good as the people and entities providing them with data. It is therefore rare that an AI vendor contract would be considered purely a product supply contract, and any analysis of whether coverage is triggered under an associated tech E&O policy would necessarily involve the parties’ conduct and the provision of technology services.

This is where the “black box” problem looms over a Tech E&O claim for underperformance. What if the vendor did everything it said it would do and committed no human error or other negligent conduct that caused the underperformance? How will the aggrieved party prove its case? What negligence standard will be applied to this relatively new technology and industry? And how long will a case, with its battle of AI experts, linger in the courts before it is resolved? Such an expert-heavy lawsuit could be problematic in a world where, as noted earlier in this article, many AI experts cannot tell you HOW certain models actually, well, “work” in the first place.

Moreover, what if the toxic content, hallucinations or data drift were actually the fault of the company buying those AI vendor services, which failed to provide accurate data to the vendor and its model, or failed to monitor the model for potential problems, such as bias or discrimination?

Given the number of unanswered questions, one can see that the traditional Tech E&O policy may not provide comprehensive protection for all of the unique risks of AI, nor an immediate mechanism to address very real and significant financial impacts. Imagine an AI-based self-checkout system designed to catch retail theft failing to work properly, causing immediate financial loss. Unless the model itself can be underwritten, failures of the model absent negligence by an AI vendor may go uncovered.

In this sense, Tech E&O policies appear to be crude tools ripe for updating and refinement to adapt to these new risks. While few such products have entered the insurance market to date, we anticipate seeing more offerings, such as those from global insurer Munich Re, that more specifically target the inherent risks of AI models themselves. We will continue to monitor the landscape as more products like these are developed and brought to market.

Conclusion

For better or worse, AI is here to stay, and its impact will continue to increase exponentially, so it is critical to identify those areas in which you plan to use it and then build a resilience strategy around those uses. Your strategy should consider the operational importance of the AI and the relative risk that underperformance of the model may pose to day-to-day operations. At the outset, while you may not have access to the inner workings of a particular AI model, investigate what inputs are essential for that model and what its expected outputs should be, based on those inputs. Make sure your AI vendor agreement clearly spells out the relative rights to, and uses of, customer data and corporate proprietary information and outlines a plan for the protection of that data, or its destruction, upon termination of the agreement. Also, beware of any implicit biases or discrimination if your AI model in any way affects current or prospective employees. Consider the cyber, media and tech coverages that both the AI vendor and you, the buyer of those services, should have in place as an extra layer of protection in case the model underperforms. Finally, explore new products coming to market that can help cover some of the potential gaps or shortfalls of traditional coverages, as the insurance market, like us all, tries to catch the AI tiger by its tail and keep up with such a rapidly developing technology.

Disclaimer

Willis Towers Watson hopes you found the general information provided in this publication informative and helpful. The information contained herein is not intended to constitute legal or other professional advice and should not be relied upon in lieu of consultation with your own legal advisors. In the event you would like more information regarding your insurance coverage, please do not hesitate to reach out to us. In North America, Willis Towers Watson offers insurance products through licensed entities, including Willis Towers Watson Northeast, Inc. (in the United States) and Willis Canada Inc. (in Canada).

Author

Joe Quinn, FINEX NA Cyber/E&O Coverage Analyst
