
How AI creates cybersecurity vulnerabilities – and what to do about it

By Sean Plankey | August 26, 2024

AI presents opportunities for innovation and efficiency, but also introduces data and cybersecurity risks. Through a multifaceted data security approach, you can protect your data and tap AI’s potential.

Generative AI tools exploded onto the scene in 2023 and were quickly adopted by consumers and businesses alike. But these tools can create vulnerabilities in your data security. Protecting data and critical intellectual property (IP) requires preventative measures and additional governance.


Vulnerabilities in AI systems

Since 2023, all major software companies have integrated AI into their core software offerings. For example, Microsoft launched Copilot for GitHub and Power BI, allowing coders and data analysts to receive support from AI. Yet such integration creates vulnerabilities in the software itself. Attackers can take advantage of these weaknesses to compromise a system’s functionality. As AI use continues to expand through mainstream software adoption, poorly designed or trained models may leak corporate or personal information through inference and prompt engineering attacks.

Let’s consider a scenario where a corporation uses proprietary data to train an AI algorithm. Without proper controls or governance, the AI algorithm may “see” too much corporate data, including:

  • Business strategy
  • Customer information
  • Schedules
  • Trade secrets and IP

An attacker who gains access to the AI might continually prompt it with leading questions until the AI unwittingly reveals corporate IP.
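
One hedged illustration of a countermeasure against this kind of extraction is a simple output filter that screens model responses for sensitive markers before they reach the user. The patterns, the `query_model` stub and the redaction policy below are hypothetical placeholders for illustration, not a real assistant API or a complete defense.

```python
import re

# Hypothetical markers for sensitive corporate content; a real deployment
# would rely on data-loss-prevention tooling or classifiers, not bare regexes.
SENSITIVE_PATTERNS = [
    re.compile(r"project\s+codename", re.IGNORECASE),
    re.compile(r"customer\s+list", re.IGNORECASE),
    re.compile(r"\bapi[_-]?key\b", re.IGNORECASE),
]

def query_model(prompt: str) -> str:
    """Stand-in for a call to an internal AI assistant (placeholder only)."""
    return "Draft answer that might quote a customer list or trade secret."

def guarded_answer(prompt: str) -> str:
    """Return the model's answer only if it passes a basic leak check."""
    answer = query_model(prompt)
    if any(pattern.search(answer) for pattern in SENSITIVE_PATTERNS):
        return "[Response withheld: possible disclosure of protected data]"
    return answer

if __name__ == "__main__":
    print(guarded_answer("Summarize our Q3 strategy for me."))
```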

In another scenario, an attacker might feed the AI system erroneous or incomplete datasets to cause it to offer wrong, biased or inaccurate predictions. Depending on the speed of the company’s decision cycle, this could be a very costly attack.

These attacks hinge on the training data the AI system receives. The system can only make predictions based on the prior decisions captured in that data, so if it is fed incorrect or falsified data, its predictions will be false or even harmful.
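
As a hedged illustration of how poisoned training data skews predictions, the toy example below trains the same scikit-learn classifier twice, once after an attacker flips a fraction of the training labels. The synthetic dataset, flip rate and model choice are illustrative assumptions, not a real corporate pipeline.

```python
# Toy demonstration: label-flipping ("data poisoning") degrades a model.
# Requires numpy and scikit-learn; the data and flip rate are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Attacker flips 30% of the training labels before the model is trained.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("accuracy, clean training data:   ", clean_model.score(X_test, y_test))
print("accuracy, poisoned training data:", poisoned_model.score(X_test, y_test))
```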

These scenarios aren’t hypothetical. For example, GitHub Copilot or Gemini Code Assist can be configured to read your entire software code base. While this may be helpful to your developers, it also gives the AI deep access to your core intellectual property. This is especially dangerous for software and technology companies that derive most of their revenue from IP. For these companies in particular, governance and data security controls over AI access are critical.
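
As one hedged sketch of such a governance control, the script below walks a repository and flags paths that probably should be excluded from an AI coding assistant’s context. The suffixes and directory names are assumptions for illustration; this is not the actual exclusion mechanism of GitHub Copilot or Gemini Code Assist, only a way to build the list you would feed into whatever mechanism your tooling provides.

```python
# Minimal sketch: list repository files an AI coding assistant probably
# should not index. Suffixes and directory names are illustrative only.
from pathlib import Path

SENSITIVE_SUFFIXES = {".pem", ".key"}                       # keys and certificates
SENSITIVE_NAMES = {".env"}                                  # local secrets files
SENSITIVE_DIRS = {"secrets", "licensing", "pricing_engine"}  # hypothetical core IP areas

def paths_to_exclude(repo_root: str) -> list[str]:
    """Return repo-relative file paths matching the sensitive patterns."""
    root = Path(repo_root)
    flagged = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        in_sensitive_dir = any(part in SENSITIVE_DIRS for part in path.parts)
        if path.suffix in SENSITIVE_SUFFIXES or path.name in SENSITIVE_NAMES or in_sensitive_dir:
            flagged.append(str(path.relative_to(root)))
    return sorted(flagged)

if __name__ == "__main__":
    for rel_path in paths_to_exclude("."):
        print(rel_path)
```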

How to reduce AI-driven cybersecurity risks

Addressing the data security threats AI poses requires a multifaceted approach that combines technical solutions, clear governance and organizational readiness. Organizations can adopt several strategies to mitigate these risks:

  1. Improve AI security posture: Implement robust data security measures for AI systems, including access controls, continuous monitoring for unusual behavior and encryption of corporate data the AI should not be able to train on (see the sketch after this list).
  2. Educate and train personnel: Provide employees with data and cybersecurity awareness training to recognize and respond to cyber attacks against AI systems effectively.
  3. Collaborate with regulators and industry: Engage with regulators, industry peers and cybersecurity experts to create standards, governance and best practices for secure AI deployment and monitoring.
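
As a hedged sketch of the encryption control in item 1, the snippet below uses the `cryptography` package’s Fernet recipe to keep a record encrypted at rest, so an AI training or indexing pipeline that lacks the key sees only ciphertext. The record contents, key handling and the pipeline itself are assumptions for illustration.

```python
# Minimal sketch: encrypt sensitive records at rest with the `cryptography`
# package (pip install cryptography). A pipeline without the key can only
# read ciphertext, so it cannot train on or index the underlying data.
from cryptography.fernet import Fernet

# In practice the key lives in a secrets manager or HSM, not in source code.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"Customer: Acme Corp; renewal pricing strategy; contract terms"
ciphertext = cipher.encrypt(record)

# What an unauthorized AI indexing job would see on disk:
print(ciphertext[:60], b"...")

# Only an authorized consumer holding the key recovers the plaintext.
print(cipher.decrypt(ciphertext).decode())
```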

AI presents unprecedented opportunities for innovation and efficiency, but it also introduces new data and cybersecurity challenges. Organizations must take preventative steps in architecture, governance and design before deploying software in order to secure their data. Data encryption will also become increasingly standard, serving as a key fail-safe that prevents AI from accessing critical information. Through a multifaceted approach, organizations can gain the efficiency of AI-based systems while protecting their IP and trade secrets from theft.

Author

Sean Plankey
Global Leader Cybersecurity Software, WTW
