
AI risk and governance: Utopian and dystopian views

By John M. Bremen | February 21, 2024

Effective leaders embrace the complexities of AI, recognize the limits of their control and take steps to anticipate and manage risks.
Work Transformation | Employee Experience | Total Rewards | Integrated Wellbeing | Inclusion and Diversity
Artificial Intelligence

Late last year, a panel of experts joined me in a debate hosted at Lloyd’s of London to discuss whether artificial intelligence (AI) is ultimately utopian or dystopian. More than 30 leading AI experts from academia, banking, insurance and technology attended the event, held in collaboration with The World Innovation Network (TWIN Global). The discussion made clear that effective leaders pursue AI governance that both strengthens the technology’s benefits and mitigates its risks.

Utopian view

The panelists discussed several ways AI will benefit people, organizations and society. The most commonly cited near-term advantage of AI is greater productivity on knowledge workers’ routine tasks. As workers become more productive and effective, they can improve their work/life balance, physical health, career growth and income, as well as their companies’ financial health.

For example, generative AI tools such as ChatGPT and Bard allow workers to harness the power of “virtual coworkers” to help them add more value. As emerging generations become “AI natives,” machine learning tools will provide opportunities in ways that were never previously expected or understood.

People have feared most new technologies (e.g., airplanes, the polio vaccine, PCs, the spreadsheet, even can openers) because their use felt disconnected from natural processes and created disruption. History demonstrates there can be significant benefits when technologies are understood and used constructively.

Dystopian view

The panelists also articulated numerous risks of AI that could negatively impact people, organizations and society, most of them connected to disruption. For example, there is concern that AI will make workers vulnerable to job redundancies and eliminations. And while AI may lead to the creation of new jobs, those jobs likely will require skills that many workers currently lack or would find difficult to learn.

Further, AI could lead to greater wealth inequality as the divide between high- and low-skilled workers grows. There are also risks of misinformation (such as campaigns to influence public opinion), breaches of data security, and misuse of intellectual property (as recent lawsuits, such as The New York Times suing OpenAI, highlight). At the extreme, humans could lose control, with “the machines taking over” and AI acting of its own volition in destructive ways.

Governance

The panelists described how effective leaders practice good governance when it comes to AI and other emerging technologies. Their actions could include:

  • Understanding how AI and other technologies work: When generative AI was introduced, many users mistook the tools for advanced search engines rather than text or code generators, and the tools produced “hallucinations” and other errors that could have been avoided. Effective leaders not only understand the purpose of technologies but also learn how they work, deepening their understanding of what the tools can and cannot do.
  • Educating users and other leaders: Good governance includes educating those responsible for AI on its benefits and risks and how to use tools responsibly. For example, effective leaders coach users to create quality prompts and verify the accuracy of content. They also coach reviewers on red flags to look for in processes and output.
  • Establishing ethical usage standards: Effective leaders make it easy for users to respect privacy and copyright, cite sources and use information responsibly. For example, ethical usage standards may include generating content from a known set of verified documents rather than from broader, internet-based learning models.
  • Keeping confidential information safe: Effective leaders establish guidelines and procedures to keep confidential data from being exposed to the public. These include policies and processes to prevent proprietary information from being released through AI learning and training, and to ensure legal and security reviews of AI services.
  • Addressing bias: Effective leaders take steps to ensure objectivity and fairness in data inputs and outputs. They maintain standards for the data used to train AI models and for how output is reviewed, reducing the impact of analytical and social biases.
  • Understanding rules and where liability sits: Effective leaders know it is difficult to trace legal and statutory liabilities and their implications. Country-specific and local regulations vary widely, and good governance requires understanding those rules and acting to prevent issues and to address them quickly once they arise.
  • Thinking ahead: Effective leaders do not wait for problems to manifest before addressing them. For example, understanding the implications of AI on jobs, workers and skill availability long before changes occur can give companies a competitive advantage and create constructive usage scenarios for technologies.

    The same is true for data security, programming and other potential issues. The most effective organizations began implementing skills training and data safeguards immediately when the use of generative AI proliferated.

Panelists stressed that technologies are not good or bad in and of themselves. Rather, the ways people use them lead to positive or negative consequences. Effective leaders have jumped in to better understand and address the complexities of AI and other related technologies such as blockchain, the metaverse and quantum computing. They recognize full control is neither possible nor realistic. And they’re taking responsibility for good governance over what they can influence.

A version of this article originally appeared on Forbes on January 31, 2024.

Author


John M. Bremen
Managing Director and Chief Innovation & Acceleration Officer
