Late last year, a panel of experts joined me in a debate hosted at Lloyd’s of London to discuss whether artificial intelligence (AI) is ultimately utopian or dystopian. More than 30 leading AI experts from academia, banking, insurance and technology attended the event, held in collaboration with The World Innovation Network (TWIN Global). The discussion indicated that effective leaders pursue AI governance efforts that both amplify the technology’s benefits and mitigate its risks.
The panelists discussed several ways AI will benefit people, organizations and society. The most commonly cited near-term advantage of AI is greater productivity on knowledge workers’ routine tasks. More productive and effective workers can see improvements in work/life balance, physical health, career growth and income, while their companies benefit from stronger financial health.
For example, generative AI tools such as ChatGPT and Bard allow workers to harness the power of “virtual coworkers” to help them add more value. As emerging generations become “AI natives,” machine learning tools will provide opportunities in ways that were never previously expected or understood.
People have feared most new technologies (e.g., airplanes, the polio vaccine, PCs, the spreadsheet, even can openers) because their use felt disconnected from natural processes and created disruption. History demonstrates there can be significant benefits when technologies are understood and used constructively.
The panelists also articulated numerous risks of AI that could negatively impact people, organizations and society – generally connected to disruption. For example, there is concern that AI will make workers vulnerable to job redundancies and eliminations. And while AI may lead to the creation of new jobs, these jobs likely will require skills that many workers currently do not have or would find difficult to learn.
Further, AI could lead to greater wealth inequality as the divide between high- and low-skilled workers grows. There is also a risk of misinformation being used to influence public opinion, breaches of data security, and misuse of intellectual property (as recent lawsuits such as The New York Times suing OpenAI highlight). At the extreme, humans could lose control, with “the machines taking over” and AI acting of its own volition in destructive ways.
The panelists described several actions through which effective leaders practice good governance when it comes to AI and other emerging technologies.
Panelists stressed that technologies are not good or bad in and of themselves. Rather, the ways people use them lead to positive or negative consequences. Effective leaders have jumped in to better understand and address the complexities of AI and other related technologies such as blockchain, the metaverse and quantum computing. They recognize full control is neither possible nor realistic. And they’re taking responsibility for good governance over what they can influence.
A version of this article originally appeared on Forbes on January 31, 2024.