Note: The discussion contained in this paper reflects the views of the authors and is not meant to reflect the views of Alphabet or WTW. It is intended to spark discussion on forward-looking and evolving economic models around AI risk.
Given its recent rapid development, AI has outpaced conventional measures of governance and risk, requiring new models that capture both financial and intangible factors. Much of the risk associated with AI stems from human factors – both people’s impact on AI and AI’s impact on people. To use the technology to its full potential while minimizing risks, economists have the opportunity to complement financial growth metrics with new measures of value creation that reflect human flourishing, productivity, safety, prosperity and quality of life.
As AI accelerates the pace and extent of technological development, and as intangible assets make up a growing share of corporate valuations, the adequacy of traditional mainstream economic growth and risk metrics comes into question. AI’s trajectory points to the need for broader measures of financial and societal impact (positive and negative). New models for measuring growth and risk can help maximize the benefits and minimize the risks of AI as well as of other emerging technologies such as quantum computing.
Digitization requires new valuation and revenue models. Historically, technology breakthroughs – for example, mechanized production, electric power and information technology – and the industrial revolutions they enabled drove major developments in economic value theories, as shown in Table 1 below. Digitization has played such a role in the current industrial revolution, but without a concurrent economic measurement breakthrough that accounts for the new ways value is created, distributed and consumed.
Examples of what is needed include dependable, consistent valuation models for intangible digital assets such as large language models, algorithms, agents, data and intellectual property. Better metrics could, in turn, enable risk management strategies to protect those assets.
Industrial revolution | Technology breakthrough | Value theory | Typical metrics
---|---|---|---
First Industrial Revolution (1760–1840) | Water and steam power to mechanize production | Classical school (Adam Smith) | Physical and economic output
Second Industrial Revolution (1870–1914) | Electric power to create mass production | The marginal revolution (Carl Menger, William Stanley Jevons, Léon Walras) and Keynesian economics (John Maynard Keynes) | Marginal utility and aggregate demand (profit and production)
Third Industrial Revolution (1950s to 2000s) | Electronics and information technology to automate production | Neoliberalism/free market capitalism (Milton Friedman) | Gross domestic product
Fourth Industrial Revolution (2020s to today) | A fusion of technologies that is blurring the lines between physical, digital and biological spheres | Evolving | Net societal value
Value creation and risk management in the era of AI. To use technology to its full potential amid rapid disruption while minimizing risks, economists have the opportunity to complement financial growth metrics with new measures of societal value that reflect human flourishing, productivity, safety, prosperity and quality of life. Such growth and value measurement factors are already defined, for example, in Maslow’s hierarchy of needs and David R. Hawkins’ scale of consciousness, which include both financial and nonfinancial factors.
AI governance. Traditional governance structures and processes often fail to address AI’s unique challenges and opportunities because they are unable to keep pace with rapid advancements in AI technologies or ever-changing large language models. AI tools (such as agents) require different governance protocols for different circumstances.
New models of dynamic governance that include relevant measures of performance and risk could provide a blueprint for the responsible and ethical development and deployment of AI. Dynamic governance models are flexible and responsive, with mechanisms for regular updates, feedback loops and continuous improvement. These attributes allow leaders to tailor governance practices to their AI objectives and adapt to internal and external changes.
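As a rough illustration of what “dynamic” could mean in practice, the sketch below models a governance policy that is revisited on a fixed cadence and tightens or relaxes its risk tolerance based on measured outcomes. The class, field names and review rule are hypothetical assumptions used only to show how regular updates and feedback loops might be encoded; they are not a prescribed framework.

```python
# Hypothetical sketch of a dynamic AI governance policy with a built-in feedback loop.
# The class, fields and review rule are illustrative assumptions, not a standard.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class GovernancePolicy:
    review_cadence_days: int   # how often the policy is revisited
    risk_threshold: float      # maximum tolerated risk score for an AI tool or agent

def review(policy: GovernancePolicy, measured_risk: float) -> GovernancePolicy:
    """One pass of the feedback loop: tighten the risk threshold after a breach,
    relax it gradually when measured risk stays well below tolerance."""
    if measured_risk > policy.risk_threshold:
        return replace(policy, risk_threshold=policy.risk_threshold * 0.9)
    if measured_risk < policy.risk_threshold * 0.5:
        return replace(policy, risk_threshold=policy.risk_threshold * 1.05)
    return policy

# Example: quarterly reviews updating the policy from observed (illustrative) risk scores.
policy = GovernancePolicy(review_cadence_days=90, risk_threshold=0.30)
for observed in [0.35, 0.28, 0.10]:
    policy = review(policy, observed)
    print(policy)
```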
Human-focused economics isn’t new. People have been at the center of economics since its advent, given that it is the study of the collective outcomes of human behavior. But current economic and risk models cannot fully capture and measure the value-creation and growth opportunities of AI. A new model that incorporates human-focused economics and net societal value addresses those shortcomings.
Business type | Category | Description |
---|---|---|
I | Traditional | Businesses operate with the primary goal of maximizing profit through AI, with little to no focus on the broader societal impacts (positive or negative) of their operations
II | By-product | Businesses create, and are aware of, social value and reduced AI risks as a by-product of their actions but lack a strategic focus on them
III | Strategic | Businesses drive strategy by aligning profit-making activities and goals to maximize net societal value and minimize risk
Net societal value and risk measures the positive and negative impact of AI on people. Organizations already have experience measuring components of societal value across multiple dimensions of wellbeing. The quality of each metric varies considerably, based on several factors related to measurability and reliability, including access to data, each organization’s history tracking it, the availability of benchmarks and the presence of uniform standards within and across industries and countries.
Measures of societal value and risk include the impact of AI on people who work for the organization (either directly as employees or indirectly as contractors, temporary workers or vendors) as well as people in the broader society. This can include customers, users, community members and those outside the community.
Representative measures for each dimension of wellbeing
Societal value and risk from organizations can take several forms. The example measures below connect AI and its impacts. Many of these examples will require aggregation and standardization as practice matures and is refined; this may include weighting different factors by industry, region or other relevant criteria (a simple aggregation sketch follows the table below).
Dimension | Value and risk measure | Description
---|---|---
Physical wellbeing | PHV = physical health value | The value of physical and health benefits and risks created for employees and other people
Emotional wellbeing | ESV = emotional stress value | The value of stress on employees and other people
Financial wellbeing | TSR = total shareholder return | Stock appreciation plus dividends
 | PSV = product and service value | Value of products and services to customers and end users
 | SPV = supplier payment value | Total value paid to vendors and suppliers
 | TRV = total reward value | Compensation and benefits paid to employees and contractors
 | TXV = training and experience value | Value of training and experience to employees
Relational wellbeing | EEV = employee engagement value | Value of employee engagement and experience created by the enterprise
 | CEV = community engagement value | Value of community engagement and experience created by the enterprise
Spiritual wellbeing | PAV = purpose and affiliation value | Value of purpose and affiliation to employees
Planetary wellbeing | NRV = natural resources value | Value of natural resources consumed and generated by the enterprise
 | PEI = pollution and environmental impact | Value of impact by the enterprise on the environment
Net societal value | NSV = net societal value | The sum of the value and risk measures above
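To make the aggregation concrete, here is a minimal sketch, in Python, of how the measures above might be combined into a single net societal value figure. The component abbreviations follow the table; the sign convention (costs and risks entered as negative values), the weights and the `net_societal_value` function itself are illustrative assumptions rather than an established standard.

```python
# Minimal sketch: aggregating the wellbeing measures into net societal value (NSV).
# Component names follow the table above; weights and sign conventions are
# illustrative assumptions, not an established standard.

# Monetized component values for one organization and period.
# Benefits are positive; costs and risks (e.g., ESV, PEI) are entered as negatives.
components = {
    "PHV": 0.0,   # physical health value
    "ESV": 0.0,   # emotional stress value (typically negative)
    "TSR": 0.0,   # total shareholder return
    "PSV": 0.0,   # product and service value
    "SPV": 0.0,   # supplier payment value
    "TRV": 0.0,   # total reward value
    "TXV": 0.0,   # training and experience value
    "EEV": 0.0,   # employee engagement value
    "CEV": 0.0,   # community engagement value
    "PAV": 0.0,   # purpose and affiliation value
    "NRV": 0.0,   # natural resources value
    "PEI": 0.0,   # pollution and environmental impact (typically negative)
}

# Hypothetical weights reflecting industry- or region-specific standardization;
# equal weighting is the default until benchmarks mature.
weights = {name: 1.0 for name in components}

def net_societal_value(components: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of the societal value and risk measures."""
    return sum(weights[name] * value for name, value in components.items())

print(f"NSV = {net_societal_value(components, weights):,.2f}")
```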
AI creates both positive and negative social value, which can build or erode economic value and create new forms of risk.
Societal examples of AI value creation and erosion
Societal value: Social media algorithms play a key role in content personalization, enhancing user engagement by delivering relevant content, facilitating social connections and enabling the discovery of new interests and communities. They can also amplify voices and causes that might otherwise go unnoticed.
Societal costs: These algorithms are often criticized for creating echo chambers, spreading misinformation and exacerbating mental health issues, such as anxiety and depression, through addictive design practices that maximize usage and profit. The impact on emotional and relational wellbeing, as well as the potential manipulation of public opinion, represents a significant societal cost.
Net positivity equation: Societal benefits (content personalization, social connections) minus societal costs and risks (echo chambers, misinformation, mental health issues) equals net positivity.
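Written out in the same style as the aggregation sketch above, the net positivity equation for this example might look like the following. The benefit and cost categories come from the text; the zero placeholders stand in for monetized estimates, which would depend on the measurement standards and weighting discussed earlier.

```python
# Net positivity for the social media example: benefits minus costs and risks.
# The 0.0 placeholders stand in for monetized estimates, which would depend on
# the measurement standards and weighting discussed above.

societal_benefits = {
    "content_personalization": 0.0,
    "social_connections": 0.0,
}
societal_costs_and_risks = {
    "echo_chambers": 0.0,
    "misinformation": 0.0,
    "mental_health_issues": 0.0,
}

net_positivity = sum(societal_benefits.values()) - sum(societal_costs_and_risks.values())
print(f"Net positivity = {net_positivity:,.2f}")
```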
The societal value of social media has changed over time. Some experts believe it may already have peaked, while others say it has yet to peak – disagreement that itself points to the potential value of updated, consistent economic measurement models.
How can we pivot social media algorithms for maximized benefit?
The expansion of AI tools and models into global economic systems is outgrowing traditional economic metrics of success, requiring new models that capture the intangible aspects of societal benefits, costs and risks. By aligning AI development with metrics focused on both economic value and human flourishing, leaders can guide growth in a way that balances opportunity and risk.