The increasing use of artificial intelligence (AI) in employment-related decisions has prompted the New York City government to regulate its use by employers, driven in particular by concerns that the programming or functioning of an AI tool could result in unequal treatment of job candidates. New York City’s Local Law 144 (LL 144) takes effect January 1, 2023, and requires employers that use automated employment decision tools (AEDTs) in hiring and promotion decisions to satisfy a bias audit requirement and to provide notices and disclosures regarding the audit results and the use of the AEDT. Proposed rules were issued in September 2022, and a hearing was held on November 4, 2022. It remains unclear whether final regulations will be issued before the end of 2022 or whether the effective date will be delayed. Other jurisdictions, both within the U.S. and globally, are at various stages of addressing the employment-related use of AI.
New York City’s LL 144 defines an AEDT as a "computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision-making for making employment decisions that impact natural persons" but excludes tools that do not impact the decision-making process (such as junk email filters and antivirus software). LL 144 prohibits the use of an AEDT unless: (1) the tool has been the subject of a bias audit conducted no more than one year before its use; (2) a summary of the results of the most recent bias audit has been made publicly available; and (3) the employer provides the required notices to candidates or employees who will be evaluated by the tool.
The proposed rules address several questions regarding compliance with LL 144, including clarifications regarding the definition of an AEDT, the focus of the bias audit, the data that must be made publicly available, and compliance with the notice and disclosure requirements. However, several questions remain unanswered, including (1) which entities are permitted to perform the bias audit, (2) whether the audit must be repeated annually, and (3) the definition of an alternative evaluation process and the types of options that must be made available.
Several U.S. states (e.g., Illinois and Maryland) and some cities have enacted or are considering legislation that could impact the use of AI in hiring and other employment decisions. In the European Union, the European Commission is drafting an Artificial Intelligence Act to regulate the use of AI in general. The act would divide uses of AI into four broad categories of risk (to the rights of citizens): unacceptable risk (uses that would be prohibited outright), high risk (uses subject to strict requirements, a category that includes AI used in employment and worker management), limited risk (uses subject to transparency obligations), and minimal risk (uses left largely unregulated).
The U.S. federal government has also focused on the use of AI in employment decisions. The Equal Employment Opportunity Commission (EEOC) issued guidance in May 2022 outlining how certain employment-related uses of AI potentially could violate the Americans with Disabilities Act (ADA). In October 2022, the Biden administration published the Blueprint for an AI Bill of Rights, a framework intended to guide the design, use, and deployment of automated systems. Brazil, Canada, and the U.K., among other governments, are developing similar laws and frameworks.
The application of AI in employment has already far outpaced the development of regulatory regimes governing its use. The EEOC has estimated that more than 80% of U.S. employers use some form of AI in their work and employment decision-making. Employers should monitor the development of legal restrictions and requirements on the use of AI in employment-related decisions. For employers with employees in New York City, LL 144 is currently set to take effect in 2023; it may serve as an early test case for how regulation affects the use of AI in employment-related decisions.