EU: Comprehensive AI Act includes obligations for employers
March 28, 2025
A major European regulation imposes strict rules on AI activities that pose high or unacceptable risks, including systems deployed in the workplace; it is being implemented in phases through 2026.
The AI Act — a major European regulation governing the use of AI systems — is taking effect in stages through August 2, 2026. As a regulation, it applies to all member states without the need for local conforming legislation to be adopted, though some states may choose to do so (e.g., Spain approved corresponding legislation in March 2025).
Key details
The act classifies, defines and regulates AI activity based on four levels of risk (Unacceptable, High, Limited and Minimal):
Applications of AI that pose an unacceptable level of risk are prohibited. These include using AI systems for "social scoring" (i.e., evaluating or categorizing people based on social behavior or personality characteristics, resulting in certain types of detrimental or unfavorable treatment) or for biometric categorization to infer "protected" personal attributes such as race, union membership and sexual orientation. The act's ban on prohibited AI applications took effect on February 2, 2025
High-risk AI systems will be subject to substantial regulation, with the great bulk of obligations applying to system developers. Deployers of high-risk systems (e.g., employers) will be subject to lesser obligations, such as ensuring human oversight and properly using the system. Additional implementation guidelines regarding high-risk systems are to be released by February 2, 2026, and the act's requirements related to high-risk systems generally take effect on August 2, 2026
For high-risk AI systems used in the workplace, employers must inform workers' representatives and the affected workers before putting the system into service. The act defines employment-related high-risk AI systems as those used for:
Recruiting or selecting individuals (in particular, placing targeted job advertisements), analyzing and filtering job applications, and evaluating candidates
Making decisions affecting terms of work-related relationships, promoting or terminating work-related contractual relationships, allocating tasks based on individual behavior or personal traits or characteristics, or monitoring and evaluating the performance and behavior of persons in such relationships
Limited-risk AI systems are subject to lighter transparency obligations (e.g., developers and deployers must ensure that end users are aware that they are interacting with AI); minimal-risk AI activity (the bulk of AI currently in use) is left largely unregulated
Employer implications
Employers should evaluate the risk classification of AI systems deployed in the workplace or as part of the employment process and ensure compliance with the act's provisions, including communications with employees and assignment of human oversight as appropriate. Notably, much of the controversy surrounding the act stems from its regulation of AI systems' potential (as well as actual) capabilities, which could inhibit the development and deployment of new AI systems.