AI Employment Law Tracker

Governments worldwide are implementing a new generation of laws to regulate AI, particularly in the context of employment and hiring. This page is an ongoing resource that tracks that legislation as it emerges.

AI Employment Laws by Country

United States

The United States has no single federal law on AI in employment, leading to a patchwork of state and local regulations. The focus is on preventing discrimination, ensuring transparency, and requiring accountability from employers.

State and Local Laws

  • New York City, New York: The city’s Local Law 144 is a pioneering regulation. It requires employers using an automated employment decision tool (AEDT) for hiring or promotion to perform an annual independent bias audit and to make a summary of the audit publicly available. Employers must also notify candidates when an AEDT is used, including the job qualifications it will assess.
  • Colorado: The Colorado Artificial Intelligence Act (CAIA), effective February 1, 2026, is a landmark law that creates a duty of reasonable care for developers and deployers of “high-risk” AI systems, including those used in hiring. The law’s goal is to prevent algorithmic discrimination, and it requires employers to conduct impact assessments, maintain a risk management policy, and provide disclosures to applicants.
  • Illinois: The Artificial Intelligence Video Interview Act requires employers to inform applicants if they will be using AI to analyze their video interviews and to obtain their consent. A separate amendment to the Illinois Human Rights Act, effective January 1, 2026, will broadly prohibit employers from using AI that causes a discriminatory effect on any protected class and will require notification when AI is used for a variety of employment decisions.
  • California: California has regulations that clarify that the state’s Fair Employment and Housing Act (FEHA) applies to automated decision-making systems. The regulations make it unlawful for employers to use AI that leads to discrimination against protected characteristics. Employers are also required to maintain comprehensive records of automated decision-making data for at least four years to facilitate oversight.
  • Texas: The Texas Artificial Intelligence and Data Act (TRAIGA), effective January 1, 2026, prohibits companies from developing or using AI with the intent to unlawfully discriminate against a protected class. Enforcement is handled by the Texas Attorney General, and individuals do not have a private right of action.
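The bias audit required by New York City’s Local Law 144 (first bullet above) centers on a simple statistic: each demographic category’s selection rate divided by the selection rate of the most-selected category, known as the impact ratio. A minimal sketch of that calculation, using illustrative category names and counts rather than real audit data:

```python
# Sketch of the impact-ratio math behind a Local Law 144 bias audit.
# Under the NYC rules, the selection rate for each demographic category
# is divided by the rate of the most-selected category.
# Categories and counts below are illustrative, not real audit data.

def selection_rate(selected, total):
    """Fraction of applicants in a category who were selected."""
    return selected / total

def impact_ratios(results):
    """Map each category to its selection rate / highest selection rate."""
    rates = {cat: selection_rate(sel, tot) for cat, (sel, tot) in results.items()}
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

# Hypothetical screening outcomes: category -> (selected, total applicants)
results = {
    "group_a": (100, 200),  # 50% selection rate
    "group_b": (50, 200),   # 25% selection rate
}

for cat, ratio in impact_ratios(results).items():
    print(cat, ratio)  # group_a 1.0, group_b 0.5
```

An actual audit must be conducted by an independent auditor and must cover the sex and race/ethnicity categories specified in the city’s implementing rules; the arithmetic above only illustrates the core metric.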

European Union

The EU AI Act is the world’s most comprehensive AI law to date, taking a risk-based approach. Systems used in recruitment and hiring are classified as “high-risk,” which subjects them to strict, pre-market requirements.

Key Provisions

  • High-Risk Classification: AI systems used for recruitment, candidate screening, and performance evaluation fall into the high-risk category. This classification triggers a series of obligations for both the developers and the employers (deployers) using the tools.
  • Mandatory Requirements: High-risk AI systems must undergo a conformity assessment to verify they comply with the act’s requirements. These include human oversight, detailed technical documentation, and data-governance measures to ensure training data is relevant, representative, and examined for possible biases.
  • Transparency and Accountability: The law requires employers to inform workers and job applicants when they are subject to a high-risk AI system. It also mandates that systems be designed for effective human oversight and gives affected individuals the right to an explanation of decisions that significantly affect them.
  • Prohibitions: The EU AI Act bans certain AI practices outright, including the use of AI to infer emotions in the workplace, with narrow exceptions for medical and safety purposes.

Canada

Canada’s proposed federal Artificial Intelligence and Data Act (AIDA), introduced as part of Bill C-27, has not yet been enacted, but it is expected to create a legal framework for “high-impact” AI systems.

Anticipated Requirements

  • Risk-Based Framework: AIDA will likely categorize AI systems based on their potential to cause harm. Systems used in hiring and employment decisions will probably be classified as high-impact and will face the most stringent regulations.
  • Accountability and Transparency: The act is intended to hold developers and users of high-impact AI accountable for the outcomes of their systems. This includes obligations to conduct risk assessments, implement governance mechanisms, and ensure transparency.
  • Human Oversight: The law is expected to mandate that high-impact AI systems be designed to allow for meaningful human oversight and intervention.

China

China has taken a proactive, multi-pronged approach to regulating AI, with a focus on algorithmic transparency and data security.

Regulatory Framework

Algorithmic Recommendation Regulation: The Internet Information Service Algorithmic Recommendation Management Provisions, effective March 2022, are designed to ensure transparency in how algorithms recommend information and services. While not specific to employment, they require platforms to give users a clear, explainable mechanism to understand, adjust, or switch off algorithmic recommendations.

Generative AI Regulation: The Interim Measures for the Management of Generative Artificial Intelligence Services, effective August 2023, hold service providers responsible for the content generated by their models and require measures to prevent the creation of illegal or discriminatory content. While not directly aimed at hiring, they set a precedent for liability that could extend to HR tools.

Comprehensive AI Law: China is in the process of drafting a comprehensive national AI law, which is expected to consolidate and expand on existing regulations, potentially including more specific rules for the use of AI in employment.

Others

Many other countries are also moving forward with new regulations.

  • Chile: Chile’s draft AI legislation, among the first in Latin America, promotes a risk-based approach while ensuring human rights are protected. The bill also encourages self-regulation, reflecting a collaborative approach between the government and the private sector.
  • Brazil: Brazil’s draft AI legislation is also a major development. It proposes a risk-based framework similar to the EU AI Act, with specific rules for high-risk AI systems. The proposed law would grant individuals new rights related to AI-driven decisions, including the right to receive an explanation for a hiring decision made by an algorithm.
  • India: India has no comprehensive AI law yet, but it is actively considering legislation. Building on the Digital Personal Data Protection Act, 2023, its focus is likely to be data privacy and the accountability of AI systems, with particular attention to how AI tools handle the personal data of job applicants and employees.