AI and Compliance for 2026: What Talent Acquisition Teams Need to Know

AI has moved from the future to your everyday recruiting tech stack, yet most talent teams are still figuring out how to use it safely, fairly and legally. Our experts break down what you need to know.

AI now lives in your applicant tracking system, your sourcing tools and every candidate engagement platform you rely on. But hand on heart, how many of us can explain where it’s working in our processes, what data it’s touching, or whether we’re using it by the book?

Our recent roundtable brought together two experts working to help organizations navigate this challenge: Bennett Sung, a seasoned HR tech expert who spent the past year building AI governance platforms at FairNow, and Martyn Redstone, who started one of the first AI advisory consultancies in recruitment and now focuses exclusively on AI governance and risk management.

AI compliance is top of the agenda for many talent organizations. Mobley v. Workday showed that we need to be paying very close attention to the output of AI screening tools, and the “regulatory frenzy” at home and abroad is making compliance more complex by the day. Translating these fragmented, and sometimes conflicting, legal principles into practices and behaviors is easier said than done. The closer 2026 gets, the more urgent it becomes to have the right governance in place before the next wave of changes arrives.

So with all this in motion, what do recruiting leaders need to watch for right now? Here are the key takeaways.

Getting Started: The Four Pillars of AI Governance

So how do you even start with AI compliance? Martyn Redstone recommends putting in place a framework for compliance built around four pillars: inventory, policy, provision, and education.

Inventory – what AI do you use?

Inventory means figuring out exactly what AI systems you have. Obviously, you can’t manage risk for systems you haven’t inventoried, and some regulators now require organizations to keep a centralized registry of all the artificial intelligence systems and applications in use or in development. California, for example, requires organizations to keep track of their AI systems and retain the inputs and outputs of automated decision-making tools for four years, and Colorado recently passed similar requirements.

Here’s where things get tricky: to inventory your AI, you need to understand your entire supply chain. Most talent acquisition (TA) teams don’t build their own AI – they purchase it or it comes embedded in systems they already use. That makes you the “deployer” under most regulations, meaning you own the compliance burden regardless of who built the tool, even for models built within models built within models. 

Bennett Sung recommends starting with a simple Excel spreadsheet that documents all your AI models, whether you developed them in-house or purchased them. For each system, you need to know what it does, who uses it, and critically, where it came from. Press your vendors for answers about model origins and third-party relationships — you might discover they lease part of their model from someone else. 
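To make that concrete, here’s a minimal sketch of what a starter inventory might look like as a CSV, written in Python. The column names and the example row are illustrative assumptions rather than a prescribed schema; adapt them to what your regulators and vendors actually require.

```python
# A minimal sketch of a starter AI inventory kept as a simple CSV.
# Column names and the example row are illustrative, not a prescribed schema.
import csv

COLUMNS = [
    "system_name",        # the tool or feature as your team knows it
    "vendor_or_inhouse",  # who built it, and whether it is embedded in another system
    "what_it_does",       # e.g. resume screening, candidate ranking, chatbot
    "who_uses_it",        # teams and roles with access
    "data_touched",       # candidate data the system ingests or produces
    "model_origin",       # underlying model(s) and any third-party or leased components
    "sub_processors",     # downstream vendors confirmed with the provider
    "jurisdictions",      # where the affected candidates are located
    "risk_notes",         # e.g. makes or influences hiring decisions
]

rows = [
    {
        "system_name": "ATS resume screener",
        "vendor_or_inhouse": "Embedded in ATS (vendor)",
        "what_it_does": "Scores and ranks inbound applications",
        "who_uses_it": "Recruiting coordinators",
        "data_touched": "Resumes, application answers",
        "model_origin": "Vendor model; foundation model licensed from a third party",
        "sub_processors": "To be confirmed with vendor",
        "jurisdictions": "US (CA, CO), EU",
        "risk_notes": "Influences screening decisions - likely high risk",
    }
]

with open("ai_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```

The format matters less than the habit: one row per system, reviewed whenever a tool is added, upgraded or retired.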

Policy – how are you using AI?

Policy means creating acceptable use policies around AI, for both candidates and internal teams, that lay out exactly what people can and can’t do with AI tools. But before you can set effective rules, you need to find out how teams are really using AI day to day, because it probably isn’t how you think. About 78-80% of white-collar workers who use AI bring their own AI tools into the workplace, according to Microsoft & LinkedIn’s 2024 Work Trend Index Report, creating what Redstone calls “shadow AI” usage. A good first step is to ask people anonymously whether they’re using AI outside of your mandated tools and policies, because hidden use creates real blind spots in your governance.

Provision – how are you evaluating vendors?

Provisioning is the process of equipping employees, teams and systems with the necessary AI tools to perform their jobs. If you expect people to use AI responsibly, you need to provide secure, mandated tools through proper procurement and information security processes, so people don’t resort to using their own tools that you can’t audit or regulate.

When evaluating AI vendors, most organizations focus on features and price. Instead, as Sung recommends, you should start every vendor conversation by asking for a full list of sub-processors and an explanation of how your data is retained and used. You need to dig into SOC 2 reports, bias testing, and whether the vendor can keep up with regulations that change state by state and country by country. What happens if regulations change during your contract? Will the tool remain compliant if the law shifts? Many contracts don’t address this scenario at all.

Model cards are key artifacts for assessing a tool’s compliance. These source documents describe what the AI system is capable of, how it was trained, and how (or if) it’s being kept current with regulations. This documentation is required under the EU AI Act (more on that below), and it should be a non-negotiable request for any organization evaluating AI tools, including those based in the US.
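If it helps as a checklist, here’s a rough sketch, in the same spirit as the inventory above, of the questions a model card should let you answer. The field names and example values are illustrative assumptions, not the EU AI Act’s formal documentation template.

```python
# A sketch of the questions a model card should answer, based on the three
# areas described above (capability, training, regulatory currency).
# Field names and example values are illustrative assumptions.
model_card_checklist = {
    "system": "Candidate-ranking feature in our ATS",  # hypothetical example
    "intended_use": "What is the system designed to do, and for whom?",
    "out_of_scope_use": "What does the vendor say it should NOT be used for?",
    "training_data": "What data was it trained on, and how was it sourced?",
    "evaluation": "How was performance measured, and on which populations?",
    "bias_testing": "What bias audits have been run, how often, and by whom?",
    "limitations": "What are the known failure modes and conditions where it degrades?",
    "regulatory_currency": "How (or if) is the model kept current with changing laws?",
    "human_oversight": "Where can humans review, override or appeal its outputs?",
}

# Use it as a script for vendor conversations: any blank answer is a
# follow-up question before you sign.
for field, question in model_card_checklist.items():
    print(f"{field}: {question}")
```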

Education – are you building true AI literacy?

Article 4 of the EU AI Act requires providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff. Even if you’re not affected by this law, it’s good practice to follow it, for reasons we’ll get into.

Under the EU AI Act, AI literacy is defined as the skills and knowledge people need to deploy AI systems in an informed way, be aware of their opportunities and risks, and understand the harm they can cause. This goes a lot further than the prompt engineering training many organizations are currently offering. Your TA teams need to understand the basics of discriminative versus generative AI, data accuracy issues, the “garbage in, garbage out” problem, and most importantly, how to critically assess AI outputs. Without this foundation, even the best tools become liability risks.

On the value of having a robust governance framework, Redstone uses a helpful analogy: “Think about a highway with metal barriers in the middle, usually close to the fast lane. Those barriers aren’t there to slow traffic down. They’re there to ensure that people can drive quicker, safely. That’s exactly what good AI governance does for your organization. It’s a way of speeding up innovation – good guardrails help people know what they can do, so they can move forward at pace.” That’s counterintuitive to many leaders, but it’s true.

Global Regulation: The Brussels Effect and Planning for the Most Draconian Standard

For organizations operating across multiple jurisdictions, keeping up with regulation has become its own full-time job. The US alone has nearly 30 AI regulations controlling different aspects of HR and recruiting, with each state adding its own flavor – Illinois has a video interview law, Maine has a chatbot law. Add in other countries, and the complexity multiplies.  

Redstone’s advice to companies operating across multiple jurisdictions is straightforward: design your governance framework around the most draconian legislation and you’ll satisfy every other law. Right now, that’s the EU AI Act. The European Union’s landmark artificial intelligence law is widely considered the strictest AI regulation in the world. 

The requirements are sweeping and prescriptive. The EU AI Act requires independent bias auditing (similar to NYC’s Local Law 144), model inventory management, good documentation, transparency (making individuals aware they are interacting with an AI system) and explainability (the requirement for users and deployers to understand the AI’s function, outputs, and decision-making processes in a meaningful and context-specific way). For AI systems used in hiring, for example, explainability includes the ability to give candidates meaningful reasons for decisions that significantly affect them.

The “Brussels Effect” is the phenomenon where the EU’s regulations become de facto global standards. We saw it first with GDPR – the EU’s flagship law set the standard for data protection, and everyone else followed. If you’re building a compliance roadmap, the EU AI Act is the law you have to start with and the standard you have to meet.

Candidate Transparency: Writing an ‘AI Cookie Policy for Recruitment’

Transparency, as defined in the EU AI Act, broadly requires you to tell candidates that you’re using AI. But how do you do that without scaring candidates away? Candidates are already questioning whether a human even looked at their resume, and they’re broadly skeptical of AI that doesn’t seem to give them a fair shot. 

The answer involves creating what Redstone calls a “cookie policy for recruitment,” a plain-English disclosure explaining what AI technology you’re using, what it does, what it doesn’t do, and how it uses candidate data. To shift the narrative from skepticism to trust, frame AI usage as a tool for improving the candidate experience: “We’re using these AI tools because we want to give you a better service. Before we implemented this AI solution, we weren’t able to speak to 85% of candidates. AI will improve things by giving everyone a chance to tell their story.”

Greenhouse has done exceptional work here, creating a dedicated AI page that explicitly details where AI is used across its hiring funnel, what the purpose is, and what alternatives exist if the AI path doesn’t work. Our experts would like to see this kind of transparency become the industry standard.

Integral to your AI policy is securing the candidate’s consent to data processing. Applying for a job is a “legitimate interest” under GDPR, so candidates don’t have to explicitly consent to their data being processed through an AI system. But the rules in other jurisdictions are often broader and more ambiguous. Best practice in every case is to disclose what data you’re collecting, what you’re using it for, and how it’s being processed. Make it clear. Make it transparent. Give people the opportunity to opt out.

The challenge of data ownership also comes up when using AI. Who’s accountable for gathering and documenting consent? Is it your career site? Your ATS? Your AI vendor? The answer affects how you build consent into your workflows and who’s responsible when something goes wrong.

Human Oversight: Putting an Expert in the Loop

One concept that often trips up TA teams is the difference between automated decision making and automated decision execution. Automated decision-making is when an algorithm makes a decision about a person without human involvement. This is the classic “resume submitted, immediately rejected” scenario where the AI screens someone out with no human ever reviewing the application.

Automated decision execution is when AI assists or influences a human decision. When a recruiter gets an AI-ranked list of candidates and then decides which ones to advance, that’s decision execution, not decision-making, because the human makes the final call.

Regulators are building laws around this distinction, and it represents the direction of travel. The UK’s recent Data (Use and Access) Act says that if an AI system makes (not executes) a significant decision such as a hiring decision, candidates need the option to appeal and have that decision reviewed by a human. And not just any human, but “a person of reasonable seniority.” Redstone describes the rule as “expert in the loop” rather than just “human in the loop” – you can’t ask a junior recruiter to evaluate whether the AI made the right call.

What Talent Leaders Should Do Right Now

As the roundtable wrapped up, both experts offered final thoughts on what TA leaders should prioritize today.

Martyn Redstone: “Don’t think of governance as bureaucracy. Think of it as an enabler. Think of it as a differentiator.” 

Bennett Sung: “Start learning about governance. A number of career paths are being spawned out of the need for a governance practice that could be very interesting for recruiters looking to get more actively involved. But make sure that HR & TA are involved in managing governance in the organization. Regardless of your systems, hiring is considered high risk, which means you’ll be at the center of all regulations.”

The reality is that AI in recruiting isn’t slowing down, and regulations will keep evolving to catch up with the technology. Building the right governance foundations minimizes the risks of obsolescence and ensures your organization remains compliant in the face of change.
