The EU AI Act Hits Dutch Workplaces in August 2026: What HR Leaders Need to Do Now


The deadline is real, the rules are already binding in parts, and most Dutch employers are not ready.

On 2 August 2026, the main body of the EU AI Act becomes enforceable across the European Union. For HR teams in the Netherlands, that date matters more than most people realise. Recruitment AI, candidate screening tools, performance monitoring software, and almost any system that uses an algorithm to make or shape a decision about a person at work fall into the regulation’s “high-risk” category. And in April 2026, the Dutch government finally put forward its national implementation law, the Uitvoeringswet AI-verordening, which assigns enforcement to ten existing regulators rather than creating a new AI authority.

The picture for Dutch employers is now clearer than it has been in months. It is also more demanding than many realise.

What the Dutch government just put on the table

The Dutch government published its draft Uitvoeringswet AI-verordening on 20 April 2026, open for public feedback until 1 June. State Secretary Willemijn Aerdts says the goal is simple: AI is useful, but only if people trust how it is used.

How supervision will work:

  • No new AI regulator is being created. Ten existing regulators will share the work, each covering AI in the area it already oversees.
  • The Autoriteit Persoonsgegevens (AP) and the Rijksinspectie Digitale Infrastructuur (RDI) will coordinate across the ten.
  • For HR, the AP is the one to watch. It will handle:
    • Banned AI practices
    • Transparency rules, such as labelling chatbots and deepfakes
    • Most of the high-risk uses listed in Annex III, including recruitment and employment AI

What the Dutch law will and will not do:

  • It will probably take effect after 2 August 2026. The EU regulation kicks in on that date regardless, so the delay in the Dutch law gives employers no extra breathing room.
  • It will not add extra rules on top of the EU regulation. The government has chosen a light-touch approach, so there is no parallel Dutch rulebook to worry about.

The deadlines that already matter

The AI Act is being phased in. Some obligations are already binding, others arrive in August 2026, and a few extend into 2027. For HR teams, the relevant ones are these.

Date | What changes | What it means for HR
2 August 2026 | Main body of the AI Act enforceable, including high-risk obligations and transparency rules | Recruitment AI, monitoring tools, and employee profiling systems must comply. Candidates must be told when AI is involved.
2 August 2027 | Rules for AI embedded in regulated products | Mainly relevant for HR tech vendors; employers using their tools are affected indirectly.

Why HR is at the centre of this

Most rules target a whole sector. The AI Act works differently: it targets specific uses, and many of those uses sit right inside HR. Annex III of the regulation classifies the following as high-risk:

  • Recruiting or selecting candidates
  • Placing targeted job ads
  • Filtering and analysing applications
  • Evaluating candidates
  • Making decisions about promotions, terminations, or other work relationships
  • Allocating tasks based on personal traits or behaviour
  • Monitoring or evaluating worker performance and behaviour

This is a wide net. In practice, it covers tools most HR teams already use, including:

  • CV screening software
  • Video interview analysers
  • Scheduling and shift allocation tools
  • Productivity monitoring platforms
  • Performance management systems with predictive features

If your business uses any of these, you are running high-risk AI under European law.

Some uses are banned outright. The most important one for employers is the ban on emotion recognition systems at work and in education, with narrow exceptions for medical or safety reasons. AI tools that try to read emotions from facial expressions, voice, or biometrics, whether in interviews or on the job, are not allowed in the EU.

The Dutch AP has been clear that current practice is falling short. Its recent reporting on AI and algorithms flagged recruitment AI as a particular concern, pointing to:

  • A lack of transparency for candidates
  • Weak fairness controls
  • Limited options for candidates to challenge outcomes

We explored the underlying tension in more detail in our piece on agentic AI and HR ethics, and the regulatory direction has only sharpened since.

What “high-risk” actually requires from you

High-risk AI systems come with a defined set of obligations. The provider (the company that builds the system) carries most of the technical compliance burden. But employers who use these systems are “deployers” under the Act, and they have their own duties.

For HR, those duties translate into a concrete operating standard:

Requirement | What it means in practice
Know what AI you are using | Most HR teams underestimate this. AI is likely running in your applicant tracking system, sourcing tools, interview scheduling, engagement surveys, and performance management. A complete inventory is the foundation for everything else.
Document a risk assessment for each high-risk system | Cover the data used, the decisions shaped, the people affected, and the failure modes considered. For systems touching protected characteristics, bias testing is not optional.
Provide meaningful human oversight | Article 14 requires that a human can understand the system’s output, override it when needed, and step in when something goes wrong. Rubber-stamping AI recommendations does not meet the standard.
Be transparent with the people affected | Candidates and employees must be told when AI is involved in a decision about them, and they need a way to challenge the outcome. This overlaps with GDPR Article 22 on automated decisions, which the AP already enforces.
Build AI literacy in the team | Anyone working with an AI system must understand what it does, where its limits are, and what they are responsible for. This is mandatory under Article 4 and has been since February 2025. Our training, learning and development team can help you set this up.
Engage your works council | The SER has confirmed that AI in hiring is high-risk and that emotion recognition is unacceptable. Unions are increasingly pushing for stronger co-determination on AI deployment.

A practical compliance roadmap for HR

The work breaks into three phases. The timing is tight, so most teams will need to run them in parallel rather than in sequence.

Phase 1: Inventory and assess (immediately)

Map every AI system touching HR processes. Include the obvious ones such as ATS modules and screening tools, but also the embedded ones, like AI features inside your HRIS or productivity platform.
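A simple structured register makes the inventory auditable from day one. The sketch below is one possible shape for such a register in Python; the field names, risk categories, and the example system are illustrative assumptions, not anything prescribed by the AI Act.

```python
from dataclasses import dataclass, field

# Hypothetical record structure for an HR AI inventory.
# Field names and values are illustrative, not mandated by the AI Act.

@dataclass
class AISystemRecord:
    name: str                  # e.g. an ATS module or screening tool
    vendor: str
    hr_process: str            # recruitment, performance, scheduling, ...
    decision_influence: str    # "makes", "shapes", or "informs" decisions
    annex_iii_candidate: bool  # does it match an Annex III employment use?
    data_categories: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="CV screening module",
        vendor="ExampleVendor",  # placeholder vendor
        hr_process="recruitment",
        decision_influence="shapes",
        annex_iii_candidate=True,
        data_categories=["CV text", "education history"],
    ),
]

# Everything flagged as an Annex III candidate feeds the
# high-risk workstream in Phase 2.
high_risk = [r for r in inventory if r.annex_iii_candidate]
print(f"{len(high_risk)} of {len(inventory)} systems need a risk assessment")
```

Even a spreadsheet with these columns would do; the point is that every system, including AI features embedded in larger platforms, gets a record before Phase 2 starts.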

Phase 2: Build the compliance scaffolding (by mid-2026)

For each high-risk system, document a risk assessment, set up bias monitoring where relevant, define human oversight points with named reviewers and authority to override, and draft transparency notices for candidates and employees. Update privacy notices and recruitment communications.

Phase 3: Operate and monitor (from August 2026)

Set up the recurring activities that keep you compliant. That includes regular bias testing, decision logs that make AI-shaped outcomes auditable, an incident response procedure for challenged decisions, and a watching brief on regulator guidance as the AP and RDI publish sector-specific advice. Compliance is not a project that ends; it is an operating capability.
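For the decision logs, an append-only record per AI-shaped outcome is usually enough to make oversight demonstrable. The sketch below assumes a JSON Lines file and an invented schema; the Act requires auditability and meaningful human review, not this particular format or these field names.

```python
import json
from datetime import datetime, timezone

# Minimal sketch of an auditable log for AI-shaped HR decisions.
# Schema and field names are assumptions, not a prescribed format.

def log_ai_decision(path, system, subject_ref, ai_output,
                    reviewer, final_decision, overridden):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,              # which high-risk tool produced the output
        "subject": subject_ref,        # pseudonymous reference, not raw personal data
        "ai_output": ai_output,
        "human_reviewer": reviewer,    # named reviewer, per the oversight design
        "final_decision": final_decision,
        "ai_overridden": overridden,   # evidence oversight is not rubber-stamping
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")  # append-only JSON Lines
    return entry

entry = log_ai_decision(
    "hr_ai_decisions.jsonl",      # hypothetical log file
    system="cv-screener-v2",      # hypothetical system name
    subject_ref="cand-00123",
    ai_output="reject",
    reviewer="j.devries",
    final_decision="advance",
    overridden=True,
)
```

Logging when a human overrode the AI, not just when they agreed with it, is exactly the evidence a regulator will ask for when testing whether Article 14 oversight is real.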

Action | Owner | Deadline
AI inventory across HR processes | HR + IT, with DPO | Immediately
Risk classification per system | HR + Legal | Immediately
Bias and impact assessments for high-risk tools | HR + DPO + external auditor where needed | Mid-2026
Human oversight design with named reviewers | HR + line management | Mid-2026
Updated candidate and employee transparency notices | HR + Legal | Before August 2026
AI literacy training | L&D | Ongoing since February 2025
Works council consultation on AI deployment | Management + OR | Before any rollout
Incident response and challenge procedure | HR + Legal | Before August 2026
Ongoing regulator monitoring | Compliance | Continuous

Where to focus, and where theHRchapter can help

The Netherlands is implementing the AI Act late, with ten regulators sharing the work, in a market where most employers are not yet ready. The 2 August 2026 deadline is binding regardless of where the Dutch implementation law sits, the AP has signalled it will enforce, and the GDPR track record shows what serious penalties look like.

Three areas combine the highest risk with the weakest current practice:

  • Recruitment chatbots and CV screening. If candidates do not know AI is involved, that is a transparency problem. If the system has not been bias-tested, that is a fairness problem. If rejected candidates have no real route to human review, you are likely breaching both Article 14 of the AI Act and Article 22 of the GDPR.
  • Employee monitoring. Productivity scoring, engagement inference from communications, and automated warnings all count as high-risk under Annex III. Layered on top of GDPR, works council rights, and Dutch employment law, this area is unforgiving. Anything resembling emotional analysis is off the table entirely.
  • Cross-border AI tools. Systems built for US state-level compliance often fall short on the EU’s mandatory human oversight, advance risk assessment, and transparency rules. EU-grade systems usually travel better the other way. Treat this as a procurement and vendor management issue, not just a compliance one.

For HR teams, the window to act is now. Inventory your AI, build the documentation and oversight that high-risk systems require, train your people, engage your works council, and make sure candidates and employees know when AI shapes decisions about them. The teams that treat this as a leadership project, rather than a compliance afterthought, will be the ones their workforce still trusts when the dust settles.

This is where theHRchapter comes in.

Our work spans the full HR stack. If your hiring, performance management, or HRIS setup will be touching AI by August 2026, the right time to start is now, not when a regulator or a candidate asks the first hard question.

Book a call with theHRchapter and let us help you make AI compliance a strength of your HR function, not a risk.


