Twelve days. That’s how much time UK data controllers have to shape what “meaningful human involvement” means in practice before the ICO finalizes its automated decision-making guidance.
The deadline is May 29, 2026. The ICO published its draft ADM guidance following implementation of the Data (Use and Access) Act 2025, and the consultation window is short. Organizations that use AI in HR decisions – hiring, performance management, compensation, disciplinary processes – should treat the consultation as both a compliance preview and an opportunity to influence the standard before it’s set.
The ICO’s consultation page is the primary source for the draft guidance text. Readers should verify the guidance requirements directly there rather than relying solely on third-party characterizations, including this one.
Here’s what legal analysis indicates the draft requires. The ICO’s draft guidance, as characterized by legal analysts at TLT LLP, sets a standard of “meaningful human involvement” for AI-assisted decisions affecting individuals, particularly in employment contexts. The precise regulatory formulation should be verified against the ICO’s published text, but the principle is clear: an automated output presented to a human who rubber-stamps it doesn’t meet the standard.
UK ADM Compliance Readiness Before May 29
- Review ICO draft ADM guidance directly at ico.org.uk
- Assess whether current human review processes meet 'meaningful involvement' standard
- Identify AI-assisted HR decision systems and document human oversight mechanisms
- Review data repurposing practices for AI training against Data (Use and Access) Act 2025
- Submit consultation response by May 29, 2026 if applicable
That’s harder to operationalize than it sounds.
Most organizations using AI in HR have a human in the loop in a formal sense. The actual question the ICO guidance raises is whether that human involvement is substantive: does the reviewer have access to the underlying rationale? Can they override the output? Do they have meaningful authority to reach a different conclusion? Systems that produce opaque scores or rankings without interpretable reasoning will struggle to demonstrate that the human review step adds anything beyond a checkbox.
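To make that concrete, here is a minimal sketch of what a documented human review record might look like. This is purely illustrative – every field name and the `looks_substantive` heuristic are assumptions, not anything prescribed by the ICO draft – but it shows the kind of evidence (rationale access, override authority, the actual outcome) that distinguishes substantive review from a checkbox:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HumanReviewRecord:
    """Illustrative record of one human review of an AI-assisted HR decision.

    Field names are hypothetical; they track the questions raised above:
    did the reviewer see the rationale, and could they reach a different
    conclusion?
    """
    case_id: str
    ai_recommendation: str       # the model output as presented to the reviewer
    rationale_available: bool    # did the reviewer see interpretable reasoning?
    reviewer_id: str
    reviewer_can_override: bool  # does the reviewer have authority to differ?
    final_decision: str
    overrode_ai: bool
    notes: str = ""
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def looks_substantive(self) -> bool:
        # Crude heuristic: a review with no rationale access and no override
        # authority is hard to defend as more than a rubber stamp.
        return self.rationale_available and self.reviewer_can_override

record = HumanReviewRecord(
    case_id="HR-2026-0412",
    ai_recommendation="reject",
    rationale_available=True,
    reviewer_id="reviewer-17",
    reviewer_can_override=True,
    final_decision="interview",
    overrode_ai=True,
)
print(record.looks_substantive())  # True
```

The point isn’t the data structure itself; it’s that each field corresponds to a question a regulator could ask, and that a log of such records over time would show whether reviewers ever actually diverge from the model.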
The second dimension is data repurposing. Legal commentary published via JD Supra indicates new compatibility requirements for personal data repurposed for AI training under the 2025 Act. This is legal interpretation, not a direct ICO quote, but organizations that have been repurposing employee data for model training should flag this for legal review.
The consultation closes May 29, 2026. That’s the action date for organizations that want input into the final standard; don’t count on the deadline moving.
Unanswered Questions
- What documentation will the ICO require to demonstrate 'meaningful' human involvement vs. nominal review?
- Does the guidance apply to automated systems that inform human decisions, or only to fully automated outputs?
- How will the compatibility framework interact with existing GDPR training data obligations for EU-operating organizations?
UK and US AI governance are moving in parallel on human oversight requirements: the principle of meaningful human involvement in consequential AI decisions is becoming a cross-jurisdictional standard, not a UK-specific requirement. The trend isn’t subtle.
Don’t expect the ICO to narrow the “meaningful” standard after consultation. Regulators use consultation periods to sharpen language, not to lower bars. If your organization’s current human review process can’t be documented as substantive, the finalized guidance will expose that gap. That’s the shift.
Bottom Line
The real question is whether organizations that respond to the consultation will shape the standard toward operationalizable requirements, or whether the final text reflects only the ICO’s own framing.