Not every AI governance obligation belongs to IT or legal. For M&E professionals, that distinction matters.
The EU AI Act’s Annex III classifies AI used in recruitment and HR management, education and vocational training, and public administration as high-risk. These aren’t peripheral use cases; they’re common M&E and program evaluation domains. If your organization uses AI to score candidates, assess learner outcomes, or support public service delivery in the EU, you’re likely inside the high-risk classification framework.
Article 4 of the Act requires providers and deployers to take measures to ensure their staff have a sufficient level of AI literacy. That requirement has been enforceable since February 2, 2025. According to EvalCommunity Academy, a professional education platform for M&E practitioners, the Act’s core governance requirements remain applicable regardless of ongoing Digital Omnibus discussions, and M&E professionals should extend traditional evaluation principles to AI governance processes within their organizations. That’s a practitioner recommendation, not a regulatory requirement, but the underlying Article 4 obligation is live.