More than 1,000 workers lost their jobs at Sama on April 17, 2026.
The Kenyan AI data outsourcing firm had provided AI training and content moderation services to Meta. Per The Guardian’s reporting, Meta ended that contract following allegations of privacy violations involving its AI-equipped smart glasses. The Oversight Lab, which has documented the event independently, provides a second corroborating source for the core facts.
The “1,000+” figure comes from The Guardian with a “+” qualifier, meaning the actual headcount may be higher. This isn’t an approximation of an uncertain number. It’s a floor.
The mechanism matters for how you understand this story. Sama’s workers didn’t lose their jobs because an AI system replaced them directly. They lost them because Meta, the company that contracted Sama, decided to end the relationship. The proximate cause is contract termination. The underlying driver, consistent with a broader reported pattern of AI companies internalizing previously outsourced data operations, is that the function Sama was performing – preparing training data and moderating content for Meta’s AI systems – is one Meta is moving to handle itself. This is an editorial interpretation supported by the pattern of events, not a stated Meta position.
This distinction matters for accountability. When a contractor ends a relationship and 1,000+ workers lose their jobs as a result, the question of who bears responsibility doesn’t have a clean answer under current labor or AI governance frameworks. The employing company (Sama) has limited recourse against the contracting party’s business decisions. The contracting party (Meta) has no direct employment relationship with the affected workers. And the regulatory frameworks that would assign accountability for AI-driven labor displacement – EU AI Act supply chain provisions, proposed AI labor standards in various jurisdictions – are not yet operational at the speed this kind of displacement happens.
Per the Oversight Lab’s documentation, the contract termination followed specific allegations tied to Meta’s smart glasses and data privacy. The causal chain – privacy allegations leading to contract termination leading to layoffs – is attributed to source reporting, not independently established by TJS.
Why this matters beyond Sama. The AI data labor ecosystem is large and largely invisible in mainstream AI coverage. Data annotation workers, RLHF contractors, and content moderation outsourcing firms in developing economies are the workforce that made current frontier models possible. When AI companies bring these functions in-house – for cost, privacy, control, or regulatory reasons – that workforce has no structural protection. The Sama layoffs are not an isolated event. They’re a preview.
What to watch. Watch for regulatory response, particularly from Kenyan labor authorities and any EU AI Act enforcement bodies with a supply chain mandate. Watch for whether the Oversight Lab publishes additional documentation. And watch for whether this pattern – AI company internalizes training data operations, contract workforce displaced – repeats across other outsourcing relationships in the next 90 days.
TJS synthesis. The Sama layoffs make concrete something the AI industry has mostly discussed in the abstract: the displacement risk from AI internalization runs through contract labor in developing economies first, and it moves faster than the accountability frameworks designed to address it. This is a structural story with a specific, documented opening chapter.