Two UK developments from recent weeks are worth reading together, because they describe the same regulatory posture: intervention first, industry accommodation second.
The more novel development comes from the House of Lords. According to legal analysis of the amendment, the Lords agreed on March 19, 2026 to add a specific criminal offense for operating an “unsafe” AI chatbot to the Crime and Policing Bill. Criminal liability for an AI product is unusual in any jurisdiction. The same analysis indicates penalties of up to five years’ imprisonment, though that figure comes from secondary legal commentary rather than published Hansard records and should be treated accordingly. The amendment has passed the Lords; it must still proceed through the House of Commons before it could become law.
What “unsafe” means in this context isn’t yet defined in statute. That definitional gap is where most of the legal and compliance work will happen if the amendment advances. A criminal standard requires clarity that civil regulatory frameworks often don’t need to provide upfront. Developers and operators of consumer-facing AI chatbots in the UK are watching closely.
The copyright element is follow-up context to a story this hub covered previously: the UK government’s decision not to proceed with a broad text and data mining exception. The March 2026 Report on Copyright and Artificial Intelligence from DSIT and DCMS confirms that no broad opt-out licensing model will be created. The government’s consultation drew strong opposition from creative industries, with reports indicating that the vast majority of respondents opposed an opt-out model. The precise consultation figures circulating in some coverage couldn’t be independently verified, so this brief doesn’t repeat them, but the direction of the response is consistent across multiple accounts.
Legal commentary from Travers Smith characterizes the resulting position as licensing-first: AI developers who want to train on UK-copyrighted works need licenses, and no statutory exception exists to bypass that requirement. The UK has not yet built a licensing framework that makes systematic licensing practical at scale. That gap between policy position and operational infrastructure is where UK compliance exposure currently lives.
Taken together, the Lords amendment and the copyright confirmation describe a government moving toward more interventionist AI regulation, not less, on both the safety side (criminal liability for chatbot operators) and the rights side (no training data safe harbor). That’s the opposite direction from the US framework released the same week, which proposed preempting state AI laws, affirmed training data as lawful, and put dispute resolution in courts rather than statutes.
What to watch: Commons debate on the Crime and Policing Bill amendment. If “unsafe” begins to acquire a working definition through debate, that will be the first signal of how broadly the criminal standard might apply. Also watch for whether the DSIT/DCMS report is followed by a consultation on a compulsory or voluntary licensing framework; that’s the missing infrastructure piece that would let developers actually comply with the licensing-first position.