Two announcements. Opposite directions. One regulatory posture.
On March 12, 2026, the UK government announced revisions to the National Security and Investment Act's AI screening rules. The change removes off-the-shelf AI (commercial software used as a tool, neither developed nor modified by the acquirer) from the list of activities that trigger mandatory notification to the government. Organizations that previously filed NSIA notifications because they were deploying a commercial AI product may no longer be required to do so.
The revised rules refocus NSIA screening on firms that develop AI systems or make significant modifications to advanced AI, the activities the government considers genuine national security concerns. Deploying a vendor's product is no longer treated the same as building or materially changing it.
That is a deregulatory move. The chatbot announcement is not.
According to legal reporting from Fladgate, the UK government is pushing to tighten regulation of AI chatbots to address gaps in the Online Safety Act 2023. The specific mechanism is still being developed. Legal commentators suggest the proposed reforms could expose developers to substantial fines or service interruptions where chatbots are found to expose children to illegal or dangerous content, though the precise enforcement details and implementation timeline have not been publicly confirmed.
The pattern here is deliberate. The UK is removing regulatory friction from AI investment and acquisition while building enforcement capability for AI systems that interact directly with vulnerable users. Two problems, two different tools.
Compliance teams should review their current NSIA notification obligations against the revised rules now. The NSIA change is enacted, not proposed.