Google pulled “What People Suggest” from its search product on or around March 16, 2026. The Guardian reported the removal the same day, describing a feature that aggregated health-related advice from non-professional users and surfaced it in AI search results.
A Google spokesperson, quoted by The Guardian, said the removal was not driven by quality or safety concerns but by an effort to help people find reliable health information. That framing is worth reading carefully. The gap between the stated reason and what the feature actually did, aggregate amateur health advice into AI search results, is not subtle. The Guardian’s own prior reporting found that Google’s AI Overviews had provided inaccurate health information in some cases; that context does not contradict the spokesperson’s statement, but it sits uneasily alongside it.
Google had previously positioned “What People Suggest” as evidence that AI could transform health outcomes at scale. Removing it quietly, without attributing the decision to safety, is a product choice with a story behind it.
For product teams and compliance professionals, the practical signal is straightforward: crowdsourced, user-generated content surfaced through AI in health contexts carries risks that official removal framing often understates. Building a feature like this and then discontinuing it without a detailed post-mortem is precisely the pattern that responsible AI deployment frameworks are designed to prevent.