Google API keys used to be low-risk. Leak one, and the worst case was usually a few unauthorized geocoding requests. Gemini changed the rules.
Truffle Security reported that 2,863 live Google API keys, found exposed in public code repositories via the November 2025 Common Crawl dataset, had silently gained access to Google's Gemini AI models. The keys' owners never opted in. When Google added the Generative Language API to its platform, existing API keys with no API restriction picked up the new capability by default, which Truffle Security classified as CWE-1188 (Insecure Default) and CWE-269 (Improper Privilege Management).
Google’s initial response didn’t help. When Truffle Security filed the report on November 21, Google dismissed it as “Intended Behavior.” It took until December 2 for the company to reclassify it as a bug, and January 13 to escalate to Tier 1 priority. As of the 90-day disclosure window ending February 19, the root-cause fix remained in progress. Google confirmed awareness and said it had “implemented proactive blocking measures,” but the underlying design decision (API keys inheriting new service access automatically) was unresolved.
The practical risk sits at the intersection of two problems. Organizations that treated Google API keys as non-sensitive (a Maps key embedded in a web page is not a secret by design) now have Gemini access exposed in public repositories. And anyone holding one of those keys can potentially call Gemini to generate content, or simply run up usage charges billed to the key owner's project.
For practitioners: inventory every Google API key in your codebases, including git history, and treat each one as sensitive regardless of the service it was issued for. Rotate any key that has been exposed, client-side keys first, since those were never secret to begin with. Check the Google Cloud Console for unexpected Generative Language API activity on each project. Then apply API restrictions so each key can only call the APIs it actually needs (Cloud Console Credentials page, or `gcloud services api-keys update KEY_ID --api-target=service=...`): a key with no API restriction can call any API enabled on its project, which is exactly how these keys inherited Gemini access.
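One way to carry out the first two checks, as an added example rather than Truffle Security's or Google's tooling: scan source text for the Google API key format (the `AIza` prefix followed by 35 URL-safe characters is the well-known pattern secret scanners use), then probe each candidate against the read-only Gemini model-listing endpoint, which generates nothing and so is a safe capability check. The sample source, the key in it, the response classification labels, and the treatment of 403 as "blocked" are assumptions made for the example; only probe keys you own or are authorized to test.

```python
"""Find Google API keys in source text and check whether each can reach the
Generative Language (Gemini) API.  Added example, not Truffle Security's or
Google's tooling.  Only probe keys you own or are authorized to test."""
from __future__ import annotations

import json
import re
import urllib.error
import urllib.request
from pathlib import Path
from typing import Callable

# Google API key format: "AIza" followed by 35 URL-safe characters (39 total).
GOOGLE_API_KEY_RE = re.compile(r"\bAIza[0-9A-Za-z_-]{35}\b")

# Read-only Gemini endpoint: lists models and generates no content, so it is a
# safe capability probe.  Assumption: HTTP 200 here means the key can call Gemini.
MODELS_URL = "https://generativelanguage.googleapis.com/v1beta/models?key={key}"


def find_keys(text: str) -> list[str]:
    """Return unique candidate keys in order of first appearance."""
    seen: dict[str, None] = {}
    for m in GOOGLE_API_KEY_RE.finditer(text):
        seen.setdefault(m.group(0), None)
    return list(seen)


def scan_tree(root: Path,
              suffixes=(".js", ".html", ".py", ".json", ".yaml", ".yml")) -> dict[str, list[str]]:
    """Map each candidate key to the files under `root` it appears in."""
    hits: dict[str, list[str]] = {}
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        if path.suffix not in suffixes and not path.name.startswith(".env"):
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for key in find_keys(text):
            hits.setdefault(key, []).append(str(path))
    return hits


def classify(status: int, body: str) -> str:
    """Interpret the probe response.  Google's errors use the standard envelope
    {"error": {"code": ..., "message": ..., "status": ...}}; labels are ours."""
    if status == 200:
        return "gemini-reachable"   # key can list models: rotate and restrict it
    try:
        err = json.loads(body).get("error", {})
    except json.JSONDecodeError:
        err = {}
    if status == 400 and err.get("status") == "INVALID_ARGUMENT":
        return "invalid-key"        # not a live key, or already revoked
    if status == 403:
        return "blocked"            # API restriction, API not enabled, or Google-side blocking
    if status == 429:
        return "rate-limited"       # inconclusive; retry later
    return f"unknown-{status}"


def http_get(url: str) -> tuple[int, str]:
    """Default transport; the test swaps it out so nothing goes over the network."""
    req = urllib.request.Request(url, headers={"User-Agent": "gemini-key-audit/1.0"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status, resp.read().decode("utf-8", "replace")
    except urllib.error.HTTPError as e:
        return e.code, e.read().decode("utf-8", "replace")


def probe_key(key: str, fetch: Callable[[str], tuple[int, str]] = http_get) -> str:
    """Probe one key and return its classification."""
    return classify(*fetch(MODELS_URL.format(key=key)))


# Constructed sample: a fake "leaked" front-end snippet.  The key is made up.
SAMPLE_SOURCE = """
const map = new google.maps.Map(el, { key: "AIzaSyA1b2C3d4E5f6G7h8I9j0K1l2M3n4O5p6r" });
// same key again elsewhere, plus a string that is not a key
fetch(`https://maps.googleapis.com/maps/api/js?key=AIzaSyA1b2C3d4E5f6G7h8I9j0K1l2M3n4O5p6r`);
const notAKey = "AIzaTooShort";
"""

if __name__ == "__main__":
    for key in find_keys(SAMPLE_SOURCE):
        print(key, "->", probe_key(key))
```

Run it against a real checkout with `scan_tree(Path("."))` and probe each key it returns; anything that comes back `gemini-reachable` should be rotated and then restricted to the APIs it was issued for, and a `blocked` result still warrants a restriction check, since it may only reflect Google's interim blocking rather than a restriction you configured.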
Source: Truffle Security | BleepingComputer | Feb 27, 2026