If exploited, this vulnerability lets an attacker steal the cloud credentials that your AI infrastructure uses to access other systems, potentially gaining access to databases, storage buckets, and internal services well beyond the LMDeploy application itself. This creates risk of data exfiltration, unauthorized access to sensitive workloads, and downstream regulatory exposure if personal or regulated data is reachable from the compromised environment. Because this vulnerability is actively exploited and CISA has listed it on the Known Exploited Vulnerabilities catalog, the window to patch before exploitation is shorter than for a typical advisory.
You Are Affected If
You run internlm/lmdeploy at a version prior to 0.12.3 in any environment
Your LMDeploy deployment accepts externally supplied or user-controlled image URLs as input to the vision-language module
The LMDeploy host runs in a cloud environment with access to instance metadata services (AWS, Azure, GCP, Alibaba Cloud, or similar)
Outbound HTTP requests from the LMDeploy host are not filtered or restricted to approved destinations
Cloud IAM roles or instance profiles attached to the LMDeploy host carry permissions beyond the minimum required for the application
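As a defense-in-depth measure alongside patching, hosts that fetch user-controlled image URLs can validate the destination before issuing the request. The sketch below is a generic SSRF guard, not LMDeploy's actual patch: it resolves the supplied hostname and rejects any URL that lands on link-local, private, loopback, or otherwise reserved ranges, which covers the cloud metadata endpoint 169.254.169.254. Function and constant names are illustrative.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_image_url(url: str) -> bool:
    """Hypothetical SSRF guard for user-supplied image URLs.

    Rejects non-HTTP(S) schemes and any destination that resolves to a
    link-local, private, loopback, reserved, or multicast address --
    including the 169.254.169.254 instance-metadata endpoint.
    """
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname
    if host is None:
        return False
    try:
        # Resolve every address the hostname maps to; an attacker may
        # point a DNS name at the metadata service.
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if (ip.is_link_local or ip.is_private or ip.is_loopback
                or ip.is_reserved or ip.is_multicast):
            return False
    return True
```

Note that resolve-then-fetch checks remain vulnerable to DNS rebinding between the check and the request; pinning the resolved IP for the actual fetch, or enforcing an egress allowlist at the network layer, closes that gap.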
Board Talking Points
A confirmed, actively exploited vulnerability in our AI model-serving software can allow attackers to steal cloud access credentials and reach internal systems.
Security teams should upgrade LMDeploy to version 0.12.3 or later and rotate exposed credentials immediately, with completion within 24 to 48 hours given active exploitation.
Without remediation, attackers who exploit this vulnerability may move laterally through cloud infrastructure, potentially accessing sensitive data far beyond the AI system itself.
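To support the 24-to-48-hour remediation deadline, fleets can be triaged with a simple version comparison against the fixed release. The helper below is an illustrative sketch: it compares dotted numeric versions with a tuple comparison rather than a full PEP 440 parser, so pre-release suffixes such as "0.12.3rc1" are not handled.

```python
def is_vulnerable_version(installed: str, fixed: str = "0.12.3") -> bool:
    """Return True if `installed` is strictly below the fixed release.

    Illustrative triage helper: assumes plain dotted numeric versions
    (e.g. "0.12.2"); pre-release tags would raise ValueError.
    """
    def parse(v: str) -> tuple:
        return tuple(int(p) for p in v.split(".")[:3])
    return parse(installed) < parse(fixed)
```

In practice the installed version can be read with `importlib.metadata.version("lmdeploy")` on each host and fed to this check.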