A high-severity unauthenticated out-of-bounds read vulnerability (‘Bleeding Llama’) in Ollama allows any attacker with network access to the Ollama API port to extract arbitrary process memory from a running inference server, including model weights, system prompts, and inference context. Because no authentication is required, organizations running Ollama with the API exposed to the internet or to untrusted internal network segments face immediate risk of proprietary AI asset disclosure. No CVE ID has been formally assigned; CERT/CC advisory VU#518910 is the authoritative reference.
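Since exposure of the API port is the precondition for exploitation, a first triage step is simply to check whether a host answers unauthenticated Ollama API requests. The sketch below is illustrative, not part of the advisory: it assumes Ollama's documented default port 11434 and probes the `/api/tags` endpoint, which lists local models and requires no credentials in a default installation.

```python
import json
import urllib.error
import urllib.request


def ollama_api_exposed(host: str, port: int = 11434, timeout: float = 3.0) -> bool:
    """Return True if an unauthenticated Ollama API responds at host:port.

    Probes /api/tags, which in a default Ollama installation lists the
    locally available models without requiring any credentials.
    """
    url = f"http://{host}:{port}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = json.loads(resp.read().decode("utf-8"))
            # A genuine Ollama server answers with a JSON object
            # containing a "models" list.
            return isinstance(body, dict) and "models" in body
    except (urllib.error.URLError, ValueError, OSError):
        return False


if __name__ == "__main__":
    # Only probe hosts you are authorized to scan.
    for target in ("127.0.0.1",):
        print(target, "exposed:", ollama_api_exposed(target))
```

A `True` result indicates the unauthenticated API surface described above is reachable, not that the host is necessarily running a vulnerable version; version confirmation and patching still need to follow the CERT/CC advisory.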