Severity
CRITICAL
CVSS
9.5
Priority
0.508
Executive Summary
The threat group TeamPCP injected credential-stealing malware into LiteLLM versions 1.82.7 and 1.82.8, a widely deployed AI/LLM orchestration library downloaded approximately 3.4 million times daily, with estimated exposure across up to 500,000 devices. The malware targets cloud provider credentials (AWS, GCP, Azure), SSH keys, Kubernetes secrets, and cryptocurrency wallets, creating direct risk of cloud account takeover, data exfiltration, and infrastructure compromise. This attack is part of a documented, multi-target campaign by TeamPCP that has previously poisoned security tools including Aqua Security Trivy and Checkmarx KICS, indicating a deliberate effort to target DevOps and AI/ML pipelines at the tooling layer.
Technical Analysis
TeamPCP introduced a multi-stage infostealer into LiteLLM PyPI releases 1.82.7 and 1.82.8 via a supply chain compromise of the package maintainer or build pipeline (T1195.001).
The malicious payload executes obfuscated Python (T1059.006, T1027) to enumerate and exfiltrate credentials stored in files and environment variables (T1552.001, T1552.004), cloud service tokens (T1528), SSH private keys, Kubernetes secrets, and cryptocurrency wallet data (T1005).
It establishes persistence via a hidden systemd service (T1543.002) and exfiltrates data over the network (T1041).
Setuid/setgid abuse (T1548.001) and file system discovery (T1083) are also present in the chain. The compromised packages were live on PyPI and consumed by downstream tooling including Aqua Security Trivy, Aqua Security Docker images, and Checkmarx KICS, enabling cascading backdoor deployment (T1554, T1610) to any environment running those scanners in CI/CD pipelines. No CVE has been assigned. Relevant CWEs: CWE-506 (Embedded Malicious Code), CWE-494 (Download of Code Without Integrity Check), CWE-522 (Insufficiently Protected Credentials), CWE-798 (Hard-coded Credentials), CWE-912 (Hidden Functionality). Safe versions are those below 1.82.7 or above 1.82.8; organizations should verify the LiteLLM security advisory for the confirmed clean release. Source quality for this item is moderate (T3 sources only); technical details should be validated against the LiteLLM official advisory and Endor Labs research before acting.
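Given the version-specific nature of the compromise, a quick triage sketch (assuming `pip` is on PATH; substitute `pip3` or your virtualenv's interpreter as needed) can classify a host before deeper analysis:

```shell
# Classify the locally installed LiteLLM version against the known-bad
# releases (1.82.7 and 1.82.8). An empty version string means the package
# is not installed in this environment.
classify_litellm_version() {
  case "$1" in
    1.82.7|1.82.8) echo "AFFECTED" ;;
    "")            echo "NOT_INSTALLED" ;;
    *)             echo "OK" ;;
  esac
}

# Pull the installed version (empty if litellm or pip is absent).
installed="$(pip show litellm 2>/dev/null | awk '/^Version:/{print $2}')"
echo "litellm: $(classify_litellm_version "$installed")"
```

Run the same check inside each virtualenv and container image, not just on the host interpreter, since an unaffected system Python does not clear a poisoned venv.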
Action Checklist
Triage Priority:
IMMEDIATE
Escalate to external IR firm or law enforcement if credential misuse is confirmed in cloud audit logs (anomalous API calls, unauthorized resource creation, data exfiltration to external IPs) or if forensic analysis reveals evidence of malware persistence or lateral movement in production infrastructure.
Step 1, Immediate: Remove LiteLLM versions 1.82.7 and 1.82.8 from all environments. Upgrade to the patched version confirmed clean in the LiteLLM official security advisory. Block installation of the affected versions via package manager policy or private registry controls.
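While upgrades roll out, pip's constraints mechanism can keep the bad releases out of resolution entirely; a minimal sketch (the file path is illustrative):

```shell
# Interim guard: a pip constraints file excluding the compromised releases.
# Any pip invocation that honors PIP_CONSTRAINT will refuse to resolve
# litellm to 1.82.7 or 1.82.8.
cat > /tmp/litellm-block.txt <<'EOF'
# Exclude compromised LiteLLM releases (supply chain advisory)
litellm!=1.82.7,!=1.82.8
EOF
export PIP_CONSTRAINT=/tmp/litellm-block.txt
# pip install litellm    # now resolves only to unaffected versions
```

Setting `PIP_CONSTRAINT` in CI runner environment variables applies the block to every pipeline without editing individual requirements files.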
Containment
NIST 800-61r3 §3.2.2 (Containment)
NIST 800-53 SI-3 (Malware Protection)
NIST 800-53 SA-3 (System Development Life Cycle)
CIS 8.3 (Address Unauthorized Software)
Compensating Control
For teams without enterprise package management: (1) Query dependency manifests across all environments: `find . \( -name 'requirements.txt' -o -name 'poetry.lock' -o -name 'Pipfile.lock' \) -exec grep -Hn -E 'litellm.*1\.82\.[78]' {} +`. (2) Block the affected versions with a pip constraints file containing `litellm!=1.82.7,!=1.82.8`, referenced via the `PIP_CONSTRAINT` environment variable or pip's `-c` flag, and manually audit any locally cached wheels. (3) Document all affected systems in a spreadsheet and mandate manual removal with verification before redeployment.
Preserve Evidence
Before removing packages: (1) Capture pip freeze output and installed package metadata: `pip show litellm` and `pip freeze > /tmp/pre_removal_inventory.txt`. (2) Export Docker layer history for all images: `docker history <image_id>` and save Dockerfile build logs. (3) Extract Kubernetes pod spec history: `kubectl get pods -A -o yaml > /tmp/k8s_pods_pre_removal.yaml`. (4) Collect CI/CD pipeline execution logs showing package installation timestamps and sources. (5) Preserve Python .pyc bytecode and site-packages directory structure before cleanup for forensic analysis of code execution artifacts.
Step 2, Immediate: Rotate all cloud credentials (AWS IAM keys, GCP service account keys, Azure service principals), SSH keys, Kubernetes secrets, Slack/Discord tokens, and cryptocurrency wallet keys accessible from any system that had 1.82.7 or 1.82.8 installed or that ran Aqua Security Trivy or Checkmarx KICS recently.
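Rotation has to start from an accurate inventory. A sketch for listing on-disk credential material on a single affected host (the paths are common client defaults, not an exhaustive set; extend for your environment):

```shell
# List credential files commonly targeted by infostealers so each can be
# tracked through rotation. Takes a home directory; defaults to $HOME.
credential_files() {
  home="${1:-$HOME}"
  for f in "$home/.aws/credentials" \
           "$home/.config/gcloud/credentials.db" \
           "$home/.azure/msal_token_cache.json" \
           "$home/.kube/config" \
           "$home"/.ssh/id_*; do
    # Unmatched globs pass through as literal patterns; -e filters them out.
    [ -e "$f" ] && echo "$f"
  done
}
credential_files
```

Running this per user account (including service accounts with home directories) catches credentials that host-level secret scans often miss.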
Eradication
NIST 800-61r3 §3.2.3 (Eradication)
NIST 800-53 IA-4 (Identifier Management)
NIST 800-53 IA-5 (Authentication)
NIST 800-53 SC-7 (Boundary Protection)
CIS 6.1 (Establish and Maintain an Inventory of Sensitive Data)
CIS 6.2 (Address Unauthorized Access to Sensitive Data)
Compensating Control
For teams without automated credential rotation: (1) Manually generate new AWS IAM access keys and document old key IDs: `aws iam list-access-keys --user-name <username>`, then `aws iam create-access-key --user-name <username>`, and once dependent services are cut over, `aws iam delete-access-key --user-name <username> --access-key-id <old-key-id>`. (2) For GCP, create new service account keys: `gcloud iam service-accounts keys create new-key.json --iam-account=<sa-email>`. (3) For Azure, regenerate service principal credentials: `az ad sp credential reset --id <app-id>`. (4) For Kubernetes secrets, use `kubectl delete secret <secret-name>` and redeploy with new values. (5) For SSH keys, generate new key pairs on a clean isolated system, distribute via signed secure channel, then remove old public keys from authorized_keys files. (6) For cryptocurrency wallets, transfer all funds to new wallets generated offline.
Preserve Evidence
BEFORE credential rotation: (1) Capture current credential inventory: `aws iam list-users && aws iam list-access-keys` and `gcloud iam service-accounts list`. (2) Export all Kubernetes secrets: `kubectl get secrets -A -o yaml > /tmp/k8s_secrets_pre_rotation.yaml` (encrypted export for audit trail). (3) Extract SSH authorized_keys files: `find / -name 'authorized_keys' 2>/dev/null | xargs cat > /tmp/ssh_keys_pre_rotation.txt`. (4) Log all credential creation/modification timestamps from cloud audit logs before rotation begins: AWS CloudTrail, GCP Cloud Audit Logs, Azure Activity Log. (5) Capture process environment variables on affected hosts: `cat /proc/<pid>/environ | tr '\0' '\n' | grep -E 'AWS_|GOOGLE_|AZURE_' > /tmp/env_vars_pre_rotation.txt`. (6) Document current SSH key fingerprints: `ssh-keygen -l -f <key_file>`.
Step 3, Detection: Search CI/CD pipeline logs, container image build histories, and package manifests for installations of litellm==1.82.7 or litellm==1.82.8. Audit Aqua Trivy and Checkmarx KICS versions deployed in pipelines for exposure to the upstream compromise. Check for unexpected systemd service creation on Linux hosts that ran the affected package.
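The manifest sweep in this step can be sketched as one reusable function (paths and patterns below are illustrative defaults):

```shell
# Recursively scan dependency manifests for pins of the affected releases.
# Dots are escaped so "1.82.7" cannot match "1.8217", and the trailing
# boundary avoids matching e.g. 1.82.70.
scan_manifests() {
  root="${1:-.}"
  find "$root" -type f \( -name requirements.txt -o -name poetry.lock \
       -o -name Pipfile.lock -o -name pyproject.toml \) -print0 |
    xargs -0 -r grep -Hn -E 'litellm.*1\.82\.[78]($|[^0-9])' 2>/dev/null
}
# Usage: scan_manifests /path/to/checkouts
```

Point it at a directory of freshly cloned repositories rather than live workstations, so results reflect what pipelines would actually install.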
Detection & Analysis
NIST 800-61r3 §3.2.1 (Detection and Analysis)
NIST 800-53 SI-4 (Information System Monitoring)
NIST 800-53 AU-2 (Audit Events)
NIST 800-53 AU-3 (Content of Audit Records)
CIS 8.1 (Establish and Maintain Detailed Asset Inventory)
CIS 8.6 (Address Unauthorized Software)
Compensating Control
For teams without centralized logging: (1) Search CI/CD logs manually: `grep -r 'litellm' /var/log/jenkins /var/log/gitlab-runner 2>/dev/null | grep -E '1\.82\.[78]'` (adjust paths to where your runners actually log; GitHub Actions self-hosted runners write to the runner's `_diag` directory). (2) Query Docker build history locally: `docker history --no-trunc <image_id> | grep litellm`. (3) Parse package manifest files: `find . -type f \( -name 'requirements.txt' -o -name 'setup.py' -o -name 'pyproject.toml' \) -exec grep -l 'litellm' {} \; | xargs cat`. (4) Check systemd service files: `find /etc/systemd/system /usr/lib/systemd/system -name '*.service' -exec grep -l 'python\|pip\|litellm' {} \;`. (5) Search process startup records: `journalctl --no-pager | grep -i 'litellm\|systemd.*python' | head -100`. (6) Inspect the local pip cache for downloaded wheels: `pip cache list litellm` or `ls -la ~/.cache/pip/wheels`.
Preserve Evidence
Capture before analysis begins: (1) Full CI/CD pipeline execution logs with timestamps: `curl -H 'Authorization: token <token>' https://jenkins.example.com/job/<job>/api/json?tree=builds[*[timestamp,log]]` or equivalent for your platform. (2) Complete Docker build logs and layer metadata: `docker inspect <image_id>` and, for fresh builds, `docker build --progress=plain . 2>&1 | tee /tmp/docker_build_forensics.log`. (3) Container image registry metadata and push timestamps from artifact repository. (4) Systemd journal snapshot: `journalctl --no-pager -b > /tmp/systemd_journal_pre_analysis.txt`. (5) Process accounting data: `sa -u | sort -k2 -nr > /tmp/process_accounting.txt` (if enabled). (6) Python import hooks and site-packages directory listing with file modification times: `find /usr/lib/python*/site-packages/litellm* -type f -exec ls -la {} \;`.
Step 4, Assessment: Inventory all systems, Docker images, Kubernetes clusters, and cloud workloads that directly or transitively depend on the affected LiteLLM versions. Assess which environments had access to high-value credentials or secrets at time of exposure. Review cloud provider access logs (AWS CloudTrail, GCP Audit Logs, Azure Monitor) for anomalous API calls or credential use originating from affected hosts.
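Once cloud audit logs are exported, the anomalous-access review can start with simple text tooling; a sketch, assuming events were flattened to one JSON object per line (the field names are standard CloudTrail fields, the file path is illustrative):

```shell
# Tally source IPs behind sensitive credential-access API calls in an
# exported CloudTrail file (one JSON event per line). Deliberately jq-free
# so it runs on minimal forensic hosts; flatten nested {"Records":[...]}
# exports with jq first if needed.
suspicious_events() {
  grep -E '"eventName": *"(GetSecretValue|AssumeRole|CreateAccessKey)"' "$1" |
    grep -o '"sourceIPAddress": *"[^"]*"' |
    sort | uniq -c | sort -rn
}
# Usage: suspicious_events /tmp/cloudtrail_90days.json
```

High counts from IPs outside your known egress ranges are the first candidates for escalation under the triage criteria above.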
Detection & Analysis
NIST 800-61r3 §3.2.1 (Detection and Analysis)
NIST 800-53 RA-3 (Risk Assessment)
NIST 800-53 CA-7 (Continuous Monitoring)
NIST 800-53 SI-4 (Information System Monitoring)
CIS 1.1 (Establish and Maintain Detailed Asset Inventory)
CIS 3.1 (Configure Encryption for Data at Rest)
Compensating Control
For teams without enterprise asset management tools: (1) Build a manual dependency graph: `pip-audit | grep -A 5 'litellm'` and cross-reference with `pipdeptree`. (2) Scan Docker registries manually: `curl -s https://registry.example.com/v2/_catalog | jq -r '.repositories[]' | while read repo; do curl -s https://registry.example.com/v2/$repo/manifests/latest | grep -i litellm; done`. (3) Query Kubernetes for deployments referencing LiteLLM: `kubectl get pods,deployments -A -o jsonpath='{range .items[*]}{.metadata.namespace},{.metadata.name},{.spec.containers[*].image}{"\n"}{end}' | grep litellm`. (4) Pull cloud provider logs locally for the exposure window: `aws s3 cp s3://cloudtrail-bucket/AWSLogs/ /tmp/cloudtrail --recursive` (narrow with a date prefix; version strings do not appear in object names, so search log contents after download). (5) Extract credential access patterns: `zcat /tmp/cloudtrail/*.json.gz | jq '.Records[] | select(.eventName | test("GetSecretValue|AssumeRole|serviceAccountKeys")) | {sourceIPAddress, userAgent}'`.
Preserve Evidence
Capture before assessment analysis: (1) Complete asset inventory snapshot with installation dates: `sudo find / \( -name '*.pyc' -o -name '*.egg-info' \) 2>/dev/null | xargs stat --format='%y %n' | grep litellm > /tmp/asset_inventory.txt`. (2) Cloud audit log export covering the 90 days prior to disclosure: `aws cloudtrail lookup-events --start-time <ISO-8601> --output json > /tmp/cloudtrail_90days.json` (paginate; `lookup-events` only reaches back 90 days, so pull the trail's S3 bucket for complete coverage) and `gcloud logging read 'resource.type=k8s_cluster' --limit 10000 --format json > /tmp/gcp_audit_90days.json`. (3) Kubernetes RBAC audit logs: `kubectl logs -n kube-system -l component=kube-apiserver --tail 10000 | grep -i 'secret\|credential' > /tmp/k8s_rbac_audit.txt`. (4) Network segmentation and firewall rules showing egress paths from affected systems: `iptables -L -n -v > /tmp/firewall_rules.txt` and `aws ec2 describe-network-acls --output json > /tmp/aws_nacls.json`. (5) Memory dumps from still-online affected systems using a Linux-capable acquisition tool such as LiME (`insmod lime.ko "path=/tmp/affected_host_memdump.lime format=lime"`) or AVML.
Step 5, Communication: Notify development, DevOps, and cloud infrastructure teams of confirmed or suspected exposure. Escalate to incident response if credential exfiltration is confirmed or suspected. Brief leadership on supply chain risk scope.
Preparation
NIST 800-61r3 §3.1 (Preparation)
NIST 800-53 IR-1 (Incident Response Policy and Procedures)
NIST 800-53 IR-2 (Incident Response Training)
NIST 800-53 IR-4 (Incident Handling)
CIS 17.1 (Establish and Maintain an Incident Response Process)
Compensating Control
For teams without formal incident communication infrastructure: (1) Activate predefined communication tree using org chart: Email incident commander list first, then cascade to team leads with templated escalation message. (2) Use version-controlled incident playbook template stored in shared drive or Git repository with timestamp-dated copies. (3) Document all notifications in shared spreadsheet with recipient, timestamp, method (email/Slack/call), and acknowledgment status. (4) Schedule mandatory all-hands meeting within 4 hours of confirmation; record meeting and distribute transcript to absent stakeholders. (5) For executive briefing, use one-page summary with: systems affected count, credential types at risk, timeline of exposure, immediate actions taken, ongoing investigation status.
Preserve Evidence
Document communication chain: (1) Preserve all incident notification emails, Slack messages, and chat logs with timestamps, using your workspace's export tooling (e.g. Slack's workspace export or a channel-export utility), and store the output with the incident record. (2) Record meeting recordings and transcripts: store in secure location with access controls. (3) Maintain incident timeline log: create document with entry for each notification sent, recipient, acknowledgment timestamp, and any immediate response actions. (4) Capture stakeholder feedback and escalation requests: maintain running log of questions, concerns, and resource requests from teams. (5) Screenshot or export any status dashboard or war room materials used during communication phase.
Step 6, Long-term: Enforce cryptographic integrity verification (hash pinning, signed packages) for all PyPI dependencies in CI/CD pipelines. Implement dependency review gates (e.g., Dependabot, Endor Labs, Snyk) with alerting on new package versions before they enter pipelines. Treat security scanner tooling (Trivy, KICS, etc.) as high-trust attack surface requiring the same supply chain controls as production code.
Post-Incident
NIST 800-61r3 §3.2.4 (Post-Incident Activities)
NIST 800-53 SA-3 (System Development Life Cycle)
NIST 800-53 SA-4 (Acquisition Process)
NIST 800-53 SA-10 (Developer Configuration Management)
NIST 800-53 SI-7 (Software, Firmware, and Information Integrity)
CIS 4.3 (Address Unauthorized Software)
CIS 4.10 (Enforce Application Security Configuration Management)
Compensating Control
For teams without commercial SCA/SBOM tools: (1) Implement hash pinning in requirements.txt with `pip-tools`: `pip install pip-tools && pip-compile --generate-hashes`, which emits pins of the form `litellm==<patched-version> --hash=sha256:<hash>`. (2) Use free tools: `safety check` for vulnerability scanning (`pip install safety`), `pip-audit` for supply chain audits (`pip install pip-audit`), and Syft or `cyclonedx-py` for SBOM generation. (3) Create manual approval gate: require senior developer signature on any new package or version bump (documented in Git commit). (4) Implement build-time integrity checks: `python -m pip install --require-hashes -r requirements.txt` in CI/CD pipeline, fail build if hash mismatch. (5) For scanner tooling: vendor-lock scanner versions in separate requirements file with pinned hashes, rebuild container images for scanners quarterly, and scan the scanners themselves using the same pipeline.
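The build-time integrity check can be paired with an explicit deny-list gate; a sketch (the file names are illustrative) that fails a CI job when the resolved dependency set still pins a known-bad release:

```shell
# CI gate: fail the build when the resolved dependency set pins a
# compromised LiteLLM release. Feed it `pip freeze` output or a compiled
# requirements file.
gate_dependencies() {
  if grep -qE '^litellm==1\.82\.[78]([^0-9]|$)' "$1"; then
    echo "BLOCKED: compromised litellm release pinned in $1" >&2
    return 1
  fi
  echo "PASS: no compromised litellm pin in $1"
}
# Usage in CI:
#   pip freeze > /tmp/resolved.txt && gate_dependencies /tmp/resolved.txt
```

A non-zero return fails the pipeline step in Jenkins, GitLab CI, and GitHub Actions alike, which makes the gate portable across runners.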
Preserve Evidence
Establish post-incident monitoring baseline: (1) Create dependency baseline snapshot: `pip freeze > /tmp/baseline_dependencies.txt` with SHA256 hashes, store in version control with signed commit. (2) Export CI/CD pipeline configuration: `git log --all --oneline --decorate > /tmp/pipeline_config_history.txt` and document all package installation steps. (3) Capture current container image SBOMs: use `syft <image_id> -o json > /tmp/sbom_post_incident.json`. (4) Establish baseline for scanner tool versions: `trivy --version > /tmp/trivy_baseline.txt`, `kics --version > /tmp/kics_baseline.txt`. (5) Document security gate implementation: capture CI/CD configuration files showing dependency review steps, approval workflows, and notification mechanisms. (6) Create ongoing monitoring dashboard: set up alerts for new versions of high-risk packages (LiteLLM, Trivy, KICS) with escalation to security team within 24 hours of release.
Recovery Guidance
Post-containment recovery: (1) Validate that all affected package versions are removed and patched versions deployed via automated verification scan (e.g., SBOM re-scan confirming litellm >= 1.82.9). (2) Monitor cloud provider access logs continuously for 30 days post-recovery for any residual credential misuse or indicators of compromised accounts, with alerting on new IAM key creation from affected source IPs. (3) Conduct post-incident review within 7 days to document root cause (supply chain gap), remediation effectiveness, and update incident response plan and security architecture documentation to reflect lessons learned.
Key Forensic Artifacts
Python site-packages litellm directory tree with file timestamps and hashes (find /usr/lib/python*/site-packages/litellm* -type f -exec sha256sum {} \;)
CI/CD pipeline build logs with package installation commands, timestamps, and download sources (Jenkins, GitLab CI, GitHub Actions logs)
Docker image layer history and Dockerfile source code showing litellm installation (docker history --no-trunc, Dockerfile)
Cloud audit logs: AWS CloudTrail, GCP Cloud Audit Logs, Azure Activity Log covering 90 days pre-disclosure with focus on credential access, API calls from affected compute instances, and service account key creation events
Kubernetes API server audit logs and secret access events (requires audit logging enabled on the kube-apiserver via --audit-log-path and --audit-policy-file, with retention governed by --audit-log-maxage; filter events for secret/credential access)
Detection Guidance
Package presence: Search for litellm==1.82.7 or litellm==1.82.8 in pip freeze output, requirements.txt, pyproject.toml, poetry.lock, and Pipfile.lock across all repositories and deployed environments.
Container images: Scan all Docker images built after the approximate compromise window for the affected package using tools such as Syft or Grype (verify those tools are themselves running clean versions).
Systemd persistence: On Linux hosts, audit /etc/systemd/system/ and /usr/lib/systemd/system/ for recently created or modified service units that are not part of known baselines (T1543.002 indicator).
Credential access: Review AWS CloudTrail for GetSecretValue, ListBuckets, DescribeInstances, or AssumeRole calls from unexpected source IPs or at unusual times from hosts that ran the affected package. Apply equivalent review to GCP Audit Logs and Azure Activity Logs.
SSH key enumeration: Monitor for processes reading ~/.ssh/id_* or /etc/ssh/ssh_host_* outside of normal SSH daemon activity.
Network exfiltration: Look for outbound connections to non-baseline external IPs or domains from Python processes or containers running the affected package, particularly over ports 443 or 80 (T1041).
Kubernetes: Audit secret access logs (kube-apiserver audit logs) for unexpected reads of namespace secrets.
Note: specific IOC values (C2 IPs, domains, file hashes) were not available in source data at the time this item was generated; monitor Endor Labs, Snyk, and OX Security advisories for updated IOC releases.
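For the systemd persistence indicator, a baseline-diff sketch in bash (the baseline path is illustrative; take the baseline from a known-clean host or from backups):

```shell
# Surface systemd unit files created since a known-good baseline
# (T1543.002 indicator).
list_units() {
  find "$@" -type f -name '*.service' 2>/dev/null | sort
}
new_units() {
  baseline="$1"; shift
  # Lines present in the current listing but absent from the baseline.
  comm -13 "$baseline" <(list_units "$@")
}
# Baseline once:
#   list_units /etc/systemd/system /usr/lib/systemd/system > /root/unit_baseline.txt
# Audit later:
#   new_units /root/unit_baseline.txt /etc/systemd/system /usr/lib/systemd/system
```

Any unit surfaced this way should be checked against change management records before being treated as malicious, since legitimate deployments also add services.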
Indicators of Compromise (2)
| Type | Value | Context | Confidence |
| --- | --- | --- | --- |
| PACKAGE_VERSION | litellm==1.82.7 | Malicious PyPI release containing TeamPCP infostealer; remove and replace immediately | high |
| PACKAGE_VERSION | litellm==1.82.8 | Malicious PyPI release containing TeamPCP infostealer; remove and replace immediately | high |
Compliance Framework Mappings
T1552.001
T1195.001
T1059.006
T1027
T1083
T1078
+8
SI-3
SI-4
AC-2
AC-6
IA-2
IA-5
+5
A04:2021
A07:2021
A08:2021
5.2
2.5
2.6
16.10
6.3
15.1
164.308(a)(5)(ii)(D)
164.312(d)
MITRE ATT&CK Mapping
T1552.001
Credentials In Files
credential-access
T1195.001
Compromise Software Dependencies and Development Tools
initial-access
T1027
Obfuscated Files or Information
defense-evasion
T1083
File and Directory Discovery
discovery
T1078
Valid Accounts
defense-evasion
T1528
Steal Application Access Token
credential-access
T1610
Deploy Container
defense-evasion
T1041
Exfiltration Over C2 Channel
exfiltration
T1548.001
Setuid and Setgid
privilege-escalation
T1005
Data from Local System
collection
T1554
Compromise Host Software Binary
persistence
Guidance Disclaimer
The analysis, framework mappings, and incident response recommendations in this intelligence
item are derived from established industry standards including NIST SP 800-61, NIST SP 800-53,
CIS Controls v8, MITRE ATT&CK, and other recognized frameworks. This content is provided
as supplemental intelligence guidance only and does not constitute professional incident response
services. Organizations should adapt all recommendations to their specific environment, risk
tolerance, and regulatory requirements. This material is not a substitute for your organization's
official incident response plan, legal counsel, or qualified security practitioners.