LiteLLM PyPI Supply Chain Attack Triggers Google ADK Security Advisory — Mercor Reports 4TB Breach
A supply chain attack targeting LiteLLM — the lightweight Python library widely used as an LLM routing layer across the AI framework ecosystem — has triggered a cascade of security advisories and at least one confirmed major data breach. Unauthorized infostealer code was embedded in LiteLLM versions 1.82.7 and 1.82.8 on PyPI, first identified on March 24. Google's Agent Development Kit team responded with an urgent advisory: any developer using ADK Python with the eval or extensions extras should update immediately. The malware used a command-and-control domain that spoofed LiteLLM's own infrastructure — models[.]litellm.cloud — making detection in network logs significantly harder than in a typical supply chain compromise.
The blast radius extends well beyond the LiteLLM library itself. AI hiring startup Mercor has confirmed a breach traced to the malicious dependency, with the attackers claiming 4TB of exfiltrated data, including source code and production databases. LiteLLM's reach through the AI framework stack makes this a transitive trust failure: LangChain, AutoGen, DSPy, Haystack, and dozens of other frameworks route LLM calls through LiteLLM as a dependency, meaning any system running the compromised extras was potentially exposed regardless of which higher-level framework was in use.
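To gauge that transitive exposure in a given environment, a stdlib-only sketch can enumerate which installed packages declare a dependency on litellm. This is a generic metadata walk, not guidance from any advisory; the function names are illustrative.

```python
# Sketch: list installed distributions that declare a dependency on litellm,
# to map transitive exposure in the current environment. Standard library only.
from importlib import metadata


def req_name(req: str) -> str:
    """Extract the bare distribution name from a requirement string,
    e.g. "litellm>=1.0; extra == 'eval'" -> "litellm"."""
    name = req.split(";")[0]
    for sep in ("==", ">=", "<=", "~=", "!=", ">", "<", "[", "("):
        name = name.split(sep)[0]
    return name.strip().lower()


def packages_depending_on(target: str) -> list[str]:
    """Return names of installed distributions whose requirements mention target."""
    dependents = set()
    for dist in metadata.distributions():
        for req in (dist.requires or []):
            if req_name(req) == target.lower():
                dependents.add(dist.metadata["Name"])
                break
    return sorted(dependents)
```

Running `packages_depending_on("litellm")` in an affected virtualenv would list every framework (LangChain, AutoGen, etc.) pulling LiteLLM in transitively, which is a faster starting point than reading lock files by hand.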
If your environment includes LiteLLM, the immediate action is straightforward: run pip list | grep litellm and confirm you are not on versions 1.82.7 or 1.82.8. Update to the latest patched release, audit your network logs for traffic to models[.]litellm.cloud, and review your dependency lock files for any transitive pulls of the affected versions. This incident is a sharp reminder that trust in a complex dependency graph can unravel at any layer — and that the LLM routing layer is now critical enough infrastructure to warrant the same scrutiny as a database client or auth library.
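Those checks can be sketched in a few lines of stdlib Python. The version numbers and C2 domain come from the advisory above; the requirements-style lock format and the function names are assumptions for illustration.

```python
# Sketch: flag the compromised LiteLLM releases (1.82.7, 1.82.8) whether
# installed in the current environment or pinned in a requirements-style
# lock file, and surface log lines referencing the spoofed C2 domain.
import re
from importlib import metadata

COMPROMISED = {"1.82.7", "1.82.8"}   # versions named in the advisory
C2_DOMAIN = "models.litellm.cloud"    # defanged in prose as models[.]litellm.cloud


def installed_litellm_is_compromised() -> bool:
    """True if the litellm installed in this environment is a bad version."""
    try:
        return metadata.version("litellm") in COMPROMISED
    except metadata.PackageNotFoundError:
        return False


def lockfile_pins_compromised(text: str) -> set[str]:
    """Return any compromised versions pinned in requirements-style text
    (assumes "litellm==X.Y.Z" pins; adapt for poetry/uv lock formats)."""
    hits = set()
    for m in re.finditer(r"^litellm(?:\[[^\]]*\])?==([\w.]+)", text, re.MULTILINE):
        if m.group(1) in COMPROMISED:
            hits.add(m.group(1))
    return hits


def log_lines_hitting_c2(lines) -> list[str]:
    """Filter network/proxy log lines that mention the spoofed C2 domain."""
    return [ln for ln in lines if C2_DOMAIN in ln]
```

A hit from any of these is a trigger for incident response, not just an upgrade: the infostealer payload means credentials and tokens on affected hosts should be treated as exposed.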