LiteLLM Cuts Ties With Delve After Credential-Stealing Malware Attack and Fake Compliance Scandal

LiteLLM, the open-source AI gateway that millions of developers use to route API calls across OpenAI, Anthropic, Google, and other model providers, is cutting ties with compliance vendor Delve and re-certifying its security controls with Vanta and an independent third-party auditor. The announcement from CTO Ishaan Jaffer follows a bruising week: LiteLLM's open-source version was hit by credential-stealing malware that exposed API keys for AI model providers across enterprise teams, and Delve — the startup responsible for certifying LiteLLM's compliance posture — is itself embroiled in a separate scandal over allegedly fabricated compliance data and rubber-stamp auditing practices. A whistleblower released evidence over the weekend; Delve's founder denied the claims before additional documentation surfaced.

The real-world impact is significant because of where LiteLLM sits in the AI stack. Teams running Claude Code, Cursor, and GitHub Copilot at enterprise scale often route model calls through LiteLLM as an abstraction and cost-management layer. A credential theft here doesn't just affect one model provider — it potentially exposes keys for GPT, Claude, and Gemini simultaneously across every downstream application. The incident lands at a moment when AI security is becoming a boardroom-level concern, and it highlights a structural vulnerability: the routing and infrastructure layer connecting enterprises to foundation models has grown critical faster than its security practices have matured.

Read the full article at TechCrunch →