LiteLLM Python Library Poisoned — Do Not Update
Today, let’s set model performance aside and talk about a critical security issue instead. Just yesterday, something major happened in the AI community: the widely used litellm library, which many projects rely on daily, was poisoned in a supply chain attack!
LiteLLM is an extremely popular and critical open-source Python library in AI development. It serves as the “universal joint” or “universal adapter” of the large language model (LLM) era. Its core function is to provide a unified interface (fully compatible with the OpenAI API format), allowing you to seamlessly call almost all major large models on the market (supporting over 100 different LLMs).
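To make that “universal adapter” idea concrete, here is a minimal sketch of litellm’s unified, OpenAI-compatible interface. The model names are just illustrative examples, and the actual network call is only attempted when an `OPENAI_API_KEY` happens to be set:

```python
import os

# OpenAI-format chat messages: the same structure is accepted for
# every provider litellm supports.
messages = [{"role": "user", "content": "Summarize this release note."}]

def build_request(model: str, messages: list) -> dict:
    # Only the model string changes when you switch providers;
    # "gpt-4o-mini" and "claude-3-haiku-20240307" are example names.
    return {"model": model, "messages": messages}

openai_request = build_request("gpt-4o-mini", messages)
anthropic_request = build_request("claude-3-haiku-20240307", messages)

if os.environ.get("OPENAI_API_KEY"):
    # The same completion() call works for any supported model.
    from litellm import completion
    response = completion(**openai_request)
    print(response.choices[0].message.content)
```

This single-call-shape design is exactly why litellm spreads so widely as a transitive dependency, and why a compromised release has such a large blast radius.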
Because of its convenience, many AI applications, scaffolding tools, and development frameworks (such as DSPy, certain Cursor MCP plugins, and various Agent frameworks) quietly depend on it under the hood to handle multi-model calls. That is why its poisoning is like a water treatment plant being contaminated: every project that indirectly “drank the water” is affected.
Potentially Exposed Information
LiteLLM is an important dependency. I checked, and I do have it installed. The current version is still safe.
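If you want to run the same check on your own environment, here is a small standard-library-only sketch that reports the installed litellm version, and handles the case where the package is not installed at all:

```python
# Check whether a package is installed and which version you have,
# using only the standard library.
from importlib.metadata import version, PackageNotFoundError

def installed_version(package: str):
    """Return the installed version string of `package`, or None if absent."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

print(installed_version("litellm") or "litellm is not installed")
```

Until the all-clear is given, it is also prudent to pin the version you have verified (e.g. `litellm==<your-known-good-version>` in requirements.txt) rather than letting an unconstrained install pull the latest release.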