LiteLLM has been compromised by a malicious supply chain attack. Vincent Groves and Vineet Goyal explain how SAP LeanIX mitigated the attack, and how you can do the same.
On 24.03.2026, LiteLLM was compromised via a supply chain attack. Malicious versions silently harvested cloud credentials, SSH keys, and CI/CD secrets from every environment running them.
LiteLLM is the orchestration proxy sitting inside a huge share of enterprise AI stacks. It has 97 million monthly downloads, and is embedded in CrewAI, LangChain, and MLflow.
The attack was discovered, not by a security team, but by a developer whose machine started behaving strangely. It was the third such attack in six days.
The enterprises that contained their exposure quickly weren't faster because of better SecOps. They could answer one question most can't: which of our AI agents use LiteLLM, and what do those agents have access to?
This isn't new. Cloud-native development gave us the same software supply chain problem: layers of libraries obscuring real risk until it was too late, as Log4Shell demonstrated.
Regulators responded by mandating software composition transparency: the EU's Cyber Resilience Act, the US Executive Order on Cybersecurity, and DORA for banking all require SBOM as a baseline.
Now it's harder. AI-assisted coding pulls in dependencies silently, agents layer on top of orchestration frameworks like LiteLLM, and auditing the full stack is exponentially harder when code is generated automatically across decentralised teams.
Same problem. Far greater velocity.
The real risk is business context
The technical response to a compromised library is straightforward: identify affected versions, rotate credentials, and patch.
The hard part is understanding what you're actually dealing with. A LiteLLM dependency in a developer's experimental chatbot is a nuisance, but the same dependency in an AI application powering your customer portal, or the agent automating your procurement approvals with access to ERP credentials and production configs is a business-critical incident.
The blast radius isn't determined by the library, it's determined by what the application or agent does and what it can reach. That's the information most enterprises don't have at the moment they need it most.
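The first technical step, finding which services ship an affected version, can be sketched against CycloneDX-style SBOM files. A minimal sketch; the compromised version numbers and the `sboms/` directory layout here are illustrative assumptions, not details from the actual advisory:

```python
import json
from pathlib import Path

# Hypothetical compromised versions -- the real advisory would list these.
COMPROMISED = {"1.61.7", "1.61.8"}

def affected_components(sbom_path):
    """Yield (name, version) for compromised litellm entries in a CycloneDX SBOM."""
    sbom = json.loads(Path(sbom_path).read_text())
    for comp in sbom.get("components", []):
        if comp.get("name") == "litellm" and comp.get("version") in COMPROMISED:
            yield comp["name"], comp["version"]

def scan(root="sboms"):
    """Scan one SBOM file per service and map service name -> compromised entries."""
    hits = {}
    for path in Path(root).glob("*.json"):
        found = list(affected_components(path))
        if found:
            hits[path.stem] = found
    return hits
```

A scan like this answers "which versions, where" but nothing more; the business-context question remains open.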
How SAP LeanIX closes the gap
The difference between a contained incident and a days-long investigation comes down to the quality of your inventory at the moment the alert fires. Without a structured SBOM inventory, the response is a broadcast:

- advisories go out
- a scan script is shared in a channel
- every team runs it themselves across their own environments
Different groups handle different infrastructure layers independently. No-one has a complete picture of which services are affected, what those services do, or how to prioritise.
Thorough on paper, blind in practice. With SAP LeanIX, the workflow looks different. Your SBOM data, continuously ingested from build pipelines, is linked to microservice fact sheets that carry business context:

- what the service does
- which process it supports
- who owns it
LiteLLM is part of the SAP LeanIX reference catalog, meaning it's automatically recognised and mapped as an IT component the moment it appears in an SBOM. No manual configuration required.
The enterprise architect points an AI tool, like Claude, at the incident and their SAP LeanIX workspace via the MCP server with a single prompt:
"Check this incident — run an analysis against the SBOM data in my workspace. Are we affected, and if so, where?"
The AI tool queries the full application inventory in parallel, cross-references SBOM components against the compromised LiteLLM versions, and surfaces which services are exposed, and which teams own them. More importantly, it answers the business question: is the affected service running under a customer-facing application, a financial process, or a mission-critical workflow?
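The prioritisation step amounts to joining library hits against the business-context inventory. A minimal sketch, assuming a hypothetical in-memory inventory keyed by service name; in practice this context lives in SAP LeanIX fact sheets, not in code:

```python
# Hypothetical inventory records -- illustrative, not a real workspace.
INVENTORY = {
    "checkout-api": {"owner": "payments-team", "criticality": "customer-facing"},
    "procurement-agent": {"owner": "erp-team", "criticality": "mission-critical"},
    "demo-chatbot": {"owner": "labs", "criticality": "experimental"},
}

ESCALATE = {"customer-facing", "mission-critical"}

def prioritise(affected_services):
    """Split affected services into escalation and routine-cleanup buckets."""
    urgent, routine = [], []
    for svc in affected_services:
        meta = INVENTORY.get(svc, {"owner": "unknown", "criticality": "unknown"})
        bucket = urgent if meta["criticality"] in ESCALATE else routine
        bucket.append((svc, meta["owner"]))
    return urgent, routine
```

The `urgent` bucket is the escalation list; everything else can be scheduled as routine cleanup.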
That determines whether you're dealing with a low-priority cleanup or an incident that needs executive escalation in the next hour. From there, the remediation plan itself can become an input to coding agents.
Each affected service, its owner, and the required fix are known, so instead of broadcasting instructions and waiting, you point a coding agent at the repository with the remediation plan already scoped: upgrade the dependency, validate, and open a PR.
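Scoping that plan for a coding agent can be as simple as emitting one task per affected service. A sketch with hypothetical fields; the actual task schema depends on the agent you use, and the fixed version here is a placeholder, not the real patched release:

```python
def remediation_tasks(hits, owners, fixed_version="x.y.z"):
    """Turn scan hits into per-repo tasks a coding agent can execute.

    `hits` maps service -> [(library, bad_version)]; `owners` maps
    service -> owning team. Both would come from the SBOM inventory.
    """
    tasks = []
    for service, components in hits.items():
        for name, bad_version in components:
            tasks.append({
                "repo": service,
                "owner": owners.get(service, "unknown"),
                "action": f"bump {name} {bad_version} -> {fixed_version}",
                "steps": ["upgrade dependency", "run tests", "open PR"],
            })
    return tasks
```

Each task carries everything the agent needs, so no broadcast and no waiting on manual coordination.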
The human role shifts from co-ordination to review. That's the AI-native incident response loop running in minutes rather than days:

- discover
- contextualise
- remediate
- verify
As PRs are merged, updated SBOMs flow back into SAP LeanIX automatically. Enterprise architects and stakeholders see live remediation progress without status meetings or manual tracking: a single, trusted source that reflects the actual state of your landscape.
That's the compounding value: a quality-assured inventory that ties business context to low-level library data, queried by an agent that can reason across both simultaneously.
SAP LeanIX visibility doesn't stop at libraries, however. SAP LeanIX lets you discover and inventory every AI agent running across your organization, including which orchestration frameworks (like LiteLLM) they depend on, which platforms host them (SAP, Microsoft Entra, Google), and who owns them.
When an incident like this hits, you can query the agent inventory alongside SBOM data to understand not just which services use LiteLLM, but which agents use it and what business processes those agents touch.
Where do you stand?
When the next incident hits, five questions will define your response:

- How are you currently managing supply chain incidents across your AI application and agent landscape?
- How confident are you in your architectural understanding of where a compromised component can reach into your business?
- How fast can you move from alert to remediation?
- How do you know when you're done?
- Do you have a structured inventory of the AI agents operating in your organization: what they do, what they access, and which frameworks they run on?
If the honest answer involves manual effort, fragmented tooling, or uncertainty, that's the gap worth addressing now, before the next advisory lands. Get in touch to discuss how SAP LeanIX can help:
