
The Hermes Agent Supply Chain Hack: What Every Founder Using AI Agents Needs to Know

On March 24, 2026, a backdoored Python library silently harvested API keys from thousands of AI agent deployments. Hermes Agent was one of the projects affected. Here is what happened, what it cost, and how to protect your business.

Amine Afia (@eth_chainId)
9 min read

On March 24, 2026, a threat actor group called TeamPCP published two backdoored versions of LiteLLM, a Python library used by hundreds of thousands of AI agent deployments to route requests across LLM providers. The malicious code silently harvested API keys, cloud credentials, SSH keys, and database passwords from every machine where the compromised package was installed. Hermes Agent, one of the most popular open-source AI agent frameworks, depended on LiteLLM. So did many other projects your business may rely on.

This was not a theoretical vulnerability. It was a live credential theft operation that ran for approximately five hours before being detected. If you are running AI agents in your business today, or considering it, this incident is a case study in the risks you need to manage. Here is what happened, who was affected, and the concrete steps that separate protected businesses from exposed ones.

What Actually Happened: The Kill Chain in Plain Language

The attack did not start with LiteLLM. It started five days earlier with Trivy, a widely used security scanner. On March 19, TeamPCP compromised Trivy's code distribution on GitHub by exploiting a misconfigured automation workflow. From there, they stole publishing credentials that gave them access to other projects, including LiteLLM.

Think of it like a locksmith getting robbed. The thief did not break into your office directly. They broke into the locksmith's shop, copied your keys, and then walked through your front door.

On March 24, TeamPCP used the stolen credentials to publish LiteLLM versions 1.82.7 and 1.82.8 to PyPI (Python's package registry, similar to an app store for developers). These versions contained a hidden file that ran automatically whenever Python started on the infected machine. The payload did three things:

  1. Credential harvesting: It searched for and collected API keys, cloud provider credentials (AWS, Google Cloud, Azure), SSH keys, database passwords, and any secrets stored in environment files.
  2. Lateral movement: It probed for Kubernetes configurations, allowing attackers to pivot from one compromised machine to other systems in the same infrastructure.
  3. Persistent backdoor: It installed a mechanism for remote code execution, giving attackers ongoing access even after the initial malicious package was removed.
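After an incident like this, the first practical question is which secrets lived on the infected machine. Below is a minimal post-incident inventory sketch, assuming dotenv-style configuration and conventional variable naming (both are assumptions; adapt the pattern to your own layout):

```python
import re
from pathlib import Path

# Heuristic: credential-like variable names usually contain one of these
# words. This is an assumption about naming conventions, not a guarantee.
SECRET_RE = re.compile(r"(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL)", re.IGNORECASE)

def inventory_secrets(env_path: str) -> list[str]:
    """Return the names (never the values) of credential-like entries
    in a dotenv-style file, so you know exactly what to rotate."""
    names = []
    for line in Path(env_path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        name = line.split("=", 1)[0].strip()
        if SECRET_RE.search(name):
            names.append(name)
    return names
```

Reporting names rather than values keeps the audit output safe to share with whoever handles the rotation.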

LiteLLM receives roughly 3.4 million downloads per day. The backdoored versions were live for about five hours. TeamPCP claims to have exfiltrated over 300 GB of compressed credentials, affecting an estimated 500,000 corporate identities, though these numbers are unverified.

Key Takeaway

The attack did not exploit a flaw in AI technology. It exploited the software supply chain: the chain of dependencies that every modern application relies on. If your AI agent uses third-party libraries (and they all do), you are exposed to this class of risk. The question is whether you have protections in place.

Why AI Agent Frameworks Are Uniquely Vulnerable

A traditional web application might store one or two API keys. An AI agent framework like Hermes Agent or OpenClaw typically stores credentials for multiple LLM providers (OpenAI, Anthropic, Google), messaging platforms (Telegram, Slack, Discord), databases, and cloud services. That concentration makes AI agent deployments exceptionally high-value targets.

The LiteLLM incident illustrates this perfectly. LiteLLM exists specifically to connect to many AI providers through a single interface. Every machine running LiteLLM likely had multiple provider API keys configured. One compromised dependency gave attackers access to all of them simultaneously.
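To see why the blast radius is so large, consider what a typical agent environment file looks like. This fragment is purely illustrative (the variable names are assumptions and the values are elided), but it shows how many credentials sit side by side on one machine:

```text
# Hypothetical AI agent .env — one file, many providers
OPENAI_API_KEY=...
ANTHROPIC_API_KEY=...
GOOGLE_API_KEY=...
TELEGRAM_BOT_TOKEN=...
SLACK_BOT_TOKEN=...
DATABASE_URL=...
AWS_SECRET_ACCESS_KEY=...
```

A single malicious dependency running in this environment can read every line at once.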

This is not unique to Hermes Agent. Earlier in 2026, security researchers found 1,184 malicious "skills" (plugins that extend agent capabilities) across popular agent skill repositories. Snyk identified 1,467 malicious payloads in a study of agent skill supply chains. The pattern is clear: wherever AI agents aggregate credentials and capabilities, attackers follow.

The Cost of a Supply Chain Breach: Real Numbers

If you are a founder evaluating whether supply chain security is worth your attention, here are the numbers.

| Cost Category | Typical Range | Source |
| --- | --- | --- |
| Average supply chain breach (all sizes) | $4.91 million | IBM Cost of a Data Breach 2025 |
| US-specific average breach cost | $10.22 million | IBM Cost of a Data Breach 2025 |
| Remediation premium vs. direct attacks | 17x more expensive | SOCRadar 2025 Report |
| Downtime cost per hour | $300,000+ | SOCRadar 2025 Report |
| Small business incident (direct remediation) | $15,000 to $50,000 | Industry estimates (credential rotation, forensics, notification) |
| Founder time lost to incident response | 2 to 4 weeks | Post-incident surveys |

For a startup, $15,000 in direct costs plus a month of lost productivity can be existential. Even if you are not directly breached, the remediation overhead is significant: rotating every API key, auditing logs, scanning infrastructure, and notifying affected parties. After the LiteLLM incident, any business running the compromised version needed to treat every credential on the affected machine as stolen.

How Hermes Agent Responded (and What You Can Learn from It)

The Hermes Agent team released v0.5.0 on March 28, four days after the LiteLLM compromise. They called it "the hardening release," and it bundled 216 merged pull requests from 63 contributors. The security-relevant changes show exactly what a responsible response looks like:

  • Removed the compromised LiteLLM dependency entirely. They did not just pin to a safe version. They eliminated the dependency.
  • Pinned all remaining dependencies to exact versions. No more version ranges that could silently pull in a compromised update.
  • Regenerated the lockfile with cryptographic hashes. Every dependency is now verified against a known-good checksum at install time.
  • Added CI scanning for supply chain attack patterns. Every pull request is now automatically checked for the kinds of manipulation TeamPCP used.
  • Bumped dependencies to fix known vulnerabilities (CVEs). A full audit of the dependency tree, not just the one compromised package.
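Checksum verification of the kind described above is straightforward to reproduce. Here is a minimal sketch using only Python's standard library (`verify_artifact` is a hypothetical helper for illustration; in practice, pip's `--require-hashes` mode performs this check for you at install time):

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a downloaded package file against a known-good checksum.
    Returns False if the file has been tampered with or replaced."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large wheels do not have to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256
```

The known-good hash comes from your lockfile, so a compromised registry copy fails the comparison before it ever executes.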

This is a textbook response. But here is the uncomfortable truth: if you were running Hermes Agent between March 24 and March 28, you were potentially exposed for four days.

The Broader Pattern: AI Agents Are the New High-Value Target

The LiteLLM incident was not isolated. It was part of a cascading campaign by TeamPCP that also hit Trivy (a widely used security scanner, with 76+ of its version tags affected), Checkmarx (an application security testing tool), and the Telnyx SDK. The attackers specifically targeted tools in the AI and security toolchain because those tools hold the richest credential stores.

Consider the timeline:

  • March 19: Trivy compromised. CI/CD secrets harvested from automation pipelines.
  • March 20: Stolen credentials used to infect 66+ npm packages via a self-propagating worm.
  • March 23: Checkmarx GitHub Actions compromised using previously stolen secrets.
  • March 24: LiteLLM backdoored on PyPI. Credential harvesters deployed to AI agent environments worldwide.

Each step in the chain used stolen credentials from the previous step. The attackers did not need to find new vulnerabilities. They just followed the trust relationships between tools. Your security scanner trusts its dependencies. Your AI gateway trusts its package manager. Your agent framework trusts its AI gateway. One break in that chain and everything downstream is compromised.

Five Steps to Protect Your AI Agent Deployment Today

Whether you use Hermes Agent, OpenClaw, or any other framework, these steps reduce your exposure to supply chain attacks. None of them require deep technical expertise.

  1. Pin your dependencies and verify checksums. Never allow your agent framework to silently update to the latest version of anything. Use lockfiles with integrity hashes. This single step would have prevented the LiteLLM payload from reaching most environments.
  2. Isolate credentials per service. Do not store all your API keys in one environment file on one machine. Use separate credential stores for each provider. If one is compromised, the blast radius is limited. For a deeper look at how different frameworks handle this, see our security model comparison.
  3. Rotate keys on a schedule (and immediately after any incident). If you installed any Python package between March 19 and March 28, rotate every credential that was accessible on that machine. Even if you think you were not affected.
  4. Use managed hosting with supply chain protections. Self-hosting gives you control, but it also gives you responsibility for every dependency update. Managed platforms like getclaw handle dependency management, isolation, and credential storage so you do not have to track every CVE yourself.
  5. Monitor for anomalous outbound traffic. The LiteLLM payload sent stolen credentials to a domain (models.litellm.cloud) that was not part of the official LiteLLM infrastructure. Even basic outbound traffic monitoring would have flagged this.
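The version check in step 1 can be done mechanically. A minimal audit sketch that flags the backdoored LiteLLM versions named in this article (the `COMPROMISED` mapping here is an assumption for illustration; in practice you would populate it from your advisory feed):

```python
from importlib.metadata import version, PackageNotFoundError

# Versions this article identifies as backdoored. Assumption: in a real
# audit this mapping comes from a security advisory source, not hardcoding.
COMPROMISED = {"litellm": {"1.82.7", "1.82.8"}}

def audit(package: str) -> str:
    """Report whether an installed package matches a known-bad version."""
    try:
        installed = version(package)
    except PackageNotFoundError:
        return "not installed"
    if installed in COMPROMISED.get(package, set()):
        return "COMPROMISED"
    return f"ok ({installed})"
```

Running `audit("litellm")` across your fleet tells you in seconds which machines need full credential rotation.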

The Self-Hosting Trade-Off: Control vs. Responsibility

This incident highlights a tension that every founder running AI agents must confront. Self-hosting frameworks like Hermes Agent or OpenClaw gives you full control over your data and infrastructure. But it also means you are personally responsible for tracking supply chain threats, applying security patches, and rotating credentials after incidents.

The Hermes Agent team responded well, shipping a hardening release within four days. But four days is a long time when credential harvesters are running. And not every open-source project has 63 contributors who can mobilize that quickly.

For founders who want the benefits of open-source AI agents without the operational security burden, the practical middle ground is using a managed deployment that handles dependency management and isolation for you while still giving you visibility into what is running. Our cost breakdown for hosting AI digital coworkers walks through the real numbers.

What This Means for Your AI Strategy

The LiteLLM supply chain attack is not a reason to avoid AI agents. The productivity gains are real: businesses using AI assistants report 40% to 70% reductions in support response times and significant cost savings per interaction. But it is a reason to treat your AI agent deployment with the same security rigor you apply to your payment processing or customer database.

The companies that will benefit most from AI agents in 2026 are not the ones that adopt fastest. They are the ones that adopt with their eyes open: pinned dependencies, isolated credentials, monitored traffic, and a response plan for when (not if) the next supply chain incident happens.

Your Next Step

If you are currently running an AI agent on your own infrastructure, run a dependency audit this week. Check whether your lockfile uses integrity hashes. Verify that your credentials are not all stored in a single environment file. And if you installed or updated any Python packages between March 19 and March 28, 2026, rotate every API key and cloud credential on the affected machine immediately.

If you want to skip the operational security overhead entirely, try getclaw. We handle dependency isolation, credential management, and supply chain monitoring so you can focus on what your AI agent actually does for your business.

Filed Under
AI Agent Security
Supply Chain Attack
Hermes Agent
LiteLLM
Open Source
Risk Management

Deploy a Hermes Agent for free

Start a 7-day free trial and launch a Hermes Agent on getclaw in minutes.