A cascading supply chain attack on LiteLLM — an open-source AI gateway downloaded 95 million times per month — led to the theft of roughly 4 terabytes of data from Mercor, a $10 billion AI training startup serving OpenAI, Anthropic, and Meta. The breach, confirmed March 31, 2026, exposed the personal data of 40,000+ contractors, proprietary source code, video interviews, and potentially the AI training methodologies of multiple frontier labs. Meta indefinitely paused all work with Mercor. Five contractor lawsuits have been filed. The incident sits at the intersection of two parallel Silicon Valley scandals: the compromise of LiteLLM's software supply chain by threat group TeamPCP, and the collapse of Delve Technologies, the GRC startup that had certified LiteLLM's security compliance — and was exposed as running what one whistleblower called "fake compliance as a service."
The Mercor breach did not start with Mercor. It began with Trivy, an open-source vulnerability scanner maintained by Aqua Security. In late February 2026, the threat actor group TeamPCP (also tracked as PCPcat/ShellForce/DeadCatx3) exploited a `pull_request_target` workflow vulnerability in Trivy's GitHub Actions to steal maintainer credentials. On March 19, Trivy's GitHub Action v0.69.4 was rewritten to carry credential-harvesting payloads. By March 23, Checkmarx KICS was compromised using the same infrastructure.
The critical escalation came on March 24 at 10:39 UTC. LiteLLM's CI/CD pipeline — which used Trivy for security scanning with unpinned version references — executed the compromised Trivy action. This exfiltrated LiteLLM's `PYPI_PUBLISH` token from the GitHub Actions runner. TeamPCP immediately published malicious LiteLLM versions 1.82.7 and 1.82.8 to PyPI, thirteen minutes apart.
The payload operated in three stages. Stage one swept SSH keys; AWS, GCP, and Azure cloud credentials; Kubernetes secrets; cryptocurrency wallets; `.env` files; API keys; and database credentials. Stage two encrypted and exfiltrated harvested data to command-and-control domains `models.litellm[.]cloud` and `checkmarx[.]zone`. Stage three installed persistent systemd backdoors and, in Kubernetes environments, deployed privileged pods across every node to extract cluster secrets. Version 1.82.8 was particularly aggressive: it used a `.pth` file injection that executed automatically on every Python interpreter startup, regardless of whether LiteLLM was imported.
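The `.pth` autorun mechanism is a documented quirk of CPython's `site` module: any line in a `*.pth` file inside a site directory that begins with `import` is executed when the interpreter starts. The benign sketch below demonstrates the mechanism only (it does not reproduce the malware); `site.addsitedir` is used here to simulate what `site.py` does automatically at startup, and `PTH_DEMO` is an arbitrary marker variable chosen for this demo.

```python
import os
import site
import tempfile

# Create a throwaway "site-packages"-style directory containing a .pth file.
sitedir = tempfile.mkdtemp()
with open(os.path.join(sitedir, "demo.pth"), "w") as f:
    # site.py exec()s any .pth line that starts with "import " --
    # this is the hook the malicious 1.82.8 release abused.
    f.write("import os; os.environ['PTH_DEMO'] = 'ran'\n")

# At real interpreter startup, site.py processes site-packages the same way.
site.addsitedir(sitedir)

print(os.environ.get("PTH_DEMO"))  # -> ran
```

Because the hook fires on interpreter startup, not on `import litellm`, every Python process on an affected machine executed the payload.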
Security researcher Callum McMahon of FutureSearch discovered the compromise at approximately 11:48 UTC when his machine crashed from a fork-bomb side effect. PyPI quarantined the entire `litellm` package by 13:38 UTC. The malicious versions were live for roughly 40 minutes to 3 hours, depending on the measure used — but given 3.4 million daily downloads, thousands of systems pulled the compromised packages automatically. The vulnerability was tracked as CVE-2026-33634.
LiteLLM, maintained by Y Combinator-backed BerriAI (co-founded by CTO Ishaan Jaffer), is an open-source Python library and self-hosted proxy server that provides a unified OpenAI-compatible API to call over 100 LLM providers — OpenAI, Anthropic, Google Vertex AI, AWS Bedrock, Azure, Mistral, and more. It has approximately 40,000 GitHub stars, over 240 million Docker pulls, and, according to Wiz Research, is present in an estimated 36% of all cloud environments. It is listed on Microsoft's Azure Marketplace and serves as a transitive dependency for major AI agent frameworks, including CrewAI, DSPy, and Browser-Use.
The architectural risk was fundamental. As ARMO Security's analysis stated: "LiteLLM isn't just any Python library. Its entire purpose is to hold API keys for dozens of AI providers. A typical LiteLLM deployment has more API keys in its environment than almost any other service in your infrastructure." Compromising LiteLLM yielded not just one set of credentials but potentially the full credential store for every AI provider an organization used. Trend Micro's TrendAI Research team called it "a case study on why AI infrastructure can become the next preferred supply chain target," noting that attackers are "exploiting the exact same infrastructure weaknesses we have battled for a decade. The AI technology stack is built on standard, fragile, open-source foundations."
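The point generalizes: anything executing inside a gateway's process, including a poisoned transitive dependency, sees every credential in that environment. A minimal audit sketch (the `credential_like` helper and the sample variable names are illustrative, not part of any product):

```python
import os
import re

# Crude heuristic for credential-bearing environment variables.
CRED = re.compile(r"API_KEY|SECRET|TOKEN|PASSWORD", re.I)

def credential_like(env):
    """Return the credential-like variable names visible to this process."""
    return sorted(k for k in env if CRED.search(k))

# A gateway like LiteLLM typically holds one key per configured provider.
sample = {
    "OPENAI_API_KEY": "...",
    "ANTHROPIC_API_KEY": "...",
    "AZURE_OPENAI_API_KEY": "...",
    "PATH": "/usr/bin",  # not a credential; filtered out
}
print(credential_like(sample))
# -> ['ANTHROPIC_API_KEY', 'AZURE_OPENAI_API_KEY', 'OPENAI_API_KEY']

# Run against os.environ to see what a compromised dependency would see here.
print(len(credential_like(os.environ)), "credential-like variables visible")
```

Running the same check against a production LiteLLM deployment's environment is a quick way to quantify the blast radius the ARMO analysis describes.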
When Mercor's systems ingested the malicious LiteLLM versions, the credential-harvesting malware executed and stole API keys, cloud credentials, SSH keys, database passwords, and Kubernetes configs. Using these stolen credentials, attackers moved laterally through Mercor's infrastructure. The extortion group Lapsus$, collaborating with TeamPCP, subsequently claimed 4 terabytes of stolen data: 939 GB of platform source code, a 211 GB user database, approximately 3 TB of video interview recordings and identity verification documents, internal Slack communications, ticketing data, TailScale VPN configurations, and contractor PII including Social Security numbers.
The Mercor breach collided with an already-unfolding scandal involving Delve Technologies, the GRC automation startup that had issued LiteLLM's SOC 2 and ISO 27001 compliance certifications. TechCrunch described the convergence as "Silicon Valley's two biggest dramas intersecting."
Delve was founded in 2023 by MIT dropouts Karun Kaushik (CEO) and Selin Kocalar (COO), both Forbes 30 Under 30 honorees. The company graduated from YC's Winter 2024 batch and raised a $32 million Series A led by Insight Partners in July 2025 at a $300 million valuation. It claimed 1,700+ customers across 50+ countries and pitched "agentic AI" that could compress compliance certification timelines from months to days, with pricing as low as $6,000–$15,000 for bundled SOC 2, ISO 27001, and HIPAA certifications — a fraction of traditional Big Four audit costs.
On March 18–19, 2026, an anonymous Substack account named "DeepDelver" published a post titled "Delve – Fake Compliance as a Service – Part I." The author, who identified themselves as an employee at a former Delve client, documented an investigation triggered by Delve's accidental leak of audit reports through a publicly accessible Google Spreadsheet in December 2025.
The allegations were sweeping. Analysis of 494 leaked SOC 2 reports and 81 ISO 27001 registration forms revealed 99.8% identical text across all clients, including recurring grammatical errors and nonsensical boilerplate describing cloud architectures that bore no relation to individual clients' actual systems. Section 1 ("Independent Service Auditor's Report") and Section 4 (test procedures and conclusions) were pre-written in draft reports before clients had submitted their company descriptions. The platform generated pre-fabricated board meeting minutes, security simulation reports, and risk assessments that clients could adopt with a single click. For employees who hadn't completed onboarding tasks, Delve reportedly auto-generated passing evidence for device security, background checks, and training.
DeepDelver alleged that 99%+ of Delve's clients were audited by either Accorp or Gradient Certification, described as "Indian certification mills operating through front companies." Gradient Certification was registered in Wyoming through a mailbox agent, with its president listed at the same Delhi address as its Indian entity. Glocert, another auditor, claimed UK headquarters but had filed dormant accounts with Companies House for four consecutive years with zero revenue. Delve reserved legitimate U.S.-based firms like Prescient and Aprio for high-profile clients who performed compliance mostly off-platform. The whistleblower argued that Delve "inverts the normal compliance structure" by "generating auditor conclusions, test procedures, and final reports before any independent review occurs," constituting "structural fraud that invalidates the entire attestation."
A second DeepDelver post around March 30 alleged IP theft from Sim.ai, a fellow YC company, claiming Delve forked Sim.ai's open-source SimStudio tool, stripped attribution, rebranded it as "Pathways," and sold it as proprietary to enterprise clients. Sim.ai's CEO confirmed to TechCrunch that Delve had no license agreement.
The fallout was swift. Insight Partners scrubbed its blog post explaining the $32M investment. On March 30, LiteLLM publicly announced it was ditching Delve and switching to Vanta for compliance recertification. On April 4, Delve was expelled from Y Combinator — with COO Kocalar posting on X that "YC and Delve have parted ways," reportedly triggered more by the IP theft from fellow YC company Sim.ai than by the compliance fraud itself.
Five contractor lawsuits were filed against Mercor in the week of April 1–7, 2026, in federal courts in California and Texas, according to Business Insider's reporting. All seek unspecified monetary damages for violations of data privacy and consumer protection laws.
The lead case, Gill v. Mercor.io Corporation, was filed April 1 in the U.S. District Court for the Northern District of California as a proposed nationwide class action. Plaintiff Lisa Gill, a Hawaii resident, alleges Mercor failed to implement multi-factor authentication, encrypt sensitive data during storage and transmission, limit access to PII, monitor systems for suspicious activity, or rotate passwords regularly. Deboni v. Mercor.io Corporation (Case 3:26-cv-02821) was filed the same day in the same court. Esson v. Mercor, brought by contractor NaTivia Esson, who worked for Mercor from March 2025 through March 2026, alleges she submitted W-9 forms with personal identifying information, trusting the company "would use reasonable measures to protect it." Notably, one of the five suits names BerriAI (LiteLLM's creator) and Delve Technologies as co-defendants, referencing the whistleblower who exposed Delve's practices. A fifth suit was filed in a Texas federal court.
The legal theories span negligence, violation of data privacy and consumer protection laws, invasion of privacy, and breach of implied duty. While all suits seek unspecified damages, research on data breach settlements suggests historical payouts of $1 to $5 per class member, though victims with documented financial losses may receive more.
Meta indefinitely paused all contracts with Mercor while it investigates the breach, according to Wired (reporters Maxwell Zeff, Zoë Schiffer, and Lily Hay Newman), confirmed by two sources and independently by Staffing Industry Analysts. Contractors who depended on Meta projects cannot log hours until work resumes. Mercor has not told affected contractors why their projects were paused, though internal conversations suggest the company is seeking alternative projects for them.
The stakes for Meta are enormous. Mercor was one of a few key firms generating bespoke, proprietary training data for Meta's AI models, including for TBD Labs, Meta's core unit working toward AI superintelligence. The stolen data may have exposed data selection criteria, labeling protocols, and RLHF training strategies — information Y Combinator CEO Garry Tan described as representing "billions and billions of value and a major national security issue," noting an "incredible amount of SOTA training data now just available to China." OpenAI said it is investigating but has not paused projects with Mercor; Anthropic has not publicly commented.
The Delve scandal exposes fault lines that run far deeper than one startup. The GRC automation market — which includes established players like Vanta, Drata, Secureframe, and Hyperproof — has been among the hottest enterprise SaaS segments, with Gartner forecasting 50% growth in GRC tool investment by 2026. The competitive pressure to deliver "SOC 2 in days" created perverse incentives that Delve exploited to their logical extreme.
IANS Faculty member Jeff Brown observed: "A SOC 2 Type II report was never meant to be a security guarantee. The fact that hundreds of companies apparently accepted pre-populated evidence without questioning it shows how much the market has prioritized speed over control effectiveness." The self-policing attestation model — where the assessed company selects and pays its own assessor — has no effective independent enforcement mechanism. Delve's case illustrates what happens when "AI-native" branding drives valuation premiums disconnected from operational reality, producing what one analysis called "compliance theater at scale."
The EU AI Act, with Annex III enforcement beginning August 2026, introduces mandatory risk assessments for high-risk AI systems. IANS Faculty member Summer Fowler warned that "if the compliance attestations are invalid, some clients could face criminal liability under HIPAA and fines of up to 4% of global revenue under GDPR." Companies that relied on Delve certifications now face a cascading compliance gap: their security attestations may be worthless, and the window to obtain legitimate certifications before regulatory deadlines is shrinking.
The Mercor breach is being positioned as the defining AI supply chain incident of 2026 — comparable to MOVEit/Cl0p for file transfer tools. At RSAC 2026, Mandiant CTO Charles Carmakal stated that over 1,000 SaaS environments were actively dealing with cascading effects from TeamPCP's attacks, potentially expanding to 10,000+. Security intelligence group vx-underground estimated data was exfiltrated from 500,000 machines.
The expert consensus is unambiguous about the structural risk. AppOmni CISO Cory Michal characterized the attack as "a true software supply-chain compromise" in "a more consequential category" than typical AI vulnerabilities like prompt injection. Suzu Labs Senior Director Jacob Krell emphasized the cascading mechanics: "One dependency. One chain reaction. Five supply chain ecosystems compromised in under a month." Okta's VP of Threat Intelligence Brett Winterford connected it to "identity debt" created by rapid AI agent adoption, where "developers repeatedly connect AI agents directly to production applications using static API tokens."
New frameworks are emerging rapidly. The NSA, CISA, and FBI published joint guidance on March 18 — one day before the Trivy attack — treating AI supply chain security as a distinct discipline for the first time. AI Bills of Materials (AIBOMs) are gaining traction through OWASP's AI BOM project, with CycloneDX and SPDX 3.0 extended to include AI-specific components. The OWASP LLM Top 10 elevated supply chain vulnerabilities from position five to position three in its 2025 edition. Microsoft released an open-source Agent Governance Toolkit on April 2 with plugin signing and SLSA-compatible build provenance. Black Duck's 2026 OSSRA report found average open-source vulnerabilities per application surged 107% to 581 per codebase.
The practical lesson has crystallized into a specific technical recommendation: pin dependencies to exact versions with cryptographic hashes. Organizations whose builds resolved dependencies through lockfiles such as `poetry.lock` or `uv.lock` never pulled the malicious LiteLLM packages. Those relying on mutable version tags or unpinned references — as LiteLLM itself did with Trivy — inherited the full attack chain.
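The protection comes down to a digest comparison. This is a minimal sketch of the check that hash-pinning tools (pip's `--require-hashes` mode, Poetry's and uv's lockfiles) perform before installing anything; `verify_artifact` is a hypothetical helper for illustration, not an API of those tools.

```python
import hashlib
import os
import tempfile

def sha256_file(path):
    """Stream-hash an artifact (wheel, sdist) without loading it into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_artifact(path, locked_sha256):
    """Refuse any artifact whose digest differs from the one in the lockfile."""
    return sha256_file(path) == locked_sha256

# Demo: record a hash at lock time, then detect a swapped artifact.
fd, pkg = tempfile.mkstemp(suffix=".whl")
os.write(fd, b"original package bytes")
os.close(fd)
locked = sha256_file(pkg)              # written into the lockfile
print(verify_artifact(pkg, locked))    # -> True

with open(pkg, "ab") as f:
    f.write(b"injected payload")       # artifact no longer matches the lock
print(verify_artifact(pkg, locked))    # -> False
```

A floating constraint like `litellm>=1.80` would have resolved to 1.82.8 during the attack window; a locked exact version with a recorded hash could not.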
The Mercor breach is not a single-point failure but a systems failure spanning three intersecting domains: open-source software supply chain security, compliance certification integrity, and AI infrastructure governance. The attack chain — from a compromised vulnerability scanner to a poisoned AI proxy to the theft of frontier AI training data — demonstrates that the AI ecosystem's rapid growth has outpaced its security foundations.
The Delve scandal reveals that the compliance industry meant to ensure security standards was, in at least one prominent case, generating industrialized fiction. The five lawsuits and Meta's pause signal that the legal and commercial consequences are only beginning.
The companies most exposed are those that simultaneously depended on open-source AI infrastructure they didn't audit and compliance certifications they didn't verify — a combination that, as March 2026 demonstrated, leaves no safety net at all. The emerging response — AIBOMs, federal guidance, dependency pinning, cryptographic signing — represents necessary structural reform, but adoption trails deployment by years. This gap defines the central risk of the current AI era.
The events of March 2026 underscore three hard truths for any organization pursuing compliance today.
First, a compliance certificate is only as credible as the evidence behind it. If your GRC platform is generating pre-populated attestations, clicking through auto-approved evidence, or routing your sensitive compliance data through external AI systems you can't audit, you don't have compliance — you have the appearance of it. The distinction matters enormously when regulators, customers, and partners come asking.
Second, AI infrastructure is now a first-class attack surface. The open-source components powering your GRC automation, your AI proxy layer, and your CI/CD pipelines are the very components threat actors are actively targeting. "AI-native" should mean genuinely secure architecture — not just a marketing descriptor.
Third, supply chain visibility isn't optional anymore. Whether you're a GRC vendor or a GRC buyer, you need to know what's running under the hood.
Before you renew or sign with any GRC platform, ask these questions and demand real answers:
Where does my compliance data go when AI processes it? Is it leaving your environment? Who can access it?
What open-source dependencies sit in your AI supply chain? Are they pinned to verified versions, or floating on mutable tags?
Can you show me actual evidence that those dependencies are secured? Not a certification. Evidence.
Strike Graph was built on the principle that compliance should be evidence-based — not evidence-generated. In a world where the tools meant to certify security can themselves become the attack vector, that distinction is no longer academic. It's existential.