AI Is Strip-Mining Open Source
The Tailwind Canary
In January 2026, Tailwind Labs laid off 75% of its engineering team. Revenue had collapsed by nearly 80%, dropping to $3.6 million. Docs traffic was down 40% from early 2023.
The kicker? Tailwind is more popular than ever. 75 million downloads per month. Adopted by 51% of developers globally. More widely used than at any point in its history.
So what happened?
AI happened. Coding agents generate Tailwind classes directly. Developers never visit the docs. They never see the commercial plans. They never become paying customers. The entire discovery-to-revenue pipeline — the one every open source business depends on — got bypassed by a language model that memorized the documentation during training.
Tailwind founder Adam Wathan declined a pull request that would have made their docs more accessible to AI models, explaining it would further harm their already-strained business. Think about that. An open source maintainer had to actively resist making their project more useful because doing so would accelerate their own financial collapse.
This isn’t a Tailwind problem. It’s an ecosystem problem.
The Invisible Infrastructure Trap
Here’s the pattern every successful open source project follows:
1. Build something useful. Give it away.
2. Developers discover it, read the docs, join the community.
3. Some percentage convert to paid plans, sponsorships, or consulting.
4. Revenue funds continued development.
AI coding tools nuke step 2. When Claude or Copilot generates code using a library, there’s no docs visit, no community interaction, no brand impression. The library becomes invisible infrastructure — embedded in millions of codebases, maintained by a shrinking team with a shrinking budget.
This isn’t hypothetical. We’re already seeing it:
- Documentation traffic is cratering. Stack Overflow’s traffic has been in freefall since 2023. Library docs across the ecosystem are seeing similar declines. Developers don’t Google APIs anymore — they prompt for them.
- Sponsorship fatigue is real. GitHub Sponsors, Open Collective, and Tidelift were already struggling to fund critical infrastructure. Now the value proposition is even harder to make: “Pay us to maintain the thing your AI already knows how to use.”
- The discovery funnel is dead. Developers used to stumble across new tools through docs, blog posts, and conference talks. Now they use whatever the model suggests, which is heavily biased toward whatever was popular in the training data.
The Training Data Paradox
This is the part that should make everyone uncomfortable.
AI models are trained on open source code. The entire capability of AI coding tools — the thing generating billions in revenue for Anthropic, OpenAI, Microsoft, and Google — is built on freely contributed code from millions of developers.
Those developers were never paid for that contribution. But at least the old social contract worked: you contribute to open source, you build reputation, your project gets adoption, maybe you build a business on top of it.
AI broke that contract. It extracted the value (the knowledge in the code) without preserving the mechanism (docs traffic, community, discovery) that let maintainers capture any value in return.
It’s strip-mining. You take the resource, leave nothing behind, and move on.
“But AI Companies Are Sponsoring Open Source!”
Some are. Google, Vercel, and Lovable stepped in to help fund Tailwind after the crisis went public. That’s good. It’s also a bandage on a severed artery.
Corporate sponsorship is discretionary. It can be cut in any quarterly budget review. It creates dependency on the goodwill of companies whose core business model is to extract value from the very projects they’re sponsoring. And it only goes to projects big enough to make headlines when they collapse.
Nobody’s writing a check for the maintainer of that date parsing library your AI agent uses 4,000 times a day.
The uncomfortable truth: we now have a system where the most valuable software in the world (the foundation models) is built on top of the least-funded software in the world (open source infrastructure), and the models are actively destroying the funding mechanisms that keep that infrastructure alive.
What Actually Happens Next
I don’t think open source dies. But I think we’re heading for a painful correction with some predictable outcomes:
Maintenance rot will accelerate. Projects that can’t fund development will slow down. Security patches will lag. Bugs will linger. The software supply chain will grow more fragile at the exact moment we’re shipping more code than ever — most of it AI-generated, most of it unreviewed.
License warfare will escalate. We’ve already seen projects move from MIT/Apache to SSPL, BSL, and other “open-but-not-really” licenses to prevent cloud providers from freeloading. Expect this trend to intensify, but now targeting AI training specifically. The Redis and Elasticsearch playbook, applied to LLM training data.
Pay-to-train models will emerge. Some projects will gate their documentation and source behind licenses that require payment for AI training use. Others will poison or watermark their code to track unauthorized training. It’ll be messy, legally untested, and probably ineffective — but desperation breeds creativity.
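The crudest version of this gating already exists: asking AI crawlers to stay out via robots.txt. A minimal sketch — the user-agent tokens below are the published ones for OpenAI, Anthropic, Common Crawl, and Google’s training crawler, though compliance is entirely voluntary and the major labs have already trained on years of scraped data:

```text
# robots.txt — opt docs out of AI training crawls (honor-system only)
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Regular search indexing stays open
User-agent: *
Allow: /
```

Note the asymmetry: this blocks future crawls, not knowledge already baked into deployed models, which is exactly why maintainers are reaching for licenses and watermarking instead.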
The AI-popular monoculture will calcify. Models recommend what they were trained on. Projects that were popular in 2024 get recommended in 2026, regardless of whether better alternatives exist. New projects struggle to get adoption because they’re not in the training data. Innovation slows at the library level even as code volume explodes.
The Bill We’re Not Paying
The software industry is running a massive tab. We’re extracting more value from open source than ever before while contributing less back than ever before. The maintainers who built the foundations are burning out, going broke, or both.
Sonar’s 2026 developer survey found that AI accounts for 42% of all committed code, a share expected to reach 65% by 2027. Every line of that AI-generated code depends on open source libraries, frameworks, and patterns created by people who are increasingly getting nothing in return.
We’ve spent the last year celebrating how AI makes us more productive. We haven’t spent a single minute asking: productive with whose work? And what happens when those people stop showing up?
The code is free. The maintenance never was. And we’re about to find out what happens when we forget that.