
Your AI Coding Assistant Is Making You Worse

There’s an uncomfortable pattern in tech: an entire industry builds a narrative on vibes, sells it hard, and then gets genuinely confused when the data says the opposite.

AI coding assistants are the latest example. Every vendor claims 30-55% productivity gains. GitHub says Copilot makes developers 55% faster. Google says Gemini writes 25% of their code. The message is clear: AI makes you a better, faster developer. Ship it.

There’s one problem. The only rigorous, controlled study on the topic found the exact opposite.

The METR Study: 19% Slower, and Nobody Noticed

In July 2025, METR published a study of 16 experienced open-source developers working on real tasks in their own repositories. Not toy benchmarks. Not cherry-picked demos. Real work, real codebases, real developers.

The result: developers using AI coding tools were 19% slower than without them.

That alone would be notable. But here’s the part that should genuinely alarm you: developers predicted they would be 24% faster with AI, and after completing their tasks, they still believed they had been 20% faster. They were slower, and they were confidently convinced of the opposite.

This is not a productivity tool problem. This is a perception problem. We are optimizing for the feeling of shipping, not the reality of it.

Why AI Makes You Slower (When You Think It Makes You Faster)

The mechanism isn’t mysterious. Watch yourself the next time you use an AI coding assistant:

  1. Context switching tax. You write a prompt, wait for output, read the output, decide what to keep, fix what’s wrong, re-prompt for the fix, wait again. Each cycle feels productive because something is happening. But you’ve replaced 10 minutes of focused coding with 15 minutes of managing a stochastic text generator.

  2. The review burden is invisible. Generated code needs review. Not “skim it and hit accept” review — real review. The kind where you verify edge cases, check for subtle bugs, and make sure it actually integrates with the rest of your codebase. A December 2025 code quality analysis found AI co-authored code contained 1.7x more major issues and 2.74x more security vulnerabilities than human-written code. That review time doesn’t show up in the “time to first commit” metric that vendors love to cite.

  3. You stop thinking. This is the insidious one. When you write code yourself, you build a mental model of the problem as you go. When you outsource that to an LLM, you skip the thinking and jump straight to the editing. You end up with code you don’t fully understand in a system you can’t fully reason about. The compound interest on that deficit is brutal.
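The time accounting in points 1 and 2 can be sketched as a quick back-of-envelope calculation. The 10-minute and 15-minute figures come from the text; the cycle count and review time are illustrative assumptions, not measurements:

```python
# Back-of-envelope time accounting for one coding task.
# All numbers are hypothetical illustrations, not study data.

def manual_minutes(focused_coding: float = 10.0) -> float:
    """Writing the code yourself: one block of focused work."""
    return focused_coding

def assisted_minutes(prompt_cycles: int = 3,
                     minutes_per_cycle: float = 5.0,
                     review: float = 5.0) -> float:
    """The prompt -> wait -> read -> fix loop (3 cycles at ~5 min
    each, matching the 15 minutes in the text), plus a real review
    pass -- the invisible cost from point 2."""
    return prompt_cycles * minutes_per_cycle + review

if __name__ == "__main__":
    m, a = manual_minutes(), assisted_minutes()
    print(f"manual:   {m:.0f} min")
    print(f"assisted: {a:.0f} min ({(a - m) / m:+.0%} vs manual)")
```

The point of the sketch: the vendor-friendly metric only counts the loop up to first output, while the review term is what actually dominates once you take point 2 seriously.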

The Study That Can’t Be Replicated

In February 2026, METR announced they were redesigning the study. Not because the methodology was flawed — because the experiment itself is breaking down.

Developers now refuse to complete 50% of their assigned work without AI tools, even when paid $50/hour to do so. And 30-50% of participants are cherry-picking which tasks to submit because they don’t want to do them without AI assistance.

Read that again. The study is being undermined by the very dependency it was designed to measure. We’ve created a generation of developers who are so reliant on AI tools that we can no longer run a controlled experiment to determine whether those tools actually help.

That’s not adoption. That’s addiction.

Meanwhile, Vibe Coding Is Eating Open Source Alive

If the productivity story were the only problem, you could argue it’s self-correcting — developers who are slower will eventually notice. But vibe coding has a second-order effect that isn’t self-correcting at all: it’s destroying the open-source ecosystem.

The evidence is no longer anecdotal. It’s systematic:

  • Daniel Stenberg shut down cURL’s bug bounty program — running for six years — after AI-generated submissions hit 20% of all reports. Not 20% of bad reports. Twenty percent of all reports were LLM-generated noise that took maintainer time to triage and reject.

  • Mitchell Hashimoto banned AI-generated code entirely from Ghostty.

  • Steve Ruiz closed all external pull requests to tldraw.

  • Tailwind CSS saw documentation traffic drop 40% and revenue drop 80% while downloads increased. CEO Adam Wathan attributed the revenue collapse directly to AI tools, which led to laying off three employees. Let that sink in: the project is more popular than ever, and it’s dying financially because AI intermediates the relationship between users and the project.

  • Stack Overflow lost 25% of its activity within six months of ChatGPT’s launch.

A January 2026 academic paper titled “Vibe Coding Kills Open Source” formalized the negative feedback loop: AI delegates package selection (users don’t choose libraries, their LLM does), fewer humans read documentation, fewer humans file bugs, maintainer incentives erode, maintenance quality drops, and the whole commons degrades.

The Asymmetry That Breaks Everything

Here’s the core economic problem: it takes a developer 60 seconds to prompt an agent to generate a pull request. It takes a maintainer an hour to carefully review it.

That asymmetry is lethal. Every vibe-coded PR that lands in a maintainer’s inbox is a tax on a person who is almost certainly unpaid. And unlike human-written PRs, vibe-coded PRs tend to be plausible-looking garbage — they pass a cursory review but fail under scrutiny. The maintainer has to do more work to evaluate them, not less.
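The asymmetry can be put in rough numbers. A minimal sketch, using the one-minute-to-submit and one-hour-to-review figures from the text; the daily PR volumes are made-up illustrations:

```python
# Rough cost model of the submit/review asymmetry described above.
# The 1-minute submit and 60-minute review figures come from the text;
# the PR volumes below are hypothetical.

SUBMIT_MIN = 1    # minutes for a contributor to prompt out a PR
REVIEW_MIN = 60   # minutes for a maintainer to review it carefully

def maintainer_hours(prs_per_day: int) -> float:
    """Unpaid review time generated per day by incoming PRs."""
    return prs_per_day * REVIEW_MIN / 60

if __name__ == "__main__":
    for prs in (5, 20, 50):
        print(f"{prs:>2} PRs/day -> {prs * SUBMIT_MIN:>2} contributor-min, "
              f"{maintainer_hours(prs):.0f} maintainer-hours")
```

At 20 PRs a day, twenty contributor-minutes become a 20-hour review queue: a 60:1 transfer of cost from the person generating the code to the person who has to evaluate it.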

The people celebrating “democratized coding” are freeloading on a commons they are actively destroying. Open source was already economically fragile. Most critical infrastructure — the kind that runs your bank, your hospital, your power grid — is maintained by one or two people in their spare time. Adding a firehose of AI-generated noise to their workload isn’t democratization. It’s a denial-of-service attack.

What Actually Works

I’m not anti-AI-tools. I use them daily. But I use them the way I use Stack Overflow circa 2015: as a reference, not a co-pilot. Here’s what I’ve found actually works:

Use AI for exploration, not production. “How does X library handle Y?” is a great prompt. “Write the authentication module” is a terrible one. Use AI to understand things faster, not to write things you don’t understand.

Read what it generates before you accept it. All of it. Every line. If you can’t explain why a line is there, delete it. The 30 seconds you save accepting it blindly will cost you 3 hours debugging it next week.

Maintain your ability to code without it. If you can’t complete a task without AI assistance, you don’t understand the task well enough to be writing code for it. That’s not gatekeeping — that’s engineering.

If you’re contributing to open source, write your own code. Maintainers can tell. They always can. And the fastest way to get your PR rejected — and your name remembered for the wrong reasons — is to dump AI-generated code on someone who’s maintaining a project for free.

The Uncomfortable Bottom Line

The AI coding productivity narrative is built on self-reported surveys and vendor-funded benchmarks. The one independent, controlled study says the opposite. The downstream effects on open source are measurable and destructive.

We are not in a productivity revolution. We are in a productivity hallucination — fitting, given the technology driving it.

The developers who will thrive in the next five years aren’t the ones who generate the most code. They’re the ones who understand the most code. AI can help with that, if you use it as a tool for thinking rather than a replacement for it.

But that requires admitting something the industry really, truly does not want to hear: your AI coding assistant might be making you worse. And you might be too dependent on it to notice.