
AI Writes the Code. Nobody Owns the Bugs.

Atlassian cut 1,600 people last week — 10% of its workforce — and made the unusual decision to say the quiet part out loud: AI did it. Not a restructuring. Not a strategic pivot. AI investment reducing the need for headcount.

That’s new. Companies have been quietly trimming while citing “efficiency initiatives” for years. Atlassian was the first major tech company to draw a direct public line between AI adoption and human headcount reduction at scale.

But the headline buried the more important story.

If AI is now writing 25-30% of production code at Google and Microsoft — numbers both companies confirmed publicly — and if that percentage is accelerating, then we have a serious accountability problem nobody is talking about honestly.

The Numbers Are Real

Let’s start with what we know.

GitHub Copilot has over 1.8 million paid users. Claude Code went from zero to the most-used AI coding tool in eight months. 95% of developers use AI coding tools at least weekly. 75% use them for more than half of their work. Big Tech is not experimenting — it is depending.

And the output isn’t trivial. AI tools are writing entire components, generating test suites, scaffolding microservices, refactoring legacy codebases. The code looks good. It compiles. Tests pass. It ships.

Here’s the problem: 45% of AI-generated code contains security vulnerabilities, according to independent analysis. Teams report 41% more code churn when AI tools are involved — meaning more rework, more reverts, more bugs found downstream. The code ships fast. The consequences arrive later.

The Accountability Gap

In a traditional engineering workflow, accountability is legible. A developer writes a function. A reviewer approves it. It ships. When a bug causes an outage, you can trace it back to a decision a human made. You can have a conversation about it. You can learn from it.

That loop is broken when AI writes the code.

Not because AI code is inherently worse — in some cases it’s better. The problem is that the accountability model has been quietly abandoned while everyone focused on the productivity gains.

Consider what happens in practice:

A developer uses Claude Code or Copilot to generate a feature. They review it, roughly. It looks plausible. They ship it. Three months later, a penetration test finds a SQL injection vulnerability in that generated query builder. Who owns that? The developer who approved it without deep review? The team lead who didn’t update code review norms? The company that deployed an AI tool without adapting its security processes?

The answer, right now, is: nobody has decided. And that ambiguity is load-bearing. It means nobody has real skin in the game for AI-generated code quality.
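To make the failure mode concrete, here is a minimal sketch of the kind of generated query builder that passes a casual review. All names are hypothetical, and SQLite stands in for whatever database the real system uses:

```python
import sqlite3

# Hypothetical AI-generated helper: compiles, passes the happy-path test, ships.
def find_user_generated(conn, username):
    # String interpolation splices attacker-controlled input into the SQL itself.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# What a careful review should have demanded: a parameterized query,
# where the driver treats the value strictly as data, never as SQL.
def find_user_reviewed(conn, username):
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()
```

Pass `username = "' OR '1'='1"` and the first version returns every row in the table; the second returns nothing. Both look fine at a glance, which is exactly why the review question matters.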

Vibe Coding Isn’t the Problem — the Vibe Is

Andrej Karpathy coined “vibe coding” and has since backed away from it, now preferring “agentic engineering.” The terminology war is revealing.

Vibe coding, as originally described, means letting the AI write freely and not worrying too much about the specifics. Correct the errors, steer the output, ship when it works. It’s a real and useful mode — great for prototyping, personal projects, throwaway scripts.

The problem is that “vibe coding” escaped the prototype sandbox and entered production systems at companies with millions of users. Nobody changed the accountability model when that happened. The same loose review norms, the same “it compiled, it’s fine” culture, just applied to code that didn’t come from a human brain.

Agentic engineering is a better frame precisely because it implies ownership. Engineers directing agents need to own the output of those agents. Every line of AI-generated code that ships is a decision a human made — to generate it, review it, and ship it. That decision should be traceable.

What Responsible Ownership Looks Like

The tooling isn’t the bottleneck. What’s missing is engineering culture adapting to the new reality.

Code review norms need to change. Reviewing AI-generated code isn’t the same as reviewing human-written code. AI tools pattern-match against training data — they produce code that looks correct and often is, but they can confidently generate subtle security flaws, race conditions, or logic errors that a human would second-guess. Review processes need to be more adversarial, not less. Especially for security-sensitive paths.
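The "confidently generated subtle flaw" is worth seeing. A hedged sketch of the check-then-act race that reads fine and passes every single-threaded test — the function names and cache are illustrative, not from any real codebase:

```python
import threading

_cache = {}
_lock = threading.Lock()

# The shape AI tools often produce: correct-looking, green on
# single-threaded tests, racy under concurrency.
def get_or_compute_racy(key, compute):
    if key not in _cache:            # check ...
        _cache[key] = compute(key)   # ... then act: two threads can both get here
    return _cache[key]

# The adversarial-review fix: make the check and the write one atomic step.
def get_or_compute(key, compute):
    with _lock:
        if key not in _cache:
            _cache[key] = compute(key)
    return _cache[key]
```

Under twenty concurrent callers, the locked version invokes `compute` exactly once; the racy one can invoke it several times, and with a store less forgiving than a dict, corrupt state outright. A human who has been burned by this pattern second-guesses it; a pattern-matcher does not.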

Provenance needs to be tracked. If 30% of your codebase was AI-generated, you should know which 30%. Not for blame — for risk assessment. When a CVE drops, you want to know which components were AI-generated so you can audit them faster. This is basic security hygiene that most teams haven’t implemented.
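There is no standard for this yet, so any mechanism is a team convention. One lightweight sketch, assuming commits containing AI-generated code carry an `AI-Assisted: <tool>` trailer in the commit message — the trailer name is an invention here for illustration, not an existing git standard:

```python
# Sketch of provenance-by-commit-trailer. Assumes a team convention where any
# commit containing AI-generated code ends with a line like "AI-Assisted: Copilot".
def ai_assisted_tool(commit_message: str):
    """Return the tool named in an AI-Assisted trailer, or None if absent."""
    for line in reversed(commit_message.splitlines()):
        if line.lower().startswith("ai-assisted:"):
            return line.split(":", 1)[1].strip()
    return None

def audit_candidates(commits):
    """Given (sha, message) pairs, return the AI-assisted ones for priority audit."""
    return [(sha, ai_assisted_tool(msg)) for sha, msg in commits
            if ai_assisted_tool(msg) is not None]
```

When a CVE drops, feeding `git log` output through `audit_candidates` turns "which components were AI-generated?" into a seconds-long query instead of a codebase-wide guess.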

The security bar needs to rise, not fall. The appropriate response to AI tools producing vulnerable code 45% of the time is not to ship faster and patch later. It is to invest in static analysis, SAST tooling, and automated security review as a prerequisite for AI-assisted code paths. Faster code generation requires stronger automated gates.
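What "stronger automated gate" can mean in practice: the merge is blocked, not merely annotated. A sketch assuming a SAST scanner that emits a JSON report with a `results` list carrying `severity` fields — the report shape is hypothetical, so adapt it to whatever your scanner actually emits:

```python
import json

# Merge-gate sketch: fail closed on high-severity findings instead of
# shipping fast and patching later. The report format is assumed, not standard.
def merge_allowed(report_json: str, blocking=("HIGH", "CRITICAL")) -> bool:
    findings = json.loads(report_json).get("results", [])
    blockers = [f for f in findings
                if f.get("severity", "").upper() in blocking]
    for f in blockers:
        print(f"BLOCKED: [{f.get('severity')}] {f.get('issue')} in {f.get('file')}")
    return not blockers
```

In CI, this runs after the scanner and before merge; a `False` return maps to a nonzero exit code, so the AI-assisted path cannot ship faster than its security review.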

Humans stay in the loop on consequential decisions. This is the one that gets ignored. Not “human in the loop” as a checkbox on a compliance form, but genuine architectural review by engineers who understand the system. AI can generate the implementation. Humans need to own the design decisions.

The Atlassian Signal

The Atlassian announcement matters not because of the number — 1,600 is not large in the context of the global tech industry — but because of the admission. AI is now advanced enough, and trusted enough, that companies will publicly justify workforce reductions by citing it.

That is a different world than 2024, when every AI efficiency claim came with a dozen caveats and nobody was willing to say the quiet part.

What happens next is predictable: more companies follow. Headcount pressure increases across the industry. The pressure to ship more with less intensifies. And the temptation to lean harder on AI tools — with less review, less accountability, looser norms — grows.

That is the actual risk. Not that AI writes bad code. It’s that the economic pressure to move fast will cause teams to abdicate ownership of AI-generated code precisely when the stakes of that code are rising.

Own the Output

The most important shift the industry needs to make is cultural, not technical.

AI coding tools are not autonomous engineers. They are powerful amplifiers of human intent. The output belongs to the engineer who directed the generation and approved the result. Full stop.

This means:

  • You read the AI-generated code before it ships, actually read it, not just run the tests
  • You understand what the AI produced well enough to defend it in a postmortem
  • You treat AI-generated code with the same scepticism you would treat code from a contractor you’ve never worked with before — trust but verify, always

The Atlassian cuts will not be the last. The productivity claims are real. The economic incentives are powerful. AI-assisted development is not going away.

But the engineers who survive and thrive in this environment will be the ones who understood that AI writes the code, and humans own the bugs. That distinction is not a technicality. It is the entire job.