Vibe Coding and the New Algorithm Prison

January 29, 2026

The Feed That Writes Your Code

I remember the exact moment I stopped discovering music on my own. It was around 2016. Spotify's Discover Weekly had gotten so good that I stopped browsing record stores, stopped asking friends what they were listening to, stopped doing the work of taste. The algorithm was just better at finding songs I liked than I was. So I surrendered.

A decade later, I find myself watching programmers make the same surrender — only this time, what they are giving up is not taste in music but the capacity to think through problems in code. They call it "vibe coding," and it feels like the most exciting thing to happen to software development in years. I think it might also be one of the most dangerous.

What "Vibe Coding" Actually Means

The term comes from Andrej Karpathy, the former Tesla AI director and OpenAI co-founder, who coined it in a February 2025 post on X. His description was disarmingly honest: "There's a new kind of coding I call 'vibe coding,' where you fully give in to the vibes, embrace exponentials, and forget that the code even exists." You talk to an AI agent in plain English, it writes the code, you run it, and if something breaks, you paste the error back in and let the AI fix it. You never read the code. You just vibe.

The term exploded. By the end of 2025, Collins English Dictionary had named it Word of the Year. Google searches for "vibe coding" spiked. Startups launched entire platforms around the concept. And a new class of builders emerged — product managers, designers, entrepreneurs — who could suddenly conjure working applications without writing a single line of code themselves.

By early 2026, even Karpathy had moved on. He now prefers the term "agentic engineering," reflecting a more mature, professional framing. But the genie was out of the bottle. Millions of people had tasted the intoxicating feeling of building software without understanding software, and they were not going back.

The Algorithm Parallel

Here is what strikes me about vibe coding: it follows the exact same trajectory as algorithmic social media feeds. As I argued in Build vs. Buy Is Dead, AI is making it trivially cheap to generate software — but cheap generation and deep understanding are not the same thing.

In the early days of the internet, you chose what to consume. You bookmarked blogs. You curated RSS feeds. You went looking for information and brought it back, like a hunter returning with game. Then the algorithms arrived — Facebook's News Feed in 2006, YouTube's recommendation engine, TikTok's For You page — and something fundamental shifted. You stopped choosing. The feed chose for you.

The bargain seemed great at first. The algorithm surfaced content you would never have found on your own. It was efficient. It was personalized. It felt like an upgrade. But over time, we noticed the costs: shortened attention spans, filter bubbles, a generation of people who could scroll for hours but struggled to sit with a single long-form piece. A 2025 study from Microsoft and Carnegie Mellon University found that the more people relied on AI tools, the less critical thinking they engaged in, and the harder it became to summon those skills when they were genuinely needed. The researchers described the result bluntly: cognition left "atrophied and unprepared."

Vibe coding is this same story, retold in a new domain. Instead of outsourcing your taste to an algorithm, you outsource your problem-solving to an LLM. Instead of passively consuming content someone else curated, you passively accept code someone — something — else wrote. The feeling of productivity is real. The underlying dependency is also real.

The Evidence Is Coming In

For a while, the concern about skill atrophy was just a hypothesis. It is no longer.

In early 2026, Anthropic published a randomized controlled trial with 52 engineers who were asked to learn an unfamiliar Python library. Half used AI coding assistance. Half coded manually. The result: developers using AI scored 17% lower on comprehension tests — equivalent to nearly two full letter grades — despite completing the task slightly faster. Debugging skills showed the steepest decline, which is particularly alarming given that catching AI-generated errors is supposed to be the human's job.

The Anthropic study also identified a telling pattern in how developers used AI. High-scoring participants asked follow-up questions, combined code generation with explanations, and used AI for conceptual queries while writing code themselves. Low-scoring participants did what most vibe coders do: they delegated everything. They became what one developer in Addy Osmani's analysis of the phenomenon called "a human clipboard" — shuttling errors back and forth to the AI without ever learning from the solutions.

Meanwhile, the METR research group found something counterintuitive: experienced open-source developers using AI tooling actually completed tasks 19% slower than those working without it. The efficiency gains that vibe coding promises may be, at least for experienced engineers working on complex problems, a mirage.

And the downstream effects are starting to ripple. A January 2026 paper titled "Vibe Coding Kills Open Source" documented how AI-mediated development is hollowing out the open-source ecosystem. Tailwind CSS, one of the most popular front-end frameworks, saw its documentation traffic drop roughly 40% and revenue fall close to 80% — because AI agents were using the library without humans ever visiting the docs, reporting bugs, or engaging with the community. Stack Overflow activity fell approximately 25% within six months of ChatGPT's launch. The knowledge commons that made software development possible is being consumed without being replenished.

The "Good Enough" Trap

Stack Overflow's editorial team published a sharp piece in January 2026 called "A new worst coder has entered the chat," documenting a non-technical writer's attempt to build an app using vibe coding tools. The app appeared to work. But when developer friends reviewed the code, the verdict was damning: messy, nearly impossible to understand, no data protection, no unit tests, inline styling everywhere, and massive monolithic components. As the author put it, "It felt like hitting one of those 'That was easy!' buttons from Staples. But it was too easy."

This is the "good enough" trap. The code works, in the narrow sense that it runs and produces output. But you cannot debug it when it breaks in production. You cannot extend it when requirements change. You cannot audit it for security vulnerabilities — and a December 2025 analysis found that AI-generated code contains security vulnerabilities at 2.74 times the rate of human-written code. You have a product, but you do not have understanding. And in software, understanding is not a luxury. It is the foundation.
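To make the vulnerability point concrete, here is one of the most common flaws in this class, using an invented toy schema: a lookup function that interpolates user input straight into a SQL string, next to the reviewed, parameterized version. The unsafe version is exactly the kind of code that runs fine in the demo and fails the audit.

```python
import sqlite3

# Toy schema and data, invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'hunter2'), ('bob', 'swordfish')")

def lookup_unsafe(name: str):
    # User input interpolated directly into the query string:
    # works in the happy path, exploitable in production.
    return db.execute(f"SELECT secret FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # The reviewed version: a parameterized query, so input never
    # becomes part of the SQL itself.
    return db.execute("SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "nobody' OR '1'='1"
print(lookup_unsafe(payload))   # leaks every secret in the table
print(lookup_safe(payload))     # returns nothing
```

Both functions pass a casual "does it work" check with normal input. Only a reader who understands the code sees that one of them hands the database to anyone who types a quote character.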

The analogy I keep coming back to is hiring a contractor to build your house while refusing to look at the blueprints. The house might stand. But when a pipe bursts at 2 AM, you will have no idea where the shutoff valve is.

The Calculator Objection (And Why It Doesn't Hold)

The strongest counter-argument is the calculator analogy. When calculators entered classrooms, critics predicted the death of mathematical reasoning. That didn't happen. Students still learned long division; they just spent less time on arithmetic drills and more on higher-order concepts. Calculators became a tool that amplified mathematical thinking rather than replacing it.

So will vibe coding be the same? Will it simply shift the burden upward, from syntax to architecture, from implementation to design? Scott Young makes a thoughtful version of this argument, noting that even in a vibe-coding world, "I needed the knowledge of how code works and what design constraints to set up." The theory-heavy computer science curriculum might become more valuable, not less.

I find this argument partially convincing. But there is a crucial difference between calculators and LLMs: calculators were never designed to feel like they understood you. A calculator does not give you the illusion that it is thinking on your behalf. An LLM does. When you ask Claude or GPT to explain a bug, the explanation feels like understanding — your understanding. But it is not. You have consumed an explanation. You have not built a mental model. The feeling of learning and the act of learning have been decoupled, and that decoupling is far more dangerous than anything a TI-84 ever did.

Moreover, we kept teaching math fundamentals alongside calculators. The educational infrastructure was preserved. Are we doing the same for coding? Early signs are mixed. Universities are scrambling to figure out assessment in a world of AI coding assistants. Some bootcamps have pivoted entirely to "prompt engineering." The fundamentals are not guaranteed to survive.

The Paradox at the Center

Here is the deep irony of vibe coding: the people who are best at it are the ones who need it least.

Simon Willison, one of the most respected voices in the Python community, drew a careful distinction in 2025 between vibe coding and what he calls "vibe engineering." If an LLM wrote every line of your code, but you have reviewed, tested, and understood it all, that is not vibe coding — that is using an LLM as a typing assistant. Real vibe coding, in Willison's framing, is the "fast, loose, and irresponsible" version. Vibe engineering is the professional version: experienced developers accelerating their work while remaining "proudly and confidently accountable for the software they produce."

The distinction matters because it reveals a skill prerequisite hidden inside the tool. An experienced developer can evaluate AI output, catch subtle bugs, recognize architectural anti-patterns, and know when the AI is confidently wrong. A novice cannot. The tool appears to democratize coding, but it actually widens the gap between those who understand software and those who merely use software that appears to work.
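What Willison's accountability looks like in practice can be shown with a small hypothetical: a plausible AI-generated helper that reads cleanly and works in the demo, but hides an edge-case bug, plus the boundary test a reviewing engineer writes before shipping. The function names and the pagination scenario are mine, invented for illustration.

```python
import math

# Hypothetical AI-generated helper: looks right, passes the happy path,
# but uses floor division, so a partial final page is silently dropped.
def total_pages_ai(item_count: int, per_page: int) -> int:
    return item_count // per_page          # bug: 10 items at 3 per page -> 3

# The reviewed version an accountable engineer ships instead.
def total_pages(item_count: int, per_page: int) -> int:
    return math.ceil(item_count / per_page)

# The tests that separate vibe engineering from vibe coding: they probe
# the boundary, not just the case the demo happened to exercise.
assert total_pages(9, 3) == 3      # exact fit
assert total_pages(10, 3) == 4     # partial last page must still count
assert total_pages_ai(10, 3) == 3  # the AI version quietly loses a page
```

A novice running the demo with twelve items would never see the bug. An experienced reviewer writes the `10, 3` case on instinct, which is exactly the prerequisite hidden inside the tool.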

Anthropic's own research confirmed this: the developers who used AI most effectively were the ones who engaged critically with its output, asking why, not just asking for more code. The ones who suffered skill degradation were the ones who treated the AI as an oracle rather than a collaborator.

What Lingers

I am not against AI coding tools. I use them daily. They have changed how I prototype, how I explore unfamiliar APIs, how I handle the kind of boilerplate that no human should write by hand. The problem is not the tools. The problem is the vibe — the cultural attitude that says understanding is optional, that the code does not matter as long as the demo works, that we can skip the hard parts because the machine will handle them.

Social media algorithms did not destroy our capacity for attention in a single moment. They eroded it gradually, through a thousand small surrenders: one more scroll, one more autoplay video, one more recommendation accepted without question. Each individual surrender was trivial. The cumulative effect was not.

Vibe coding invites the same pattern. Each individual delegation — "just let the AI write this function, I will review it later" — is harmless. But "later" never comes, and the muscle atrophies, and one day you realize you are a passenger in your own codebase.

The developers who will thrive in this new landscape are the ones who treat AI the way a good chess player treats an engine: as a sparring partner that makes you sharper, not a crutch that lets you stop thinking. They will be the ones who insist on understanding, even when understanding is no longer required. They will be the ones who remember that the point of building software was never just to have software. It was to solve problems. And you cannot solve problems you do not understand.

The vibes are great. But someone should still be paying attention.
