
Vibe Coding: The Era Where Everyone Is a Programmer Has Truly Arrived

From code completion to vibe coding, AI programming has undergone three major leaps. This isn't just tool improvement—when AI learns to write code, it gets the keys to manipulate the entire digital world.


One afternoon, I spoke a sentence into my terminal window: "Build me a bilingual Chinese-English personal website with physics engine animations."

Then I hit Enter.

I don't know frontend development at all. I've never written React, and Next.js routing configurations are completely foreign to me. But that afternoon, the website you're looking at right now—75 TypeScript files, 29 React components, a draggable tag wall powered by the Matter.js physics engine, and an AI chat assistant connected to a large language model—was "talked" into existence.

All the code was written by Claude Code. I didn't write a single character.

This is called vibe coding. A year ago, most people hadn't even heard the term.

One Word, One Movement

On February 2, 2025, Andrej Karpathy casually posted a tweet on X:

"I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works."

He called this new way of programming "vibe coding." You no longer write code line by line; instead, you tell the AI what you want in plain language, then watch it build the thing. When you hit an error, you throw the error message at it, and it usually gets fixed. The code grows to a scale you can't even read, and you don't bother reading it.

Karpathy himself said it was just "a shower thought casually posted." But it named something many people were already doing without knowing what to call it.

You probably noticed what happened next—Collins Dictionary selected "vibe coding" as the 2025 Word of the Year, and Google searches for the term exploded by 6,700%. A casual tweet became the name of a movement.

Put simply, vibe coding boils down to one sentence: You don't need to understand code; you just need to articulate what you want.

From Completion to Conversation: The Three Leaps of AI Programming

AI helping humans write code isn't new. But the past few years have seen three completely different leaps, and understanding the distinctions is key to grasping what's actually new about vibe coding.

The first step was code completion. When GitHub launched Copilot in 2021, programmers got excited: it could guess what you were going to write next from your half-finished code. Much like predictive text on your phone, typing got faster, but you still had to know what to type. For non-programmers, Copilot was no help at all.

The second step was code generation. After ChatGPT went viral in 2023, you could describe a need in natural language and receive a chunk of code. This was a major leap beyond Copilot, but you still had to understand that code, know where to put it, and fix bugs yourself. It was like having a fast but unreliable intern: helpful, but you had to watch them closely.

The third step is vibe coding. Starting in early 2025, tools like Claude Code and Cursor can read your entire project's codebase, create files, modify files, run tests, fix bugs, and even find workarounds when they hit obstacles. You transform from a code writer into a requirements provider. In essence, you become the product manager, and the AI becomes the engineering team.

The difference here isn't one of degree; it's categorical. Crossing from "AI helps you write code" to "you tell AI what you want" represents a threshold. As I discussed in my previous article, AI isn't the next Copilot—it's a category leap.

Numbers Don't Lie

Data from the Y Combinator Winter 2025 batch surprised many people. YC CEO Garry Tan said that 25% of startups in that cohort had codebases that were 95% AI-generated.

YC is a top-tier global startup incubator—these are people raising real funding and building real products.

Around the same time, Stack Overflow's developer survey showed 84% of programmers using AI tools in their daily work. Microsoft CEO Satya Nadella said at LlamaCon that 20% to 30% of the code in Microsoft's own repositories is AI-written.

NVIDIA CEO Jensen Huang said something that I think captures this most accurately:

"Everyone is a programmer. The new programming language is called human language."

Two years ago, that sounded like a vision. Looking back now, it's more like a plain description of what was already happening.

My Personal Experiment

Back to the website I mentioned at the beginning. Honestly, when I decided to build it, I wasn't confident.

The tech stack sounds intimidating: Next.js 16, React 19, TypeScript strict mode, Tailwind CSS v4, a Velite + MDX content system, and the Matter.js physics engine. I understood none of it. I'm not being modest; I genuinely knew nothing.

All I did was converse with Claude Code. All the code was written by it; I didn't touch a single character.

That particle animation on the homepage—the floating light points that follow your mouse when you refresh—is backed by 187 lines of Canvas rendering code handling device pixel ratio adaptation, dark mode switching, and detecting the user's "reduced motion" preference. I know nothing about Canvas programming, but I can describe the visual effects I want to the AI and iterate round by round.
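Two of those concerns, pixel-ratio scaling and reduced motion, come down to small pure functions. A hypothetical sketch of the kind of logic involved; the names are mine, not from the actual site's code:

```typescript
// Scale the canvas backing store by devicePixelRatio so particles stay
// crisp on high-DPI screens (the "device pixel ratio adaptation" above).
function backingStoreSize(cssWidth: number, cssHeight: number, dpr: number) {
  return {
    width: Math.round(cssWidth * dpr),
    height: Math.round(cssHeight * dpr),
  };
}

// Each frame, ease a particle a fraction of the way toward the cursor.
// If the user prefers reduced motion, skip the chase entirely.
function stepParticle(
  p: { x: number; y: number },
  mouse: { x: number; y: number },
  reducedMotion: boolean,
  ease = 0.05
) {
  if (reducedMotion) return p;
  return {
    x: p.x + (mouse.x - p.x) * ease,
    y: p.y + (mouse.y - p.y) * ease,
  };
}
```

In the browser, `dpr` would come from `window.devicePixelRatio` and `reducedMotion` from `matchMedia("(prefers-reduced-motion: reduce)")`; they are plain parameters here so the logic stands alone.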

That draggable physics tag wall is even more interesting. Under the hood runs the Matter.js physics engine; each tag has gravity, friction, and restitution coefficients, and if they fall off-screen they bounce back automatically. 262 lines of code, not a single one typed by me, all written by Claude Code. But every line exists because I said something like "tags should bounce back when they fall off."
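On the real site Matter.js does this work, but the core behavior, gravity each frame plus a bounce with restitution at the bottom edge, fits in a few lines. A hand-rolled illustration, with made-up constants:

```typescript
// Hand-rolled stand-in for what Matter.js provides: integrate gravity,
// then bounce a tag that falls past the container's bottom edge,
// keeping only a fraction of its speed (the restitution coefficient).
interface Tag {
  y: number;  // vertical position, px
  vy: number; // vertical velocity, px per frame
}

const GRAVITY = 0.98;    // px/frame^2 (hypothetical value)
const RESTITUTION = 0.6; // fraction of speed kept after a bounce
const FLOOR = 500;       // y-coordinate of the bottom edge

function step(tag: Tag): Tag {
  let vy = tag.vy + GRAVITY;
  let y = tag.y + vy;
  if (y > FLOOR) {
    // Fell off the bottom: clamp to the floor and reverse velocity.
    y = FLOOR;
    vy = -vy * RESTITUTION;
  }
  return { y, vy };
}
```

Calling `step` once per animation frame is all it takes for "tags should bounce back when they fall off" to hold.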

Then there's the AI chat assistant—299 lines of complete chat interface code, connected to the LLM API, supporting streaming output, displaying WeChat-style in Chinese environments and WhatsApp-style in English environments.
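Streaming output is the one genuinely tricky part of such a component. The pattern, stripped of React and of the actual API details (the async-iterable source and the callback are stand-ins of my own):

```typescript
// Append each token to the message as it arrives instead of waiting for
// the full reply. `chunks` stands in for the LLM API's response stream;
// `onToken` would update component state in a real chat UI.
async function streamReply(
  chunks: AsyncIterable<string>,
  onToken: (partial: string) => void
): Promise<string> {
  let message = "";
  for await (const chunk of chunks) {
    message += chunk;
    onToken(message); // re-render with the partial reply so far
  }
  return message;
}
```

The user sees the reply grow token by token, which is why streaming chat interfaces feel responsive even when the full answer takes seconds.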

Domain configuration, DNS settings, Vercel deployment: all things I don't understand at all, and Claude handled those too. I didn't even know what DNS was; I just watched Claude click around in the browser, then tell me "done."

The whole project took one afternoon. If I had to learn these tech stacks from scratch and write it myself, I wouldn't have started at all, because I wouldn't have known where to begin.

I'm not watching this revolution happen from the sidelines. I'm using it to build houses—though I haven't moved a single brick myself.

But—

If I only wrote about the good parts up to here, this article would become a sales pitch. It's not that simple.

In July 2025, METR released a rigorous controlled experiment: they had experienced open-source developers (averaging 5 years of experience, 1,500 commits) use AI tools to work on development tasks for their own projects. The result: with AI, these people were actually 19% slower. More intriguingly, the developers themselves felt AI made them 20% faster—perception and reality were completely inverted.

Another data point from CodeRabbit's analysis at the end of 2025: code with AI involvement had 1.7 times the proportion of severe issues compared to purely human-written code, and security vulnerabilities were 2.74 times higher.

These numbers are real; there's no point avoiding them.

But I think they're actually describing two different things. The METR experiment measured "experts doing work they were already good at"—if you make a race car driver explain every turn to a passenger while driving, of course they'll slow down. But the point of vibe coding was never to make race car drivers faster; it was to let people who couldn't even get on the road start driving.

As for security vulnerabilities, this shows AI-written code does need human review before hitting production. But consider this: without vibe coding, many of these projects wouldn't exist at all. The question to ask isn't "Is AI-written code perfect?" but rather "Is it good enough to build things that couldn't be built before?" For personal projects, prototypes, and internal tools, the answer is obvious.

These problems are real. But tools are iterating rapidly; weaknesses from six months ago may already be fixed today. The direction is right; the road is still being paved.

Code Is Just the Beginning

If you still think vibe coding just means "building websites without learning to program," you're missing the bigger picture.

Following the logic from my previous article: when AI can directly convert capital into productivity, what exactly is that conversion mechanism? I believe the answer is code.

The digital world runs on code. Every app on your phone, every website you use, every online payment behind the scenes—it's all code running. Code is the universal interface through which humans manipulate the digital world.

And vibe coding means AI has mastered this interface.

When AI can reliably write code, it can build software. When it can build software, it can automate nearly any digital task—booking flights, managing schedules, analyzing reports, building websites, calling various APIs. These things are all essentially variants of "write a program and run it."

This is why the AI agents I discussed in the previous post deserve so much attention. For an agent to do things for you in the digital world, it needs to be able to operate that world. How? By writing code, calling APIs, reading and writing files. Vibe coding gives the agent this capability. Put another way, vibe coding is the interface between you and the agent: you express intent in natural language, and the AI uses code to make it reality.

The previous article said "capital can bypass human labor and convert directly into productivity." Vibe coding is that bypass.

Your First Step

If you've read this far, you might have two voices battling in your head. One says "this is so cool," and the other says "but I don't know how to program."

The good news is, the second voice is precisely the problem vibe coding solves. You don't need to know programming. You just need to know how to speak and type.

I use Claude Code myself—a terminal-based AI programming tool from Anthropic. Installation takes just one command.

For macOS or Linux users, open your terminal:

curl -fsSL https://claude.ai/install.sh | bash

For Windows users, open PowerShell:

irm https://claude.ai/install.ps1 | iex

After installation, open your terminal in any folder, type claude, and try saying: "Help me build a small BMI calculator tool."

See what happens.
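For reference, the heart of such a tool is only a few lines. A hypothetical version of the core the AI might generate (the actual output will vary):

```typescript
// BMI = weight (kg) divided by height (m) squared.
function bmi(weightKg: number, heightM: number): number {
  return weightKg / (heightM * heightM);
}

// Standard WHO-style bands for interpreting the result.
function bmiCategory(value: number): string {
  if (value < 18.5) return "underweight";
  if (value < 25) return "normal";
  if (value < 30) return "overweight";
  return "obese";
}

// bmi(70, 1.75) ≈ 22.9, which bmiCategory labels "normal".
```

The AI will wrap logic like this in whatever interface you ask for: a web page, a command-line prompt, or a chat-style exchange.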

What you just did would have required a computer science degree five years ago. Three years ago, it would have meant digging through Stack Overflow for hours. A year ago, it would have involved copying and pasting code snippets from ChatGPT back and forth.

Now you just said a sentence.

Tools will continue getting faster and smarter; this direction won't change. Looking back from five years in the future, 2026 might be the year ordinary people started "writing" software in natural language. Rather than sighing about it then, you might as well try it now.
