Vibe Coding: The Era Where Everyone Is a Programmer Has Arrived

From code completion to vibe coding, AI programming has gone through three leaps. This isn't just about better tools—when AI learns to write code, it gains the key to control the entire digital world.

Jiawei Guan · 7 min read

One afternoon, I said something to my terminal window: "Build me a bilingual personal website with physics engine animations."

Then I hit Enter.

I know nothing about frontend development. I've never written React, and Next.js routing is completely foreign to me. But that same afternoon, the website you're looking at right now—75 TypeScript files, 29 React components, a draggable tag wall powered by the Matter.js physics engine, and an AI chat assistant connected to a large language model—was "talked" into existence.

All the code was written by Claude Code. I didn't write a single word.

This is called vibe coding. A year ago, most people had never heard the term.

What Exactly Is Vibe Coding?

On February 2, 2025, Andrej Karpathy casually posted a tweet on X:

"I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works."

He called this new way of programming vibe coding. You no longer write code line by line; instead, you tell the AI what you want in everyday language, then watch it build things for you. When you hit an error, you throw the error message at it, and it usually fixes it. The codebase grows to a point where you can't possibly read it all, and you simply stop trying.

Karpathy himself said it was just a "shower thought" he posted on a whim. But it seemed to articulate something many people were already doing but didn't have a name for.

You probably noticed what happened next—Collins Dictionary named vibe coding its 2025 Word of the Year, and Google searches for the term surged by 6,700%. A casual tweet became the name of a movement.

To put it simply, vibe coding boils down to one sentence: you don't need to understand code; you just need to be able to articulate what you want.

From Completion to Conversation: The Three Leaps of AI Programming

AI helping people write code is nothing new. But the past few years have seen three completely distinct leaps. Understanding the differences between them is key to grasping what's actually new about vibe coding.

The first step was code completion. When GitHub launched Copilot in 2021, programmers got excited—it could guess what you were going to write next based on your half-finished code. Much like predictive text on a smartphone, it made typing faster, but you still had to know what to type. Someone who doesn't know programming would still be completely lost in front of Copilot.

The second step was code generation. After ChatGPT exploded in 2023, you could describe a requirement in plain language and it would spit out a code snippet. This was a huge step up from Copilot, but you still had to understand that code, know where to put it, and fix bugs yourself. It was like having a fast but somewhat unreliable intern—helpful, but you had to watch them closely.

The third step is vibe coding. Starting in early 2025, tools like Claude Code and Cursor can read your entire project codebase, create files, modify files, run tests, fix bugs, and even find workarounds when they hit roadblocks. You go from being the person who writes code to the person who makes requests. In other words, you become the product manager, and the AI becomes the engineering team.

The difference here isn't one of degree; it's one of kind. Moving from "AI helps you write code" to "you tell the AI what you want" crosses a fundamental threshold. As I discussed in my previous article, AI isn't the next Copilot—it's a category jump.

How Widespread Is AI Programming, Really?

Last year's Y Combinator winter batch data surprised a lot of people. YC's CEO Garry Tan said that 25% of startups in that cohort had codebases that were almost entirely AI-generated, with 95% of their code written by AI.

YC is a top-tier global startup accelerator. These are people seriously raising funding and shipping products.

Around the same time, Stack Overflow's developer survey showed that 84% of programmers use or plan to use AI tools in their daily work. At LlamaCon, Microsoft CEO Satya Nadella said that 20% to 30% of Microsoft's own codebase was written by AI.

NVIDIA CEO Jensen Huang put it best, in what I think is the most accurate summary of what's happening:

"Everyone is a programmer. The new programming language is human language."

Two years ago, that sounded like a vision. Looking back now, it reads more like a plain description of what's already happened.

My Personal Experiment

Back to the website I mentioned at the beginning. To be honest, I wasn't confident when I decided to build it.

The tech stack looks intimidating on paper: Next.js 16, React 19, TypeScript strict mode, Tailwind CSS v4, Velite + MDX content system, Matter.js physics engine. But I knew nothing about any of these—I'm not being modest; I genuinely had no clue.

All I did was talk to Claude Code. It wrote all the code; I didn't touch a single character.

The particle animation on the homepage—if you refresh, you'll see floating light dots that follow your mouse—is backed by 187 lines of Canvas rendering code that handles device pixel ratio adaptation, dark mode toggling, and even detects the user's "reduce motion" preference. I know nothing about Canvas programming, but I could describe the visual effect I wanted to the AI and iterate round after round.
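
For the curious, the device-pixel-ratio part of such Canvas code usually boils down to a few lines of arithmetic. This is an illustrative sketch, not the site's actual file; every name here is mine:

```typescript
// Sketch: size a canvas's backing store for a given devicePixelRatio,
// so the animation stays crisp on high-DPI screens. Kept as a pure
// helper; real code would wire this to window.devicePixelRatio and
// a resize listener, and scale the 2D context by `scale`.
interface BackingStore {
  width: number;  // physical pixels
  height: number; // physical pixels
  scale: number;  // factor to pass to ctx.scale()
}

function backingStoreFor(cssWidth: number, cssHeight: number, dpr: number): BackingStore {
  const scale = Math.max(1, dpr); // never render below CSS resolution
  return {
    width: Math.round(cssWidth * scale),
    height: Math.round(cssHeight * scale),
    scale,
  };
}

// In the browser, the "reduce motion" check mentioned above is one line:
// const reduced = window.matchMedia("(prefers-reduced-motion: reduce)").matches;
```
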

The draggable physics tag wall is even more interesting. It runs on the Matter.js physics engine underneath; each tag has gravity, friction, and restitution coefficients, and if one falls off the screen, it bounces back automatically. Two hundred sixty-two lines of code, not one of them typed by me—all written by Claude Code. Yet every single line exists because I said something like "if a tag falls off, it should bounce back."
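
The bounce-back rule itself fits in a few lines. What follows is a hypothetical reconstruction of the idea, not the site's actual code; with Matter.js you would apply something like it to each body every frame via Matter.Body.setPosition:

```typescript
// Sketch: if a tag's center has fallen well below the visible area,
// move it back above the viewport so gravity drops it in again.
// All names are illustrative.
interface Vec2 { x: number; y: number; }

function recoverFallenTag(pos: Vec2, viewportHeight: number, margin = 100): Vec2 {
  if (pos.y > viewportHeight + margin) {
    // Re-enter from just above the top edge, same horizontal position.
    return { x: pos.x, y: -margin };
  }
  return pos; // still on screen: leave it alone
}
```
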

Then there's the AI chat assistant—a full 299-line chat interface connected to a large language model API, supporting streaming output, with a WeChat-style UI in Chinese environments and a WhatsApp-style UI in English ones.
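
Streaming output from large language model APIs commonly arrives as server-sent events, one "data:" line per chunk. I'm assuming that framing here rather than quoting the site's code; a minimal parser for it could look like this:

```typescript
// Sketch: extract payloads from a chunk of server-sent-events text.
// Each event is a line of the form "data: <payload>"; the "[DONE]"
// sentinel used by several LLM APIs marks the end of the stream.
function parseSSEChunk(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length))
    .filter((payload) => payload !== "[DONE]");
}
```
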

Claude also handled the domain configuration, DNS settings, and Vercel deployment, none of which I understood at all. I didn't even know what DNS was. I just watched Claude click around in the browser and then tell me, "Done."

The whole project took one afternoon. If I had to learn these tech stacks from scratch and write it myself, I never would have started, because I wouldn't have known where to begin.

I'm not watching this transformation from the sidelines. I'm using it to build houses—without ever laying a single brick myself.

Is AI-Generated Code Actually Reliable?

If I stopped at the good parts, this article would read like an advertorial. It's not that simple.

In July 2025, METR published a rigorous controlled experiment: they had experienced open-source developers (averaging 5 years of experience and 1,500 commits) use AI tools to work on their own projects. The result? These developers were actually 19% slower when using AI. Even more telling, the developers themselves felt they were 20% faster—perception and reality were completely inverted.

Another data point comes from CodeRabbit's year-end 2025 analysis: code with AI involvement had 1.7 times as many critical issues as purely human-written code, and security vulnerabilities were 2.74 times higher.

These numbers are real, and there's no point in avoiding them.

But I think they're actually telling two different stories. The METR experiment measured "experts doing work they already know well"—it's like asking a race car driver to explain every turn to a passenger while driving; of course they'll slow down. But the point of vibe coding was never to make the race car driver faster; it was to let people who could never get on the road start driving in the first place.

As for security vulnerabilities, this does mean AI-written code needs human review before going to production. But consider this: without vibe coding, many of these projects wouldn't exist at all. The right question isn't "Is AI-written code perfect?" but "Is it good enough to build things that couldn't be built before?" For personal projects, prototypes, and internal tools, the answer is obvious. Later, I used the same approach to "write" 300,000 lines of code in 10 days, then delete all of it—that experience taught me that code is a liability, not an asset.

These problems are real. But the tools are iterating rapidly; weaknesses from six months ago may already be patched today. The direction is right; the road is still being paved.

Code Is Just the Beginning

If you still think vibe coding is just "making websites without learning to program," you're selling it short.

Following the thread from my previous article: when AI can convert capital directly into productivity, what is the mechanism in the middle? I believe the answer is code.

The digital world runs on code. Every app on your phone, every website you use, every online payment—all of it is powered by code running behind the scenes. Code is humanity's universal interface for controlling the digital world.

And vibe coding means AI has mastered that interface.

When AI can reliably write code, it can build software. When it can build software, it can automate almost any digital task—booking flights, managing schedules, analyzing reports, building websites, calling APIs. All of these are essentially variations on "write a program and run it."

That's why the AI Agents I discussed in my previous article are so worth watching. For an agent to do things on your behalf in the digital world, it needs to be able to operate the digital world. How? By writing code, calling APIs, reading and writing files. Vibe coding gives the agent that capability. Or to put it another way: vibe coding is the interface between you and the AI agent—you express intent in natural language, and the AI turns it into reality with code.

The previous article said that "capital can bypass labor and convert directly into productivity." Vibe coding is the bypass.

Your First Step

If you've read this far, there might be two voices in your head. One says, "This is so cool." The other says, "But I don't know how to program."

The good news is that the second voice is describing exactly the problem vibe coding solves. You don't need to know how to program. You just need to be able to speak and type.

I personally use Claude Code, a terminal-based AI programming tool from Anthropic. Installation takes just one command.

For macOS or Linux users, open your terminal:

```shell
curl -fsSL https://claude.ai/install.sh | bash
```

For Windows users, open PowerShell:

```powershell
irm https://claude.ai/install.ps1 | iex
```

Once installed, open a terminal in any folder, type claude, and try saying: "Help me write a small BMI calculator."

See what happens.
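
For reference, one plausible shape of what it might hand back (BMI is just weight in kilograms divided by height in meters squared; this sketch is mine, not Claude's actual output):

```typescript
// Sketch: a tiny BMI calculator of the kind the prompt above might produce.
// BMI = weight (kg) / height (m)^2.
function bmi(weightKg: number, heightM: number): number {
  if (heightM <= 0) throw new Error("height must be positive");
  return weightKg / (heightM * heightM);
}

function bmiCategory(value: number): string {
  if (value < 18.5) return "underweight";
  if (value < 25) return "normal";
  if (value < 30) return "overweight";
  return "obese";
}
```
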

What you just did would have required a computer science degree five years ago. Three years ago, it would have meant digging through StackOverflow for hours. A year ago, it would have involved copying and pasting code snippets from ChatGPT back and forth.

Now you just said a sentence.

The tools will keep getting faster and smarter; that direction won't change. Looking back from five years in the future, 2026 may well be remembered as the year ordinary people started "writing" software in natural language. Rather than waiting to marvel at it in retrospect, you might as well try it now. If you want to see more real-world examples, I documented the entire process of talking this official website into existence with Claude Code.
