The Word 'AI' Has Changed Its Soul Three Times

A few decades ago, AI meant machine learning. Three years ago, it was ChatGPT. This year, it's agents. The word never changed, but what it contains has been replaced three times over. Most people are still stuck in the previous generation's cognitive framework, discussing something that doesn't actually exist.

Jiawei Guan · 7 min read

Recently, I've been chatting with friends about AI, and the more we talk, the more something feels off. Everyone claims to understand it; everyone can carry a conversation. But halfway through, I often realize we're not talking about the same thing at all.

This reminds me of a joke an economist once made. He said economics is different from physics. If you say at the dinner table that you're a physicist, the other person will usually respond, "I don't understand physics—what's new lately?" and then listen attentively. But if you say you study economics, the other person will say, "I don't really understand economics," and then proceed to share their opinions on economics for half an hour. The whole table is discussing economics, except the economist barely gets a word in.

AI is the new economics.

One Word, Three Souls

The term "Artificial Intelligence" was coined by John McCarthy at the Dartmouth workshop in 1956. Nearly seventy years have passed, and the word has never changed, but what it actually refers to has quietly shifted several times.

Generation One: A Distant Legend of Intelligence

Back when I was in school, AI in the industry mainly referred to machine learning. Finding patterns in astronomy, mining data in finance, making click-through predictions for ads. These things were far removed from ordinary people; no one would bring them up at the dinner table.

The public's first impression of AI came from two events.

On May 11, 1997, IBM's Deep Blue defeated Garry Kasparov in the deciding sixth game, making Kasparov the first reigning world chess champion to lose a match to a computer under standard tournament conditions. At the time, people still viewed it as a supercomputing machine; it had nothing to do with so-called "general intelligence."

The real turning point was March 2016, when AlphaGo defeated Lee Sedol 4-1 in Seoul. Go holds a special place in the Chinese-speaking world; it represents strategy, the big picture, the art of war. A program that could beat a top human player immediately sparked associations: if it's this strong at Go, is it also strong in other domains? Does it already possess "general intelligence"?

Around the same period, computer vision also broke through. In 2014, the GaussianFace model from Professor Tang Xiaoou's group at the Chinese University of Hong Kong achieved 98.52% accuracy on the LFW face recognition benchmark, surpassing the human average of 97.53% for the first time. In early 2015, Microsoft Research's PReLU network dropped the ImageNet classification error rate to 4.94%, once again exceeding the reported human error of roughly 5.1%.

Faces are different. If a cat shifts slightly, you can't tell whether it's Cat A or Cat B; but if a person's facial features change a little, you immediately know "this isn't the same person." Human sensitivity to faces is innate. For computers to surpass humans at this task was profoundly significant.

The strategy of Go combined with the perception of vision—these two things collided, and public sentiment was instantly ignited.

What was dramatic is that before AlphaGo, China's computer vision companies weren't actually doing well. After SenseTime raised its angel round from IDG in 2014, it went through a stretch of tight funding; outside capital was reluctant to come in. Go had seemingly nothing to do with computer vision, yet it was only after AlphaGo set the whole AI sector alight that the next round of funding came together smoothly. From 2016 to 2018, the "Four Little Dragons" of AI (SenseTime, Megvii, CloudWalk, and Yitu) raised money hand over fist.

But the hype cooled after a few years.

Why? Because the anticipated "generalization" never materialized. AlphaGo could only play Go; ask it to do anything else and it was useless. Vision companies would train one model for face recognition and another for vehicle recognition; each additional scene increased costs almost linearly. The story that "massive investment produces general intelligence" simply didn't hold for that generation of technology.

Around 2020, sentiment returned to the nuts and bolts of commercialization.

Generation Two: ChatGPT Gave AI a Mouth

The second major shift came on November 30, 2022, when OpenAI released ChatGPT, built on GPT-3.5.

The biggest difference this time was "language." For the first time, AI could chat with people in a way that actually seemed plausible. Sitting on the other side of the screen, you sometimes really couldn't tell whether it was a person or a machine.

Veterans of AI would immediately think of the Turing test: a judge holds blind text conversations with a human and a machine, and tries to tell which is which. This was the standard Alan Turing proposed in 1950 for judging machine intelligence. People had assumed it would take a long time to pass.

As it turned out, in 2024 research by Cameron Jones at UC San Diego, GPT-4 was judged to be "a real person" with a 54% probability in a five-minute conversation test, already close to the human baseline of 67%. In follow-up research in 2025, GPT-4.5 was considered to have passed the "original Turing test" outright. A standard once enshrined as sacred is now no longer mentioned, because it was quietly crossed long ago.

What truly set ChatGPT apart was that it turned AI into something everyone could try with their own hands. The machine learning and AlphaGo generation existed for ordinary people only as news; using it themselves was out of the question. After ChatGPT, that changed. By February 2026, 900 million people were opening it at least once a week. That level of penetration is an entirely different thing from before.

So for the past two or three years, when the vast majority of people talked about AI—whether they meant ChatGPT, DeepSeek, Doubao, or Kimi—they were referring to this one thing: chatbots.

Generation Three: From Mouth to Hands and Feet

But from the end of last year to the beginning of this year, the meaning of AI changed once again.

Lately I keep running into the same scenario in conversation. The other person says, "I get it, I use it all the time; I've used Doubao, Kimi, DeepSeek." That statement held true for the past few years. It doesn't anymore, because the AI I'm talking about is no longer that thing.

The new paradigm is called "Agent."

If the previous generation of AI was an advisor with a mouth and a brain, sitting across from you making conversation, then this generation is a colleague with hands and feet, capable of picking up tools and doing work for you, at least in the digital world.
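The shift from "mouth" to "hands and feet" can be made concrete. Strip away the branding and an agent is just a loop: the model proposes a tool call, a harness executes it, and the result is fed back until the model declares itself done. Below is a minimal sketch of that loop, with a stub standing in for a real LLM API; every name here is illustrative, not any vendor's actual interface.

```python
# Minimal agent loop: model proposes an action, harness executes it,
# result is appended to the history, repeat until the model finishes.
import subprocess

def stub_model(history):
    # Hypothetical stand-in for an LLM call: decide the next action
    # from the conversation history.
    if not any(msg["role"] == "tool" for msg in history):
        return {"tool": "shell", "args": {"cmd": "echo hello from the agent"}}
    # A tool result exists, so wrap up and report it as the answer.
    return {"tool": "finish", "args": {"answer": history[-1]["content"].strip()}}

def run_tool(name, args):
    # The harness's tool belt; here just one tool, a shell command.
    if name == "shell":
        return subprocess.run(args["cmd"], shell=True,
                              capture_output=True, text=True).stdout
    raise ValueError(f"unknown tool: {name}")

def agent_loop(task, model=stub_model, max_steps=5):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = model(history)
        if action["tool"] == "finish":
            return action["args"]["answer"]
        result = run_tool(action["tool"], action["args"])
        history.append({"role": "tool", "content": result})
    return None  # step budget exhausted

print(agent_loop("say hello"))  # prints "hello from the agent"
```

Everything that distinguishes a chatbot from an agent lives in that loop: the chatbot stops after generating text, while the agent's text is interpreted as an action, executed, and its consequences fed back in.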

The defining event that ignited this shift was Anthropic's release of Claude Computer Use in October 2024. For the first time, AI could look at the screen, move the mouse, click buttons, and type on the keyboard. In the year or so since, coding agents have become the main battleground. Claude Code arrived in early 2025 and matured over the year; OpenAI released Codex CLI in April 2025 and followed with the cloud-based Codex in May; domestic products like Kimi Code also joined the race.

What's most interesting is the generalization. Coding agents were originally aimed at "writing code," but in practice, people discovered they could do far more than that: researching information, batch processing files, operating browsers, debugging software, running experiments automatically. They can handle 80% to 90% of tasks on a computer, and do them decently well. This generation has finally delivered on the "generalization" that the previous generation promised but failed to achieve.

Why didn't many people notice that this AI and the last AI are two completely different things? Because the word didn't change; it's still the same two letters, "AI." But whether the "AI" in your mind is a chat box or an agent that can work on its own makes a world of difference.

What Exactly Are We Talking About?

Returning to the original question: what exactly are we talking about when we discuss AI today?

I've tried to break it down.

First, there's the person for whom AI means Doubao, Kimi, or DeepSeek: that kind of chat box. That's the 2022-to-2024 generation's understanding. We're not talking about the same thing.

Second, someone who has used agent applications like OpenClaw, Hermes, or KimiClaw that are controlled via instant messaging. At least we're on the same plane; we can have a conversation.

Third, someone who uses Claude Code, Codex, or Kimi Code on a daily basis—any one of these coding agents. Then we're talking about the exact same thing, and we can discuss the changes it brings with great precision.

I keep emphasizing this because it's a lot like the Turing test. It has already been quietly crossed, yet many people are still stuck in the previous generation's cognitive framework, discussing something that doesn't actually exist.

The Real Barrier Isn't Technical

Every time the meaning of AI shifts, the way ordinary people experience it changes as well.

In the machine learning generation, ordinary people had no chance to get their hands on it. The barrier was too high—you had to prepare data, configure environments, tune parameters; it was completely unfriendly to non-specialists. Impressions of AI could only come from the news.

In the ChatGPT generation, the barrier to adoption collapsed. Open a web page and use it, for free; people could finally get their hands on AI. That's when they started weighing in on which one was better or worse. They could comment because they could use it.

The agent generation is actually the same. The barrier is low; you can get started for a few dozen yuan a month. I previously helped a few college students use Claude Code. They had no special background, yet they still managed to complete their own projects.

So where is the real barrier? I think it's just one word: laziness.

Not technical laziness, but cognitive laziness. You've heard others talk about it, you've swiped through two short videos, you've read a few viral articles on social media, and you think you understand. But you haven't actually gotten your hands dirty, haven't had it do a concrete task for you, haven't experienced that "ah, so that's how it works" moment.

This is how it is with every technological transformation. It was like this when the steam engine emerged, when the internet spread, and when mobile internet spread. Technology is never the slow part; the slow part is the people who think they understand after hearing a thing or two.

And after every shift, old cognitive frameworks become a hidden cost. You think you're using AI, but you're actually using the previous generation of AI.

If you've read this far without actually using an agent once, the advice is simple: find the cheapest option and start using it today. No need to watch videos, no need to read tutorials—just have it do something you would have had to do yourself, and see what happens.

