What Agents Lack Isn't Intelligence, It's Trust

Zero-friction onboarding plus extremely powerful AI intelligence: two pillars can't hold up a product. When users watch an agent execute incomprehensible commands in their terminal, their first reaction isn't awe, it's fear. The missing pillar is progressive trust.

Jiawei Guan · 3 min read

Recently, while building an AI product, I fell into a trap.

We had been building around two core ideas. The first is zero-friction onboarding. Open a terminal, paste one command, hit Enter, and you're using it. No software to install, no permissions to grant, no wrestling with OS security pop-ups. During early promotion we had discovered that onboarding friction was the number-one killer of trial rates: people got annoyed before they even started. Once we reached zero friction, success rates improved substantially and the experience felt great.

The second is extremely powerful AI intelligence. Since onboarding is so simple, users just state their needs and leave the rest to the agent. We designed an agent-team architecture that mixes models of different capabilities and has multiple workers collaborate, so complex tasks get handled at the lowest possible cost in the shortest time.
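
As a rough illustration only, not our actual implementation, the pattern looks something like this: a planner decomposes the task, and each subtask is routed to a worker backed by a model tier chosen for cost and speed. The model names, routing rule, and hard-coded plan below are all placeholders.

```python
# Sketch: a planner/worker agent team with cost-aware model routing.
# Model names and the fixed plan are illustrative placeholders.
import asyncio
from dataclasses import dataclass

@dataclass
class Subtask:
    description: str
    hard: bool  # planner's estimate of difficulty

async def call_model(model: str, prompt: str) -> str:
    # Placeholder for a real LLM API call.
    await asyncio.sleep(0)
    return f"[{model}] result for: {prompt}"

async def worker(task: Subtask) -> str:
    # Route hard subtasks to a stronger (pricier, slower) model,
    # easy ones to a cheap, fast one.
    model = "strong-model" if task.hard else "fast-model"
    return await call_model(model, task.description)

async def run(task: str) -> list[str]:
    # A real planner would call a model to decompose `task`;
    # here the plan is hard-coded for illustration.
    plan = [
        Subtask("gather requirements", hard=False),
        Subtask("design the tricky core step", hard=True),
        Subtask("produce the final report", hard=False),
    ]
    return await asyncio.gather(*(worker(t) for t in plan))

print(asyncio.run(run("user request")))
```

The point of the tiering is economic: reserve the expensive model for the steps that actually need it, and let the cheap one carry everything else.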

Both pillars were in place, and the results were decent.

But when demoing to others, the reactions were far weaker than I expected. I kept wondering where the problem lay.

Fear

One day, while having dinner with a friend and talking about this, it suddenly clicked.

When our product executes a task, line after line of commands pops up in the terminal. Technical friends found it interesting, saying the commands were well chosen and the task breakdown was solid. But for most people, when a string of incomprehensible code suddenly flashes across the screen, the first reaction isn't awe. It's fear.

What's it doing? Will it delete my stuff? Will it break my computer?

Previously, a user gave me feedback after using it: "Are you running a script? What's written in the script?" I was quite puzzled at the time—why would they think that? Someone else said: "Wow, it really finished! But... what exactly is this thing?"

Even with a technical background, you can't fully understand what the agent is doing just by watching the interface. Ordinary users have even less of a chance.

Without understanding, there is fear.

The Trust Is Broken

Thinking back to the early promotion days, some people preferred having me help them remotely over letting the agent do it. The process was more troublesome, but it felt safe: there was a person helping, and if something went wrong they could talk to him. They knew it was Jiawei Guan handling the task, and they trusted that person.

Replace that with an agent, and that layer of trust disappears.

On one side, extreme intelligence makes autonomous decisions and executes autonomously on your device. On the other, the output is completely incomprehensible. The device is my asset; having something inscrutable messing around on it makes anyone uncomfortable.

The stronger the intelligence, the more incomprehensible its exposed behavior, and the more scared users become. These two things stacked together are dangerous.

We were missing a pillar.

Rebuilt Overnight

After figuring it out, we rebuilt the interaction overnight.

It's still a terminal, but what you see after opening it is completely different. When the agent connects, it greets you. When it's researching, it says, "I'm looking up relevant information." When it finds reusable information, it tells you. Every step is explained in natural language: what it's preparing to do, how it decided to do it, and what it's currently executing. If a step fails, it explains why, and why it's changing direction.
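
A minimal sketch of this narration pattern, assuming a simple wrapper around command execution. The function names, messages, and fallback text are illustrative, not our product's code:

```python
# Sketch: announce every action in plain language before and after it
# runs, and explain failures along with the chosen fallback.
import subprocess

def narrate(msg: str) -> None:
    print(f"agent> {msg}")

def run_step(intent: str, command: list[str], fallback: str | None = None) -> None:
    narrate(f"I'm about to {intent}.")
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode == 0:
        narrate(f"Done: {intent}.")
    else:
        narrate(f"That failed ({result.stderr.strip() or 'no output'}), "
                f"so I'll {fallback or 'stop and ask you how to proceed'}.")

narrate("Connected. I'll look up the relevant information first.")
run_step("check which Python version is installed",
         ["python3", "--version"],
         fallback="try the `python` command instead")
```

The command still runs underneath; what changed is that every action is announced before and after it happens, and a failure comes with a stated next step instead of a silent retry.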

It's no longer rows of incomprehensible commands—it's a collaborator that can talk.

The agent's capabilities haven't changed, but user feedback is completely different.

Claude Code Walked the Same Path

After finishing, I was reminded of Claude Code.

At first, engineers watched every line of code it wrote and every command it executed. Some weren't reassured, so they expanded all the folded output and checked each item one by one. Then they found that 95% of the time it didn't mess up, and they started leaving the output folded. A bash command would collapse to a single line, and they'd just wait for it to finish. Over time, less and less information was shown, and nobody thought that was a problem.

Someone on our team told me a story. One day he suddenly realized he had never once said no to Claude Code. Every time a permission request popped up, he clicked approve. A prompt he answered yes to 100% of the time had no reason to exist, so he simply turned on bypass permissions and let it do its thing.

This isn't something you can do from day one. Handing over all permissions on the first day would make anyone panic. But after interacting for a while and confirming it won't mess up, trust naturally develops.
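
One way such progressive autonomy could be implemented is a per-category approval streak: keep asking until a category of action has earned enough consecutive yeses, then stop asking. This is a sketch under that assumption; the threshold, category names, and reset rule are illustrative, and this is not how Claude Code actually decides:

```python
# Sketch: a progressive-trust permission gate. A category is
# auto-approved once it accumulates enough consecutive approvals.
from collections import defaultdict

AUTO_APPROVE_AFTER = 20  # consecutive approvals before we stop asking

class TrustGate:
    def __init__(self) -> None:
        self.streak: dict[str, int] = defaultdict(int)

    def allowed(self, category: str, description: str) -> bool:
        # A question always answered "yes" carries no information,
        # so stop asking once trust has been earned.
        if self.streak[category] >= AUTO_APPROVE_AFTER:
            return True
        answer = input(f"Allow {category}: {description}? [y/N] ").strip().lower()
        if answer == "y":
            self.streak[category] += 1
            return True
        self.streak[category] = 0  # a single "no" resets earned trust
        return False

gate = TrustGate()
if gate.allowed("read_file", "open ~/project/config.toml"):
    print("...reading file...")
```

Resetting the streak on a single refusal is the conservative choice: trust accrues slowly and is lost instantly, which mirrors how people actually extend it.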

You Can't Skip Steps

Building trust between humans and the unknown is a slow process.

If a product launches on day one, explains nothing, and automatically executes a bunch of operations on the user's device—even if the results are good—people will freak out. "What's going on? Will it mess up my stuff?"

There must be a gradual process. First, let people clearly see what the agent is doing and why, confirm that it won't cause problems, and then slowly let go. You can't skip steps.

So the three pillars of our product are set. Zero-friction onboarding, extremely powerful AI intelligence, and progressive trust. Translated into experience: simple, powerful, friendly, safe, and controllable.

Only when all three pillars stand firm does the product reach a state where it's ready for others to use.
