I saw a post the other day saying GitHub is becoming like Xiaohongshu (RED). Thought about it, and yeah, it kind of makes sense.
Casual Participation
At the end of March, the Claude Code source code leaked. Not open-sourced intentionally—a 59.8 MB source map was bundled in the npm package. Following it led to the complete source code on Anthropic's storage bucket: nearly 1,900 TypeScript files, 510,000 lines of code. Bun generates source maps by default, and nobody added it to .npmignore, so it leaked out just like that.
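The guard against this kind of leak, for what it's worth, is a one-line packaging rule: keep generated source maps out of the publish set. A minimal sketch (the glob is mine, not Anthropic's actual config):

```
# .npmignore: keep generated source maps out of the published tarball
*.map
```

Running `npm pack --dry-run` before publishing lists exactly what would ship, which would have flagged a 59.8 MB file immediately.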
After the leak, a bunch of open-source projects popped up. One of them is called OpenClaw, previously named Clawdbot—it was renamed following an Anthropic trademark complaint. It supports various LLMs and now has 350k stars.
I had wanted to hack Claude Code to support OpenAI models. Codex isn't as smooth as Claude Code at organizing information and orchestrating tasks, but after sizing up the workload I decided it was too much and gave up. When I saw OpenClaw, I thought: sure enough, if you can think of it, someone has basically already done it.
Downloaded it and gave it a try. It works, but the reasoning effort defaults to high, while I usually run extra high; at that setting, used alongside Claude Code, it can indeed tackle more complex problems. And there was no way to change the default.
So I got to work. Had Claude Code help me modify OpenClaw's code, adding a three-layer structure: provider → model → effort, similar to the multi-API architecture approach used by Open Code and Kilo Code. After the changes, there were bugs, so I debugged them a bit and casually submitted a PR.
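The three-layer idea is easy to picture. A hypothetical TypeScript sketch (the provider names, model ids, and effort labels here are invented for illustration; this is not OpenClaw's actual code):

```typescript
// Hypothetical sketch of a provider -> model -> effort hierarchy,
// similar in spirit to the multi-API layout of Open Code / Kilo Code.
type Effort = "low" | "medium" | "high" | "xhigh";

interface ModelConfig {
  id: string;
  efforts: Effort[];      // effort levels this model's API accepts
  defaultEffort: Effort;
}

interface ProviderConfig {
  name: string;
  baseUrl: string;
  models: ModelConfig[];
}

const providers: ProviderConfig[] = [
  {
    name: "example-openai",
    baseUrl: "https://api.example.com/v1",
    models: [
      {
        id: "example-model",
        efforts: ["low", "medium", "high", "xhigh"],
        defaultEffort: "high",
      },
    ],
  },
];

// Walk the three layers: an unknown provider or model is an error;
// a missing or unsupported effort falls back to the model's default.
function resolve(provider: string, model: string, effort?: Effort) {
  const p = providers.find((x) => x.name === provider);
  if (!p) throw new Error(`unknown provider: ${provider}`);
  const m = p.models.find((x) => x.id === model);
  if (!m) throw new Error(`unknown model: ${model}`);
  const e = effort && m.efforts.includes(effort) ? effort : m.defaultEffort;
  return { baseUrl: p.baseUrl, model: m.id, effort: e };
}
```

The point of the extra layer is that effort stops being a global constant and becomes a per-model setting with a fallback, so each provider can expose whatever levels its API actually supports.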
CI failed. Didn't bother with it. A colleague tried it and said fast mode doesn't work—true enough, the tool is too slow for daily use without fast mode. Made another round of changes and added them to the PR. CI failed again, smoke tests didn't pass.
The original author replied with one word: conflict.
Understandable. Changing from the original fixed design of three models to a multi-provider, multi-API approach is too big a shift—it conflicts with the direction he wants to maintain.
But the whole process was quite interesting. See it, download it. Doesn't work smoothly? Modify it. Changed it? One-click PR submission. The other person's comments pushed to email—take a look, reply. Kind of like scrolling through Xiaohongshu. No longer that ceremonial sense of "formally participating in an open-source project" from before; you just see it and do it.
The Dumpling Shop's Skill
Around the same time, I saw another post.
Jinguyuan Dumpling Shop, a dumpling restaurant next to Beijing University of Posts and Telecommunications (BUPT)—the owner made a Claude Code skill and posted it on their official WeChat account.
The content is kind of funny: the menu, delivery info, Wi-Fi password—all packed in there. The owner said you can use this skill in the shop to get the latest information.
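For readers who haven't written one: a Claude Code skill is essentially a folder containing a SKILL.md whose YAML frontmatter tells the model when to load it. Roughly what the dumpling shop's might look like (all names and contents below are invented placeholders):

```markdown
---
name: jinguyuan-dumplings
description: Menu, delivery details, and in-store info (Wi-Fi, hours) for the dumpling shop next to BUPT. Use when the user asks about the shop.
---

# Jinguyuan Dumpling Shop

## Menu
- (dish names and prices go here)

## Delivery
- (delivery range and contact go here)

## Wi-Fi
- (SSID and password go here)
```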
The funniest part is the comments under the WeChat post. A bunch of people are opening issues for the dumpling shop.
Official WeChat accounts have become GitHub-ified.
The owner said since the shop is next to BUPT, customers coming in to eat talk about AI, skills, and Claude Code every day. After hearing enough, he went home and vibe coded for a few hours to write it.
This skill itself isn't very useful. Who would install a skill just to check a dumpling shop's Wi-Fi password? But if they included the dumpling-making process, like how Lao Xiang Ji (Country Style Cooking) did with recipes back in the day—glancing at the steps while cooking, having it recommend ingredient combinations—that would actually be interesting.
Hollywood's GitHub
I was having dinner with someone the other day and asked if they'd seen Resident Evil. Its lead actress, Milla Jovovich, who also starred in The Fifth Element, published a project under her own GitHub account in early April this year.
The project is called MemPalace, an AI memory system. Inspired by the memory palace technique, conversation data is organized in a three-layer structure: wing, hall, room. It stores raw conversations, not summaries, runs locally with ChromaDB plus SQLite, zero API costs.
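Based only on that description, the wing → hall → room hierarchy can be sketched as a simple addressing scheme over raw turns. A hypothetical TypeScript sketch (the real project reportedly sits on ChromaDB plus SQLite; this in-memory version only illustrates the three-layer layout):

```typescript
// Hypothetical sketch of memory-palace addressing: every raw
// conversation turn is filed under a wing -> hall -> room path.
interface MemoryEntry {
  path: [wing: string, hall: string, room: string];
  turn: string; // raw conversation text, not a summary
  timestamp: number;
}

class MemoryPalace {
  private entries: MemoryEntry[] = [];

  store(wing: string, hall: string, room: string, turn: string): void {
    this.entries.push({
      path: [wing, hall, room],
      turn,
      timestamp: Date.now(),
    });
  }

  // Recall everything filed in one room, in insertion order.
  recall(wing: string, hall: string, room: string): string[] {
    return this.entries
      .filter(
        (e) => e.path[0] === wing && e.path[1] === hall && e.path[2] === room,
      )
      .map((e) => e.turn);
  }
}
```

Storing raw turns rather than summaries is the design choice the author was reacting against: nothing in the hierarchy decides what to forget; it only decides where things live.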
The motivation was straightforward: when chatting with AI, she found that existing memory systems arbitrarily decide what to remember and what to forget. Couldn't stand it, so she roped in a crypto developer named Ben Sigman, and the two of them built it over several months using Claude Code.
Got over 20k stars in two days, now over 40k. The controversy is around the benchmarks: the project initially claimed 100% on LongMemEval, was later suspected of having been tuned specifically to the test questions, and revised the figure to 96.6%. The architecture itself, though, drew praise, including from a computer science professor at USC.
Hollywood celebrity, GitHub repo owner, 40k stars. If you had said this a few years ago, nobody would have believed it.
Distilling Colleagues
A phrase has started trending on WeChat Moments: "distill your colleagues into a skill."
Someone actually did it. A project called "colleague.skill" got 70k stars within days of launching. It feeds in colleagues' Feishu (Lark) messages, DingTalk documents, and work emails to generate an AI skill that mimics that person's work habits and decision-making style.
Derivative projects keep popping up: distilling ex-partners, distilling yourself, distilling public figures. The most extreme is an "anti-distillation" tool someone built: it generates a skill file that looks complete but deliberately withholds core knowledge, so its author can't be distilled.
I think people are overthinking this.
Most people's so-called personal style at work has little value. Communication habits and interaction styles, once you actually deploy them, produce nothing; their effect is smaller than the differences between the models themselves. Matching different models with different skills is probably far more effective than distilling a person.
The imagined scenario makes for good anxiety bait: pour in the chat logs and the person becomes replaceable. But like the dumpling shop's Wi-Fi password, there's simply no demand for it.
Skills Are Wrong, Agents Are Right
The dumpling shop owner actually said something interesting: the future might be location-based. Your personal assistant agent walks into the shop and directly interacts with the shop's agent.
I think this direction is correct.
You bring your own agent; it knows your taste, what you ate recently, what dietary restrictions you have. Walk into a shop, your agent chats with the shop's agent: What do you have? What's recommended today? Which dish has good reviews? After chatting, it synthesizes recommendations based on your preferences. You only need to talk to your own agent.
This is different from scanning a QR code. Scanning a code leaves you facing a wall of dish names and ratings, browsing and choosing on your own. Agent-to-agent is two programs, each understanding its own owner, communicating on your behalf.
If a shop consolidates its experience, culinary knowledge, and customer feedback into its own agent service, your agent connects with it, learns what needs to be learned, and you just make the final call. This is completely different from the old days of scanning codes to order food.
The problem with skills is that they still require people to actively install and use them. A Wi-Fi password made into a skill? Nobody installs that. But when the shop becomes an agent service, you walk in and connect automatically.
Where Did the Barriers Go
Restaurant owners vibe coding skills for a few hours. Hollywood celebrities as primary GitHub authors. Alternatives with 350k stars popping up days after a source code leak. My entire process of submitting a PR was as casual as posting a Xiaohongshu note.
In the past, "participating in open source" meant reading documentation, reading code, writing tests. Now you see it, modify it, submit it. GitHub is becoming like Xiaohongshu, and official WeChat accounts are becoming like GitHub. The barriers are indeed disappearing.
As for agent interoperability, looking at the pace of projects like OpenClaw, it might be closer than most people think.
