There are no barriers to AI applications, and Zhu Xiaohu may be right

Zhu Xiaohu recently argued that "AI applications have no barriers," drawing widespread attention. This article digs into the logic behind that view and analyzes why, in the AI era, technology itself struggles to serve as a company's moat.

Over the past few days, Zhu Xiaohu has been back in the spotlight.

First, in a conversation, Zhu Xiaohu said bluntly that because commercialization is "unclear" and the degree of consensus in the industry is already too high, he is exiting his humanoid-robot investments in batches.

Yesterday, at the annual meeting of the 2025 Zhongguancun Forum, Zhu Xiaohu said again:

"There are no barriers to AI applications, and to say that there are barriers is to fool people, and to build barriers on non-AI capabilities."

On exiting humanoid robots in batches, Crow will hold off on commenting for now. But on the claim that AI applications have no barriers, Zhu Xiaohu may well be right.

This can also be seen in the recent transformation of OpenAI.

In a recent interview, Sam Altman made it clear that he would rather OpenAI be a website with a billion users than have the state-of-the-art model.

There are signs that OpenAI is transforming from a large model company to a consumer Internet company.

The reason is simple: large models are not a moat. OpenAI spent enormous sums without building a moat around its models, and xAI and DeepSeek caught up with far less money and far less time.

AI applications are even more exposed. Look no further than OpenAI's latest image feature: Ghibli-style pictures made ChatGPT go viral again, and the sky promptly fell in on the AI image-generation startups.

It's no wonder that Amjad Masad, founder of Replit, a star AI programming company, recently said in an interview:

"With extremely low user switching costs, the metrics for revenue growth of AI companies are failing."

If technology is not the moat for AI applications, where does the business moat lie in the AI era?

01 Large models have no moat, and neither does OpenAI

In a recent interview with Ben Thompson, Sam Altman was asked: "Would OpenAI rather be a destination site with a billion daily active users, or have the state-of-the-art model?"

Without any hesitation, Altman answered: "I think a website with a billion users."

As Ben Thompson put it, OpenAI is on its way to becoming a consumer product company, and Altman recognizes this.

Why is this shift happening? The reason is simple: AI technology by itself does not form a moat.

On this point, an internal Google memo titled "We Have No Moat, And Neither Does OpenAI" leaked in 2023. It makes three points:

1) People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should think about where our real added value lies.

2) Huge models are slowing us down. In the long run, the best models are those that can be iterated quickly.

3) Data quality scales better than data size. As open source makes LLM research more affordable, maintaining a technological edge becomes harder, and the breadth-first exploration the open community can do far exceeds what we can manage on our own.

The more tightly we control our models, the more attractive open alternatives become; we cannot hope to both drive innovation and keep it under control.

The later rise of DeepSeek proved the point well. An advantage that took OpenAI so much time and money to build was caught up with, and AI applications have even less of a head start.

The reshuffling of AI image products illustrates this well. Ever since Midjourney released its AI image generator, the top spot in AI image generation has never stopped changing hands.

In August 2022, Stable Diffusion dominated the market by adding 10 million users in three months; later, Midjourney iterated on its product and held the industry's top traffic spot for a long stretch; later still, Stable Diffusion faded and Midjourney's traffic was overtaken by Leonardo.ai. By January 2025, SeaArt's website traffic reached 18.44 million visits, vaulting it to the top of the global AI image-generation rankings.

Now, OpenAI's updated image features have left many feeling that the moat of startups specializing in image generation has vanished overnight.

Beyond image generation, similar concerns exist even in today's hottest track, AI programming.

In a recent interview, Amjad Masad, founder of the star AI programming company Replit, admitted: "The metric of rapidly growing ARR is now failing, because the cost of switching between AI products is so low that anyone can go from Copilot to Windsurf to Cursor in five minutes."

There is no doubt that AI is breaking down moats as traditional business thinking understands them. Even a16z partner Justine Moore laments:

"It's crazy that the only moat right now is to keep putting out cool stuff."

Interestingly, in an earlier look at the AI startup Lovable, one of the most important points in the product methodology its founder described was rapid iteration.

Lovable built its core functionality in a single weekend, polished and optimized it over a few more, and after a rapid launch let user feedback drive iteration. In Anton Osika's words: "find the biggest bottlenecks and product problems, solve them quickly and iterate; don't keep an overly long roadmap."

It seems that in the AI era, change is the only constant. But the question of AI applications' moats is not without answers, and ChatGPT's explosion over the past few days may offer a useful hint.

02 From cultural resonance to usage data: building barriers on non-AI capabilities

Regarding the barriers to AI applications, NFX, a foreign investment institution, once made an analogy:

AI applications are like bottled water. Bottled water is pretty much all the same, and in this seemingly homogeneous market you need to offer unique, differentiated value elsewhere: branding, marketing, channels.

What does that mean? Let Crow give you an example and you'll see.

In the past few days, on the strength of Ghibli-style images, ChatGPT added 1 million users in a single hour, even crazier than in 2022, when the newly launched ChatGPT needed five days to add its first million.

Interestingly, both are autoregressive models capable of image generation, yet Gemini, whose image feature launched a week earlier than ChatGPT-4o's, made nowhere near as much noise.

And the only real difference between the two is that ChatGPT-4o ships with one more filter than Gemini.

Evidently, to the public, a more powerful filter says far more about cutting-edge AI productivity than any number on a model leaderboard. That is how simply ordinary users understand these great technological breakthroughs.

It is a bit like when ChatGPT first blew up. Its popularity back then far exceeded Altman's expectations, so much so that, looking back later, Altman admitted he had not seen ChatGPT as anything revolutionary at the time, just a simple interface upgrade.

The fact behind this is that whether an advanced technology is truly accepted by the public often depends not on the technology itself, but on non-technical factors: a low enough barrier to use, or a stronger cultural resonance.

If the extra value of a low usage barrier and cultural resonance builds an early moat for AI applications, then usage data deepens that moat further.

Not long ago, Sequoia Capital partner Konstantine Buhler wrote an article with a clear thesis: usage data is the moat of the AI era.

It is worth noting that the usage data here differs from the data of the internet era. Ordinary internet data has become highly standardized and commoditized; data in the AI era means the feedback loops created by the unique ways users apply a product to solve their specific problems. In some verticals this data is more proprietary, and the moat is naturally more solid.

You can refer to Google for this point.

In the beginning, Google automated search ranking with the PageRank algorithm. Within just a few years, however, its advantage shifted from PageRank to the click data collected from users' search behavior.

With this data, Google can predict in the moment how likely a user is to click a given link by looking at people who ran a similar search and clicked it before.
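That feedback loop is easy to picture in code. The sketch below is purely illustrative, not Google's actual system: the `record_feedback` and `rerank` functions, the in-memory counters, and the blending weight are all assumed names for the sake of the example. It blends a base relevance score with the click-through rate observed for similar past queries, so accumulated user behavior keeps reshaping the ranking.

```python
from collections import defaultdict

# Illustrative only: a toy click-feedback loop, not any real search engine's system.
# Historical counts of impressions and clicks per (query, url) pair.
impressions = defaultdict(int)
clicks = defaultdict(int)

def record_feedback(query, url, clicked):
    """Log one result shown to a user and whether it was clicked."""
    key = (query.lower().strip(), url)
    impressions[key] += 1
    if clicked:
        clicks[key] += 1

def rerank(query, candidates, weight=0.5):
    """Blend a base relevance score with the observed click-through rate.

    `candidates` is a list of (url, base_score) pairs, e.g. from a static
    ranking algorithm. The more often users clicked a url for similar
    queries, the higher it ranks next time.
    """
    key_query = query.lower().strip()

    def score(item):
        url, base = item
        key = (key_query, url)
        ctr = clicks[key] / impressions[key] if impressions[key] else 0.0
        return (1 - weight) * base + weight * ctr

    return sorted(candidates, key=score, reverse=True)

# Usage: feedback accumulated from past users reshapes future rankings.
record_feedback("best pizza", "pizza-blog.example", clicked=True)
record_feedback("best pizza", "generic-food.example", clicked=False)
print(rerank("best pizza", [("generic-food.example", 0.9), ("pizza-blog.example", 0.8)]))
```

The point of the sketch is the loop itself: every search users run feeds the counters, the counters feed the ranking, and that accumulated behavioral data is exactly what a competitor cannot simply copy.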

This is also why OpenAI wants to make ChatGPT a consumer Internet product - to collect more user feedback.

In Konstantine's view, OpenAI is not yet doing enough on this front. For example, it does not let users edit the model's responses, which could yield more high-quality feedback data.

Finally, Konstantine's point may be easiest to grasp by paraphrasing Lu Xun:

"AI products don't have a moat, but the more people use them, they have their own moat."

Text/Lin Bai

This article was written by [Crow Wisdom Says] (WeChat public account: Crow Wisdom Says), an author at Everyone Is a Product Manager, and published there with authorization. Reproduction without permission is prohibited.

Header image from Unsplash, under the CC0 license.
