Author: Jin Lei, Fa Zi, Ao Fei Si; Source: Qubit
DeepSeek's hot streak is still going.
Just this past weekend, DeepSeek overtook ChatGPT to take the No. 1 spot on the free-app chart of the US Apple App Store.

It is so popular that some netizens have described it like this:
What do I think? I don't even like AI assistant apps, yet I still downloaded DeepSeek.

The reason is the reasoning model R1, which DeepSeek open-sourced a few days ago and which has set off wave after wave of discussion.
R1, which reportedly cost only US$5.6 million to train, has matched or even surpassed OpenAI's o1 model on many AI benchmarks.
And DeepSeek is genuinely free; ChatGPT may sit on the free chart too, but unlocking its full version still costs $200...
Now, if you search for "DeepSeek", you will run into topics like "Goodbye ChatGPT":

And it is not just people in tech circles who are paying attention; venture capitalist Marc Andreessen, for example, praised it highly:
DeepSeek R1 is one of the most amazing breakthroughs I have ever seen.

Even a one-sentence answer to the question "How will DeepSeek make money?" ("DeepSeek is a side project") has been passed around like crazy by netizens...

In short, it is really hot.

It has also triggered a wave of replication
DeepSeek R1 is itself an open-source model, and just yesterday it set off a wave of replication attempts.
The project in question is Open R1, launched by HuggingFace on GitHub.

Released only two days ago, the project has already earned 4.2K stars.
Co-founder and CEO Clem Delangue said:
Our science team has begun working on fully replicating and open-sourcing R1, including the training data, training scripts...
We hope to harness the full power of open-source AI so that everyone in the world can benefit from the progress of AI! I believe this will also help debunk some myths.
In the Open R1 project document, the official further stated:
The purpose of this project is to build the missing pieces of the R1 pipeline so that anyone can reproduce R1 and build on top of it.
HuggingFace said it will use the DeepSeek-R1 technical report as a guide and complete the project in three steps:
Step 1: Distill a high-quality corpus from DeepSeek-R1 to replicate the R1-Distill models.

Step 2: Replicate the pure reinforcement-learning (RL) pipeline DeepSeek used to build R1-Zero. This may involve curating new large-scale datasets for math, reasoning, and code.

Step 3: Show that the base model can be taken to the RL-tuned version through multi-stage training.
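The distillation in Step 1 can be sketched in miniature. A common approach (assumed here, not confirmed as Open R1's exact code) is rejection sampling: have the teacher generate several candidate solutions per prompt, keep only the traces whose final answer checks out, and fine-tune the student on those. The function and record format below are illustrative:

```python
# Minimal sketch of building a distillation corpus, assuming the teacher
# (e.g., DeepSeek-R1) has already generated candidate solutions per prompt.
# The record format and helper name are hypothetical, for illustration only.

def build_sft_corpus(samples, reference_answers):
    """Keep only teacher generations whose final answer matches the reference.

    samples: list of dicts {"prompt": str, "reasoning": str, "answer": str}
    reference_answers: dict mapping prompt -> known-correct answer
    Returns SFT records pairing each prompt with a verified reasoning trace.
    """
    corpus = []
    for s in samples:
        gold = reference_answers.get(s["prompt"])
        # Rejection sampling: discard traces with a wrong final answer,
        # so the student is fine-tuned only on verified reasoning.
        if gold is not None and s["answer"].strip() == gold.strip():
            corpus.append({
                "prompt": s["prompt"],
                # The student learns to emit the full chain of thought
                # followed by the answer, mimicking the teacher's format.
                "completion": s["reasoning"] + "\n\nAnswer: " + s["answer"],
            })
    return corpus


# Two candidate solutions for one math prompt; only the correct one survives.
samples = [
    {"prompt": "What is 12 * 12?", "reasoning": "12 * 12 = 144.", "answer": "144"},
    {"prompt": "What is 12 * 12?", "reasoning": "12 * 12 = 124.", "answer": "124"},
]
corpus = build_sft_corpus(samples, {"What is 12 * 12?": "144"})
print(len(corpus))  # → 1
```

The resulting records would then feed a standard supervised fine-tuning run on the student model.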

Beyond the replication craze, netizens have been sharing an endless stream of ways to use it.
For example, one user shared "Build Everything with DeepSeek R1", a step-by-step guide to building games, writing programs, and more.

Riding on DeepSeek's popularity, the standing of these "lights of domestic AI" keeps rising:
First DeepSeek, now Kimi k1.5... China's large models are developing very fast.

On the buzz around DeepSeek, LeCun weighed in:
The real point we should focus on is that open-source models are surpassing proprietary ones.

What further waves DeepSeek will set off is worth continuing to watch.