Author: Haotian
McKinsey's Lilli case offers key insights for the enterprise AI market: the potential opportunity in edge computing + small models. This AI assistant, which integrates 100,000 internal documents, has not only reached a 70% adoption rate among employees, but is also used an average of 17 times per week. Such product stickiness is rare in enterprise tools. Below are my thoughts:
1) Enterprise data security is a real pain point: McKinsey's century of accumulated core knowledge assets, like the data of many small and medium-sized enterprises, is highly sensitive and cannot be processed on the public cloud. Finding a balance where "data stays on-premises and AI capability is not compromised" is the actual market demand, and edge computing is one direction worth exploring;
2) Specialized small models will replace general-purpose large models: enterprise users don't need an all-round model with hundreds of billions of parameters; they need a professional assistant that can accurately answer questions in their specific field. There is a natural tension between a large model's generality and its professional depth, which is why small models are often valued more in enterprise scenarios;
3) Cost balance between self-built AI infra and API calls: although the combination of edge computing and small models requires a large initial investment, long-term operating costs drop significantly. Imagine if the AI model used frequently by 45,000 employees were served via API calls: the dependence and scale of usage this would create make self-built AI infrastructure the rational choice for large and medium-sized enterprises;
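The break-even logic in point 3 can be sketched as a back-of-envelope calculation. The employee count and usage frequency below come from the post; every price, token count, and hardware figure is an illustrative assumption, not a number from McKinsey or any vendor:

```python
# Back-of-envelope: annual API spend vs. self-hosted inference.
# Only EMPLOYEES and QUERIES_PER_WEEK come from the post;
# all other figures are illustrative assumptions.

EMPLOYEES = 45_000                 # from the post
QUERIES_PER_WEEK = 17              # average usage cited in the post
WEEKS_PER_YEAR = 52
TOKENS_PER_QUERY = 4_000           # assumed input+output tokens (RAG context is long)
API_PRICE_PER_1M_TOKENS = 15.0     # assumed blended $ per 1M tokens, frontier-model tier

annual_queries = EMPLOYEES * QUERIES_PER_WEEK * WEEKS_PER_YEAR
annual_tokens = annual_queries * TOKENS_PER_QUERY
api_cost_per_year = annual_tokens / 1_000_000 * API_PRICE_PER_1M_TOKENS

# Assumed self-hosted small-model cluster: hardware amortized over
# three years plus yearly power and operations.
HARDWARE_CAPEX = 2_000_000
AMORTIZATION_YEARS = 3
OPEX_PER_YEAR = 500_000
self_hosted_per_year = HARDWARE_CAPEX / AMORTIZATION_YEARS + OPEX_PER_YEAR

print(f"annual queries:     {annual_queries:,}")
print(f"API cost / year:    ${api_cost_per_year:,.0f}")
print(f"self-hosted / year: ${self_hosted_per_year:,.0f}")
```

Under these assumptions the API bill exceeds the amortized self-hosted cost at this usage scale; the conclusion flips if per-query token volume or API prices are much lower, which is exactly the balance point 3 describes.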
4) New opportunities in the edge hardware market: large model training depends on high-end GPUs, but edge inference has entirely different hardware requirements. Processors optimized for edge AI by chipmakers such as Qualcomm and MediaTek are seeing a market opportunity. When every enterprise wants to build its own "Lilli", edge AI chips designed for low power consumption and high efficiency will become infrastructure necessities;
5) The decentralized web3 AI market is strengthened in parallel: once demand for small-model compute, fine-tuning, and algorithms takes off, resource scheduling becomes the question, and traditional centralized scheduling will become a bottleneck. This will directly create significant market demand for web3 AI infrastructure such as decentralized small-model fine-tuning networks and decentralized compute service platforms;
While the market is still debating the general capability boundaries of AGI, it is encouraging to see so many enterprise users already exploring the practical value of AI. Clearly, compared with the past leap of resource monopolies competing on compute and algorithms, a market that focuses on edge computing + small models will bring far greater vitality.