Introduction
In the past two years, AI assistance has become commonplace in my work.
Take a recent article I wrote about compliant business operations in the Web3 industry as an example: I needed domestic cases on "unlicensed fund sales constituting illegal business operations," so I tried searching with DeepSeek.
Its performance can only be described as **perfectly misleading**: it instantly produced details such as the case number and the trial court, laid out the defendant's defense logic and the final sentence in a clear, organized way, and even appended a **highly deceptive** webpage link at the end of its answer.

(Image content source: People's Daily Online)
Industry Characterization Debate: "Goods" or "Services"?
Does AI provide "goods" or "services"? This is the most commercially valuable legal characterization in this case, and this question will determine the underlying risk logic of the entire AI industry.
Does AI provide "goods" or "services"? This is the most commercially valuable legal characterization in this case, and this question will determine the underlying risk logic of the entire AI industry. 1. Developers' Core Concern: The Risk of "No-Fault Liability" Large model manufacturers are most wary of being included in the category of "products" under the Product Quality Law. The underlying logic is that "product" liability is subject to the no-fault principle. Simply put, like a pressure cooker exploding, no matter how careful the production process, the manufacturer may still bear liability for compensation. If AI is defined as a "product," then every instance of "hallucination" it outputs could be considered a "product defect." Under current technological conditions, no manufacturer can guarantee the complete elimination of hallucinations, which implies a theoretically unlimited risk of liability. 2. Judicial Determination: AI is a "Service" Rather Than a "Product" The Hangzhou Internet Court astutely pointed out the essential difference between AI and traditional physical goods in its judgment: **unpredictability and interactivity**. The performance of traditional products is determined at the time of manufacture, while the output of AI is random and highly dependent on the algorithm model and user-input prompts. This output is formed jointly by AI and the user, which is more in line with the characteristics of an "intelligence-generated service." 3. Liability Framework: Based on Process, Not Result The court returned the determination of liability for AI services to the **fault-based liability principle** of Article 1165 of the Civil Code, and explained the reasons for this determination. From the logic of the judgment, it can be seen that the law does not require AI output to be absolutely correct, but rather requires service providers to fulfill their duty of care within a reasonable scope. In other words, **AI illusions** themselves do not necessarily constitute a violation of the law. Developers are only held liable when they fail to take reasonable measures to prevent or reduce illusions and are at fault. This definition provides the industry with a clear **technical tolerance range**, the key being how to **determine the substantive and formal standards of a reasonable scope**. **Clarifying the Boundaries:** "Tolerable" Illusions vs. "High-Risk" Illusions Under the principle of "fault-based liability," not all illusions will be attributed to developers. The law's determination of fault is **dynamic**, adjusting in stages based on the stage of technological development and the risks of application scenarios. 1. High Tolerance in Low-Risk Areas In non-professional fields such as creation, entertainment, and general knowledge consultation, users should possess basic discernment abilities. If AI outputs clearly non-common knowledge in casual conversation, and a user suffers harm as a result and seeks legal recourse, their claim may be dismissed due to their failure to exercise reasonable care. In such scenarios, as long as the manufacturer has provided basic risk warnings, courts are generally tolerant of "algorithmic bias." 2. Reasonable Care in Medium-Risk Areas While some entrepreneurial directions have commercial potential, they also carry higher legal risks, requiring extra attention to compliance. 
For example:
In the field of emotional companionship: demand is real and the applications are practical, but the product may foster emotional dependence or even improper guidance among users, posing significant ethical and legal risks.
In the field of psychological support: if AI gives harmful or misleading advice to users in psychological distress, such as encouraging suicide or recommending incorrect medication, it directly endangers user safety, and the boundaries of liability are drawn more strictly.
3. Necessary Caution in High-Risk Areas
In professional services, if AI claims in its marketing to be a "professional lawyer" or a "licensed psychological counselor," a court may measure its duty of care against expert standards, and significant harm may attract correspondingly higher liability. Any gap between promotional wording and actual capability significantly increases legal risk.
4. Dynamic Duty of Care: What Is "Reasonable Effort"?
When determining fault, courts typically examine whether developers have fulfilled the following obligations:
Strictly prohibit illegal and harmful content;
Clearly indicate the limitations of AI, including informing users of functional limits, making the notices prominent, and issuing immediate warnings in high-risk scenarios;
Adopt industry-standard techniques to improve reliability, such as retrieval-augmented generation (RAG); a minimal sketch of this idea follows this list.
Furthermore, commercial factors, such as whether the service is paid and whether third parties are involved or advertising fees are collected, may also affect the determination of fault.
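That last obligation is concrete enough to sketch. Below is a minimal, hypothetical illustration of retrieval-augmented generation (RAG): the service retrieves verified source documents first and instructs the model to answer only from them, with citations, and to say "I don't know" rather than invent a case number. The retriever, index, and prompt wording are illustrative assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class Document:
    source_url: str
    text: str

def retrieve(query: str, index: list[Document], top_k: int = 3) -> list[Document]:
    """Hypothetical retriever: rank stored documents by naive keyword overlap.
    A production system would use embeddings and a vector database instead."""
    def overlap(doc: Document) -> int:
        return len(set(query.lower().split()) & set(doc.text.lower().split()))
    return sorted(index, key=overlap, reverse=True)[:top_k]

def build_grounded_prompt(query: str, docs: list[Document]) -> str:
    """Constrain the model to the retrieved sources to reduce hallucinations."""
    sources = "\n\n".join(
        f"[{i + 1}] ({d.source_url})\n{d.text}" for i, d in enumerate(docs)
    )
    return (
        "Answer ONLY from the numbered sources below and cite them as [n]. "
        "If the sources do not contain the answer, say you do not know "
        "rather than inventing a case number or court.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

# Usage: the grounded prompt, not the bare question, is what goes to the LLM.
index = [Document("https://example.org/case-001", "verified case text goes here")]
docs = retrieve("unlicensed fund sales as illegal business operations", index)
prompt = build_grounded_prompt("unlicensed fund sales as illegal business operations", docs)
```

Under the court's process-based framework, what matters is less whether such a measure eliminates every hallucination than that adopting an industry-standard reliability technique is itself evidence of reasonable care.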
Developer Compliance Guide: How to Write a Good "Disclaimer"?
A "disclaimer" is not just a formality, but a key tool for balancing innovation and risk. It clearly conveys the boundaries of services to users and can also prove in judicial review that the operator has fulfilled its necessary notification obligations.
To ensure it is effectively implemented, the disclaimer needs improvement at both the formal and the substantive level:
1. Formal Level
To ensure the disclaimer is effectively communicated, three principles must be followed:
First, **dynamic reminders**: proactively display the notice when users log in for the first time, when functional modules are updated, and when sensitive scenarios arise;
Second, **prominent presentation**: highlight key clauses in bold red text, and consider setting a mandatory reading time;
Third, **real-time alerts**: when a user raises a high-risk query (such as a medical consultation), the system should promptly display a warning that clearly states the content is for reference only and has limitations.
2. Substantive Level
Don't try to achieve "one-size-fits-all" exemption from liability; focus on two points:
First, clearly define the AI's identity. When the AI handles professional matters such as medical or legal questions, it must clearly disclose its auxiliary, non-professional nature, for example by proactively responding, "I am not a professional doctor/lawyer";
Second, customize by scenario. For high-risk areas such as medical, psychological, and financial or tax matters, design specific notices and liability terms around each industry's regulatory requirements and risk profile, and build a compliance system that matches the depth of the service. A minimal sketch of both levels follows.
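To make the formal and substantive requirements concrete, here is a minimal sketch, assuming a chat-style service. The identity notice, trigger keywords, warning texts, and function names are all hypothetical illustrations, not a statutory checklist; a real deployment would design them against the regulatory requirements of each scenario.

```python
# Hypothetical compliance layer for a chat-style AI service:
# identity disclosure (substantive level) + real-time risk alerts (formal level).

IDENTITY_NOTICE = (
    "I am an AI assistant, not a licensed doctor, lawyer, or financial adviser. "
    "My answers are for reference only and are not professional advice."
)

# Illustrative trigger lists; real ones would follow industry regulation.
HIGH_RISK_TRIGGERS = {
    "medical":   ["diagnosis", "dosage", "medication", "symptom"],
    "legal":     ["lawsuit", "contract", "liability", "sentencing"],
    "financial": ["invest", "tax", "loan", "crypto"],
}

RISK_WARNINGS = {
    "medical":   "Warning: this may involve medical questions. Consult a licensed physician.",
    "legal":     "Warning: this may involve legal questions. Consult a qualified lawyer.",
    "financial": "Warning: this may involve financial decisions. Nothing here is investment advice.",
}

def risk_warnings_for(user_message: str) -> list[str]:
    """Return the real-time warnings triggered by a user's message, if any."""
    text = user_message.lower()
    return [
        RISK_WARNINGS[domain]
        for domain, triggers in HIGH_RISK_TRIGGERS.items()
        if any(word in text for word in triggers)
    ]

def wrap_reply(user_message: str, model_reply: str, first_session: bool) -> str:
    """Prepend the identity notice on first contact, plus any triggered warnings."""
    parts = [IDENTITY_NOTICE] if first_session else []
    parts += risk_warnings_for(user_message)
    parts.append(model_reply)
    return "\n\n".join(parts)
```

The point of the structure is auditability: the operator can show, message by message, that the identity notice and the scenario-specific warnings were actually delivered, which is exactly what a court reviewing the duty of care would ask about.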
A Brief Discussion Across Industries: When AI Agents Meet Web3
As a lawyer deeply involved in the Web3 field, I believe the Hangzhou court's ruling not only points the way for the compliant operation of AI agents but also provides an important compliance reference for the Web3 industry.
Unlike traditional customer service, some Web3 trading platforms have begun to adopt a Web2-like architecture, integrating AI agents into their software to triage user inquiries. Taking the leading exchange CoinX as an example, users can verify the authenticity of information by @-mentioning a designated AI agent. However, to our knowledge, the platform has not yet updated its user agreement or added a dedicated disclaimer for these AI interaction features, and this compliance gap may carry significant legal risk.
At the same time, the Web3 field adheres to the principle of "code is law."
If a partially authorized AI agent, due to a hallucination, exceeds its authority and executes a transaction, a series of complex questions follows: Does this constitute apparent agency? Are the related legal acts revocable? Can the assets be recovered? Behind these scenarios lie further questions: how to define a clear and explicit scope of authorization, whether to prohibit "fully automated" signatures, and how to clarify when users themselves bear the risk of AI hallucinations. A concrete sketch of the authorization question closes this piece.
In conclusion, this judgment by the Hangzhou court gives large-model entrepreneurs valuable institutional buffer space. It follows the current logic of legal pragmatism: law is a tool serving the direction of mainstream social development, and the concrete means is to set rules and guide the industry through the selection and interpretation of laws. The judge put it well in the judgment: AI is an "auxiliary tool," not a "decision substitute." Developers, accept the buffer this verdict provides and keep exploring courageously within the rules; and we users should hold on to that precious spirit of skepticism.
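As a closing technical footnote, the "clear and explicit scope of authorization" question can at least be made concrete in code. The sketch below assumes a hypothetical wallet-facing agent: every transaction the agent proposes is checked against an explicit, user-granted mandate, and anything out of scope, or anything at all when fully automated signing is disabled, is refused rather than executed. All types and names are illustrative, not any platform's actual interface.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentMandate:
    """Explicit, user-granted scope of authority for an AI agent (illustrative)."""
    allowed_actions: frozenset[str]        # e.g. {"query_balance", "swap"}
    allowed_assets: frozenset[str]         # e.g. {"USDC", "ETH"}
    max_tx_value_usd: float                # per-transaction ceiling
    require_human_signature: bool = True   # prohibit fully automated signing

@dataclass
class ProposedTx:
    action: str
    asset: str
    value_usd: float

def authorize(mandate: AgentMandate, tx: ProposedTx) -> tuple[bool, str]:
    """Gate every agent-proposed transaction against the mandate.
    A hallucinated, out-of-scope transaction is refused, not executed."""
    if tx.action not in mandate.allowed_actions:
        return False, f"action '{tx.action}' is outside the authorized scope"
    if tx.asset not in mandate.allowed_assets:
        return False, f"asset '{tx.asset}' is not authorized"
    if tx.value_usd > mandate.max_tx_value_usd:
        return False, f"value ${tx.value_usd:,.2f} exceeds the per-transaction limit"
    if mandate.require_human_signature:
        return False, "within scope, but queued for explicit human signature"
    return True, "authorized"

# Usage: an agent that hallucinates an unauthorized transfer is stopped at the gate.
mandate = AgentMandate(frozenset({"swap"}), frozenset({"USDC"}), 500.0)
ok, reason = authorize(mandate, ProposedTx("transfer", "ETH", 10_000.0))
# -> (False, "action 'transfer' is outside the authorized scope")
```

Under such a design, a hallucinated out-of-scope transaction never reaches the chain; that is precisely the kind of reasonable preventive measure the fault-based liability framework rewards.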