According to BlockBeats, experts from AI Agent, security alignment, intellectual property, and compliance sectors gathered at the AI Apex Asia Capital Connect Forum to discuss the intricate environment faced by AI companies preparing for an initial public offering (IPO). The roundtable, moderated by James Liu, International Director at Alibaba Cloud, provided crucial insights into the evolving regulatory landscape and successful strategies for navigating it.
Key discussion points included the unprecedented challenges AI companies face regarding data privacy, intellectual property protection, and regulatory compliance when preparing for an IPO. The rapid pace of AI innovation often outstrips the development of regulatory frameworks, necessitating proactive risk management and effective stakeholder communication. Collaboration between AI companies and regulatory bodies is essential to develop frameworks that foster innovation while protecting public interests. The EU AI Act represents a significant shift in AI regulation, with global implications for companies operating or selling products in the European market. Deepfake technology poses substantial risks to copyright and identity protection, requiring a combination of technical and regulatory solutions.
Professor Liu Yang, co-founder of AgentLayer, highlighted the evolving nature of AI security risks and proposed an innovative solution: “When using any AI solution, it is crucial to understand potential new attacks. For instance, we have seen breakthroughs in AI model jailbreaks that can bypass existing defense mechanisms with a 100% success rate. This presents significant challenges for any AI solution we develop.” Liu further elaborated on a novel approach to addressing these challenges: “Enhancing LLMs with agent-based models as validators and managers adapts to the complexities within enterprises while focusing on creating automated LLM validators and managers. Instead of merely considering whether LLMs are safe for enterprises, using specially built AI Agents to validate and manage all LLM interactions can help ensure secure outcomes, even if the underlying LLMs themselves are not secure.”
Hsu Li-Chuan, a partner at Dentons Rodyk, emphasized the importance of clear communication with regulators and investors: “AI's challenge lies in its coverage of numerous regulatory areas, unlike e-commerce or even blockchain technology. AI may involve broader regulatory, compliance, and ethical domains. We must keep this in mind during any public market fundraising activities.” Yang Jingwei, Director of Security Technology at Ant Group, stressed the necessity of robust technical solutions: “For data security, I have three recommendations: establish very strong data governance, have strict policies and standardization, be very careful with data transmission, and maintain transparency in data usage. For intellectual property, consult IP experts, seek patent protection, and have clear IP ownership contracts.”
Moderator James Liu concluded, “Key points are that communication with investors, the public, and regulators is crucial during the pre- and post-IPO periods. Given the rapid pace of technological development often outstripping regulation, self-regulation and frameworks are particularly important. As Professor Liu suggested, using purpose-built AI Agents to enhance LLMs represents a promising direction for addressing current AI industry safety and security issues.”
Experts unanimously agreed that as AI rapidly evolves, companies must stay ahead of governance issues to successfully complete the IPO process and maintain public trust. Ongoing dialogue between industry, academia, and regulatory bodies will play a key role in shaping balanced AI governance approaches, while emerging technologies like AI Agents will be central to ensuring the secure and effective deployment of AI systems.