In our June research report, "The Holy Grail of Crypto AI: Frontiers of Decentralized Training," we discussed Federated Learning, a "controlled decentralization" approach that sits between distributed and decentralized training. Its core is local data retention plus centralized parameter aggregation, which meets the privacy and compliance requirements of healthcare, finance, and similar fields. At the same time, our previous reports have consistently highlighted the rise of agent networks, whose value lies in completing complex tasks collaboratively through the autonomy and division of labor of multiple agents, driving the evolution from "large models" to "multi-agent ecosystems." Federated Learning, with its principle of "data never leaves the local machine, incentives follow contribution," lays a foundation for multi-party collaboration; its distributed architecture, transparent incentives, privacy protection, and compliance practices provide directly reusable experience for agent networks. Following this path, the FedML team upgraded its open-source framework into TensorOpera (an AI industry infrastructure layer) and then evolved it into ChainOpera (a decentralized agent network). That said, agent networks are not necessarily an extension of Federated Learning: their core lies in the autonomous collaboration and task division of multiple agents, and they can also be built directly on multi-agent systems (MAS), reinforcement learning (RL), or blockchain incentive mechanisms.

I. Federated Learning and AI Agent Technology Stack Architecture

Federated Learning (FL) is a framework for collaborative training without centralizing data. Its fundamental principle is that each participant trains the model locally and uploads only parameters or gradients to a coordination server for aggregation, achieving privacy compliance through "data never leaving the local machine." After being deployed in typical scenarios such as healthcare, finance, and mobile devices, federated learning has reached a relatively mature stage of commercialization, but it still faces bottlenecks such as high communication overhead, incomplete privacy protection, and low convergence efficiency caused by device heterogeneity. Compared with other training paradigms, distributed training emphasizes centralized computing power for efficiency and scale, while decentralized training pursues fully distributed collaboration over open computing networks. Federated learning sits in between as a form of "controlled decentralization": it meets industry needs for privacy and compliance while offering a viable path for cross-institutional collaboration, making it well suited as a transitional deployment architecture for the industry.
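To make the aggregation step concrete, here is a minimal, framework-agnostic sketch of the FedAvg-style weighted averaging described above. The client names, model shapes, and sample counts are illustrative assumptions, not part of any specific FedML API; only locally trained parameters are shared, never raw data.

```python
import numpy as np

def fedavg(client_updates):
    """Aggregate locally trained weights without ever seeing raw client data.

    client_updates: list of (weights, num_samples) tuples, where `weights`
    is a list of numpy arrays produced by each client's local training run.
    """
    total_samples = sum(n for _, n in client_updates)
    num_layers = len(client_updates[0][0])
    aggregated = []
    for layer in range(num_layers):
        # Weight each client's parameters by its local sample count.
        layer_sum = sum(w[layer] * (n / total_samples) for w, n in client_updates)
        aggregated.append(layer_sum)
    return aggregated

# Example: three hospitals train locally and only share parameters.
clients = [
    ([np.random.randn(4, 2), np.random.randn(2)], 1200),   # hospital A
    ([np.random.randn(4, 2), np.random.randn(2)], 800),    # hospital B
    ([np.random.randn(4, 2), np.random.randn(2)], 500),    # hospital C
]
global_weights = fedavg(clients)
```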
In our previous research report, we divided the AI Agent protocol stack into the following main layers:
Agent Infrastructure Layer: provides the lowest-level operational support for agents and is the technical foundation on which all agent systems are built.
Core modules: the Agent Framework (agent development and execution framework) and the Agent OS (low-level multi-task scheduling and modular runtime), supplying the core capabilities for agent lifecycle management.
Support modules: Agent DID (decentralized identity), Agent Wallet & Abstraction (account abstraction and transaction execution), and Agent Payment/Settlement (payment and settlement capabilities).
Coordination & Execution Layer: focuses on collaboration among multiple agents, task scheduling, and system incentive mechanisms, and is key to building the "swarm intelligence" of agent systems.
Agent Orchestration: a command mechanism for uniformly scheduling and managing the agent lifecycle, task allocation, and execution flow, suited to workflow scenarios with central control.
Agent Swarm: a collaborative structure emphasizing cooperation among distributed agents, with a high degree of autonomy, division of labor, and flexible collaboration, suited to complex tasks in dynamic environments.
Agent Incentive Layer: builds the economic incentive system of the agent network, rewarding developers, executors, and validators and providing sustainable momentum for the agent ecosystem.
Application & Distribution Layer
Distribution subcategory: including Agent Launchpad, Agent Marketplace and Agent Plugin Network
Application subcategory: covering AgentFi, Agent Native DApp, Agent-as-a-Service, etc.
Consumption subcategory: mainly Agent Social / Consumer Agent, targeting lightweight consumer scenarios such as social networking
Meme: projects that hype the Agent concept but lack real technical implementation or application deployment and are driven purely by marketing.

II. FedML, the Benchmark for Federated Learning, and the TensorOpera Full-Stack Platform

FedML is one of the earliest open-source frameworks for federated learning and distributed training. It originated from an academic team at USC and gradually became a core product of TensorOpera AI, providing researchers and developers with tools for collaborative training across institutions and devices. In academia, FedML is frequently featured at top conferences such as NeurIPS, ICML, and AAAI and has become a standard experimental platform for federated learning research. In industry, FedML enjoys a strong reputation in privacy-sensitive scenarios such as healthcare, finance, edge AI, and Web3 AI and is regarded as a benchmark toolchain for federated learning.

TensorOpera is the commercial upgrade of FedML into a full-stack AI infrastructure platform for enterprises and developers. While retaining federated learning capabilities, it expands into a GPU marketplace, model services, and MLOps, thereby entering the larger market of the large-model and agent era. TensorOpera's overall architecture can be divided into three layers: the Compute Layer (foundation), the Scheduler Layer (scheduling), and the MLOps Layer (application):
Compute Layer (bottom layer): The Compute layer is TensorOpera's technical foundation and continues the open-source DNA of FedML. Its core functions include the Parameter Server, Distributed Training, Inference Endpoint, and Aggregation Server. Its value proposition is providing distributed training, privacy-preserving federated learning, and a scalable inference engine. It supports the three core capabilities of "Train / Deploy / Federate," covering the entire chain from model training and deployment to cross-institutional collaboration, and is the foundation of the whole platform.

Scheduler Layer (middle layer): The Scheduler layer serves as the hub for computing-power trading and scheduling. Composed of the GPU Marketplace, Provision, Master Agent, and Schedule & Orchestrate modules, it supports resource allocation across public clouds, GPU providers, and independent contributors. This layer marks the key milestone in the evolution from FedML to TensorOpera: through intelligent compute scheduling and task orchestration, it enables larger-scale AI training and inference, covering typical LLM and generative AI scenarios. Its Share & Earn model also reserves interfaces for incentive mechanisms, making it potentially compatible with DePIN or Web3 models.

MLOps Layer (upper layer): The MLOps layer is the platform's direct service interface for developers and enterprises, encompassing modules such as Model Serving, AI Agent, and Studio. Typical applications include LLM chatbots, multimodal generative AI, and developer Copilot tools. Its value lies in abstracting the underlying compute and training capabilities into high-level APIs and products, lowering the barrier to entry and providing ready-to-use agents, a low-code development environment, and scalable deployment. It positions TensorOpera alongside next-generation AI infrastructure platforms such as Anyscale, Together, and Modal, acting as the bridge from infrastructure to applications.

In March 2025, TensorOpera upgraded into a full-stack platform for AI agents, with core products including the AgentOpera AI App, Framework, and Platform. The application layer offers a ChatGPT-like multi-agent entry point; the framework layer evolves into an "Agentic OS" built on a graph-based multi-agent system with an orchestrator/router; and the platform layer integrates deeply with the TensorOpera model platform and FedML to enable distributed model serving, RAG optimization, and hybrid device-cloud deployment. The overall goal is "one operating system, one agent network," allowing developers, enterprises, and users to jointly build a next-generation Agentic AI ecosystem in an open, privacy-protected environment.
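As an illustration of the kind of matching the Scheduler layer performs, the following is a minimal hypothetical sketch: it picks the cheapest provider that satisfies a job's GPU requirements, the basic idea behind allocating work across public clouds, GPU providers, and independent contributors. The provider fields, pricing, and selection rule are our own assumptions, not TensorOpera's actual scheduler logic.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GPUProvider:
    name: str            # e.g. a public cloud, a DePIN node, or an individual contributor
    gpu_type: str
    free_gpus: int
    price_per_gpu_hour: float

@dataclass
class TrainingJob:
    job_id: str
    gpus_needed: int
    gpu_type: str        # simplified: exact GPU-type match required in this sketch

def schedule(job: TrainingJob, providers: List[GPUProvider]) -> Optional[GPUProvider]:
    """Pick the cheapest provider with enough free GPUs of the requested type."""
    candidates = [p for p in providers
                  if p.gpu_type == job.gpu_type and p.free_gpus >= job.gpus_needed]
    if not candidates:
        return None  # no capacity: a real system would queue or split the job
    best = min(candidates, key=lambda p: p.price_per_gpu_hour)
    best.free_gpus -= job.gpus_needed
    return best

providers = [
    GPUProvider("public-cloud-a", "A100", 16, 2.40),
    GPUProvider("depin-node-7", "A100", 8, 1.10),
    GPUProvider("contributor-42", "4090", 4, 0.35),
]
chosen = schedule(TrainingJob("llm-finetune-01", gpus_needed=4, gpu_type="A100"), providers)
print(chosen.name)  # -> depin-node-7 (cheapest A100 capacity in this toy example)
```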
III. ChainOpera AI Ecosystem Panorama: From Co-Creator to Technology Foundation
If FedML is the technical core, supplying the open-source genes of federated learning and distributed training, and TensorOpera abstracts FedML's research results into commercially viable full-stack AI infrastructure, then ChainOpera puts TensorOpera's platform capabilities "on chain," building a decentralized agent network ecosystem through the AI Terminal + Agent Social Network + DePIN model and compute layer + AI-Native blockchain. The core shift is that TensorOpera still mainly targets enterprises and developers, whereas ChainOpera uses Web3-based governance and incentive mechanisms to bring users, developers, and GPU/data providers into co-construction and co-governance, so that AI Agents are not only "used" but also "co-created and co-owned."

ChainOpera AI provides the toolchain, infrastructure, and coordination layer for ecosystem co-creation through its Model & GPU Platform and Agent Platform, supporting model training, agent development, deployment, and broader collaboration. The ecosystem's co-creators include AI agent developers (who design and operate agents), tool and service providers (templates, MCP, databases, and APIs), model developers (who train and publish model cards), GPU providers (who contribute compute through DePIN and Web2 cloud partners), and data contributors and annotators (who upload and label multimodal data). These three pillars (development, computing power, and data) jointly drive the continuous growth of the agent network.

The ChainOpera ecosystem also introduces a co-ownership mechanism to enable collaborative participation in building the network. AI agent creators are individuals or teams who design and deploy new agents through the Agent Platform; they are responsible for building, launching, and maintaining agents and drive innovation in features and applications. AI agent participants come from the community: they take part in an agent's lifecycle by acquiring and holding Access Units, supporting its growth and activity through use and promotion. These two roles represent the supply and demand sides, respectively, and together form a model of value sharing and collaborative development within the ecosystem.

ChainOpera AI works with multiple partners to improve the platform's usability and security, with a focus on Web3 integration: the AI Terminal App combines wallets, algorithms, and aggregation platforms to provide intelligent service recommendations; the Agent Platform introduces multiple frameworks and zero-code tools to lower the development barrier; model training and inference run on TensorOpera AI; and an exclusive partnership with FedML supports privacy-preserving training across institutions and devices. Overall, an open ecosystem is taking shape that balances enterprise-grade applications with a Web3 user experience.

**Hardware Portal: AI Hardware & Partners** Through partners such as the DeAI Phone, wearables, and Robot AI, ChainOpera integrates blockchain and AI into smart devices, enabling dApp interaction, on-device training, and privacy protection, gradually forming a decentralized AI hardware ecosystem.

**Core Platform and Technology Foundation: TensorOpera GenAI & FedML** TensorOpera provides a full-stack GenAI platform covering MLOps, Scheduler, and Compute.
Its sub-platform, FedML, has grown from academic open source to an industrialized framework, strengthening AI's ability to "run anywhere and scale anywhere."
ChainOpera AI Ecosystem
IV. ChainOpera Core Products and Full-Stack AI Agent Infrastructure
In June 2025, ChainOpera officially launched the AI Terminal App and its decentralized technology stack, positioning itself as a "decentralized version of OpenAI." Its core products cover four major modules, each described below: the Super AI Agent App (AI Terminal), the AI Agent Social Network, the AI Agent Developer Platform (Agent Creator Center), and the AI Model & GPU Platform. The AI Terminal App has integrated BNB Chain, supporting on-chain transactions and DeFi scenarios. The Agent Creator Center is open to developers, providing capabilities such as MCP/HUB, a knowledge base, and RAG, with community agents continuing to join. At the same time, the CO-AI Alliance was launched, linking partners such as io.net, Render, TensorOpera, FedML, and MindNetwork. According to BNB DApp Bay on-chain data, over the past 30 days the app recorded 158.87K unique users and a transaction volume of 2.6 million, ranking second in the BSC "AI Agent" category and showing strong on-chain activity.

**Super AI Agent App – AI Terminal (https://chat.chainopera.ai/)** As a decentralized ChatGPT and AI social portal, AI Terminal offers multimodal collaboration, data contribution incentives, DeFi tool integration, a cross-platform assistant, and support for AI agent collaboration and privacy protection ("Your Data, Your Agent"). Users can directly access the open-source DeepSeek-R1 model and community agents on mobile devices, with language-model tokens and crypto tokens circulating transparently on-chain during interactions. Its value lies in turning users from "content consumers" into "intelligent co-creators," letting them leverage a dedicated agent network across scenarios such as DeFi, real-world assets (RWA), PayFi, and e-commerce.

The **AI Agent Social Network (https://chat.chainopera.ai/agent-social-network)** is positioned as a LinkedIn + Messenger for the AI agent community. Through virtual workspaces and agent-to-agent collaboration mechanisms (MetaGPT, ChatDev, AutoGen, and Camel), it promotes the evolution of single agents into multi-agent collaborative networks, covering applications such as finance, gaming, e-commerce, and research, while gradually enhancing memory and autonomy.

The **AI Agent Developer Platform (https://agent.chainopera.ai/)** gives developers a "Lego-like" creation experience. It supports zero-code development and modular expansion, with blockchain contracts guaranteeing ownership; DePIN plus cloud infrastructure lowers the barrier to entry, and the Marketplace provides distribution and discovery channels. Its core goal is to let developers quickly reach users, have their contributions to the ecosystem transparently recorded, and earn rewards.

The **AI Model & GPU Platform (https://platform.chainopera.ai/)** serves as the infrastructure layer, combining DePIN and federated learning to address Web3 AI's reliance on centralized computing power. Through distributed GPUs, privacy-preserving data training, model and data marketplaces, and end-to-end MLOps, it supports multi-agent collaboration and personalized AI. Its vision is to shift infrastructure from "large-scale monopoly" to "community co-construction."

V. ChainOpera AI Roadmap

In addition to officially launching its full-stack AI Agent platform, ChainOpera AI believes that artificial general intelligence (AGI) will emerge from multimodal, multi-agent collaborative networks. Its long-term roadmap is therefore divided into four stages:
Phase 1 (Compute → Capital): Build decentralized infrastructure, including a GPU DePIN network, federated learning, and distributed training/inference platforms, and introduce a Model Router to coordinate multi-terminal inference; through the incentive mechanism, computing power, model, and data providers receive revenue distributed according to usage (a minimal sketch of usage-proportional revenue splitting follows this list).
Phase 2 (Agentic Apps → Collaborative AI Economy): Launch the AI Terminal, Agent Marketplace, and Agent Social Network to form a multi-agent application ecosystem; connect users, developers, and resource providers through the CoAI protocol, and introduce a user-developer matching system and a credit system to promote high-frequency interaction and sustained economic activity.
Phase 3 (Collaborative AI → Crypto-Native AI): Deploy in DeFi, RWA, payments, e-commerce, and other fields, while expanding to KOL scenarios and personal data exchange; develop dedicated LLMs for finance and crypto, and launch agent-to-agent payment and wallet systems to drive scenario-based applications of "Crypto AGI."
Phase 4 (Ecosystems → Autonomous AI Economies): Gradually evolve into autonomous subnet economies, with each subnet independently governed and tokenized around applications, infrastructure, computing power, models, and data; subnets collaborate through cross-subnet protocols to form a multi-subnet ecosystem, while the broader ecosystem moves from Agentic AI toward Physical AI (robotics, autonomous driving, and aerospace).
Disclaimer: This roadmap is for reference only. Timelines and features may be adjusted based on market conditions and do not constitute a delivery guarantee.
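How usage-based revenue distribution could work in Phase 1 is sketched below. The split ratios between compute, model, and data providers, the provider names, and the metering fields are purely illustrative assumptions, not ChainOpera's published parameters.

```python
from collections import defaultdict

# Hypothetical usage log: each inference request records which provider
# served it and how much metered work (e.g. token count) it consumed.
usage_log = [
    {"provider": "gpu-node-alpha", "role": "compute", "units": 120_000},
    {"provider": "gpu-node-beta",  "role": "compute", "units": 80_000},
    {"provider": "model-team-fin", "role": "model",   "units": 200_000},
    {"provider": "data-coop-1",    "role": "data",    "units": 50_000},
]

# Assumed split of period revenue across provider roles (illustrative only).
ROLE_SHARE = {"compute": 0.5, "model": 0.3, "data": 0.2}

def distribute(period_revenue: float, log):
    """Split revenue by role, then pro-rata by each provider's metered usage."""
    usage_by_role = defaultdict(lambda: defaultdict(float))
    for entry in log:
        usage_by_role[entry["role"]][entry["provider"]] += entry["units"]

    payouts = {}
    for role, share in ROLE_SHARE.items():
        pool = period_revenue * share
        total_units = sum(usage_by_role[role].values()) or 1.0
        for provider, units in usage_by_role[role].items():
            payouts[provider] = payouts.get(provider, 0.0) + pool * units / total_units
    return payouts

print(distribute(10_000.0, usage_log))
```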
VII. Token Incentives and Protocol Governance
ChainOpera has not yet announced a complete token incentive plan, but its CoAI protocol is centered on **co-creation and co-ownership** and uses blockchain plus a Proof-of-Intelligence mechanism to keep contribution records transparent and verifiable. The input of developers, compute, data, and service providers is measured and rewarded in a standardized way: users consume services, resource providers support operations, developers build applications, and all participants share in the growth dividend. The platform sustains this cycle through a 1% service fee, reward distribution, and liquidity support, promoting an open, fair, and collaborative decentralized AI ecosystem.

Proof-of-Intelligence Learning Framework

Proof-of-Intelligence (PoI) is ChainOpera's core consensus mechanism under the CoAI protocol, designed to provide a transparent, fair, and verifiable incentive and governance system for decentralized AI. It is a blockchain-based collaborative machine learning framework built on proof of contribution, aimed at the practical challenges of federated learning (FL) such as insufficient incentives, privacy risks, and lack of verifiability. Centered on smart contracts and combining decentralized storage (IPFS), aggregation nodes, and zero-knowledge proofs (zkSNARKs), the design pursues five goals: ① distribute rewards fairly based on contribution, so that trainers are incentivized according to actual model improvement; ② keep data local to protect privacy; ③ introduce robustness mechanisms against poisoning or aggregation attacks by malicious trainers; ④ use ZKPs to make key computations such as model aggregation, anomaly detection, and contribution assessment verifiable; and ⑤ remain efficient and general enough to handle heterogeneous data and diverse learning tasks.

Token Value in Full-Stack AI

ChainOpera's token mechanism operates around five value streams (LaunchPad, Agent API, Model Serving, Contribution, and Model Training). Its core is service fees, contribution recognition, and resource allocation rather than speculative returns.
AI Users: use tokens to access services or subscribe to applications, and contribute to the ecosystem by providing, annotating, and staking data.
Agent/Application Developers: use the platform's compute and data for development and receive protocol-level recognition for contributed agents, applications, or datasets.
Resource Providers: contribute compute, data, or models and receive transparent record-keeping and incentives.
Governance Participants (Community & DAO): use tokens to take part in voting, mechanism design, and ecosystem coordination.
Protocol Layer (COAI): sustains development through service fees, using an automated allocation mechanism to balance supply and demand.
Nodes and Validators: provide verification, compute, and security services to ensure network reliability.
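To make the contribution-based reward idea concrete, here is a minimal sketch, assuming contribution is measured as each trainer's marginal improvement to a validation metric and that the 1% protocol fee mentioned above is deducted before distribution. The metric, fee handling, and numbers are illustrative assumptions, not the published PoI specification.

```python
PROTOCOL_FEE = 0.01  # the 1% service fee mentioned above (assumed to be taken off the top)

def poi_rewards(round_reward: float, baseline_score: float, trainer_scores: dict) -> dict:
    """Distribute a training round's reward in proportion to measured contribution.

    trainer_scores: validation score of the aggregated model when each trainer's
    update is included, versus `baseline_score` without it (a simple proxy for
    contribution; real PoI would also verify these values, e.g. with ZKPs).
    """
    contributions = {
        trainer: max(score - baseline_score, 0.0)  # ignore harmful or neutral updates
        for trainer, score in trainer_scores.items()
    }
    total = sum(contributions.values())
    distributable = round_reward * (1 - PROTOCOL_FEE)
    if total == 0:
        return {trainer: 0.0 for trainer in trainer_scores}
    return {trainer: distributable * c / total for trainer, c in contributions.items()}

# Example: three trainers, one of whom submitted an update that degraded the model.
print(poi_rewards(1_000.0, baseline_score=0.80,
                  trainer_scores={"hospital_a": 0.86, "clinic_b": 0.83, "node_x": 0.78}))
```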
Protocol Governance
ChainOpera adopts DAO governance, allowing participants to stake tokens and take part in proposals and voting, ensuring transparent and fair decision-making. Governance mechanisms include a reputation system (to verify and quantify contributions), community collaboration (proposals and voting to drive ecosystem development), and parameter adjustment (data usage, security, and validator accountability). The overall goal is to avoid concentration of power while maintaining system stability and community co-creation.
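As an illustration of how staking-based voting combined with a reputation weight could work, here is a minimal sketch; the weighting formula and pass threshold are assumptions for illustration, not ChainOpera's published governance parameters.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Vote:
    voter: str
    stake: float        # tokens staked by the voter
    reputation: float   # e.g. a 0-1 score derived from verified past contributions
    approve: bool

def tally(votes: List[Vote], pass_threshold: float = 0.5) -> bool:
    """Stake-weighted vote, scaled by reputation so verified contributors count more."""
    def weight(v: Vote) -> float:
        # Assumed blend of stake and reputation; a real system would publish this formula.
        return v.stake * (0.5 + 0.5 * v.reputation)
    yes = sum(weight(v) for v in votes if v.approve)
    total = sum(weight(v) for v in votes)
    return total > 0 and yes / total > pass_threshold

votes = [
    Vote("alice", stake=10_000, reputation=0.9, approve=True),
    Vote("bob",   stake=25_000, reputation=0.2, approve=False),
    Vote("carol", stake=8_000,  reputation=1.0, approve=True),
]
print(tally(votes))  # whether the proposal passes under this toy weighting
```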
VIII. Team Background and Project Funding
The ChainOpera project was co-founded by Professor Salman Avestimehr and Dr. Chaoyang "Aiden" He, both experts in federated learning. Other core team members come from top universities such as UC Berkeley, Stanford, USC, MIT, and Tsinghua University, and from technology companies including Google, Amazon, Tencent, Meta, and Apple, combining academic research with industry expertise. The ChainOpera AI team has grown to over 40 people.

Co-founder: Salman Avestimehr

Professor Salman Avestimehr is Dean's Professor of Electrical and Computer Engineering at the University of Southern California (USC), Founding Director of the USC-Amazon Trusted AI Center, and head of the USC Information Theory and Machine Learning Laboratory (vITAL). He is co-founder and CEO of FedML and co-founded TensorOpera/ChainOpera AI in 2022. He received his PhD in EECS from UC Berkeley (Best Paper Award). An IEEE Fellow, he has published over 300 papers on information theory, distributed computing, and federated learning, with more than 30,000 citations, and has received numerous international honors, including the PECASE, the NSF CAREER Award, and the IEEE Massey Award. He led the creation of the FedML open-source framework, which is widely used in healthcare, finance, and privacy-preserving computing and forms the core technical foundation of TensorOpera/ChainOpera AI.

Co-founder: Dr. Aiden Chaoyang He

Dr. Aiden Chaoyang He is co-founder and president of TensorOpera/ChainOpera AI, holds a PhD in Computer Science from USC, and is the original creator of FedML. His research covers distributed and federated learning, large-scale model training, blockchain, and privacy-preserving computing. Before founding the company, he held R&D positions at Meta, Amazon, Google, and Tencent, as well as core engineering and management roles at Tencent, Baidu, and Huawei, leading the delivery of numerous internet-scale products and AI platforms. He has published over 30 papers spanning academia and industry, with more than 13,000 citations on Google Scholar, and has received an Amazon PhD Fellowship, a Qualcomm Innovation Fellowship, and Best Paper Awards at NeurIPS and AAAI. The FedML framework he led is one of the most widely used open-source projects in federated learning, supporting an average of 27 billion requests per day. He was also a core author of the FedNLP framework and a hybrid model-parallel training method used in decentralized AI projects such as Sahara AI.

In December 2024, ChainOpera AI announced the completion of a $3.5 million seed round, bringing total funding raised together with TensorOpera to $17 million. The funds will be used to build a blockchain L1 and an AI operating system for decentralized AI agents. The round was led by Finality Capital, Road Capital, and IDG Capital, with participation from Camford VC, ABCDE Capital, Amber Group, and Modular Capital, plus support from prominent institutional and individual investors including Sparkle Ventures, Plug and Play, USC, EigenLayer founder Sreeram Kannan, and BabylonChain co-founder David Tse. The team stated that this financing will accelerate its vision of "a decentralized AI ecosystem where AI resource contributors, developers, and users co-own and co-create."
IX. Analysis of the Federated Learning and AI Agent Market Landscape
There are four main representative federated learning frameworks: FedML, Flower, TFF (TensorFlow Federated), and OpenFL. FedML is the most comprehensive, combining federated learning, distributed large-model training, and MLOps, making it suitable for industrial deployment. Flower is lightweight and easy to use, with an active community, and is oriented toward education and small-scale experiments. TFF is deeply tied to TensorFlow, valuable for academic research but weak in industrialization. OpenFL focuses on healthcare and finance, emphasizes privacy compliance, and has a relatively closed ecosystem. Overall, FedML represents the all-around, industrialization-oriented path, Flower focuses on ease of use and education, TFF is more academic and experimental, and OpenFL holds advantages in vertical-industry compliance.

At the industrialization and infrastructure level, TensorOpera (the commercialization of FedML) inherits the technical strengths of open-source FedML and provides integrated capabilities for cross-cloud GPU scheduling, distributed training, federated learning, and MLOps. Its goal is to bridge academic research and industrial application, serving developers, small and medium-sized enterprises, and the Web3/decentralized ecosystem. In effect, TensorOpera is a "Hugging Face + W&B for open-source FedML," offering a more comprehensive, general-purpose approach to full-stack distributed training and federated learning than platforms focused on community, tooling, or a single industry.

Among the innovation-tier platforms, ChainOpera and Flock both attempt to integrate federated learning with Web3, but their approaches differ significantly. ChainOpera builds a full-stack AI agent platform spanning a four-layer architecture of onboarding, social networking, development, and infrastructure; its core value lies in turning users from "consumers" into "co-creators," enabling collaborative AGI and community ecosystem building through its AI Terminal and Agent Social Network. Flock focuses more on blockchain-enhanced federated learning (BAFL), emphasizing privacy protection and incentive mechanisms in a decentralized environment, primarily for collaborative verification at the compute and data layers. In short, ChainOpera focuses on the application and agent-network layers, while Flock strengthens underlying training and privacy-preserving computation.

At the agent network level, the most representative project in the industry is Olas Network. ChainOpera, originating from federated learning, builds a full-stack closed loop of models, computing power, and agents, using its Agent Social Network as a testing ground for multi-agent interaction and social collaboration. Olas Network, which grew out of DAO collaboration and the DeFi ecosystem, is positioned as a decentralized autonomous service network and, through Pearl, launches directly usable DeFi yield scenarios, a markedly different path from ChainOpera's.

ChainOpera's advantage lies first in its technological moat: the evolution from FedML (a benchmark open-source framework for federated learning) to TensorOpera (enterprise-grade full-stack AI infrastructure) and then to ChainOpera (Web3 agent network + DePIN + tokenomics) forms a unique, continuous path that combines academic accumulation, industrial implementation, and crypto narratives.
In terms of application and user scale, AI Terminal has built an ecosystem with hundreds of thousands of daily active users and thousands of agent applications, ranking first in the AI category on BNBChain DApp Bay and demonstrating clear on-chain user growth and real transaction volume. Its multimodal coverage of crypto-native applications is expected to gradually spill over to a broader base of Web2 users.

On ecosystem collaboration, ChainOpera launched the CO-AI Alliance, joining forces with partners such as io.net, Render, TensorOpera, FedML, and MindNetwork to build multi-sided network effects across GPUs, models, data, and privacy-preserving computing. It is also working with Samsung Electronics to validate mobile multimodal GenAI, pointing to potential expansion into hardware and edge AI.

On the token and economic model, ChainOpera, based on Proof-of-Intelligence consensus, allocates incentives around five key value streams (LaunchPad, Agent API, Model Serving, Contribution, and Model Training), creating a positive cycle through the 1% platform service fee, incentive distribution, and liquidity support, which avoids a pure "token speculation" model and enhances sustainability.

Potential risks: First, technical implementation is challenging. ChainOpera's proposed five-layer decentralized architecture spans a wide range, and cross-layer collaboration (especially large-scale distributed inference and privacy-preserving training) still faces performance and stability challenges that have yet to be verified at scale. Second, ecosystem stickiness remains to be seen. While the project has achieved initial user growth, it is unclear whether the Agent Marketplace and developer toolchain can sustain long-term activity and high-quality supply; the currently launched Agent Social Network relies mainly on LLM-driven text conversations, and user experience and long-term retention still need improvement. If the incentive mechanism is not carefully designed, there is a risk of high short-term activity but insufficient long-term value. Finally, the sustainability of the business model is unproven. Revenue currently relies primarily on platform service fees and token circulation, and stable cash flow has yet to be established; compared with more financially or productivity-oriented applications such as AgentFi or payments, the commercial value of the current model requires further verification. The mobile and hardware ecosystems are also still exploratory, with uncertain market prospects.