Gradient, a distributed AI lab, today released Echo-2, a distributed reinforcement learning framework designed to remove efficiency barriers in AI training. By fully decoupling the Learner and Actor at the architectural level, Echo-2 cuts the post-training cost of a 30B-parameter model from $4,500 to $425, translating to more than 10x the research throughput on the same budget.

The framework uses a storage-computation separation design for asynchronous reinforcement learning (async RL), offloading the bulk of sampling computation onto preemptible spot instances and heterogeneous GPUs built on Parallax. Combined with bounded staleness, fault-tolerant instance scheduling, and the proprietary Lattica communication protocol, this significantly improves training efficiency while preserving model accuracy. (An illustrative sketch of the decoupled Actor/Learner pattern appears at the end of this release.)

Alongside the framework, Gradient will soon launch Logits, its RLaaS (RL-as-a-service) platform, aiming to shift AI research from a "capital-driven" to an "efficiency-driven" paradigm. Logits is now open for reservations by students and researchers worldwide (logits.dev).

Gradient is an AI lab dedicated to building distributed infrastructure, focusing on the distributed training, serving, and deployment of frontier large models.
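
For readers unfamiliar with the asynchronous pattern described above, here is a minimal sketch of decoupled Actors and a Learner coordinated through a staleness bound. It is an illustrative toy in plain Python threads, not Echo-2's actual implementation: the names (MAX_STALENESS, actor, learner), the queue-based transport, and all constants are assumptions made for exposition.

```python
import queue
import random
import threading
import time

# Illustrative constants; Echo-2's real configuration is not public here.
MAX_STALENESS = 4    # hypothetical bound on how old a rollout's policy may be
NUM_ACTORS = 3
TRAIN_STEPS = 20

samples = queue.Queue(maxsize=64)   # (policy_version, reward) pairs from actors
policy_version = 0                  # latest version published by the learner
lock = threading.Lock()
done = threading.Event()


def actor() -> None:
    # Stand-in for a rollout worker; in the decoupled design these would run
    # on cheap spot/heterogeneous GPUs and may hold a stale policy snapshot.
    while not done.is_set():
        with lock:
            snapshot = policy_version          # possibly stale policy version
        time.sleep(random.uniform(0.01, 0.05))  # fake rollout latency
        try:
            samples.put((snapshot, random.random()), timeout=0.1)
        except queue.Full:
            pass                               # drop samples if learner is behind


def learner() -> None:
    # Consumes rollouts, discarding any generated by a policy that is too stale.
    global policy_version
    accepted = 0
    while accepted < TRAIN_STEPS:
        version, _reward = samples.get()
        with lock:
            staleness = policy_version - version
            if staleness > MAX_STALENESS:
                continue                 # bounded staleness: reject old rollouts
            policy_version += 1          # stand-in for a gradient step + publish
        accepted += 1
        print(f"step {accepted}: trained on v{version} rollout (staleness={staleness})")
    done.set()


actors = [threading.Thread(target=actor, daemon=True) for _ in range(NUM_ACTORS)]
for t in actors:
    t.start()
learner()
```

In this toy the actors resample the policy on every loop, so the staleness bound rarely triggers; in a real deployment, slow or preempted workers are exactly the ones whose rollouts drift past the bound, and rejecting them is what keeps asynchronous training close to on-policy accuracy.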