How RecursiveMAS speeds up multi-agent inference by 2.4x and reduces token usage by 75%
One of the key challenges of current multi-agent AI systems is that they communicate by generating and sharing text sequences, which introduces latency, drives up token costs, and makes it difficult to train the entire system as a cohesive unit. To overcome this challenge, researchers at the University of Illinois Urbana-Champaign and Stanford University developed RecursiveMAS, a framework that enables agents to collaborate and transmit information through embedding space instead of text. This change yields both efficiency and performance gains: experiments show that RecursiveMAS achieves accuracy improvements across complex domains like code generation, medical reasoning, and search, while also increasing inference speed and slashing token usage. RecursiveMAS is also significantly cheaper to train.
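To make the core idea concrete, here is a minimal sketch of embedding-space communication between two agents. Everything here is a hypothetical illustration, not the paper's actual architecture: the `Agent` class, its `send`/`receive` methods, and the embedding width `D` are all assumptions. The point it shows is that a message between agents is a single fixed-size vector rather than a generated token sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 64  # shared message-embedding width (hypothetical choice)

class Agent:
    """Toy agent with a linear 'encoder' that emits a message embedding
    and a linear 'reader' that fuses an incoming embedding into its state.
    Both are stand-ins for the learned projections a real system would use."""
    def __init__(self):
        self.encoder = rng.standard_normal((D, D)) / np.sqrt(D)
        self.reader = rng.standard_normal((D, D)) / np.sqrt(D)

    def send(self, state):
        # Emit one continuous D-dim vector -- no decoding into tokens.
        return self.encoder @ state

    def receive(self, message, state):
        # Residual update: fold the incoming embedding into local state.
        return state + self.reader @ message

planner, solver = Agent(), Agent()
state_p = rng.standard_normal(D)
state_s = rng.standard_normal(D)

msg = planner.send(state_p)            # a single vector crosses the wire
state_s = solver.receive(msg, state_s) # consumed directly, never verbalized

print(msg.shape)  # (64,)
```

Compared with a text handoff, where each hop requires autoregressively generating dozens or hundreds of tokens and re-encoding them on the receiving side, passing one embedding per hop avoids the decode/re-encode round trip entirely, which is where the latency and token savings would come from.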