Meet ZAYA1-8B, a super-efficient, open reasoning model trained on AMD Instinct MI300 GPUs
Even as leading AI providers like OpenAI and Anthropic battle over the compute to train and release ever larger, more powerful models, other labs are going in a different direction: pursuing the development of smaller, more efficient models and often open-sourcing them.

The latest worth paying attention to comes from the lesser-known Palo Alto startup Zyphra, which this week released its new reasoning, mixture-of-experts (MoE) language model, ZAYA1-8B. It has just over 8 billion total parameters and only 760 million active per token (in an MoE architecture, only a subset of the network's expert weights is used for any given input), far fewer than the trillions estimated for frontier models from the big labs. Yet ZAYA1-8B delivers competitive performance on third-party benchmarks against the likes of GPT-5-High and DeepSeek-V3.2.

It can be downloaded from Hugging Face now, free of charge, under a permissive, standard, enterprise-friendly license.
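For readers who want to try the model locally, a minimal loading sketch using Hugging Face's transformers library might look like the following. Note the assumptions: the repository id "Zyphra/ZAYA1-8B" is hypothetical (check Zyphra's Hugging Face page for the exact name), and whether trust_remote_code is needed depends on how the MoE checkpoint is packaged.

```python
# Minimal sketch: loading and prompting ZAYA1-8B with transformers.
# Assumes transformers and torch are installed and the repo id below
# exists -- "Zyphra/ZAYA1-8B" is an assumption, not a confirmed id.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Zyphra/ZAYA1-8B"  # hypothetical repository id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # spread weights across available GPU(s)/CPU
    trust_remote_code=True,  # MoE checkpoints often ship custom model code
)

prompt = "Explain mixture-of-experts language models in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because only ~760 million parameters are active per token, inference should be considerably cheaper than a dense model of comparable total size, though all 8 billion-plus weights still need to fit in memory.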