Generative AI | NVIDIA Releases Nemotron-Cascade 2: An Open 30B MoE with 3B Active Parameters, Delivering Better Reasoning and Stronger Agentic Capabilities | by ai-intensify | March 21, 2026
Generative AI | NVIDIA Releases Nemotron 3 Super: A 120B-Parameter Open-Source Hybrid Mamba-Attention MoE Model That Delivers 5x Higher Throughput for Agentic AI | by ai-intensify | March 11, 2026
AI News | YuanLab AI Releases Yuan 3.0 Ultra: A Flagship Multimodal MoE Foundation Model Built for Strong Intelligence and Unmatched Efficiency | March 5, 2026
Generative AI | Alibaba Qwen Team Releases Qwen3.5-397B MoE Model with 17B Active Parameters and 1M-Token Context for AI Agents | February 16, 2026
Machine Learning | NVIDIA Nemotron 3 Nano 30B MoE Model Now Available on Amazon SageMaker JumpStart | February 11, 2026
AI News | Zhipu AI Releases GLM-4.7-Flash: A 30B-A3B MoE Model for Efficient Local Coding and Agents | January 20, 2026
AI Tools | NVIDIA AI Releases Nemotron 3: A Hybrid Mamba-Transformer MoE Stack for Long-Context Agentic AI | December 20, 2025
Generative AI | AI Interview Series #4: Transformers vs. Mixture of Experts (MoE) | December 4, 2025