Introduction to Meta Llama 4 Scout and Maverick AI Models With MoE Architecture
Meta recently introduced the Llama 4 Scout and Llama 4 Maverick artificial intelligence (AI) models to the open community. These models, available on Hugging Face and the Llama website, are Meta's first open models built on a Mixture-of-Experts (MoE) architecture.

Features of Llama 4 Scout and Maverick Models
Llama 4 Scout is a 17 billion active parameter model with 16 experts, while Llama 4 Maverick pairs 17 billion active parameters with 128 experts. Scout can run on a single Nvidia H100 GPU, and Llama 4 Behemoth, the largest model in the family, is said to outperform rival models on several benchmarks.
MoE Architecture in Llama 4 AI Models
The Llama 4 models use an MoE architecture that activates only a fraction of the total parameters for each input token by routing it to a small subset of experts, which improves compute efficiency during both training and inference. Meta also incorporated techniques such as early fusion for native multimodality and MetaP for setting critical model hyperparameters.
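To illustrate the general idea behind MoE routing described above, here is a minimal sketch in Python (PyTorch) of a top-k gating layer that sends each token to a small subset of expert networks. This is not Meta's implementation; the layer sizes, expert count and top-k value are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Illustrative top-k Mixture-of-Experts layer (not Meta's implementation).

    Each token is routed to `top_k` of `num_experts` small feed-forward experts,
    so only a fraction of the layer's parameters is active for any given token.
    """

    def __init__(self, d_model=64, num_experts=16, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)  # produces routing scores per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x):                       # x: (num_tokens, d_model)
        scores = self.router(x)                 # (num_tokens, num_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # normalize over the chosen experts only
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e     # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Usage: route 8 example token embeddings through the toy layer.
tokens = torch.randn(8, 64)
layer = ToyMoELayer()
print(layer(tokens).shape)  # torch.Size([8, 64])
```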
Enhancements and Performance
Meta says the Maverick model delivers superior performance on benchmarks covering image reasoning, image understanding, reasoning and knowledge, and long-context tasks. The Scout model also outperforms competing models on multiple benchmarks.
Safety Measures and Licensing
Meta says it implemented safety measures throughout the pre-training and post-training processes to safeguard the models against harmful use. The Llama 4 models are available to the open community under the Llama 4 Community License, which allows both academic and commercial usage.
For more information, you can visit the official blog post or access the models on the Hugging Face listing or the Llama website.
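As an illustration of how one might try the models from the Hugging Face listing, here is a minimal sketch using the transformers text-generation pipeline. The model identifier shown is an assumption based on the Llama 4 listing and may differ; access is gated and requires accepting Meta's license and authenticating with a Hugging Face token.

```python
# Minimal sketch: loading a Llama 4 checkpoint from Hugging Face with transformers.
# The repo name below is an assumption; it may not match the actual listing.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",  # assumed repo name
    device_map="auto",      # spread the MoE weights across available GPUs
    torch_dtype="auto",
)

result = generator("Explain mixture-of-experts in one sentence.", max_new_tokens=60)
print(result[0]["generated_text"])
```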
