Llama 4 Scout: Redefining AI Context with 10 Million Tokens

Published On Thu Apr 10 2025

Highlights

- Meta’s latest open-source Llama 4 models are much cheaper to run than closed models like OpenAI’s GPT-4o, making enterprise AI deployment more feasible.
- Llama 4 Scout supports a context window of up to 10 million tokens (roughly 7.5 million words), compared with the 1 million tokens of previous record holder Gemini 2.5, letting businesses process 10 times more data in a single prompt.
- Llama 4 has fewer restrictions than Llama 3 and aims to be politically neutral; the new models refuse to answer questions on sensitive topics less often.

Introduction to Meta's Latest Llama 4 Models

Meta’s latest open-source AI models are a shot across the bow of the more expensive closed models from OpenAI, Google, Anthropic, and others. That is good news for businesses, because the models could lower the cost of deploying artificial intelligence (AI), according to experts. The social media giant has released two models from its Llama family: Llama 4 Scout and Llama 4 Maverick. They are Meta’s first natively multimodal models, meaning they were built from the ground up to handle text and images rather than having those capabilities bolted on.


Advantages of Llama 4 Scout

Llama 4 Scout’s standout feature is its context window, which supports up to 10 million tokens (roughly 7.5 million words). That surpasses the previous record held by Google’s Gemini 2.5, which supports 1 million tokens. The larger context window lets the model take in far more data and documents in a single prompt.
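To put that number in perspective, here is a minimal back-of-the-envelope sketch in Python. It relies only on the article’s own figures (10 million tokens, roughly 7.5 million words, so about 0.75 words per token); the example document sizes are illustrative assumptions, not measurements.

```python
# Rough sketch: what fits in a 10-million-token context window, using the
# article's implied ratio of ~0.75 words per token (10M tokens ~ 7.5M words).
# The document sizes below are hypothetical, for illustration only.

CONTEXT_TOKENS = 10_000_000                   # Llama 4 Scout's context window
WORDS_PER_TOKEN = 7_500_000 / 10_000_000      # ~0.75, from the article's figures

def tokens_needed(word_count: int) -> int:
    """Estimate tokens required for a document of a given word count."""
    return int(word_count / WORDS_PER_TOKEN)

documents = {
    "annual report (~50k words)": 50_000,
    "full-length novel (~100k words)": 100_000,
    "large document archive (~1M words)": 1_000_000,
}

for name, words in documents.items():
    t = tokens_needed(words)
    print(f"{name}: ~{t:,} tokens, ~{CONTEXT_TOKENS // t} copies fit in one prompt")
```

Even a million-word corpus occupies only a fraction of the window, which is what makes multi-document prompts of this scale plausible.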

Cost Efficiency and Parameters

Both Llama 4 Scout and Maverick use 17 billion active parameters; Scout has 109 billion total parameters and Maverick has 400 billion. The gap between active and total parameters reflects the models’ mixture-of-experts design, in which only a subset of the parameters is used for each token. Meta highlighted that Llama 4 Maverick is cheaper to run than competitors such as OpenAI’s GPT-4o, costing between 19 and 49 cents per million tokens for input and output.
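As a rough illustration of what that pricing implies, the short Python sketch below converts the article’s 19 to 49 cents per million tokens into a monthly bill. The 500-million-token monthly workload is a hypothetical assumption, not a figure from the article.

```python
# Back-of-the-envelope cost estimate using the article's 19-49 cents per
# million tokens (input and output) for Llama 4 Maverick.
# The workload size is an illustrative assumption.

LOW_RATE = 0.19   # USD per million tokens (article's lower bound)
HIGH_RATE = 0.49  # USD per million tokens (article's upper bound)

def monthly_cost(tokens_per_month: int) -> tuple[float, float]:
    """Return (low, high) USD cost for a given monthly token volume."""
    millions = tokens_per_month / 1_000_000
    return millions * LOW_RATE, millions * HIGH_RATE

# Hypothetical enterprise workload: 500 million tokens per month.
low, high = monthly_cost(500_000_000)
print(f"Estimated monthly spend: ${low:,.2f} to ${high:,.2f}")
```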


Competitive Landscape and Expert Opinion

Meta’s open-source Llama models are seen as a challenge to closed models like Gemini, offering cost-effective solutions that meet the needs of most business use cases. Experts believe that the affordability and performance of Llama 4 make it a competitive option in the market, outperforming leading closed-source alternatives on various benchmarks.

Enhanced Neutrality and Responsiveness

Meta emphasized that Llama 4 is more balanced than its predecessor and less likely to refuse sensitive queries. The models are designed to give even-handed responses and, according to Meta, will keep evolving toward political neutrality, making them more willing to engage with contentious topics.

