Meta's Llama 3.1 promises to compete with closed source models
Meta has unveiled its most capable ‘open-source’ model yet, which it says can rival the leading closed models from OpenAI, Google, and Anthropic. Llama 3.1 is significantly larger than the Llama 3 models released earlier this year, with its biggest variant boasting 405 billion parameters and trained on more than 16,000 Nvidia H100 GPUs.
Advancements in Open Source AI Models
According to Meta, this marks the first ‘frontier-level’ open-source model. Improved 70 billion and 8 billion parameter models distilled from the larger variant are also now available, which Meta claims are the leading open-source models at their sizes.
In a statement accompanying the release of Llama 3.1, Mark Zuckerberg, founder and CEO of Meta, said the open-source model ecosystem is quickly ‘closing the gap’ on its closed-source counterparts, adding that he expects future Llama models to surpass the closed-source competition in 2025. Zuckerberg reiterated his claim that open source is the future of AI, projecting that it will follow a path similar to that of Linux, the open-source operating system that competed with closed-source versions of Unix in its early days.
Open Source vs Closed Source
The flexibility and affordability of Linux were soon complemented by improvements in its security and performance, driving the platform’s rise to become the industry-standard operating system for cloud computing and mobile devices. Zuckerberg said he thinks AI will follow a similar trajectory.
Accessibility to AI Technology
Victor Botev, CTO and co-founder of Iris.ai, said Llama 3.1 is a big leap in expanding access to powerful AI systems, removing some of the barriers to entry for those without deep pockets. By releasing the models openly, Meta is enabling researchers and developers worldwide to explore, innovate, and build on state-of-the-art language AI without the barriers of proprietary APIs or expensive licensing fees.
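In practice, that access usually means downloading the weights and running them with standard open tooling. The snippet below is a minimal, illustrative sketch of loading the 8 billion parameter instruct model with the Hugging Face transformers library; the repository name and generation settings are assumptions, and the weights themselves remain gated behind acceptance of Meta’s license.

```python
# Minimal sketch: loading and prompting a Llama 3.1 model with Hugging Face
# transformers. The model repository name below is an assumption, and the
# weights are gated behind acceptance of Meta's license.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed repo name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Format a chat-style prompt with the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Explain open-source AI in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a short completion and print only the newly generated tokens.
output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```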
Some have criticized Meta’s use of the label ‘open source’ for its Llama models, however, preferring the term ‘free to use’, because Meta places restrictions on particularly large enterprises. Organizations with hundreds of millions of users will need to seek approval from Meta before they can take advantage of Llama 3.1, and Meta still declines to share the model’s training data or the code used to train it.