Facebook parent company Meta announced the release of its open-source large language model, Llama 3.1, on Tuesday. The new LLM will be available in three sizes — 8B, 70B, and 405B parameters — the latter being the largest open-source AI model built to date, which Meta CEO Mark Zuckerberg describes as “the first frontier-level open source AI model.”
“Last year, Llama 2 was only comparable to an older generation of models behind the frontier,” Zuckerberg wrote in a blog post Tuesday. “This year, Llama 3 is competitive with the most advanced models and leading in some areas. Starting next year, we expect future Llama models to become the most advanced in the industry.”
Meta claims that the 405B model, trained on 15 trillion tokens using 16,000 H100 GPUs, is significantly larger than its Llama 3 predecessor. It reportedly rivals today’s top closed-source models, such as OpenAI’s GPT-4o, Google’s Gemini 1.5, and Anthropic’s Claude 3.5, in “general knowledge, math, tool use, and multilingual translation.” Zuckerberg predicted on Instagram on Tuesday that Meta AI would surpass ChatGPT as the most widely used AI assistant by the end of the year.
The company notes that all three versions of Llama 3.1 will support expanded context lengths of 128K tokens, enabling users to provide added context, up to a book’s worth of supporting documentation. They’ll also support eight languages at launch. What’s more, Meta has amended its license agreement to allow developers to use Llama 3.1 outputs to train other models.
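For developers curious what that longer context looks like in practice, here is a minimal sketch using the Hugging Face transformers library. The model identifier, the gated-access flow, and the sample file are our assumptions for illustration, not details from Meta’s announcement.

```python
# Sketch: feeding a long document to the 8B instruct model via transformers.
# Assumes you have accepted the Llama 3.1 license and have GPU memory available;
# the repo name below is an assumed Hugging Face identifier.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.1-8B-Instruct",  # assumed model id
    device_map="auto",
)

# With a 128K-token context window, an entire report can be passed in one prompt.
with open("long_report.txt") as f:  # hypothetical example document
    document = f.read()

prompt = f"{document}\n\nSummarize the key findings above in three bullet points."
result = generator(prompt, max_new_tokens=300)
print(result[0]["generated_text"])
```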
Meta also announced that it is partnering with more than a dozen other companies in the industry to further develop the Llama ecosystem. Amazon, Databricks, and Nvidia will launch full-service software suites to help developers fine-tune their own models based on Llama, while the startup Groq has “built low-latency, low-cost inference serving” for the new family of 3.1 models, Zuckerberg wrote.
Being open source, Llama 3.1 will be available on all the major cloud services, including AWS, Google Cloud, and Azure.