Meta Platforms on Saturday announced the launch of Llama 4, the latest version of its large language model (LLM), introducing two new models: Llama 4 Scout and Llama 4 Maverick.
According to Meta, Llama 4 is a multimodal AI system, meaning it can process and integrate various types of data, including text, images, video, and audio, while also converting content across these formats.
In a statement, the company described Llama 4 Scout and Llama 4 Maverick as its “most advanced models yet” and the “best in their class for multimodality.”
Meta also confirmed that both models would be available as open-source software. It additionally previewed Llama 4 Behemoth, which it described as “one of the smartest LLMs in the world and our most powerful yet, designed to serve as a teacher for future models.”
The announcement comes as major tech firms continue ramping up investments in AI infrastructure following the debut of OpenAI’s ChatGPT, which reshaped the industry and fueled competition in machine learning.
However, The Information reported on Friday that Meta had delayed the release of Llama 4 due to concerns over its performance. During development, the model reportedly fell short of Meta’s expectations in key technical benchmarks, particularly in reasoning and mathematical tasks.
The report also said Meta was concerned that Llama 4 might lag behind OpenAI’s models in conducting humanlike voice interactions.
Despite these challenges, Meta is forging ahead with its AI expansion, planning to spend up to $65 billion this year on AI infrastructure. The push comes amid mounting pressure from investors for tangible returns on Big Tech’s AI spending.