Meta has officially introduced its latest large language models, Llama 4 Scout and Llama 4 Maverick, which it describes as its most advanced multimodal AI systems to date. The models can understand and integrate different types of input, such as text, images, video, and audio. According to Meta, both excel at handling this kind of complex, cross-format content and offer improved performance across a wide range of tasks.
Importantly, both Scout and Maverick are being released as open-source models, making them accessible to developers and researchers around the world. The move signals Meta’s continued support for collaborative AI development at a time when big tech firms are competing for leadership in AI.
Preview of Llama 4 Behemoth and Meta’s AI Ambitions

Alongside the official release, Meta also previewed Llama 4 Behemoth, a far more powerful model designed to train and guide future generations of AI systems. Although earlier reports suggested delays due to performance concerns on math and reasoning tasks, Meta appears to be pushing forward with confidence, positioning Behemoth as a ‘teacher’ model for the next iterations of its AI.
In line with this strategy, Meta has committed to investing as much as $65 billion in AI infrastructure this year, reflecting the intense pressure among major tech companies to show real returns on their AI investments.
In short, Meta has launched its most advanced open-source AI models yet, Llama 4 Scout and Llama 4 Maverick, and previewed Llama 4 Behemoth. These multimodal systems are part of Meta’s major push into AI, backed by a planned $65 billion investment in infrastructure.