
So, you know how everyone's been buzzing about AI lately? Well, Meta dropped their new Llama 4 models over the weekend. Totally under the radar, right? But these things are a big deal. Llama 4 is their shot at taking on the big names like GPT, Gemini, and Claude. Here's what you need to know about Meta AI's Llama 4 models.

Meta Launched Llama 4

Meta has launched three models under the Llama 4 collection: Scout, Maverick, and Behemoth. Scout is the lightweight model with 109B total parameters, Maverick is the mid-tier model with 400B total parameters, and Behemoth is Meta's largest model at around 2 trillion parameters.

In simple terms, parameters are like the brain cells of an AI model. The more parameters a model has, the more patterns and nuance it can pick up, even when trained on the same amount of data.

As of now, only Scout and Maverick are available; Behemoth is still in training. The models' capabilities stem from being trained on copious amounts of unlabelled text, image, and video data, which gives them native multimodal proficiency. Just like Gemini 2.0 and GPT-4o, these models understand both text and visuals.

Scout and Maverick are available on Llama.com and on Hugging Face. They now power Meta AI across WhatsApp, Instagram, Messenger, and the Meta AI web app in 40 countries. However, the multimodal features are currently limited to English-language users in the US.
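For developers who want to try it hands-on, the usual route is Hugging Face's transformers library. Below is a minimal sketch with some assumptions on our part: the exact repository id, the gated license you have to accept on the model page, and whether the checkpoint plugs straight into the standard text-generation pipeline all need to be verified on the official model card, and Scout's 109B total parameters demand serious GPU memory.

```python
# Minimal sketch (not official Meta code): loading Llama 4 Scout via the
# Hugging Face transformers pipeline. The repo id below is an assumption;
# check the "meta-llama" organization and accept the license there first.
import torch
from transformers import pipeline

model_id = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # assumed repo id

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,  # half precision to cut memory use
    device_map="auto",           # spread layers across available GPUs
)

result = pipe(
    "Explain in two sentences what a mixture-of-experts model is.",
    max_new_tokens=120,
)
print(result[0]["generated_text"])
```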

Llama 4 Update: What’s New?

1. MoE (Mixture of Experts) Architecture

While dense models use every parameter of the network for every task, MoE activates only specific “experts” based on the task at hand.

For example, when it receives a math question, instead of running the entire model, this architecture activates only the math expert and leaves the others dormant. That brings speed and cost-efficiency for developers, adding to its appeal. The approach was popularized most recently by DeepSeek's models, and many companies now use MoE for optimization.
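To make the routing idea concrete, here is a tiny toy sketch in Python using only NumPy. Everything in it (the expert count, the layer sizes, the random router) is invented for illustration and has nothing to do with Llama 4's real configuration; it just shows how one expert is picked per token while the rest stay idle.

```python
# Toy illustration of mixture-of-experts routing (not Llama 4's actual code).
# A small "router" scores the experts for each token, and only the
# best-scoring expert runs, so most of the weights stay idle for that token.
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts = 8, 4                       # tiny, made-up sizes
router_w = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def moe_layer(token: np.ndarray) -> np.ndarray:
    scores = token @ router_w                   # one score per expert
    best = int(np.argmax(scores))               # top-1 routing
    print(f"token routed to expert {best}")
    return token @ experts[best]                # only that expert computes

output = moe_layer(rng.normal(size=d_model))
```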

2. Memory Upgrade with Huge Context Windows:

Scout supports a context window of up to 10 million tokens, enabling it to recall uploaded files and past conversations more accurately. That is roughly ten times Gemini's capacity and far beyond Maverick's own 1-million-token window. This large context window lets Scout work with much more data at once, including entire codebases or several documents simultaneously.
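To get a feel for what 10 million tokens means in practice, here is a rough sketch that counts the tokens in a folder of source files with a Hugging Face tokenizer. The tokenizer id and the folder name are placeholders we chose for the example; any Llama-family tokenizer gives a comparable estimate.

```python
# Rough sketch: estimate whether a codebase fits in Scout's 10M-token window.
# The tokenizer repo id and folder name are assumptions for illustration.
from pathlib import Path
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-4-Scout-17B-16E-Instruct")

total_tokens = 0
for path in Path("my_project").rglob("*.py"):   # hypothetical project folder
    text = path.read_text(errors="ignore")
    total_tokens += len(tokenizer.encode(text))

print(f"{total_tokens:,} tokens "
      f"({total_tokens / 10_000_000:.1%} of the 10M-token window)")
```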

3. Native Multimodal Support:

As with ChatGPT and Gemini, all Llama 4 models can process text and images together. According to Meta, this multimodal functionality has not been tacked on as an afterthought; it was integrated into the models' training regime from the beginning. In other words, these models perceive and reason about both types of input in a more holistic manner.

How deeply ChatGPT and Gemini have integrated these capabilities is still unclear to us, and how well this so-called early-fusion approach performs in real-world scenarios remains to be seen. One thing is certain, though: the ability to comprehend text and images is set to be dramatically more advanced than in previous iterations of the Llama models.
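In plain terms, early fusion means image patches and text tokens are embedded into one shared sequence before the transformer processes them, rather than bolting a vision module onto a finished language model. The toy NumPy sketch below shows only that concatenation step; the sizes are invented and this is not Meta's actual architecture.

```python
# Toy sketch of "early fusion": image patches and text tokens are embedded
# into the same vector space and joined into one sequence, which a
# transformer would then process jointly. Sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
d_model = 16                                    # shared embedding width

text_tokens = rng.normal(size=(12, d_model))    # 12 embedded text tokens
image_patches = rng.normal(size=(9, d_model))   # 9 embedded image patches

# One interleaved sequence: the model attends over text and image together
# instead of handling the image in a separate, later stage.
fused_sequence = np.concatenate([image_patches, text_tokens], axis=0)
print(fused_sequence.shape)                     # (21, 16)
```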

4. Stronger Benchmark Performance:

Scout surpasses Gemini 2.0 Flash-Lite and Mistral 3.1 on multiple reported benchmarks while running on a single Nvidia H100 GPU. Maverick's score of 1417 on the LMArena ELO leaderboard places it above GPT-4o, GPT-4.5, and Claude 3.7 Sonnet; it ranks second overall, just behind Gemini 2.5 Pro.

In training, Behemoth reportedly outperforms GPT-4.5, Gemini 2.0 Pro, and Claude 3.7 Sonnet on STEM benchmarks.

5. Looser Guardrails:

Meta reports that Llama 4 is designed to handle more political and social inquiries than its predecessor. To achieve this, the models have been fine-tuned to respond to contentious prompts in a more nuanced manner, providing factual and balanced information without refusing to answer outright. This approach has become increasingly popular among AI companies since the emergence of Grok.

6. Licensing Restrictions:

Llama 4's open-weight release comes with notable strings attached. Companies with over 700 million monthly active users need special permission from Meta to use it, and users in the EU are currently barred from using or distributing Llama 4 under the current terms. Even so, Llama remains one of the few AI offerings from a major tech company whose weights are openly available, albeit with caveats.


Okay, so Llama 4 is Meta's big move to compete with ChatGPT, Grok, and Gemini. It's not just an upgrade: it can handle text and images natively, uses a smarter MoE design to work more efficiently, and can remember much longer conversations. Basically, Meta is trying to be super powerful without using a ton of resources. And keep an eye out for more: they're expected to announce even more at their LlamaCon event on April 29. If you thought Meta was behind in AI, Llama 4 shows they're definitely in the game and pushing hard.

That's it for this article, guys. Keep an eye on Jagran English for more such updates!
