
Meta releases Llama 4, a new crop of flagship AI models
techcrunch.com
Meta has released a new collection of AI models, Llama 4, in its Llama family, on a Saturday, no less.

There are three new models in total: Llama 4 Scout, Llama 4 Maverick, and Llama 4 Behemoth. All were trained on "large amounts of unlabeled text, image, and video data" to give them "broad visual understanding," Meta says.

The success of open models from Chinese AI lab DeepSeek, which perform on par with or better than Meta's previous flagship Llama models, reportedly kicked Llama development into overdrive. Meta is said to have scrambled war rooms to decipher how DeepSeek lowered the cost of running and deploying models like R1 and V3.

Scout and Maverick are openly available on Llama.com and from Meta's partners, including the AI dev platform Hugging Face, while Behemoth is still in training. Meta says that Meta AI, its AI-powered assistant across apps including WhatsApp, Messenger, and Instagram, has been updated to use Llama 4 in 40 countries. Multimodal features are limited to the U.S. in English for now.

Some developers may take issue with the Llama 4 license. Users and companies domiciled or with a principal place of business in the EU are prohibited from using or distributing the models, likely the result of governance requirements imposed by the region's AI and data privacy laws. (In the past, Meta has decried these laws as overly burdensome.) In addition, as with previous Llama releases, companies with more than 700 million monthly active users must request a special license from Meta, which Meta can grant or deny at its sole discretion.

"These Llama 4 models mark the beginning of a new era for the Llama ecosystem," Meta wrote in a blog post. "This is just the beginning for the Llama 4 collection."

Image Credits: Meta

Meta says that Llama 4 is its first cohort of models to use a mixture-of-experts (MoE) architecture, which is more computationally efficient for training and answering queries.
MoE architectures basically break data processing tasks down into subtasks and then delegate them to smaller, specialized "expert" models.

Maverick, for example, has 400 billion total parameters, but only 17 billion active parameters across 128 experts. (Parameters roughly correspond to a model's problem-solving skills.) Scout has 17 billion active parameters, 16 experts, and 109 billion total parameters.

According to Meta's internal testing, Maverick, which the company says is best for "general assistant and chat" use cases like creative writing, exceeds models such as OpenAI's GPT-4o and Google's Gemini 2.0 on certain coding, reasoning, multilingual, long-context, and image benchmarks. However, Maverick doesn't quite measure up to more capable recent models like Google's Gemini 2.5 Pro, Anthropic's Claude 3.7 Sonnet, and OpenAI's GPT-4.5.

Scout's strengths lie in tasks like document summarization and reasoning over large codebases. Uniquely, it has a very large context window: 10 million tokens. (Tokens represent bits of raw text, e.g., the word "fantastic" split into "fan," "tas," and "tic.") In plain English, Scout can take in images and up to millions of words, allowing it to process and work with extremely lengthy documents.

Scout can run on a single Nvidia H100 GPU, while Maverick requires an Nvidia H100 DGX system or equivalent, according to Meta's calculations. Meta's unreleased Behemoth will need even beefier hardware. According to the company, Behemoth has 288 billion active parameters, 16 experts, and nearly two trillion total parameters. Meta's internal benchmarking has Behemoth outperforming GPT-4.5, Claude 3.7 Sonnet, and Gemini 2.0 Pro (but not 2.5 Pro) on several evaluations measuring STEM skills like math problem solving.

Of note, none of the Llama 4 models is a proper "reasoning" model along the lines of OpenAI's o1 and o3-mini.
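To make the active-versus-total-parameter distinction above concrete, here is a minimal sketch of top-k MoE routing. Everything in it is illustrative, not Meta's implementation: the expert count, the top-1 routing, and the random "gate" scores are all assumptions standing in for learned components.

```python
import random

NUM_EXPERTS = 4  # illustrative; Maverick reportedly uses 128 experts
TOP_K = 1        # number of experts activated per token (assumed)

def expert(expert_id, token):
    # Stand-in for one specialized feed-forward sub-network.
    return f"expert-{expert_id}({token})"

def gate(token):
    # Stand-in for a learned router that scores every expert for this
    # token and keeps only the top-k; here the scores are random.
    scores = [(random.random(), i) for i in range(NUM_EXPERTS)]
    scores.sort(reverse=True)
    return [i for _, i in scores[:TOP_K]]

def moe_layer(token):
    # Only the selected experts run for a given token, which is why the
    # "active" parameter count is a small fraction of the total: the
    # other experts' parameters exist but sit idle for this token.
    chosen = gate(token)
    return [expert(i, token) for i in chosen]

outputs = moe_layer("hello")
print(outputs)
```

The design point the sketch illustrates is exactly the one in the Maverick numbers above: the model stores all experts' parameters (the 400 billion total), but each token only pays the compute cost of the routed subset (the 17 billion active).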
Reasoning models fact-check their answers and generally respond to questions more reliably, but as a consequence take longer than traditional, "non-reasoning" models to deliver answers.

Image Credits: Meta

Interestingly, Meta says that it tuned all of its Llama 4 models to refuse to answer "contentious" questions less often. According to the company, Llama 4 responds to "debated" political and social topics that the previous crop of Llama models wouldn't. In addition, the company says, Llama 4 is "dramatically more balanced" with which prompts it flat-out won't entertain.

"[Y]ou can count on [Llama 4] to provide helpful, factual responses without judgment," a Meta spokesperson told TechCrunch. "[W]e're continuing to make Llama more responsive so that it answers more questions, can respond to a variety of different viewpoints [...] and doesn't favor some views over others."

Those tweaks come as some White House allies accuse AI chatbots of being too politically "woke." Many of President Donald Trump's close confidants, including billionaire Elon Musk and crypto and AI "czar" David Sacks, have alleged that popular AI chatbots censor conservative views. Sacks has historically singled out OpenAI's ChatGPT as "programmed to be woke" and untruthful about political subject matter.

In actuality, bias in AI is an intractable technical problem. Musk's own AI company, xAI, has struggled to create a chatbot that doesn't endorse some political views over others. That hasn't stopped companies including OpenAI from adjusting their AI models to answer more questions than they would have previously, in particular questions relating to controversial subjects.