Meta's benchmarks for its new AI models are a bit misleading
One of the new flagship AI models Meta released on Saturday, Maverick, ranks second on LM Arena, a test in which human raters compare the outputs of models and choose which they prefer. But it seems the version of Maverick that Meta deployed to LM Arena differs from the version that's widely available to developers.

As several AI researchers pointed out on X, Meta noted in its announcement that the Maverick on LM Arena is an "experimental chat version." A chart on the official Llama website, meanwhile, discloses that Meta's LM Arena testing was conducted using "Llama 4 Maverick optimized for conversationality."

As we've written about before, for various reasons, LM Arena has never been the most reliable measure of an AI model's performance. But AI companies generally haven't customized or otherwise fine-tuned their models to score better on LM Arena, or haven't admitted to doing so, at least.

The problem with tailoring a model to a benchmark, withholding it, and then releasing a "vanilla" variant of that same model is that it makes it challenging for developers to predict exactly how well the model will perform in particular contexts. It's also misleading. Ideally, benchmarks, woefully inadequate as they are, provide a snapshot of a single model's strengths and weaknesses across a range of tasks.

Indeed, researchers on X have observed stark differences in the behavior of the publicly downloadable Maverick compared with the model hosted on LM Arena. The LM Arena version seems to use a lot of emojis and give incredibly long-winded answers.

We've reached out to Meta and Chatbot Arena, the organization that maintains LM Arena, for comment.