With new Gen-4 model, Runway claims to have finally achieved consistency in AI videos

The new model is rolling out to paid users starting today.

Samuel Axon | Mar 31, 2025 5:07 pm

Runway's new Gen-4 model claims to support consistent characters and objects. Credit: Runway

AI video startup Runway announced the availability of its newest video synthesis model today. Dubbed Gen-4, the model purports to solve several key problems with AI video generation.

Chief among those is the notion of consistent characters and objects across shots. If you've watched any short films made with AI, you've likely noticed that they tend to be dream-like sequences of thematically but not realistically connected images: mood pieces more than consistent narratives.

Runway claims Gen-4 can maintain consistent characters and objects, provided it's given a single reference image of the character or object in question as part of the project in Runway's interface.

The company published example videos, including one in which the same woman appears in various shots across different scenes, and another in which the same statue appears in completely different contexts, looking largely the same across a variety of environments and lighting conditions.

Likewise, Gen-4 aims to let filmmakers who use the tool get coverage of the same environment or subject from multiple angles across several shots in the same sequence. With Gen-2 and Gen-3, this was virtually impossible. The tool has in the past been good at maintaining stylistic integrity, but not at generating multiple angles within the same scene.

The last major model update at Runway was Gen-3, which was announced just under a year ago in June 2024. That update greatly expanded the length of videos users could produce, from just two seconds to 10, and offered greater consistency and coherence than its predecessor, Gen-2.

Runway's unique positioning in a crowded space

Runway released the first publicly available version of its video synthesis product to users in February 2023. Gen-1 creations tended to be more curiosities than anything useful to creatives, but subsequent optimizations have allowed the tool to be used in limited ways in real projects.

For example, it was used in producing the sequence in the film Everything Everywhere All At Once in which two rocks with googly eyes have a conversation on a cliff, and it has also been used to make visual gags for The Late Show with Stephen Colbert.

Whereas many competing startups were started by AI researchers or Silicon Valley entrepreneurs, Runway was founded in 2018 by art students at New York University's Tisch School of the Arts: Cristóbal Valenzuela and Alejandro Matamala from Chile, and Anastasis Germanidis from Greece.

It was one of the first companies to release a usable video-generation tool to the public, and its team also contributed in foundational ways to the Stable Diffusion model.

It is vastly outspent by competitors like OpenAI, but while most of its competitors have released general-purpose video creation tools, Runway has sought an Adobe-like place in the industry.
It has focused on marketing to creative professionals like designers and filmmakers, and has implemented tools meant to integrate Runway into existing creative workflows as a support tool.

The support-tool argument (as opposed to a standalone creative product) helped Runway secure a deal with the motion picture company Lionsgate, wherein Lionsgate allowed Runway to legally train its models on its library of films, and Runway provided bespoke tools for Lionsgate to use in production or post-production.

That said, Runway is, along with Midjourney and others, one of the subjects of a widely publicized intellectual property case brought by artists who claim the companies illegally trained their models on their work, so not all creatives are on board.

Apart from the announcement of the partnership with Lionsgate, Runway has never publicly shared what data is used to train its models. However, a report in 404 Media seemed to reveal that at least some of the training data included video scraped from the YouTube channels of popular influencers, film studios, and more.

Time will tell for Gen-4

The claimed improvements in Gen-4 target a common complaint from the creatives who use these tools: that video synthesis tools are limited in their usefulness because they have limited consistency or understanding of a scene. Competing tools like OpenAI's Sora have also tried to improve on these limitations, but with limited results.

Runway's announcement says that Gen-4 is rolling out to "all paid plans and Enterprise customers" today. However, when I logged into my paid account, Gen-4 was listed in the model picker, but with the word "Soon" next to it, and it was not yet selectable. Runway may be rolling the model out to accounts slowly to avoid problems with server load.

Gen-4 is listed as an option, but not yet usable, as of this article's publication. Credit: Samuel Axon

Whenever it arrives for all users, it will only be available with a paid plan. Individual, non-Enterprise plans start at $15 per month and scale up to as much as $95 per month, though there is a 20 percent discount for signing up for an annual plan instead. An Enterprise account runs $1,500 per year.

The plans provide users with up to 2,250 credits monthly, but because generating usable AI video is an act of curation, you probably can't generate too many usable videos with that amount. There is an "Explore Mode" in the $95 per month individual plan that allows unlimited generations at a relaxed rate, which is meant as a way to gradually find your way to the output you want to invest in.

Samuel Axon, Senior Editor

Samuel Axon is a senior editor at Ars Technica, where he is the editorial director for tech and gaming coverage. He covers AI, software development, gaming, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He is also an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.