
Midjourney introduces first new image generation model in over a year
arstechnica.com
Image Synthesis

The new model is now in public alpha and has personalization enabled by default.

Samuel Axon – Apr 4, 2025 6:34 pm

Midjourney V7 claims to be much more consistent at generating things like hands that don't look strange. Credit: Xeophon

AI image generator Midjourney released its first new model in quite some time today; dubbed V7, it's a ground-up rework that is available in alpha to users now.

There are two areas of improvement in V7: the first is better images, and the second is new tools and workflows.

Starting with the image improvements, V7 promises much higher coherence and consistency for hands, fingers, body parts, and "objects of all kinds." It also offers much more detailed and realistic textures and materials, like skin wrinkles or the subtleties of a ceramic pot.

Those details are often among the most obvious telltale signs that an image has been AI-generated. To be clear, Midjourney isn't claiming to have made advancements that render AI images unrecognizable to a trained eye; it's just saying that some of the messiness we're accustomed to has been cleaned up to a significant degree.

V7 can reproduce materials and lighting situations that V6.1 usually couldn't. Credit: Xeophon

On the features side, the star of the show is the new "Draft Mode." On its various communication channels with users (a blog, Discord, X, and so on), Midjourney says that "Draft mode is half the cost and renders images at 10 times the speed."

However, the images are of lower quality than what you get in the other modes, so this is not intended to be the way you produce final images.
Rather, it's meant to be a way to iterate and explore to find the desired result before switching modes to make something ready for public consumption.

V7 comes with two modes: turbo and relax. Turbo generates final images quickly but is twice as expensive in terms of credit use, while relax mode takes its time but is half as expensive. There is currently no standard mode for V7, strangely; Midjourney says that's coming later, as it needs some more time to be refined.

V7 works with most parameters from previous versions (--ar, --seed, etc.), including users' existing --sref codes from 6.1, as well as the recently introduced personalization feature. In fact, V7 is the first Midjourney model that has personalization enabled by default, meaning users will have to train it by picking at least 200 images to build their aesthetic profile.

Personalization presents you with a choice between two images hundreds of times so it can learn what you find "beautiful" and tailor its generations to those tastes. You can disable personalization in V7 if you want, though, just like in prior models.

Midjourney was one of the first AI image generation tools to find widespread use. Initially, it was available on Discord and usable via a somewhat arcane syntax, but it has since launched a more modern web interface.

A significant portion of the AI art shared on social media has been made with Midjourney. It's also a key part of the workflow for many AI video creators, who often make the initial image in Midjourney before using the image-to-video feature of applications like Runway.

However, as popular as it is, Midjourney has been the subject of multiple lawsuits and is part of the ongoing debate about whether training AI models on copyrighted works found on the web constitutes fair use.
(Anyone who's used Midjourney knows it was trained on copyrighted work; it even sometimes generates watermarks and artist signatures in its outputs.)

Recently, the company announced it plans to launch hardware in the future, but it remains unclear what that will look like.

Samuel Axon, Senior Editor

Samuel Axon is a senior editor at Ars Technica, where he is the editorial director for tech and gaming coverage. He covers AI, software development, gaming, entertainment, and mixed reality. He has been writing about gaming and technology for nearly two decades at Engadget, PC World, Mashable, Vice, Polygon, Wired, and others. He previously ran a marketing and PR agency in the gaming industry, led editorial for the TV network CBS, and worked on social media marketing strategy for Samsung Mobile at the creative agency SPCSHP. He is also an independent software and game developer for iOS, Windows, and other platforms, and he is a graduate of DePaul University, where he studied interactive media and software development.