• Happy Gilmore 2

    Movie & Games Trailers

    Happy Gilmore 2

    By Vincent Frei - 04/06/2025

    He’s back on the green—and still swinging like no one else! Watch the official trailer for Happy Gilmore 2, starring Adam Sandler in his most iconic role. The chaos returns!
    The Production VFX Supervisor is Marcus Taormina.
    The Production VFX Producer is Mare McIntosh.
    Director: Kyle Newacheck
    Release Date: July 25, 2025 (Netflix)
    © Vincent Frei – The Art of VFX – 2025
  • The multiplayer stack behind MMORPG Pantheon: Rise of the Fallen

    Finding your own path is at the core of gameplay in Pantheon: Rise of the Fallen – players can go anywhere, climb anything, forge new routes, and follow their curiosity to find adventure. It’s not that different from how its creator, Visionary Realms, approaches building this MMORPG – they’re doing it their own way.

    Transporting players to the fantasy world of Terminus, Pantheon: Rise of the Fallen harkens back to classic MMOs, where accidental discovery while wandering through an open world and social interactions with other players are at the heart of the game experience.

    Creating any multiplayer game is a challenge – but a highly social online game at this scale is an epic quest. We sat down with lead programmer Kyle Olsen to talk about how the team is using Unity to connect players in this MMORPG fantasy world.

    So what makes Pantheon: Rise of the Fallen unique compared to other MMO games?

    It’s definitely the social aspect. You have to experience the world and move through it naturally. It can be a bit more of a grind in a way, but I think it connects you more to your character, to the game, and to the world, instead of just teleporting everywhere, joining LFG systems, or being placed in a dungeon. You learn the land a bit better; you have to navigate, and you use your eyes more than just bouncing around like a pinball from objective to objective, following quest markers. It’s more of a thought game.

    How are you managing synchronization between the player experience and specific world instances?

    We have our own network library, called ViNL, that we built for the socket transport layer. That’s the bread and butter for all of the zone communications – between zones, and player to zone. SQL Server in the back end – kind of standard stuff there.
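    ViNL is proprietary and its wire format isn't described here, so as a purely illustrative sketch of what a socket transport layer in this style does, here is a minimal length-prefixed framing scheme in Python. The header layout (2-byte channel id, 4-byte payload length) is an assumption for illustration, not ViNL's actual protocol:

    ```python
    import struct

    # Hypothetical framing: a 2-byte channel id and a 4-byte big-endian
    # payload length, followed by the payload bytes.
    HEADER = struct.Struct(">HI")

    def encode_frame(channel: int, payload: bytes) -> bytes:
        """Prefix a payload with its channel id and length."""
        return HEADER.pack(channel, len(payload)) + payload

    def decode_frames(buffer: bytearray):
        """Yield (channel, payload) for every complete frame in the
        buffer, leaving any trailing partial frame in place."""
        while len(buffer) >= HEADER.size:
            channel, length = HEADER.unpack_from(buffer)
            if len(buffer) < HEADER.size + length:
                break  # wait for more bytes from the socket
            payload = bytes(buffer[HEADER.size:HEADER.size + length])
            del buffer[:HEADER.size + length]
            yield channel, payload
    ```

    A zone server in this model would append each recv() into the buffer and drain complete frames every tick; partial frames simply wait for the next read.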
    But most of the transports are handled by our own network library.

    How do you approach asset loading for this giant world?

    We’ve got a step where we bake our continents out into tiles, and we’ve got different backends that we can plug into that. We’ve got one that just outputs standard Prefabs, one that outputs the subscenes we were using before Unity 6, and then actual full-on Unity scenes that you can load additively – so you can choose how you want to output your content. Before Unity 6, we had moved away from Prefabs and started loading the DOTS subscenes, built on BRG.

    We also have an output that can render directly to our own custom batch renderer group, just using scriptable objects and managing our own data. So we’ve been able to experiment with the different ones and see what yields the best client performance. Prior to Unity 6, we were outputting and rendering the entire continent with subscenes, but with Unity 6 we actually switched back to using Prefabs, with Instantiate Async and Addressables to manage everything.

    We’re using the Resident Drawer and GPU occlusion culling, which ended up yielding even better performance than subscenes and our own batch renderer group – I’m assuming because GPU occlusion culling just isn’t supported by some of the other render paths at the moment. So we’ve bounced around quite a bit: we landed on Addressables for managing all the memory and asset loading, and regular Prefab instantiation with the GPU Resident Drawer seems to give the best client-side performance at the moment.

    Did you upgrade to Unity 6 to take advantage of the GPU Resident Drawer, specifically?

    Actually, I really wanted it for the occlusion culling. I wasn’t aware that only certain render paths make use of occlusion culling, so we were attempting to use it with the same subscene rendering we were using prior to Unity 6 and realizing nothing was actually being culled.
    So we opted to switch back to the Prefab output to see what that looked like with the Resident Drawer, and occlusion culling and FPS went up.

    We had some issues initially, because Instantiate Async wasn’t in before Unity 6, so we had some stalls when we would instantiate our tiles. There were quite a few things being instantiated, but after we fixed a couple of bugs, switching over to Instantiate Async got rid of the stall on load, and the overall frame rate was higher after load – so it was a win-win.

    Were there any really remarkable productivity gains that came with the switch to Unity 6?

    Everything I've talked about so far was client-facing, so our players experienced those wins. On the developer side, the stability and performance of the Editor went up quite a bit. Editor stability in Unity 6 has improved substantially – it’s very rare to actually crash now. That alone has been, at least for the coding side, a huge win. It feels more stable in its entirety, for sure.

    How do you handle making changes and updates without breaking everything?

    We build with Addressables, using labels very heavily, and we do the Addressables packaging by labels. So if we edit a specific zone, an asset in a zone, or, say, a VFX that’s associated with a spell, only the bundles that touch that label get updated at all.

    And then, for our own content delivery: we have the game available on Steam and through our own patcher, and both handle the delta changes, where we’re just delivering small updates through those Addressable bundles. The netcode requires the same version to connect in the first place, so the network-library side of that is handled automatically in the handshake process.

    What guidance would you give someone who’s trying to tackle an MMO game or another ambitious multiplayer project?

    You start small. It's a step-by-step process.
    If you’re a small team, you can’t bite off too much – it’d be completely overwhelming. But that holds true for any larger-scale game, not just an MMO. The other thing is probably technology selection – making smart choices upfront and sticking to them. There’s going to be a lot of middleware and backend tech that you’ll have to wrangle and get working well together, and swapping to the newest cool thing all the time is not going to bode well.

    What’s the most exciting technical achievement for your team with this game?

    I think there aren’t many open-world MMOs, period, that have been pulled off in Unity. We don’t have a huge team, and we're making a game that is genuinely massive, so we have to focus on little isolated areas, develop them as best we can, and then move on and get feedback.

    The whole package together is fairly new ground – an MMO needs to feel like an MMO in spirit, with lots of people all around, doing their own thing. And we’ve pulled that off – I think better than pretty much any Unity MMO ever has. I think we can pat ourselves on the back for that.

    Get more insights from developers on Unity’s Resources page and here on the blog. Check out Pantheon: Rise of the Fallen in Early Access on Steam.
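    The label-driven patching scheme Olsen describes can be sketched in a few lines: when an asset changes, only bundles whose label covers that asset get rebuilt and shipped as a delta. This is an illustrative model in Python, not Visionary Realms' build code or the Unity Addressables API, and the asset and label names are made up:

    ```python
    # Hypothetical model of Addressables-style label packaging: each
    # bundle is keyed by a label, and an asset can carry several labels.
    asset_labels = {
        "zones/thronefast/terrain": ["zone_thronefast"],
        "zones/thronefast/props": ["zone_thronefast"],
        "vfx/fireball": ["spell_vfx"],
        "audio/ambient_forest": ["zone_avendyr"],
    }

    def bundles_to_rebuild(edited_assets, asset_labels):
        """Collect the labels touched by the edited assets; only those
        bundles are rebuilt, so a patch ships just the delta."""
        touched = set()
        for asset in edited_assets:
            touched.update(asset_labels.get(asset, []))
        return sorted(touched)

    # Editing one spell VFX leaves every zone bundle untouched.
    print(bundles_to_rebuild(["vfx/fireball"], asset_labels))  # ['spell_vfx']
    ```

    The point of the design is the inverted dependency: the patcher never diffs the whole continent, it just compares bundle hashes per label, which keeps updates small even for a massive world.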
  • I replaced my laptop with Microsoft's 12-inch Surface Pro for weeks - here's my buying advice now

    ZDNET's key takeaways
    - The 12-inch Microsoft Surface Pro is available now, starting at for the Platinum color and for the new Violet and Ocean colors.
    - The 12-inch version is exceedingly thin and light with a fast-charging battery, and the refreshed form factor looks more premium.
    - The 256GB of storage is not enough for a device at this price point, and the cost adds up quickly, as the keyboard, mouse, and even the power adapter are sold separately.

    Microsoft's 2025 12-inch Surface Pro is thinner, lighter, and a little more affordable, with a battery-efficient Snapdragon X Plus processor and a refreshed design. The latest version of the Surface Pro rounds out the lineup with a more affordable option focused on ultra-long battery life, new colors, and redesigned accessories to show off Windows' latest Copilot+ PC features.

    Also: I recommend this HP laptop to creatives and business pros alike - especially at nearly 50% off

    I replaced my laptop with the 12-inch Surface Pro for more than two weeks, and the Surface Pro seems to me to be more of an addition to the current lineup than a standalone upgrade, particularly in comparison to the enterprise models Microsoft released in January.

    The 2025 Surface Pro has relatively modest hardware, with 16GB of RAM and 256GB or 512GB of UFS storage, instead targeting a more everyday consumer who makes use of on-device AI and appreciates the ultraportability.
    Besides the smaller form factor, this year's Surface Pro comes in two new colors: Violet and Ocean. The default Platinum color starts at whereas the other two will run you bringing the starting price a little further away from that advertised low price.

    I must admit that the design on the 12-inch tablet looks better. It looks more like a premium tablet, with rounded corners, thin bezels, and the webcam moved to the back corner of the device.

    Also: Microsoft unveils new AI agent customization and oversight features at Build 2025

    Additionally, I'm a fan of the new Violet and Ocean colorways, which aren't what I'd call "bold," but at least they're not the same desaturated pastels we see everywhere else. The colors extend to the Surface Pro keyboards, which have been updated by removing the Alcantara fabric from the front of the keyboard for a cleaner, monochromatic matte look. Instead, the fabric is relegated to the back of the keyboard case, which gives a more premium tablet feel for storage and transport. The Surface keyboard is functional and satisfying to type on, with springy keys and a responsive, premium trackpad.

    Kyle Kucharski/ZDNET

    Additionally, the tablet snaps to the keyboard a little tighter and closer now, with no gap in the hinge, giving it a slightly smaller footprint on the desk. The Surface Pen also magnetically snaps to the back of the 12-inch model instead of storing on the keyboard. This requires you to store the device with the fabric facing down, as you don't want to squish the pen. When throwing the Surface Pro in a bag, the Pen tends to stay put, but it can come unattached if you're not paying attention.

    Microsoft wants to show off its new AI-driven Copilot+ features, and the 12-inch Surface Pro is a good conduit for marketing them to the consumer, especially with its attractive price point and the 45 TOPS Qualcomm Hexagon NPU.
    Also: I've tested dozens of work laptops - but I'd take this Lenovo to the office every day

    For example, the long-awaited Recall feature is still in Preview mode, but it's getting closer to a useful state. Other applications that leverage AI processes, particularly ones for creators like CapCut, DaVinci Resolve, and djay Pro, should feel smooth and snappy. This makes it a very AI-ready device for everyday users who don't need high-end hardware for demanding creative projects.

    Kyle Kucharski/ZDNET

    Running Windows on Qualcomm's Snapdragon X Plus chip shouldn't be too much of a problem for most users in this category, as the areas that saw the most compatibility issues, like gaming and legacy software, are less likely to apply to the targeted user. The 12-inch Surface Pro's modest hardware positions it as a competitive device in the family's lineup: 16GB of RAM and a maximum of 512GB of storage, paired with the Snapdragon X Plus and a 2196 x 1464 LCD display, target everyday users, while its 13-inch siblings can be loaded up with more premium hardware.

    Also: This ultraportable Windows laptop raised the bar for the MacBook Air

    That being said, the Snapdragon X Plus processor is snappy and responsive, excelling at tasks that the average consumer cares about: fast startup and app load times, smooth multitasking, and solid battery performance, whether in laptop or tablet mode. During my benchmarking of the 12-inch Surface Pro, I got numbers that place it around other thin-and-light laptops in the same price range, including Asus' Zenbook A14, which also features the Snapdragon X Plus processor, and HP's OmniBook X 14, one of the first Copilot+ PCs with 2024's Snapdragon X Elite chip.

                                      Cinebench 24 MC   Geekbench 6.2.2 SC   Geekbench 6.2.2 MC
    12-inch Microsoft Surface Pro     418               2,252                9,555
    Asus Zenbook A14                  541               2,133                10,624
    HP OmniBook X                     470               2,326                13,160
    The display is sharp and crisp, but it does cap out at 400 nits of brightness and a 90Hz refresh rate. Since it's a tablet, it's also quite glossy. In the office, for example, I found myself readjusting the device's angle numerous times throughout the day to account for glare from overhead lighting.

    Also: How to clear the cache on your Windows 11 PC

    Speaking of using the Surface Pro in the office, it works equally well as a laptop or a tablet, depending on what you need. Detached from the keyboard and armed with the Surface Pen, it becomes a snappy productivity tablet that allows for note-taking, prototyping, and freeform idea generation in Windows' Whiteboard app. You can also assign different actions to the Pen, including starting apps or performing functions with the button on the device or the "clicky" on the end. I will say that the Pen's performance can be variable, though. If you're running multiple programs in the background, you might notice lag while writing, especially if you're moving quickly.

    Kyle Kucharski/ZDNET

    Similarly, the location of the front-facing HD camera means it looks up at you from a low angle while connected to the keyboard, as the kickstand can only prop the tablet up so high. A clamshell laptop, by comparison, can sit at a 90-degree angle or less. In that sense, untethering the keyboard and using the device as a tablet might be more optimal for users who make frequent video calls.

    Also: The best laptops for graphic designers in 2025: Expert tested and reviewed

    Regarding battery life, the Snapdragon X Plus processor ensures that the battery drains at a mere trickle when the device is not in use, and it's good for over a full day's worth of work on one charge. Microsoft advertises 16 hours of battery life, and I got a little over 15 in our video playback test.
    With more sustained use, I got over 10 hours on a single charge, which isn't far off from the advertised 12 hours, without using all the max battery-efficiency settings. Couple this with the fact that the Surface Pro charges extremely fast: from a completely dead battery, you'll get to about 50% in 30 minutes, and around 80% in an hour. Of the Surface Pro family, the 12-inch is certainly the most battery-efficient and the fastest to charge.

    ZDNET's buying advice

    The 12-inch Microsoft Surface Pro completes the family's lineup with a thinner, lighter, and more battery-efficient tablet/laptop hybrid with refreshed colors and design. It comes with slightly more modest hardware for a lower starting price. If you're looking for a functional 2-in-1 tablet/laptop, enjoy using a stylus, and don't need a ton of local storage, it's a great option, especially for its long-lasting battery. It's an all-around sharp-looking device, and the premium keyboard case provides a satisfying tactile experience.

    Also: How to clear the cache on your Windows 11 PC

    The cost of the Surface Pro can quickly add up, however, as the Surface Keyboard, Surface Arc Mouse, and power adapter are sold separately, bringing the final cost over the mark. Combined with the low amount of local storage and modest memory, I'd recommend this device for users who are committed to the 12-inch form factor and want reliable battery life.

    Looking for the next best product? Get expert reviews and editor favorites with ZDNET Recommends.
    #replaced #laptop #with #microsoft039s #12inch
    I replaced my laptop with Microsoft's 12-inch Surface Pro for weeks - here's my buying advice now
    ZDNET's key takeaways The 12-inch Microsoft Surface Pro is available now starting at for the Platinum color, and for the new Violet and Ocean colors. The 12-inch version is exceedingly thin and light with a fast-charging battery, and the refreshed form factor looks more premium. The 256GB of storage is not enough for a device at this price point, and the cost adds up quickly, as the keyboard, mouse, and even the power adapter are sold separately. more buying choices Microsoft's 2025 12-inch Surface Pro is thinner, lighter, and a little more affordable, with a battery-efficient Snapdragon X Plus processor and refreshed design. The latest version of the Surface Pro rounds out the lineup with a more affordable option focused on ultra-long battery life, new colors, and redesigned accessories to show off Windows' latest Copilot+ PC features. Also: I recommend this HP laptop to creatives and business pros alike - especially at nearly 50% offI recently replaced my laptop with the 12-inch Surface Pro for more than two weeks now, and the Surface Pro seems to me to be more of an addition to the current lineup than a standalone upgrade, particularly in comparison to the enterprise models Microsoft released in January.The 2025 Surface Pro has relatively modest hardware, with 16GB of RAM and 256GB or 512GB of UFS storage, instead targeting a more everyday consumer who makes use of on-device AI and appreciates the ultraportability.  details View at Best Buy Besides the smaller form factor, this year's Surface Pro comes in two new colors: Violet and Ocean. The default Platinum color starts at whereas the other two will run you bringing the starting price a little further away from that advertised low price. I must admit that the design on the 12-inch tablet looks better. It looks more like a premium tabletwith rounded corners, thin bezels, and the webcam moved to the back corner of the device. 
Also: Microsoft unveils new AI agent customization and oversight features at Build 2025. Additionally, I'm a fan of the new Violet and Ocean colorways, which aren't what I'd call "bold", but at least they're not the same desaturated pastels we see everywhere else. The colors extend to the Surface Pro keyboards, which are updated by removing the alcantara fabric on the front of the keyboard for a cleaner, monochromatic matte look. Instead, the fabric is relegated to the back of the keyboard case, which has a more premium tablet feel for storage and transport. The Surface keyboard is functional and satisfying to type on, with springy keys and a responsive, premium trackpad. 

Kyle Kucharski/ZDNET

Additionally, the tablet snaps to the keyboard a little tighter and closer to the tablet now, with no gap in the hinge, giving it a slightly smaller footprint on the desk. The Surface Pen also magnetically snaps to the back of the 12-inch instead of storing on the keyboard. This requires you to store the device with the fabric facing down, as you don't want to squish the pen. When throwing the Surface Pro in a bag, the Pen also tends to stay put but can come unattached if you're not paying attention. Microsoft wants to show off its new AI-driven Copilot+ features, and the 12-inch Surface Pro is a good conduit for marketing them to the consumer, especially with its attractive price point and the 45 TOPS Qualcomm Hexagon NPU. Also: I've tested dozens of work laptops - but I'd take this Lenovo to the office every day. For example, the long-awaited Recall feature is still in Preview mode, but it's getting closer to a useful state. Other applications that leverage AI processes, particularly ones for creators like CapCut, DaVinci Resolve, and DJay Pro, should feel smooth and snappy. This makes it a very AI-ready device for everyday users who don't need high-end hardware for demanding creative projects. 
Kyle Kucharski/ZDNET

Running Windows on Qualcomm's Snapdragon X Plus chip shouldn't be too much of a problem for most users in this category, as the areas that saw the most compatibility issues, like gaming and connecting to legacy software, are less likely to apply to the targeted user. The 12-inch Surface Pro's modest hardware positions it as a competitive device in the family's lineup: the aforementioned 16GB of RAM and max 512GB of storage, paired with the Snapdragon X Plus and a 2196 x 1464 (220 PPI) LCD display, target everyday users, while its 13-inch siblings can be loaded up with more premium hardware. Also: This ultraportable Windows laptop raised the bar for the MacBook Air (and everything else). That being said, the Snapdragon X Plus processor is snappy and responsive, excelling at tasks the average consumer cares about: fast startup and app load times, smooth multitasking, and solid battery performance, whether in laptop or tablet mode. During my benchmarking of the 12-inch Surface Pro, I got numbers that place it around other thin and light laptops in the same price range, including Asus' Zenbook A14, which also features the Snapdragon X Plus processor, and HP's OmniBook X 14, one of the first Copilot+ PCs with the Snapdragon X Elite chip from 2024.

    Cinebench 24 MC | Geekbench 6.2.2 SC | Geekbench 6.2.2 MC
    12-inch Microsoft Surface Pro (Snapdragon X Plus): 418 | 2,252 | 9,555
    Asus Zenbook A14 (Snapdragon X Plus): 541 | 2,133 | 10,624
    HP OmniBook X (Snapdragon X Elite): 470 | 2,326 | 13,160

The display is sharp and crisp, but it does cap out at 400 nits of brightness and a 90Hz refresh rate. Since it's a tablet, it's also quite glossy. In the office, for example, I found myself readjusting the device's angle numerous times throughout the day to account for glare from overhead lighting. Also: How to clear the cache on your Windows 11 PC (and why it makes such a big difference). Speaking of using the Surface Pro in the office, it works equally well as a laptop or a tablet, depending on what you need. 
Detached from the keyboard and armed with the Surface Pen, it becomes a snappy productivity tablet that allows for note taking, prototyping, and freeform idea generation in Windows' Whiteboard app. You can also assign different actions to the Pen, including starting apps or performing functions with the button on the device or the "clicky" on the end. I will say that the Pen's performance can be variable, though. If you're running multiple programs in the background, you might notice lag while writing, especially if you're moving quickly. 

Kyle Kucharski/ZDNET

Similarly, the location of the front-facing HD camera means that it has a slightly angled-up orientation while connected to the keyboard, as the kickstand can only prop it up so high. Consider a clamshell laptop, for example, which can sit at a 90-degree angle or less. In that sense, untethering the keyboard and using it as a tablet might be more optimal for users who make frequent video calls. Also: The best laptops for graphic designers in 2025: Expert tested and reviewed. Regarding battery life, the Snapdragon X Plus processor ensures that it drains at a mere trickle when the device is not in use, and is good enough for over a full day's worth of work on one charge. Microsoft advertises 16 hours of battery life, and I got a little over 15 in our video playback test. Regarding more sustained use, I got over 10 hours on a single charge, which isn't far off from the advertised 12 hours without using all the max battery efficiency settings. Couple this with the fact that the Surface Pro charges extremely fast: from a completely dead battery, you'll get to about 50% in 30 minutes, and around 80% in an hour. Of the Surface Pro family, the 12-inch is certainly the most battery efficient and the fastest to charge. 

ZDNET's buying advice

The 12-inch Microsoft Surface Pro completes the family's lineup with a thinner, lighter, and more battery-efficient tablet/laptop hybrid with refreshed colors and design. 
It comes with slightly more modest hardware (16GB of RAM, 256GB of storage) for a lower starting price of $799. If you're looking for a functional 2-in-1 tablet/laptop, enjoy using a stylus, and don't need a ton of local storage, it's a great option, especially for its long-lasting battery. It's an all-around sharp-looking device, and the premium keyboard case provides a satisfying tactile experience. Also: How to clear the cache on your Windows 11 PC (and why it makes such a big difference). The cost of the Surface Pro can quickly add up, however, as the Surface Keyboard, Surface Arc mouse, and power adapter are sold separately, bringing the final cost over the $1,000 mark. Combined with the low amount of local storage and modest memory, I'd recommend this device for users who are committed to the 12-inch form factor and want reliable battery life. Looking for the next best product? Get expert reviews and editor favorites with ZDNET Recommends.
  • Real TikTokers are pretending to be Veo 3 AI creations for fun, attention

    The Turing test in reverse

    From music videos to "Are you a prompt?" stunts, "real" videos are presenting as AI

    Kyle Orland



    May 31, 2025 7:08 am

    Of course I'm an AI creation! Why would you even doubt it?

    Credit: Getty Images
    Since Google released its Veo 3 AI model last week, social media users have been having fun with its ability to quickly generate highly realistic eight-second clips complete with sound and lip-synced dialogue. TikTok's algorithm has been serving me plenty of Veo-generated videos featuring impossible challenges, fake news reports, and even surreal short narrative films, to name just a few popular archetypes.
    However, among all the AI-generated video experiments spreading around, I've also noticed a surprising counter-trend on my TikTok feed. Amid all the videos of Veo-generated avatars pretending to be real people, there are now also a bunch of videos of real people pretending to be Veo-generated avatars.
    “This has to be real. There’s no way it's AI.”
    I stumbled on this trend when the TikTok algorithm fed me this video topped with the extra-large caption "Google VEO 3 THIS IS 100% AI." As I watched and listened to the purported AI-generated band that appeared to be playing in the crowded corner of someone's living room, I read the caption containing the supposed prompt that had generated the clip: "a band of brothers with beards playing rock music in 6/8 with an accordion."

    @kongosmusic We are so cooked. This took 3 mins to generate. Simple prompt: “a band of brothers playing rock music in 6/8 with an accordion” ♬ original sound - KONGOS

    After a few seconds of taking those captions at face value, something started to feel a little off. After a few more seconds, I finally noticed the video was posted by Kongos, an indie band that you might recognize from their minor 2012 hit "Come With Me Now." And after a little digging, I discovered the band in the video was actually just Kongos, and the tune was a 9-year-old song that the band had dressed up as an AI creation to get attention.
    Here's the sad thing: It worked! Without the "Look what Veo 3 did!" hook, I might have quickly scrolled by this video before I took the time to listen to the song. The novel AI angle made me stop just long enough to pay attention to a Kongos song for the first time in over a decade.

    Kongos isn't the only musical act trying to grab attention by claiming their real performances are AI creations. Darden Bela posted that Veo 3 had "created a realistic AI music video" over a clip from what is actually a 2-year-old music video with some unremarkable special effects. Rapper GameBoi Pat dressed up an 11-month-old song with a new TikTok clip captioned "Google's Veo 3 created a realistic sounding rapper... This has to be real. There's no way it's AI". I could go on, but you get the idea.

    @gameboi_pat This has got to be real. There’s no way it’s AI #google #veo3 #googleveo3 #AI #prompts #areweprompts? ♬ original sound - GameBoi_pat

    I know it's tough to get noticed on TikTok, and that creators will go to great lengths to gain attention from the fickle algorithm. Still, there's something more than a little off-putting about flesh-and-blood musicians pretending to be AI creations just to make social media users pause their scrolling for a few extra seconds before they catch on to the joke.
    The whole thing evokes last year's stunt where a couple of podcast hosts released a posthumous "AI-generated" George Carlin routine before admitting that it had been written by a human after legal threats started flying. As an attention-grabbing stunt, the conceit still works. You want AI-generated content? I can pretend to be that!

    Are we just prompts?
    Some of the most existentially troubling Veo-generated videos floating around TikTok these days center around a gag known as "the prompt theory." These clips focus on various AI-generated people reacting to the idea that they are "just prompts" with various levels of skepticism, fear, or even conspiratorial paranoia.
    On the other side of that gag, some humans are making joke videos playing off the idea that they're merely prompts. RedondoKid used the conceit in a basketball trick shot video, saying "of course I'm going to make this. This is AI, you put that I'm going to make this in the prompt." User thisisamurica thanked his faux prompters for putting him in "a world with such delicious food" before theatrically choking on a forkful of meat. And comedian Drake Cummings developed TikTok skits pretending that it was actually AI video prompts forcing him to indulge in vices like shots of alcohol or online gambling.

    @justdrakenaround Goolgle’s New A.I. Veo 3 is at it again!! When will the prompts end?! #veo3 #google #ai #aivideo #skit ♬ original sound - Drake Cummings

    Beyond the obvious jokes, though, I've also seen a growing trend of TikTok creators approaching friends or strangers and asking them to react to the idea that "we're all just prompts." The reactions run the gamut from "get the fuck away from me" to "I blame that, I now have to pay taxes" to solipsistic philosophical musings from convenience store employees.
    I'm loath to call this a full-blown TikTok trend based on a few stray examples. Still, these attempts to exploit the confusion between real and AI-generated video are interesting to see. As one commenter on an "Are you a prompt?" ambush video put it: "New trend: Do normal videos and write 'Google Veo 3' on top of the video."
    Which one is real?
    The best Veo-related TikTok engagement hack I've stumbled on so far, though, might be the videos that show multiple short clips and ask the viewer to decide which are real and which are fake. One video I stumbled on shows an increasing number of "Veo 3 Goth Girls" across four clips, challenging in the caption that "one of these videos is real... can you guess which one?" In another example, two similar sets of kids are shown hanging out in cars while the caption asks, "Are you able to identify which scene is real and which one is from veo3?"

    @spongibobbu2 One of these videos is real… can you guess which one? #veo3 ♬ original sound - Jett

    After watching both of these videos on loop a few times, I'm relatively convinced that every single clip in them is a Veo creation. The fact that I watched these videos multiple times shows how effective the "Real or Veo" challenge framing is at grabbing my attention. Additionally, I'm still not 100 percent confident in my assessments, which is a testament to just how good Google's new model is at creating convincing videos.

    There are still some telltale signs for distinguishing a real video from a Veo creation, though. For one, Veo clips are still limited to just eight seconds, so any video that runs longer is almost certainly not generated by Google's AI. Looking back at a creator's other videos can also provide some clues—if the same person was appearing in "normal" videos two weeks ago, it's unlikely they would suddenly be appearing in Veo creations.
    There's also a subtle but distinctive style to most Veo creations that can distinguish them from the kind of candid handheld smartphone videos that usually fill TikTok. The lighting in a Veo video tends to be too bright, the camera movements a bit too smooth, and the edges of people and objects a little too polished. After you watch enough "genuine" Veo creations, you can start to pick out the patterns.
    Regardless, TikTokers trying to pass off real videos as fakes—even as a joke or engagement hack—is a recognition that video sites are now deep in the "deep doubt" era, where you have to be extra skeptical of even legitimate-looking video footage. And the mere existence of convincing AI fakes makes it easier than ever to claim real events captured on video didn't really happen, a problem that political scientists call the liar's dividend. We saw this when then-candidate Trump accused Democratic nominee Kamala Harris of "A.I.'d" crowds in real photos of her Detroit airport rally.
    For now, TikTokers of all stripes are having fun playing with that idea to gain social media attention. In the long term, though, the implications for discerning truth from reality are more troubling.

    Kyle Orland
    Senior Gaming Editor

    Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.

    13 Comments
    #real #tiktokers #are #pretending #veo
    Real TikTokers are pretending to be Veo 3 AI creations for fun, attention
    The turing test in reverse Real TikTokers are pretending to be Veo 3 AI creations for fun, attention From music videos to "Are you a prompt?" stunts, "real" videos are presenting as AI Kyle Orland – May 31, 2025 7:08 am | 13 Of course I'm an AI creation! Why would you even doubt it? Credit: Getty Images Of course I'm an AI creation! Why would you even doubt it? Credit: Getty Images Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more Since Google released its Veo 3 AI model last week, social media users have been having fun with its ability to quickly generate highly realistic eight-second clips complete with sound and lip-synced dialogue. TikTok's algorithm has been serving me plenty of Veo-generated videos featuring impossible challenges, fake news reports, and even surreal short narrative films, to name just a few popular archetypes. However, among all the AI-generated video experiments spreading around, I've also noticed a surprising counter-trend on my TikTok feed. Amid all the videos of Veo-generated avatars pretending to be real people, there are now also a bunch of videos of real people pretending to be Veo-generated avatars. “This has to be real. There’s no way it's AI.” I stumbled on this trend when the TikTok algorithm fed me this video topped with the extra-large caption "Google VEO 3 THIS IS 100% AI." As I watched and listened to the purported AI-generated band that appeared to be playing in the crowded corner of someone's living room, I read the caption containing the supposed prompt that had generated the clip: "a band of brothers with beards playing rock music in 6/8 with an accordion." @kongosmusicWe are so cooked. This took 3 mins to generate. Simple prompt: “a band of brothers playing rock music in 6/8 with an accordion”♬ original sound - KONGOS After a few seconds of taking those captions at face value, something started to feel a little off. 
After a few more seconds, I finally noticed the video was posted by Kongos, an indie band that you might recognize from their minor 2012 hit "Come With Me Now." And after a little digging, I discovered the band in the video was actually just Kongos, and the tune was a 9-year-old song that the band had dressed up as an AI creation to get attention. Here's the sad thing: It worked! Without the "Look what Veo 3 did!" hook, I might have quickly scrolled by this video before I took the time to listen to thesong. The novel AI angle made me stop just long enough to pay attention to a Kongos song for the first time in over a decade. Kongos isn't the only musical act trying to grab attention by claiming their real performances are AI creations. Darden Bela posted that Veo 3 had "created a realistic AI music video" over a clip from what is actually a 2-year-old music video with some unremarkable special effects. Rapper GameBoi Pat dressed up an 11-month-old song with a new TikTok clip captioned "Google's Veo 3 created a realistic sounding rapper... This has to be real. There's no way it's AI". I could go on, but you get the idea. @gameboi_pat This has got to be real. There’s no way it’s AI 😩 #google #veo3 #googleveo3 #AI #prompts #areweprompts? ♬ original sound - GameBoi_pat I know it's tough to get noticed on TikTok, and that creators will go to great lengths to gain attention from the fickle algorithm. Still, there's something more than a little off-putting about flesh-and-blood musicians pretending to be AI creations just to make social media users pause their scrolling for a few extra seconds before they catch on to the joke. The whole thing evokes last year's stunt where a couple of podcast hosts released a posthumous "AI-generated" George Carlin routine before admitting that it had been written by a human after legal threats started flying. As an attention-grabbing stunt, the conceit still works. You want AI-generated content? I can pretend to be that! 
Are we just prompts?

Some of the most existentially troubling Veo-generated videos floating around TikTok these days center around a gag known as "the prompt theory." These clips focus on various AI-generated people reacting to the idea that they are "just prompts" with various levels of skepticism, fear, or even conspiratorial paranoia.

On the other side of that gag, some humans are making joke videos playing off the idea that they're merely prompts. RedondoKid used the conceit in a basketball trick shot video, saying "of course I'm going to make this. This is AI, you put that I'm going to make this in the prompt." User thisisamurica thanked his faux prompters for putting him in "a world with such delicious food" before theatrically choking on a forkful of meat. And comedian Drake Cummings developed TikTok skits pretending that it was actually AI video prompts forcing him to indulge in vices like shots of alcohol or online gambling ("Goolgle’s [sic] New A.I. Veo 3 is at it again!! When will the prompts end?!" Cummings jokes in the caption).

Beyond the obvious jokes, though, I've also seen a growing trend of TikTok creators approaching friends or strangers and asking them to react to the idea that "we're all just prompts." The reactions run the gamut from "get the fuck away from me" to "I blame that [prompter], I now have to pay taxes" to solipsistic philosophical musings from convenience store employees.

I'm loath to call this a full-blown TikTok trend based on a few stray examples. Still, these attempts to exploit the confusion between real and AI-generated video are interesting to see. As one commenter on an "Are you a prompt?" ambush video put it: "New trend: Do normal videos and write 'Google Veo 3' on top of the video."

Which one is real?

The best Veo-related TikTok engagement hack I've stumbled on so far, though, might be the videos that show multiple short clips and ask the viewer to decide which are real and which are fake. One video I stumbled on shows an increasing number of "Veo 3 Goth Girls" across four clips, challenging in the caption that "one of these videos is real... can you guess which one?" In another example, two similar sets of kids are shown hanging out in cars while the caption asks, "Are you able to identify which scene is real and which one is from veo3?"

After watching both of these videos on loop a few times, I'm relatively (but not entirely) convinced that every single clip in them is a Veo creation. The fact that I watched these videos multiple times shows how effective the "Real or Veo" challenge framing is at grabbing my attention. Additionally, I'm still not 100 percent confident in my assessments, which is a testament to just how good Google's new model is at creating convincing videos.

There are still some telltale signs for distinguishing a real video from a Veo creation, though. For one, Veo clips are still limited to just eight seconds, so any video that runs longer (without an apparent change in camera angle) is almost certainly not generated by Google's AI. Looking back at a creator's other videos can also provide some clues—if the same person was appearing in "normal" videos two weeks ago, it's unlikely they would be appearing in Veo creations suddenly. There's also a subtle but distinctive style to most Veo creations that can distinguish them from the kind of candid handheld smartphone videos that usually fill TikTok. The lighting in a Veo video tends to be too bright, the camera movements a bit too smooth, and the edges of people and objects a little too polished. After you watch enough "genuine" Veo creations, you can start to pick out the patterns.

Regardless, TikTokers trying to pass off real videos as fakes—even as a joke or engagement hack—is a recognition that video sites are now deep in the "deep doubt" era, where you have to be extra skeptical of even legitimate-looking video footage. And the mere existence of convincing AI fakes makes it easier than ever to claim real events captured on video didn't really happen, a problem that political scientists call the liar's dividend. We saw this when then-candidate Trump accused Democratic nominee Kamala Harris of "A.I.'d" crowds in real photos of her Detroit airport rally.

For now, TikTokers of all stripes are having fun playing with that idea to gain social media attention. In the long term, though, the implications for discerning truth from reality are more troubling.

Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.
    ARSTECHNICA.COM
    Real TikTokers are pretending to be Veo 3 AI creations for fun, attention
  • Mission: Impossible – The Final Reckoning Fan Theory Fixes Series’ Most Controversial Twist

    A new fan theory about the ending of Mission: Impossible - The Final Reckoning is gaining steam online. And it would fix what some consider to be the series' biggest mistake.

By Tom Chapman | May 30, 2025

    Photo: Paramount Pictures

    This article contains Mission: Impossible – The Final Reckoning spoilers.
    For now it looks like Christopher McQuarrie’s Mission: Impossible – The Final Reckoning really could be the end of the long-running spy series. While there’s plenty of talk about Tom Cruise hanging up his badge as the Impossible Missions Force’s Ethan Hunt or possibly handing over the baton to one of the many other unwilling recruits, there’s plenty of evidence that we’re not done yet. The critic scores and box office point to an appetite for Mission: Impossible 9, and now a popular online theory is taking off that a fan-favorite could soon be back in action.
After Brian De Palma’s original Mission: Impossible, few franchise deaths stick out more than that of Rebecca Ferguson’s Ilsa Faust in 2023’s Mission: Impossible – Dead Reckoning. Although Ilsa was seemingly killed by Esai Morales’ villainous Gabriel during a tense Venice action scene, the fact that her death seemed so sudden and was glossed over so quickly led many to believe she’d be back for The Final Reckoning. That’s sadly not the case, but what about in the franchise’s future?

Previous outings have shown that Ethan’s dangerous career path affects his ability to hold down a relationship (Michelle Monaghan’s Julia in Mission: Impossible III). Additionally, the franchise is no stranger to bringing characters back from the dead (Jon Voight’s Jim Phelps in Mission: Impossible springs to mind). When both features are coupled with Ilsa’s somewhat underwhelming death, it’s no surprise that fans are clinging onto the idea she’ll return in the inevitable next movie. And during The Final Reckoning’s final scene, where Ethan splits from his team in London, eagle-eyed fans spotted him veering close to an unnamed woman who looks a lot like Ferguson’s dearly departed assassin. Some suggested it was Hayley Atwell’s Grace, but with her having already said her goodbyes and gone in a different direction, it clearly can’t be her.

Supporters of the theory have latched onto footage of Ilsa from Fallout and compared it to the mysterious Final Reckoning woman. The stranger has a similar wavy hairstyle to Ilsa’s and a similar choice of baggy clothes. It would also be a neat parallel of the pair parting ways and going in different directions (in London, no less) during Rogue Nation.
    Others have likened this theory to Christian Bale’s Bruce Wayne meeting with Anne Hathaway’s Selina Kyle after he faked his death in The Dark Knight Rises. Given Ferguson’s raised profile in Silo and the Dune movies, landing her again would be a major coup, but what has the star herself said?
    Ferguson has previously explained why she felt the need to step away from Mission: Impossible, telling the Unwrapped podcast how it was more than just her three-movie deal being done: “Ilsa was becoming a team player. And we all can want different things, but for me, Ilsa was rogue. Ilsa was naughty. Ilsa was unpredictable. There was a lot of characters coming in, not leaving enough space for what she had been.”
We previously said how Ilsa’s Dead Reckoning death effectively ‘fridged’ her character to catapult Ethan’s arc forward and leave more room for Grace to step up as a franchise lead. Most frustratingly, after becoming a mainstay of the previous two movies, she was forced to take a backseat in the first half of Dead Reckoning and given a quick demise that was barely referenced afterward. Going against the idea that we’ll see Ilsa again, Dead Reckoning’s Arabian-set opening already had Ethan help her fake her death. It’s true that we don’t see what happens to her body, but a double fake out might be too much even for a franchise that’s taught us to never trust what we see thanks to its mask technology and old-fashioned sleight of hand.
Another reason you shouldn’t start cheering Ilsa’s welcome return to Mission: Impossible is that McQuarrie might have shut down the theory before it even got to do the rounds. The issue of Ilsa’s absence has been a hotly contested one, especially considering Ferguson only appeared via archive footage without filming anything new. Despite the controversy, McQuarrie told the Happy Sad Confused podcast that “it’s the cost versus benefit. The death of essential characters has followed Ethan [Hunt] throughout every one of these movies. I don’t think up until that point a character that resonated so deeply with the audience had died.” While the director says he understands why some were dissatisfied with how it happened, he concluded, “Which is where I thought that wouldn’t motivate me to undo the one thing that gives Mission: Impossible teeth, which is ‘death is permanent’.”
    It’s no secret that the Mission: Impossible movies have tried their best to tie up loose ends. Thandiwe Newton denied rumors she was asked to reprise her role as Nyah Nordoff-Hall in Mission: Impossible III, Jeremy Renner recently told the Happy Sad Confused podcast that he turned down another chance to play William Brandt because he wanted to spend more time with his daughter, and Maggie Q told Yahoo in 2020 that she had to turn down two opportunities to reappear as Zhen Li due to filming commitments.

Unfortunately for Faust fans, it sounds like McQuarrie thinks she got the ending he wanted. It might be hard to keep Ferguson’s return a secret if there’s another Mission: Impossible, and we’re still a long way from potentially seeing Ilsa Faust again.

    WWW.DENOFGEEK.COM
    Mission: Impossible – The Final Reckoning Fan Theory Fixes Series’ Most Controversial Twist
  • Nick Clegg says asking artists for use permission would ‘kill’ the AI industry

As policy makers in the UK weigh how to regulate the AI industry, Nick Clegg, former UK deputy prime minister and former Meta executive, claimed a push for artist consent would “basically kill” the AI industry.

Speaking at an event promoting his new book, Clegg said the creative community should have the right to opt out of having their work used to train AI models. But he claimed it wasn’t feasible to ask for consent before ingesting their work first.

“I think the creative community wants to go a step further,” Clegg said according to The Times. “Quite a lot of voices say, ‘You can only train on my content, [if you] first ask’. And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data.”

“I just don’t know how you go around, asking everyone first. I just don’t see how that would work,” Clegg said. “And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight.”

The comments follow a back-and-forth in Parliament over new legislation that aims to give creative industries more insight into how their work is used by AI companies. An amendment to the Data (Use and Access) Bill would require technology companies to disclose what copyrighted works were used to train AI models. Paul McCartney, Dua Lipa, Elton John, and Andrew Lloyd Webber are among the hundreds of musicians, writers, designers, and journalists who signed an open letter in support of the amendment earlier in May.

The amendment — introduced by Beeban Kidron, who is also a film producer and director — has bounced around gaining support. But on Thursday members of parliament rejected the proposal, with technology secretary Peter Kyle saying that “Britain’s economy needs both sectors to succeed and to prosper.” Kidron and others have said a transparency requirement would allow copyright law to be enforced, and that AI companies would be less likely to “steal” work in the first place if they are required to disclose what content they used to train models.

In an op-ed in the Guardian, Kidron promised that “the fight isn’t over yet,” as the Data (Use and Access) Bill returns to the House of Lords in early June.
    WWW.THEVERGE.COM
    Nick Clegg says asking artists for use permission would ‘kill’ the AI industry
    As policy makers in the UK weigh how to regulate the AI industry, Nick Clegg, former UK deputy prime minister and former Meta executive, claimed a push for artist consent would “basically kill” the AI industry.Speaking at an event promoting his new book, Clegg said the creative community should have the right to opt out of having their work used to train AI models. But he claimed it wasn’t feasible to ask for consent before ingesting their work first.“I think the creative community wants to go a step further,” Clegg said according to The Times. “Quite a lot of voices say, ‘You can only train on my content, [if you] first ask’. And I have to say that strikes me as somewhat implausible because these systems train on vast amounts of data.”“I just don’t know how you go around, asking everyone first. I just don’t see how that would work,” Clegg said. “And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight.”The comments follow a back-and-forth in Parliament over new legislation that aims to give creative industries more insight into how their work is used by AI companies. An amendment to the Data (Use and Access) Bill would require technology companies to disclose what copyrighted works were used to train AI models. Paul McCartney, Dua Lipa, Elton John, and Andrew Lloyd Webber are among the hundreds of musicians, writers, designers, and journalists who signed an open letter in support of the amendment earlier in May.The amendment — introduced by Beeban Kidron, who is also a film producer and director — has bounced around gaining support. 
But on Thursday members of parliament rejected the proposal, with technology secretary Peter Kyle saying the “Britain’s economy needs both [AI and creative] sectors to succeed and to prosper.” Kidron and others have said a transparency requirement would allow copyright law to be enforced, and that AI companies would be less likely to “steal” work in the first place if they are required to disclose what content they used to train models.In an op-ed in the Guardian Kidron promised that “the fight isn’t over yet,” as the Data (Use and Access) Bill returns to the House of Lords in early June.See More:
    0 Comments 0 Shares
  • 200 mph for 500 miles: How IndyCar drivers prepare for the big race

    Memorial Day Sunday

    Andretti Global's Kyle Kirkwood and Marcus Ericsson talk to us about the Indy 500.

    Jonathan M. Gitlin



    May 24, 2025 11:30 am


    #28, Marcus Ericsson, Andretti Global Honda prior to the NTT IndyCar Series 109th Running of the Indianapolis 500 at Indianapolis Motor Speedway on May 15, 2025 in Indianapolis, Indiana.

    Credit:

    Brandon Badraoui/Lumen via Getty Images



    This coming weekend is a special one for most motorsport fans. There are Formula 1 races in Monaco and NASCAR races in Charlotte. And arguably towering over them both is the Indianapolis 500, being held this year for the 109th time. America's oldest race is also one of its toughest: The track may have just four turns, but the cars negotiate them going three times faster than you drive on the highway, inches from the wall. For hours. At least at Le Mans, you have more than one driver per car.
    This year's race promises to be an exciting one. The track is sold out for the first time since the centenary race in 2016. A rookie driver and a team new to the series took pole position. Two very fast cars are starting at the back thanks to another conflict-of-interest scandal involving Team Penske, the second in two years for a team whose owner also owns the track and the series. And the cars are trickier to drive than they have been for many years, thanks to a new supercapacitor-based hybrid system that has added more than 100 lbs to the rear of the car, shifting the weight distribution further back.
    Ahead of Sunday's race, I spoke with a couple of IndyCar drivers and some engineers to get a better sense of how they prepare and what to expect.

    This year, the cars are harder to drive thanks to a hybrid system that has altered the weight balance.

    Credit:

    Geoff Miller/Lumen via Getty Images

    Concentrate
    It all comes "from months of preparation," said Marcus Ericsson, winner of the race in 2022 and one of Andretti Global's drivers in this year's event. "When we get here to the month of May, it's just such a busy month. So you've got to be prepared mentally—and basically before you get to the month of May because if you start doing it now, it's too late," he told me.

    The drivers spend all month at the track, with a race on the road course earlier this month. Then there's testing on the historic oval, followed by qualifying last weekend and the race this coming Sunday. "So all those hours you put in in the winter, really, and leading up here to the month of May—it's what pays off now," Ericsson said. That work involved multiple sessions of physical training each week, and Ericsson says he also does weekly mental coaching sessions.
    "This is a mental challenge," Ericsson told me. "Doing those speeds with our cars, you can't really afford to have a split second of loss of concentration because then you might be in the wall and your day is over and you might hurt yourself."
    When drivers get tired or their focus slips, that's when mistakes happen, and a mistake at Indy often has consequences.

    Ericsson is sponsored by the antihistamine Allegra and its anti-drowsy-driving campaign. Fans can scan the QR codes on the back of his pit crew's shirts for a "gamified experience."

    Credit:

    Andretti Global/Allegra

    Simulate
    Being mentally and physically prepared is part of it. It also helps if you can roll the race car off the transporter and onto the track with a setup that works rather than spending the month chasing the right combination of dampers, springs, wing angles, and so on. And these days, that means a lot of simulation testing.
    The multi-axis driver-in-the-loop simulators might look like just a very expensive video game, but these multimillion-dollar setups aren't about having fun. "Everything that you are feeling or changing in the sim is ultimately going to reflect directly to what happens on track," explained Kyle Kirkwood, teammate to Ericsson at Andretti Global and one of only two drivers to have won an IndyCar race in 2025.
    Andretti, like the other teams using Honda engines, uses the new HRC simulator in Indiana. "And yes, it's a very expensive asset, but it's also likely cheaper than going to the track and doing the real thing," Kirkwood said. "And it's a much more controlled environment than being at the track because temperature changes or track conditions or wind direction play a huge factor with our car."

    A high degree of correlation between the simulation and the track is what makes it a powerful tool. "We run through a sim, and you only get so many opportunities, especially at a place like Indianapolis, where you go from one day to the next and the temperature swings, or the wind conditions, or whatever might change drastically," Kirkwood said. "You have to be able to sim it and be confident with the sim that you're running to go out there and have a similar balance or a similar performance."

    Andretti Global's Kyle Kirkwood is the only driver other than Álex Palou to have won an IndyCar race in 2025.

    Credit:

    Alison Arena/Andretti Global

    "So you have to make adjustments, whether it's a spring rate, whether it's keel ballast or just overall, maybe center of pressure, something like that," Kirkwood said. "You have to be able to adjust to it. And that's where the sim tool comes in play. You move the weight balance back, and you're like, OK, now what happens with the balance? How do I tune that back in? And you run that all through the sim, and for us, it's been mirror-perfect going to the track when we do that."
    More impressively, a lot of that work was done months ago. "I would say most of it, we got through it before the start of this season," Kirkwood said. "Once we get into the season, we only get a select few days because every Honda team has to run on the same simulator. Of course, it's different with the engineering sim; those are running nonstop."
    Sims are for engineers, too
    An IndyCar team is more than just its drivers—"the spacer between the seat and the wheel," according to Kirkwood—and the engineers rely heavily on sim work now that real-world testing is so highly restricted. And they use a lot more than just driver-in-the-loop (DiL).

    "Digital simulation probably goes to a higher level," explained Scott Graves, engineering manager at Andretti Global. "A lot of the models we develop work in the DiL as well as our other digital tools. We try to develop universal models, whether that's tire models, engine models, or transmission models."
    "Once you get into a fully digital model, then I think your optimization process starts kicking in," Graves said. "You're not just changing the setting and running a pretend lap with a driver holding a wheel. You're able to run through numerous settings and optimization routines and step through a massive number of permutations on a car. Obviously, you're looking for better lap times, but you're also looking for fuel efficiency and a lot of other parameters that go into crossing the finish line first."

    Parts like this anti-roll bar are simulated thousands of times.

    Credit:

    Siemens/Andretti Global

    As an example, Graves points to the dampers. "The shock absorber is a perfect example where that's a highly sophisticated piece of equipment on the car and it's very open for team development. So our cars have fully customized designs there that are optimized for how we run the car, and they may not be good on another team's car because we're so honed in on what we're doing with the car," he said.
    "The more accurate a digital twin is, the more we are able to use that digital twin to predict the performance of the car," said David Taylor, VP of industry strategy at Siemens DISW, which has partnered with Andretti for some years now. "It will never be as complete and accurate as we want it to be. So it's a continuous pursuit, and we keep adding technology to our portfolio and acquiring companies to try to provide more and more tools to people like Scott so they can more accurately predict that performance."

    What to expect on Sunday?
    Kirkwood was bullish about his chances despite starting relatively deep in the field, qualifying in 23rd place. "We've been phenomenal in race trim and qualifying," he said. "We had a bit of a head-scratcher if I'm being honest—I thought we would definitely be a top-six contender, if not a front row contender, and it just didn't pan out that way on Saturday qualifying."
    "But we rolled back out on Monday—the car was phenomenal. Once again, we feel very, very racy in traffic, which is a completely different animal than running qualifying," Kirkwood said. "So I'm happy with it. I think our chances are good. We're starting deep in the field, but so are a lot of other drivers. So you can expect a handful of us to move forward."
    The more nervous hybrid IndyCars with their more rearward weight bias will probably result in more cautions, according to Ericsson, who will line up sixth for the start of the race on Sunday.
    "Whereas in previous years you could have a bit of a moment and it would scare you, you usually get away with it," he said. "This year, if you have a moment, it usually ends up with you being in the fence. I think that's why we've seen so many crashes this year—because a pendulum effect from the rear of the car that when you start losing it, this is very, very difficult or almost impossible to catch."
    "I think it's going to mean that the race is going to be quite a few incidents with people making mistakes," Ericsson said. "In practice, if your car is not behaving well, you bring it to the pit lane, right? You can do adjustments, whereas in the race, you have to just tough it out until the next pit stop and then make some small adjustments. So if you have a bad car at the start of a race, it's going to be a tough one. So I think it's going to be a very dramatic and entertaining race."

    Jonathan M. Gitlin
    Automotive Editor


    Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica's automotive coverage. He lives in Washington, DC.
