• Taiwan is rushing to build its own drone industry. With a potential conflict with China looming, it seems they have to move fast. Unmanned vehicles are becoming ever more essential in warfare, but hey, who would have thought we'd end up here? It all feels a bit tedious. It just looks like another race to keep up.

    #Taiwan #Drones #Conflict #Industry #Technology
    Taiwan Is Rushing to Make Its Own Drones Before It's Too Late
    Unmanned vehicles are increasingly becoming essential weapons of war. But with a potential conflict with China looming large, Taiwan is scrambling to build a domestic drone industry from scratch.
  • It's incredible how the developers of Oblivion Remastered think they're being clever by offering us 'The Thieves Den' as a player home! Seriously? Is this the best they can do for nefarious playstyles? A home that costs 1,000 gold just to get a 'Fence' is simply a mockery. Why can't they give us something more interesting instead of a plain hideout for thieves? This kind of content is an insult to players' creativity. The idea of a home for thieves is a good one, but this execution is mediocre and disappointing. It's time to demand more!

    #OblivionRemastered #Thieves
    KOTAKU.COM
    The Thieves Den Player Home In Oblivion Remastered Is Perfect For Nefarious Playstyles
    Who doesn’t love free player housing? If you’re a thief, any of the city’s homes will suffice as a temporary base of operations. But none of them come with a Fence (albeit for a small sum of 1,000 Gold). You know what does? The Thieves Den in Oblivion Remastered.
  • Fusion and AI: How private sector tech is powering progress at ITER

    In April 2025, at the ITER Private Sector Fusion Workshop in Cadarache, something remarkable unfolded. In a room filled with scientists, engineers and software visionaries, the line between big science and commercial innovation began to blur.  
    Three organisations – Microsoft Research, Arena and Brigantium Engineering – shared how artificial intelligence (AI), already transforming everything from language models to logistics, is now stepping into a new role: helping humanity to unlock the power of nuclear fusion. 
    Each presenter addressed a different part of the puzzle, but the message was the same: AI isn’t just a buzzword anymore. It’s becoming a real tool – practical, powerful and indispensable – for big science and engineering projects, including fusion. 
    “If we think of the agricultural revolution and the industrial revolution, the AI revolution is next – and it’s coming at a pace which is unprecedented,” said Kenji Takeda, director of research incubations at Microsoft Research. 
    Microsoft’s collaboration with ITER is already in motion. Just a month before the workshop, the two teams signed a Memorandum of Understanding (MoU) to explore how AI can accelerate research and development. This follows ITER’s initial use of Microsoft technology to empower its teams.
    A chatbot built on the Azure OpenAI service helps staff navigate technical knowledge spanning more than a million ITER documents through natural conversation. GitHub Copilot assists with coding, while AI helps to resolve IT support tickets – those everyday but essential tasks that keep the lights on. 
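    The article doesn't go into ITER's implementation, but the pattern it describes – conversational answers grounded in a document corpus – is straightforward to sketch. The following is a minimal, hypothetical illustration against an Azure OpenAI chat deployment; the endpoint, deployment name and the idea of passing retrieved excerpts in the prompt are assumptions for illustration, not ITER's actual setup.

    # Minimal sketch: document-grounded Q&A via Azure OpenAI (names are placeholders).
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://example.openai.azure.com",  # placeholder endpoint
        api_key="YOUR_KEY",
        api_version="2024-06-01",
    )

    def ask(question: str, passages: list[str]) -> str:
        # In a full system the passages would come from a search index over
        # the million-document corpus; here they are simply passed in.
        context = "\n\n".join(passages)
        resp = client.chat.completions.create(
            model="chat-deployment",  # name of the Azure deployment, not a base model
            messages=[
                {"role": "system",
                 "content": "Answer only from the excerpts below.\n" + context},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content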
    But Microsoft’s vision goes deeper. Fusion demands materials that can survive extreme conditions – heat, radiation, pressure – and that’s where AI shows a different kind of potential. MatterGen, a Microsoft Research generative AI model for materials, designs entirely new materials based on specific properties.
    “It’s like ChatGPT,” said Takeda, “but instead of ‘Write me a poem’, we ask it to design a material that can survive as the first wall of a fusion reactor.” 
    The next step? MatterSim – a simulation tool that predicts how these imagined materials will behave in the real world. By combining generation and simulation, Microsoft hopes to uncover materials that don’t yet exist in any catalogue. 
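    Neither model's API is described in the article, so the loop below is only a schematic of the generate-then-simulate idea, with stand-in functions where MatterGen and MatterSim would slot in; every name in it is hypothetical.

    # Schematic generate-then-screen loop. generate_candidates() and
    # predict_stability() are hypothetical stand-ins for generative and
    # simulation models such as MatterGen and MatterSim.
    import random

    def generate_candidates(target_properties: dict, n: int) -> list[str]:
        # Stand-in for a generative materials model.
        return [f"candidate-{i}" for i in range(n)]

    def predict_stability(material: str, temperature_k: float) -> float:
        # Stand-in for a behaviour simulator; returns a 0-1 survival score.
        return random.random()

    def screen_materials(target_properties: dict, n: int = 100,
                         threshold: float = 0.9) -> list[tuple[str, float]]:
        survivors = []
        for material in generate_candidates(target_properties, n):
            score = predict_stability(material, temperature_k=1000.0)
            if score >= threshold:
                survivors.append((material, score))
        # Best-scoring candidates would go on to physical testing.
        return sorted(survivors, key=lambda ms: ms[1], reverse=True)

    print(screen_materials({"first_wall": True})[:3])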
    While Microsoft tackles the atomic scale, Arena is focused on a different challenge: speeding up hardware development. As general manager Michael Frei put it: “Software innovation happens in seconds. In hardware, that loop can take months – or years.” 
    Arena’s answer is Atlas, a multimodal AI platform that acts as an extra set of hands – and eyes – for engineers. It can read data sheets, interpret lab results, analyse circuit diagrams and even interact with lab equipment through software interfaces. “Instead of adjusting an oscilloscope manually,” said Frei, “you can just say, ‘Verify the I2C [inter-integrated circuit] protocol’, and Atlas gets it done.” 
    It doesn’t stop there. Atlas can write and adapt firmware on the fly, responding to real-time conditions. That means tighter feedback loops, faster prototyping and fewer late nights in the lab. Arena aims to make building hardware feel a little more like writing software – fluid, fast and assisted by smart tools. 
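    Atlas itself is proprietary, and the article doesn't document its interfaces. As a generic illustration of the kind of scripted instrument access such a tool automates, here is a plain SCPI-over-pyvisa snippet; the resource address and the exact commands vary by oscilloscope vendor and are placeholders here.

    # Generic scripted oscilloscope access via SCPI/pyvisa (illustrative only).
    import pyvisa

    rm = pyvisa.ResourceManager()
    scope = rm.open_resource("USB0::0x0699::0x0401::C000001::INSTR")  # placeholder address

    print(scope.query("*IDN?"))            # standard identification query
    scope.write(":TIMEBASE:SCALE 1e-6")    # set 1 us/div (vendor-dependent syntax)
    freq = float(scope.query(":MEASURE:FREQUENCY? CHANNEL1"))
    print(f"CH1 frequency: {freq:.0f} Hz")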

    Fusion, of course, isn’t just about atoms and code – it’s also about construction. Gigantic, one-of-a-kind machines don’t build themselves. That’s where Brigantium Engineering comes in.
    Founder Lynton Sutton explained how his team uses “4D planning” – a marriage of 3D CAD models and detailed construction schedules – to visualise how everything comes together over time. “Gantt charts are hard to interpret. 3D models are static. Our job is to bring those together,” he said. 
    The result is a time-lapse-style animation that shows the construction process step by step. It’s proven invaluable for safety reviews and stakeholder meetings. Rather than poring over spreadsheets, teams can simply watch the plan come to life. 
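    Brigantium's tooling isn't public, but the core of 4D planning – joining a construction schedule to model geometry – can be shown in a few lines. In this toy sketch, each schedule task carries the IDs of the 3D elements it installs, so you can ask what should be visible on any date; all task names and IDs are invented.

    # Toy 4D link between a schedule and 3D model elements (names invented).
    from datetime import date

    schedule = [
        # (task, start, finish, element IDs installed by the task)
        ("pour base slab",     date(2025, 1, 10), date(2025, 3, 1),  {"slab-01"}),
        ("erect support ring", date(2025, 2, 15), date(2025, 6, 30), {"ring-01", "ring-02"}),
        ("install magnet",     date(2025, 7, 1),  date(2025, 9, 15), {"coil-01"}),
    ]

    def visible_elements(on: date) -> set[str]:
        """Elements whose installing task has finished by the given date."""
        done: set[str] = set()
        for _task, _start, finish, elements in schedule:
            if finish <= on:
                done |= elements
        return done

    # Stepping this day by day yields the time-lapse the article describes.
    print(visible_elements(date(2025, 7, 1)))  # {'slab-01', 'ring-01', 'ring-02'}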
    And there’s more. Brigantium is bringing these models into virtual reality using Unreal Engine – the same engine that powers many video games. One recent model recreated ITER’s tokamak pit using drone footage and photogrammetry. The experience is fully interactive and can even run in a web browser.
    “We’ve really improved the quality of the visualisation,” said Sutton. “It’s a lot smoother; the textures look a lot better. Eventually, we’ll have this running through a web browser, so anybody on the team can just click on a web link to navigate this 4D model.” 
    Looking forward, Sutton believes AI could help automate the painstaking work of syncing schedules with 3D models. One day, these simulations could reach all the way down to individual bolts and fasteners – serving not just as impressive visuals, but as critical tools for preventing delays. 
    Despite the different approaches, one theme ran through all three presentations: AI isn’t just a tool for office productivity. It’s becoming a partner in creativity, problem-solving and even scientific discovery. 
    Takeda mentioned that Microsoft is experimenting with “world models” inspired by how video games simulate physics. These models learn about the physical world by watching pixels in the form of videos of real phenomena such as plasma behaviour. “Our thesis is that if you showed this AI videos of plasma, it might learn the physics of plasmas,” he said. 
    It sounds futuristic, but the logic holds. The more AI can learn from the world, the more it can help us understand it – and perhaps even master it. At its heart, the message from the workshop was simple: AI isn’t here to replace the scientist, the engineer or the planner; it’s here to help, and to make their work faster, more flexible and maybe a little more fun.
    As Takeda put it: “Those are just a few examples of how AI is starting to be used at ITER. And it’s just the start of that journey.” 
    If these early steps are any indication, that journey won’t just be faster – it might also be more inspired. 
    WWW.COMPUTERWEEKLY.COM
    Fusion and AI: How private sector tech is powering progress at ITER
  • Hitman: IO Interactive Has Big Plans For World of Assassination

    While IO Interactive may be heavily focused on its inaugural James Bond game, 2026’s 007 First Light, it’s still providing ambitious new levels and updates for Hitman: World of Assassination, as well as for MindsEye, the new science fiction action game it publishes. To continue to build hype for First Light and IOI’s growing partnership with the James Bond brand, the latest World of Assassination level is a Bond crossover, as Hitman protagonist Agent 47 targets Le Chiffre, the main villain of the 2006 movie Casino Royale. Available through July 6, 2025, the Le Chiffre event in World of Assassination features actor Mads Mikkelsen reprising his fan-favorite Bond villain role, not only providing his likeness but voicing the character as he confronts the contract killer in France.
    Den of Geek attended the first-ever in-person IO Interactive Showcase, a partner event with Summer Game Fest held at The Roosevelt Hotel in Hollywood. Mikkelsen and the developers shared insight on the surprise new World of Assassination level, with the level itself playable in its entirety by attendees on the Nintendo Switch 2 and PlayStation Portal. The developers also included an extended gameplay preview for MindsEye, ahead of its June 10 launch, while sharing some details about the techno-thriller.

    Matching his background from Casino Royale, Le Chiffre is a terrorist financier who manipulates the stock market by any means necessary to benefit himself and his clients. After an investment deal goes wrong, Le Chiffre tries to recoup a brutal client’s losses through a high-stakes poker game in France, with Agent 47 hired to assassinate the criminal mastermind on behalf of an unidentified backer. The level opens with 47 infiltrating a high society gala linked to the poker game, with the contract killer entering under his oft-used assumed name of Tobias Rieper, a facade that Le Chiffre immediately sees through.
    At the IO Interactive Showcase panel, Mikkelsen said Le Chiffre is a character he has always enjoyed, one that holds a special place in his career. Reprising his villainous role also gave Mikkelsen the chance to reunite with longtime Agent 47 voice actor David Bateson for the first time since their ‘90s short film Tom Merritt, though both actors recorded their respective lines separately. Mikkelsen enjoyed that Le Chiffre’s appearance in World of Assassination gave him a more physical role than he had in Casino Royale, which largely placed him at a poker table.

    Of course, like most Hitman levels, there are multiple ways players can accomplish their main objective of killing Le Chiffre and escaping the premises. The game certainly gives players multiple avenues to confront the evil financier over a game of poker before closing in for the kill, but that’s by no means the only way to successfully assassinate him. We won’t give away how we ultimately pulled off the assassination, but rest assured it took multiple tries, careful plotting, and all the usual trial-and-error that comes with playing one of Hitman’s more difficult and deeply involved levels.
    Moving away from its more grounded action titles, IO Interactive also provided a deeper look at MindsEye, the new sci-fi game developed by Build a Rocket Boy. Set in the fictional Redrock City, the extended gameplay sneak peek at the showcase featured protagonist Adam Diaz fighting shadowy enemies in the futuristic city’s largely abandoned streets. While MindsEye had no hands-on demo at the showcase itself, the preview demonstrated Diaz using his abilities and equipment, including an accompanying drone, to navigate the city from a third-person perspective and use an array of weapons to dispatch those trying to hunt him down.
    MindsEye marks the first game published through IOI Partners, an initiative that has IOI publish games from smaller, external developers. Given the game’s bug-heavy and poorly-received launch, the absence of a hands-on demo is not particularly surprising. Build a Rocket Boy has since pledged to support the game through June to fix its technical issues, but the lack of hands-on access at the IOI Showcase was already a red flag about the game’s performance. With that in mind, most of the buzz at the showcase was unsurprisingly centered around 007 First Light and updates to Hitman: World of Assassination, and IO Interactive did not disappoint in that regard.
    Even with Hitman: World of Assassination now over four years old, the game continues to receive impressive post-release support from IO Interactive, both in bringing the title to the Nintendo Switch 2 and through additional DLC. At the showcase, IOI hinted at additional special levels for World of Assassination featuring high-profile guest targets like Le Chiffre, without identifying the targets or confirming whether they’re also tied to the James Bond franchise. But with 007 First Light slated for its eagerly anticipated launch next year, it’s a safe bet that IOI has further plans to build out the James Bond legacy for the foreseeable future.
    The Hitman: World of Assassination special Le Chiffre level is available now through July 6, 2025 on all the game’s major platforms, including the Nintendo Switch 2.
    MindsEye is now on sale for PlayStation 5, Xbox Series X|S, and PC.
    WWW.DENOFGEEK.COM
    Hitman: IO Interactive Has Big Plans For World of Assassination
  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization, presented at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, yaw) pose (a minimal sketch of this step follows the list).
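    The paper's exact formulation isn't reproduced here, but the closing alignment step is classical. A minimal numpy sketch of weighted 2D Procrustes alignment, recovering a 3-DoF (x, y, yaw) pose from matched point pairs, might look like this:

    # Weighted 2D Procrustes (Kabsch) alignment: minimal illustrative sketch.
    import numpy as np

    def procrustes_2d(ground_pts, aerial_pts, weights):
        """Rotation R and translation t best mapping ground points onto
        their matched aerial points, weighted by match confidence."""
        w = weights / weights.sum()
        mu_g = (w[:, None] * ground_pts).sum(axis=0)   # weighted centroids
        mu_a = (w[:, None] * aerial_pts).sum(axis=0)
        G, A = ground_pts - mu_g, aerial_pts - mu_a
        H = (w[:, None] * G).T @ A                     # 2x2 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        t = mu_a - R @ mu_g
        yaw = np.arctan2(R[1, 0], R[0, 0])
        return t, yaw                                  # 3-DoF pose: x, y, yaw

    # Toy check: a 30-degree rotation plus a shift should be recovered.
    rng = np.random.default_rng(0)
    theta = np.deg2rad(30.0)
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    g = rng.normal(size=(20, 2))
    a = g @ R_true.T + np.array([5.0, -2.0])
    t, yaw = procrustes_2d(g, a, np.ones(20))
    print(np.round(t, 3), round(np.rad2deg(yaw), 1))   # ~[ 5. -2.] 30.0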

    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    WWW.MARKTECHPOST.COM
    EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments
  • NOSIPHO MAKETO-VAN DEN BRAGT ALTERED HER CAREER PATH TO LAUNCH CHOCOLATE TRIBE

    By TREVOR HOGG

    Images courtesy of Chocolate Tribe.

    Nosipho Maketo-van den Bragt, Owner and CEO, Chocolate Tribe

    After initially pursuing a career as an attorney, Nosipho Maketo-van den Bragt discovered her true calling was to apply her legal knowledge in a more artistic endeavor with her husband, Rob van den Bragt, who had forged a career as a visual effects supervisor. The couple co-founded Chocolate Tribe, the Johannesburg and Cape Town-based visual effects and animation studio that has done work for Netflix, BBC, Disney and Voltage Pictures.

    “It was following my passion and my passion finding me,” observes Maketo-van den Bragt, Owner and CEO of Chocolate Tribe and Founder of AVIJOZI. “I grew up in Soweto, South Africa, and we had this old-fashioned television. I was always fascinated by how those people got in there to perform and entertain us. Living in the townships, you become the funnel for your parents’ aspirations and dreams. My dad was a judge’s registrar, so he was writing all of the court cases coming up for a judge. My dad would come home and tell us stories of what happened in court. I found this enthralling, funny and sometimes painful because it was about people’s lives. I did law and to some extent still practice it. My legal career and entertainment media careers merged because I fell in love with the storytelling aspect of it all. There are those who say that lawyers are failed actors!”

    Chocolate Tribe hosts what has become the annual AVIJOZI festival with Netflix. AVIJOZI is a two-day, free-access event in Johannesburg focused on Animation/Film, Visual Effects and Interactive Technology. This year’s AVIJOZI is scheduled for September 13-14 in Johannesburg. Photo: Casting Director and Actor Spaces Founder Ayanda Sithebe (center in black T-shirt) and friends at AVIJOZI 2024.

    A personal ambition was to find a way to merge married life into a professional partnership. “I never thought that a lawyer and a creative would work together,” admits Maketo-van den Bragt. “However, Rob and I had this great love for watching films together and music; entertainment was the core fabric of our relationship. That was my first gentle schooling into the visual effects and animation content development space. Starting the company was due to both of us being out of work. I had quit my job without any sort of plan B. I actually incorporated Chocolate Tribe as a company without knowing what we would do with it. As time went on, there was a project that we were asked to come to do. The relationship didn’t work out, so Rob and I decided, ‘Okay, it seems like we can do this on our own.’ I’ve read many books about visual effects and animation, and I still do. I attend a lot of festivals. I am connected with a lot of the guys who work in different visual effects spaces because it is all about understanding how it works and, from a business side, how can we leverage all of that information?”

    Chocolate Tribe provided VFX and post-production for Checkers supermarket’s “Planet” ad promoting environmental sustainability. The Chocolate Tribe team pushed photorealism for the ad, creating three fully CG creatures: a polar bear, orangutan and sea turtle.

    With a population of 1.5 billion, there is no shortage of consumers and content creators in Africa. “Nollywood is great because it shows us that even with minimal resources, you can create a whole movement and ecosystem,” Maketo-van den Bragt remarks. “Maybe the question around Nollywood is making sure that the caliber and quality of work is high end and speaks to a global audience. South Africa has the same dynamics. It’s a vibrant traditional film and animation industry that grows in leaps and bounds every year. More and more animation houses are being incorporated or started with CEOs or managing directors in their 20s. There’s also an eagerness to look for different stories which haven’t been told. Africa gives that opportunity to tell stories that ordinary people, for example, in America, have not heard or don’t know about. There’s a huge rise in animation, visual effects and content in general.”

    Rob van den Bragt served as Creative Supervisor and Nosipho Maketo-van den Bragt as Studio Executive for the “Surf Sangoma” episode of the Disney+ series Kizazi Moto: Generation Fire.

    Rob van den Bragt, CCO, and Nosipho Maketo-van den Bragt, CEO, Co-Founders of Chocolate Tribe, in an AVIJOZI planning meeting.

    Stella Gono, Software Developer, working on the Chocolate Tribe website.

    Family photo of the Maketos. Maketo-van den Bragt has two siblings.

    Film tax credits have contributed to The Woman King, Dredd, Safe House, Black Sails and Mission: Impossible – Final Reckoning shooting in South Africa. “People understand principal photography, but there is confusion about animation and visual effects,” Maketo-van den Bragt states. “Rebates pose a challenge because now you have to go above and beyond to explain what you are selling. It’s taken time for the government to realize this is a viable career.” The streamers have had a positive impact. “For the most part, Netflix localizes, and that’s been quite a big hit because it speaks to the demographics and local representation and uplifts talent within those geographical spaces. We did one of the shorts for Disney’s Kizazi Moto: Generation Fire, and there was huge global excitement to that kind of anthology coming from Africa. We’ve worked on a number of collaborations with the U.K., and often that melding of different partners creates a fusion of universality. We need to tell authentic stories, and that authenticity will be dictated by the voices in the writing room.”

    AVIJOZI was established to support the development of local talent in animation, visual effects, film production and gaming. “AVIJOZI stands for Animation Visual Effects Interactive in JOZI [nickname for Johannesburg],” Maketo-van den Bragt explains. “It is a conference as well as a festival. The conference part is where we have networking sessions, panel discussions and behind-the-scenes presentations to draw the curtain back and show what happens when people create avatars. We want to show the next generation that there is a way to do this magical craft. The festival part is people have film screenings and music as well. We’ve brought in gaming as an integral aspect, which attracts many young people because that’s something they do at an early age. Gaming has become the common sport. AVIJOZI is in its fourth year now. It started when I got irritated by people constantly complaining, ‘Nothing ever happens in Johannesburg in terms of animation and visual effects.’ Nobody wanted to do it. So, I said, ‘I’ll do it.’ I didn’t know what I was getting myself into, and four years later I have lots of gray hair!”

    Rob van den Bragt served as Animation Supervisor/Visual Effects Supervisor and Nosipho Maketo-van den Bragt as an Executive Producer on iNumber Number: Jozi Gold (2023) for Netflix. (Image courtesy of Chocolate Tribe and Netflix)

    Mentorship and internship programs have been established with various academic institutions, and while there are times when specific skills are being sought, like rigging, the field of view tends to be much wider. “What we are finding is that the people who have done other disciplines are much more vibrant,” Maketo-van den Bragt states. “Artists don’t always know how to communicate because it’s all in their heads. Sometimes, somebody with a different background can articulate that vision a bit better because they have those other skills. We also find with those who have gone to art school that the range within their artistry and craftsmanship has become a ‘thing.’ When you have mentally traveled where you have done other things, it allows you to be a more well-rounded artist because you can pull references from different walks of life and engage with different topics without being constrained to one thing. We look for people with a plethora of skills and diverse backgrounds. It’s a lot richer as a Chocolate Tribe. There are multiple flavors.”

    South African director/producer/cinematographer and drone cinematography specialist FC Hamman, Founder of FC Hamman Films, at AVIJOZI 2024.

    There is a particular driving force when it comes to mentoring. “I want to be the mentor I hoped for,” Maketo-van den Bragt remarks. “I have silent mentors in that we didn’t formalize the relationship, but I knew they were my mentors because every time I would encounter an issue, I would be able to call them. One of the people who not only mentored but pushed me into different spaces is Jinko Gotoh, who is part of Women in Animation. She brought me into Women in Animation, and I had never mentored anybody. Here I was, sitting with six women who wanted to know how I was able to build up Chocolate Tribe. I didn’t know how to structure a presentation to tell them about the journey because I had been so focused on the journey. It’s a sense of grit and feeling that I cannot fail because I have a whole community that believes in me. Even when I felt my shoulders sagging, they would be there to say, ‘We need this. Keep it moving.’ This isn’t just about me. I have a whole stream of people who want this to work.”

    Netflix VFX Manager Ben Perry, who oversees Netflix’s VFX strategy across Africa, the Middle East and Europe, at AVIJOZI 2024. Netflix was a partner in AVIJOZI with Chocolate Tribe for three years.

    Zama Mfusi, Founder of IndiLang, and Isabelle Rorke, CEO of Dreamforge Creative and Deputy Chair of Animation SA, at AVIJOZI 2024.

    Numerous unknown factors had to be accounted for, which made predicting how the journey would unfold extremely difficult. “What it looks like and what I expected it to be, you don’t have the full sense of what it would lead to in this situation,” Maketo-van den Bragt states. “I can tell you that there have been moments of absolute joy where I was so excited we got this project or won that award. There are other moments where you feel completely lost and ask yourself, ‘Am I doing the right thing?’ The journey is to have the highs, lows and moments of confusion. I go through it and accept that not every day will be an award-winning day. For the most part, I love this journey. I wanted to be somewhere where there was a purpose. What has been a big highlight is when I’m signing a contract for new employees who are excited about being part of Chocolate Tribe. Also, when you get a new project and it’s exciting, especially from a service or visual effects perspective, we’re constantly looking for that dragon or big creature. It’s about being mesmerizing, epic and awesome.”

    Maketo-van den Bragt has two major career-defining ambitions. “Fostering the next generation of talent and making sure that they are ready to create these amazing stories properly – that is my life work, and relating the African narrative to let the world see the human aspect of who we are because for the longest time we’ve been written out of the stories and narratives.”
    WWW.VFXVOICE.COM
  • One of the most versatile action cameras I've tested isn't from GoPro - and it's on sale

    DJI Osmo Action 4. Adrian Kingsley-Hughes/ZDNET

    Multiple DJI Osmo Action 4 packages are on sale at Amazon. Both the Essential and Standard Combos have been discounted to $249, while the Adventure Combo has dropped to $349.

    DJI might not be the first name on people's lips when it comes to action cameras, but the company that's better known for its drones also has a really solid line of action cameras. And its latest device, the Osmo Action 4 camera, has some very impressive tricks up its sleeve.

    Also: One of the most versatile cameras I've used is not from Sony or Canon and it's on sale

    So, what sets this action camera apart from the competition? Let's take a look.

    First off, this is not just an action camera -- it's a pro-grade action camera. From a hardware point of view, the Osmo Action 4 features a 1/1.3-inch image sensor that can record 4K at up to 120 frames per second (fps). This sensor is combined with a wide-angle f/2.8 aperture lens that provides an ultra-wide field of view of up to 155°. And that's wide.

    Build quality and fit and finish are second to none. Adrian Kingsley-Hughes/ZDNET

    For when the going gets rough, the Osmo Action 4 offers 360° HorizonSteady stabilization modes, including RockSteady 3.0/3.0+ for first-person video footage and HorizonBalancing/HorizonSteady modes for horizontal shots. That's pro-grade hardware right there.

    Also: This new AI video editor is an all-in-one production service for filmmakers - how to try it

    The Osmo Action 4 also features a 10-bit D-Log M color mode. This mode allows the sensor to record over one billion colors and offers a wider dynamic range, giving you a video that is more vivid and that offers greater detail in the highlights and shadows. This mode, combined with an advanced color temperature sensor, means that the colors have a true-to-life feel regardless of whether you're shooting outdoors, indoors, or even underwater.

    The DJI Osmo Action 4 ready for action. Adrian Kingsley-Hughes/ZDNET

    I've added some video output from the Osmo Action 4 below. There are examples in both 1080p and 4K. To test the stabilization, I attached the camera to the truck and took it on some roads, some of which are pretty rough. The Osmo Action 4 had no problem with that terrain. I also popped the camera into the sea, just because. And again, no problem.

    I've also captured a few time-lapses with the camera -- not because I like clouds (well, actually, I do like clouds), but pointing a camera at a sky can be a good test of how it handles changing light.

    Also: I recommend this action camera to beginners and professional creators. Here's why

    Timelapses with action cameras can suffer from unsightly exposure changes that cause the image to pulse, a condition known as exposure pumping. This issue can also cause the white balance to change noticeably in a video, but the Osmo Action 4 handled this test well.

    All the footage I've shot is what I've come to expect from a DJI camera, whether it's from an action camera or drone -- crisp, clear, vivid, and also nice and stable.

    The Osmo Action 4 is packed with various electronic image-stabilization (EIS) tech to ensure that your footage is smooth and on the horizon. It's worth noting the limitations of EIS -- it's not supported in slow-motion and timelapse modes, and the HorizonSteady and HorizonBalancing features are only available for video recorded at 1080p (16:9) or 2.7K (16:9) with a frame rate of 60fps or below.

    On the durability front, I've no concerns. I've subjected the Osmo Action 4 to a hard few days of testing, and it's not let me down or complained once. It takes impacts like a champ, and being underwater or in dirt and sand is no problem at all.

    Also: I'm a full-time Canon photographer, but this Nikon camera made me wonder if I'm missing out

    You might think that this heavy-duty testing would be hard on the camera's tiny batteries, but you'd be wrong. Remember I said the Osmo Action 4 offered hours of battery life? Well, I wasn't kidding.

    The Osmo Action 4's ultra-long life batteries are incredible. Adrian Kingsley-Hughes/ZDNET

    DJI says that a single battery can deliver up to 160 minutes of 1080p/24fps video recording (at room temperature, with RockSteady on, Wi-Fi off, and screen off). That's over two and a half hours of recording time. In the real world, I was blown away by how much a single battery can deliver. I shot video and timelapse, messed around with a load of camera settings, and then transferred that footage to my iPhone, and still had 16% battery left. No action camera has delivered so much for me on one battery.

    The two extra batteries and the multifunction case that come as part of the Adventure Combo are worth the extra $100. Adrian Kingsley-Hughes/ZDNET

    And when you're ready to recharge, a 30W USB-C charger can take a battery from zero to 80% in 18 minutes. That's also impressive. What's more, the batteries are resistant to cold, offering up to 150 minutes of 1080p/24fps recording in temperatures as low as -20°C (-4°F). This resistance also blows the competition away.

    Even taking into account all these strong points, the Osmo Action 4 offers even more. The camera has 2x digital zoom for better composition, Voice Prompts that let you know what the camera is doing without looking, and Voice Control that lets you operate the device without touching the screen or using the app. The Osmo Action 4 also digitally hides the selfie stick from a variety of different shots, and you can even connect the DJI Mic to the camera via the USB-C port for better audio capture.

    Also: Yes, an Android tablet finally made me reconsider my iPad Pro loyalty

    As for price, the Osmo Action 4 Standard Combo bundle comes in at $399, while the Osmo Action 4 Adventure Combo, which comes with two extra Osmo Action Extreme batteries, an additional mini Osmo Action quick-release adapter mount, a battery case that acts as a power bank, and a 1.5-meter selfie stick, is $499.

    I'm in love with the Osmo Action 4. It's hands down the best, most versatile, most powerful action camera on the market today, offering pro-grade features at a price that definitely isn't pro-grade.

    Everything included in the Action Combo bundle. DJI

    DJI Osmo Action 4 tech specs
    - Dimensions: 70.5×44.2×32.8mm
    - Weight: 145g
    - Waterproof: 18m, up to 60m with the optional waterproof case
    - Microphones: 3
    - Sensor: 1/1.3-inch CMOS
    - Lens: FOV 155°, aperture f/2.8, focus distance 0.4m to ∞
    - Max Photo Resolution: 3648×2736
    - Max Video Resolution: 4K (4:3): 3840×2880@24/25/30/48/50/60fps and 4K (16:9): 3840×2160@24/25/30/48/50/60/100/120fps
    - ISO Range: 100-12800
    - Front Screen: 1.4-inch, 323ppi, 320×320
    - Rear Screen: 2.25-inch, 326ppi, 360×640
    - Front/Rear Screen Brightness: 750±50 cd/m²
    - Storage: microSD (up to 512GB)
    - Battery: 1770mAh, lab tested to offer up to 160 minutes of runtime (tested at room temperature - 25°C/77°F - and 1080p/24fps, with RockSteady on, Wi-Fi off, and screen off)
    - Operating Temperature: -20° to 45°C (-4° to 113°F)

    This article was originally published in August of 2023 and updated in March 2025.
    WWW.ZDNET.COM
  • MindsEye review – a dystopian future that plays like it’s from 2012

    There’s a Sphere-alike in Redrock, MindsEye’s open-world version of Las Vegas. It’s pretty much a straight copy of the original: a huge soap bubble, half sunk into the desert floor, with its surface turned into a gigantic TV. Occasionally you’ll pull up near the Sphere while driving an electric vehicle made by Silva, the megacorp that controls this world. You’ll sometimes come to a stop just as an advert for an identical Silva EV plays out on the huge curved screen overhead. The doubling effect can be slightly vertigo-inducing.

    At these moments, I truly get what MindsEye is trying to do. You’re stuck in the ultimate company town, where oligarchs and other crooks run everything, and there’s no hope of escaping the ecosystem they’ve built. MindsEye gets all this across through a chance encounter, and in a way that’s both light of touch and clever. The rest of the game tends towards the heavy-handed and silly, but it’s nice to glimpse a few instances where everything clicks.

    With its Spheres and omnipresent EVs, MindsEye looks and sounds like the future. It’s concerned with AI and tech bros and the insidious creep of a corporate dystopia. You play as an amnesiac former soldier who must work out the precise damage that technology has done to his humanity, while shooting people and robots and drones. And alongside the campaign itself, MindsEye also has a suite of tools for making your own game or levels and publishing them for fellow players. All of this has come from a studio founded by Leslie Benzies, whose production credits include the likes of GTA 5.

    AI overlords … MindsEye. Photograph: IOI Partners

    What’s weird, then, is that MindsEye generally plays like the past. Put a finger to the air and the wind is blowing from somewhere around 2012. At heart, this is a roughly hewn cover shooter with an open world that you only really experience when you’re driving between missions. Its topical concerns mainly exist to justify double-crosses and car chases and shootouts, and to explain why you head into battle with a personal drone that can open doors for you and stun nearby enemies.

    It can be an uncanny experience, drifting back through the years to a time when many third-person games still featured unskippable cut-scenes and cover that could be awkward to unstick yourself from. I should add that there are plenty of reports at the moment of crashes, technical glitches and characters turning up without their faces in place. Playing on a relatively old PC, aside from one crash and a few amusing bugs, I’ve been mostly fine. I’ve just been playing a game that feels equally elderly.

    This is sometimes less of a criticism than it sounds. There is a definite pleasure to be had in simple run-and-gun missions where you shoot very similar-looking people over and over again and pick a path between waypoints. The shooting often feels good, and while it’s a bit of a swizz to have to drive to and from each mission, the cars have a nice fishtaily looseness to them that can, at times, invoke the Valium-tinged glory of the Driver games.

    Driving between missions … MindsEye. Photograph: Build A Rocket Boy/IOI Partners

    And for a game that has thought a lot about the point at which AI takes over, the in-game AI around me wasn’t in danger of taking over anything. When I handed over control of my car to the game while tailing an enemy, having been told I should try not to be spotted, the game made sure our bumpers kissed at every intersection. The streets of this particular open world are filled with amusingly unskilled AI drivers. I’d frequently arrive at traffic lights to be greeted by a recent pile-up, so delighted by the off-screen collisions that had scattered road cones and Dumpsters across my path that I almost always stopped to investigate.

    I even enjoyed the plot’s hokeyness, which features lines such as: “Your DNA has been altered since we last met!” Has it, though? Even so, I became increasingly aware that clever people had spent a good chunk of their working lives making this game. I don’t think they intended to cast me as what is in essence a Deliveroo bullet courier for an off-brand Elon Musk. Or to drop me into an open world that feels thin not because it lacks mission icons and fishing mini-games, but because it’s devoid of convincing human detail.

    I suspect the problem may actually be a thematically resonant one: a reckless kind of ambition. When I dropped into the level editor I found a tool that’s astonishingly rich and complex, but which also requires a lot of time and effort if you want to make anything really special in it. This is for the mega-fans, surely, the point-one percent. It must have taken serious time to build, and to do all that alongside a campaign is the kind of endeavour that requires a real megacorp behind it.

    MindsEye is an oddity. For all its failings, I rarely disliked playing it, and yet it’s also difficult to sincerely recommend. Its ideas, its moment-to-moment action and narrative are so thinly conceived that it barely exists. And yet: I’m kind of happy that it does.

    MindsEye is out now; £54.99
    #mindseye #review #dystopian #future #that
    WWW.THEGUARDIAN.COM
    MindsEye review – a dystopian future that plays like it’s from 2012
    There’s a Sphere-alike in Redrock, MindsEye’s open-world version of Las Vegas. It’s pretty much a straight copy of the original: a huge soap bubble, half sunk into the desert floor, with its surface turned into a gigantic TV.
    0 Commenti 0 condivisioni
  • Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’

    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One.
    By Jay Stobie
    Visual effects supervisor John Knoll (right) confers with modelmakers Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).
    Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact (1996) and Rogue One: A Star Wars Story (2016) propelled their respective franchises to new heights. While Star Trek Generations (1994) welcomed Captain Jean-Luc Picard’s (Patrick Stewart) crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk (William Shatner). Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope (1977), it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story (2018), The Mandalorian (2019-23), Andor (2022-25), Ahsoka (2023), The Acolyte (2024), and more.
    The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Currently ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif.
    A final frame from the Battle of Scarif in Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
    A Context for Conflict
    In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design.
    On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon their space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. The unsanctioned mission to Scarif with Jyn Erso (Felicity Jones) and Cassian Andor (Diego Luna) and the sudden need to take down the planet’s shield gate propels the Rebel Alliance fleet into rushing to their rescue with everything from their flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival.
    From Physical to Digital
    By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models (many of which were built by ILM) for its features was gradually giving way to innovative computer graphics (CG) models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001.
    Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, the studio considered filming physical miniatures for certain ship-related shots in the latter film. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com.
    However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.”
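    To make that workflow a little more concrete, here is a minimal, purely illustrative sketch of a digital kit-bashing library of the kind described above: scanned kit parts are catalogued once, then reused across digitally modeled ships. This is a toy example in Python; all names, kits, and file paths are hypothetical, and it is in no way ILM's actual pipeline code.

        from dataclasses import dataclass, field

        @dataclass
        class KitPart:
            name: str        # descriptive part name, e.g. a scanned greeble
            source_kit: str  # the physical plastic model kit the scan came from
            mesh_file: str   # path to the scanned CG geometry (hypothetical)

        @dataclass
        class PartsLibrary:
            parts: dict = field(default_factory=dict)

            def register(self, part: KitPart) -> None:
                self.parts[part.name] = part

            def kitbash(self, hull_meshes: list, part_name: str) -> None:
                # Attach a catalogued scanned part onto a digitally modeled hull.
                hull_meshes.append(self.parts[part_name].mesh_file)

        library = PartsLibrary()
        library.register(KitPart("bridge_greeble", "1/350 battleship kit", "scans/bridge_greeble.obj"))
        library.register(KitPart("engine_vent", "1/72 tank kit", "scans/engine_vent.obj"))

        star_destroyer_hull = ["models/star_destroyer_base.obj"]
        library.kitbash(star_destroyer_hull, "bridge_greeble")
        library.kitbash(star_destroyer_hull, "engine_vent")
        print(star_destroyer_hull)  # base hull plus the two kit-bashed parts

    The appeal of the approach, as Knoll describes it, is that each plastic part only has to be scanned once to serve every digital ship that needs mechanical detail.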
    John Knoll (second from left) confers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).
    Legendary Lineages
    In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.”
    Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet.
    While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from [equipment vendor] VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.”
    The U.S.S. Enterprise-E in Star Trek: First Contact (Credit: Paramount).
    Familiar Foes
    To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation (1987) and Star Trek: Deep Space Nine (1993), creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin.
    As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. It was as accurate as it was possible to be as a reproduction of the original model.”
    Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back (1980), respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.”
    A final frame from Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
    Forming Up the Fleets
    In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics.
    Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs (the MC75 cruiser Profundity and U-wings), live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples (Nebulon-B frigates, X-wings, Y-wings, and more). These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’s (Carrie Fisher and Ingvild Deila) personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography…
    Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized.
    Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
    Tough Little Ships
    The Federation and Rebel Alliance each deployed “tough little ships” (an endearing description Commander William T. Riker [Jonathan Frakes] bestowed upon the U.S.S. Defiant in First Contact) in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!
    Exploration and Hope
    The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire.
    The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope?

    Jay Stobie is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
    #looking #back #two #classics #ilm
    WWW.ILM.COM
    Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’
    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One. By Jay Stobie Visual effects supervisor John Knoll (right) confers with modelmakers Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM). Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact (1996) and Rogue One: A Star Wars Story (2016) propelled their respective franchises to new heights. While Star Trek Generations (1994) welcomed Captain Jean-Luc Picard’s (Patrick Stewart) crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk (William Shatner). Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope (1977), it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story (2018), The Mandalorian (2019-23), Andor (2022-25), Ahsoka (2023), The Acolyte (2024), and more. The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Currently, ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif. A final frame from the Battle of Scarif in Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm). A Context for Conflict In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design. On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. 
The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon their space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. The unsanctioned mission to Scarif with Jyn Erso (Felicity Jones) and Cassian Andor (Diego Luna) and the sudden need to take down the planet’s shield gate propels the Rebel Alliance fleet into rushing to their rescue with everything from their flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival. From Physical to Digital By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models (many of which were built by ILM) for its features was gradually giving way to innovative computer graphics (CG) models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001. Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, they considered filming physical miniatures for certain ship-related shots in the latter film. ILM considered filming physical miniatures for certain ship-related shots in Rogue One. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com. However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.” John Knoll (second from left) confers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM). Legendary Lineages In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. 
Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.” Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet. While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from [equipment vendor] VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.” The U.S.S. Enterprise-E in Star Trek: First Contact (Credit: Paramount). Familiar Foes To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation (1987) and Star Trek: Deep Space Nine (1993), creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin. 
As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. It was as accurate as it was possible to be as a reproduction of the original model.”

Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back (1980), respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.”

A final frame from Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).

Forming Up the Fleets

In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics.

Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs (the MC75 cruiser Profundity and U-wings), live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples (Nebulon-B frigates, X-wings, Y-wings, and more). These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight.
As Princess Leia Organa’s (Carrie Fisher and Ingvild Deila) personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope.

And, while we’re on the subject of intricate starship maneuvers and space-based choreography… Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized.

Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).

Tough Little Ships

The Federation and Rebel Alliance each deployed “tough little ships” (an endearing description Commander William T. Riker [Jonathan Frakes] bestowed upon the U.S.S. Defiant in First Contact) in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.”

Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!

Exploration and Hope

The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire.

The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff.
While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.”

And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope?

– Jay Stobie (he/him) is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.