• Fusion and AI: How private sector tech is powering progress at ITER

    In April 2025, at the ITER Private Sector Fusion Workshop in Cadarache, something remarkable unfolded. In a room filled with scientists, engineers and software visionaries, the line between big science and commercial innovation began to blur.  
    Three organisations – Microsoft Research, Arena and Brigantium Engineering – shared how artificial intelligence (AI), already transforming everything from language models to logistics, is now stepping into a new role: helping humanity to unlock the power of nuclear fusion. 
    Each presenter addressed a different part of the puzzle, but the message was the same: AI isn’t just a buzzword anymore. It’s becoming a real tool – practical, powerful and indispensable – for big science and engineering projects, including fusion. 
    “If we think of the agricultural revolution and the industrial revolution, the AI revolution is next – and it’s coming at a pace which is unprecedented,” said Kenji Takeda, director of research incubations at Microsoft Research. 
    Microsoft’s collaboration with ITER is already in motion. Just a month before the workshop, the two teams signed a Memorandum of Understanding to explore how AI can accelerate research and development. This follows ITER’s initial use of Microsoft technology to empower its teams.
    A chatbot built with the Azure OpenAI service helps staff navigate technical knowledge spanning more than a million ITER documents through natural conversation. GitHub Copilot assists with coding, while AI helps to resolve IT support tickets – those everyday but essential tasks that keep the lights on. 
    But Microsoft’s vision goes deeper. Fusion demands materials that can survive extreme conditions – heat, radiation, pressure – and that’s where AI shows a different kind of potential. MatterGen, a Microsoft Research generative AI model for materials, designs entirely new materials based on specific properties.
    “It’s like ChatGPT,” said Takeda, “but instead of ‘Write me a poem’, we ask it to design a material that can survive as the first wall of a fusion reactor.” 
    The next step? MatterSim – a simulation tool that predicts how these imagined materials will behave in the real world. By combining generation and simulation, Microsoft hopes to uncover materials that don’t yet exist in any catalogue. 
    While Microsoft tackles the atomic scale, Arena is focused on a different challenge: speeding up hardware development. As general manager Michael Frei put it: “Software innovation happens in seconds. In hardware, that loop can take months – or years.” 
    Arena’s answer is Atlas, a multimodal AI platform that acts as an extra set of hands – and eyes – for engineers. It can read data sheets, interpret lab results, analyse circuit diagrams and even interact with lab equipment through software interfaces. “Instead of adjusting an oscilloscope manually,” said Frei, “you can just say, ‘Verify the I2C (inter-integrated circuit) protocol’, and Atlas gets it done.” 
    It doesn’t stop there. Atlas can write and adapt firmware on the fly, responding to real-time conditions. That means tighter feedback loops, faster prototyping and fewer late nights in the lab. Arena aims to make building hardware feel a little more like writing software – fluid, fast and assisted by smart tools. 

    Fusion, of course, isn’t just about atoms and code – it’s also about construction. Gigantic, one-of-a-kind machines don’t build themselves. That’s where Brigantium Engineering comes in.
    Founder Lynton Sutton explained how his team uses “4D planning” – a marriage of 3D CAD models and detailed construction schedules – to visualise how everything comes together over time. “Gantt charts are hard to interpret. 3D models are static. Our job is to bring those together,” he said. 
    The result is a time-lapse-style animation that shows the construction process step by step. It’s proven invaluable for safety reviews and stakeholder meetings. Rather than poring over spreadsheets, teams can simply watch the plan come to life. 
    And there’s more. Brigantium is bringing these models into virtual reality using Unreal Engine – the same engine behind many video games. One recent model recreated ITER’s tokamak pit using drone footage and photogrammetry. The experience is fully interactive and can even run in a web browser.
    “We’ve really improved the quality of the visualisation,” said Sutton. “It’s a lot smoother; the textures look a lot better. Eventually, we’ll have this running through a web browser, so anybody on the team can just click on a web link to navigate this 4D model.” 
    Looking forward, Sutton believes AI could help automate the painstaking work of syncing schedules with 3D models. One day, these simulations could reach all the way down to individual bolts and fasteners – serving not just as impressive visuals, but as critical tools for preventing delays. 
    Despite the different approaches, one theme ran through all three presentations: AI isn’t just a tool for office productivity. It’s becoming a partner in creativity, problem-solving and even scientific discovery. 
    Takeda mentioned that Microsoft is experimenting with “world models” inspired by how video games simulate physics. These models learn about the physical world from pixels, by watching videos of real phenomena such as plasma behaviour. “Our thesis is that if you showed this AI videos of plasma, it might learn the physics of plasmas,” he said. 
    It sounds futuristic, but the logic holds. The more AI can learn from the world, the more it can help us understand it – and perhaps even master it. At its heart, the message from the workshop was simple: AI isn’t here to replace the scientist, the engineer or the planner; it’s here to help, and to make their work faster, more flexible and maybe a little more fun.
    As Takeda put it: “Those are just a few examples of how AI is starting to be used at ITER. And it’s just the start of that journey.” 
    If these early steps are any indication, that journey won’t just be faster – it might also be more inspired. 
    WWW.COMPUTERWEEKLY.COM
  • Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’

    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One.
    By Jay Stobie
    Visual effects supervisor John Knoll confers with modelmakers Kim Smith and John Goodson over the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact.
    Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact and Rogue One: A Star Wars Story propelled their respective franchises to new heights. While Star Trek Generations welcomed Captain Jean-Luc Picard’s crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk. Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope, it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story, The Mandalorian, Andor, Ahsoka, The Acolyte, and more.
    The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Now ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif.
    A final frame from the Battle of Scarif in Rogue One: A Star Wars Story.
    A Context for Conflict
    In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design.
    On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon their space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. The unsanctioned mission to Scarif with Jyn Erso and Cassian Andor – and the sudden need to take down the planet’s shield gate – sends the Rebel Alliance fleet rushing to their rescue with everything from their flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival.
    From Physical to Digital
    By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models for its features was gradually giving way to innovative computer graphics models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001.
    Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, the team still considered filming physical miniatures for certain ship-related shots in the latter film. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com.
    However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.”
    John Knoll confers with Kim Smith and John Goodson over the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact.
    Legendary Lineages
    In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.”
    Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet.
    While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.”
    The U.S.S. Enterprise-E in Star Trek: First Contact.
    Familiar Foes
    To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generationand Star Trek: Deep Space Nine, creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin.
    As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. It was as accurate as it was possible to be as a reproduction of the original model.”
    Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back, respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.”
    A final frame from Rogue One: A Star Wars Story.
    Forming Up the Fleets
    In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics.
    Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs, live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples. These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’s personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography…
    Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized.
    Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story.
    Tough Little Ships
    The Federation and Rebel Alliance each deployed “tough little ships”in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!
    Exploration and Hope
    The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire.
    The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope?

    Jay Stobieis a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
    #looking #back #two #classics #ilm
    Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’
    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One.

    By Jay Stobie

    Visual effects supervisor John Knoll (right) confers with modelmakers Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).

    Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact (1996) and Rogue One: A Star Wars Story (2016) propelled their respective franchises to new heights. While Star Trek Generations (1994) welcomed Captain Jean-Luc Picard’s (Patrick Stewart) crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk (William Shatner). Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope (1977), it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story (2018), The Mandalorian (2019-23), Andor (2022-25), Ahsoka (2023), The Acolyte (2024), and more.

    The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Currently ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time.
    With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif.

    A final frame from the Battle of Scarif in Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).

    A Context for Conflict

    In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg, an overwhelmingly powerful collective of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg send only a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire, a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design.

    On the surface, the situations could not seem more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well known to the Federation, but the sudden intrusion upon its space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat.
    The unsanctioned mission to Scarif undertaken by Jyn Erso (Felicity Jones) and Cassian Andor (Diego Luna), along with the sudden need to take down the planet’s shield gate, propels the Rebel Alliance fleet into rushing to the rescue with everything from its flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival.

    From Physical to Digital

    By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models (many of which were built by ILM) for its features was gradually giving way to innovative computer graphics (CG) models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001.

    Despite the technological leaps ILM pioneered in the decades between First Contact and Rogue One, the studio still considered filming physical miniatures for certain ship-related shots in the latter film. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com.
    However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.”

    John Knoll (second from left) confers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).

    Legendary Lineages

    In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces.
    “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.”

    Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet.

    While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from [equipment vendor] VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.”

    The U.S.S. Enterprise-E in Star Trek: First Contact (Credit: Paramount).
    Familiar Foes

    To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation (1987) and Star Trek: Deep Space Nine (1993), creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion that produced a jagged surface and offered a visual both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin.

    As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph.
    It was as accurate as it was possible to be as a reproduction of the original model.”

    Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back (1980), respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.”

    A final frame from Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).

    Forming Up the Fleets

    In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid.
    Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics.

    Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs (the MC75 cruiser Profundity and U-wings), live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples (Nebulon-B frigates, X-wings, Y-wings, and more). These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’s (Carrie Fisher and Ingvild Deila) personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography…

    Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized.
    Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story.

    Tough Little Ships

    The Federation and Rebel Alliance each deployed “tough little ships” in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!

    Exploration and Hope

    The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st-century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire.
    The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope?

    Jay Stobie is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized. Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm). Tough Little Ships The Federation and Rebel Alliance each deployed “tough little ships” (an endearing description Commander William T. Riker [Jonathan Frakes] bestowed upon the U.S.S. Defiant in First Contact) in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001! Exploration and Hope The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. 
The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire. The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope? – Jay Stobie (he/him) is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
  • AN EXPLOSIVE MIX OF SFX AND VFX IGNITES FINAL DESTINATION BLOODLINES

    By CHRIS McGOWAN

    Images courtesy of Warner Bros. Pictures.

    Final Destination Bloodlines, the sixth installment in the graphic horror series, kicks off with the film’s biggest challenge – deploying an elaborate, large-scale set piece involving the 400-foot-high Skyview Tower restaurant. While there in 1968, young Iris Campbell has a premonition about the Skyview burning, cracking, crumbling and collapsing. Then, when she sees these events actually starting to happen around her, she intervenes and causes an evacuation of the tower, thus thwarting death’s design and saving many lives. Years later, her granddaughter, Stefani Reyes, inherits the vision of the destruction that could have occurred and realizes death is still coming for the survivors.

    “I knew we couldn’t put the whole thing on fire, but Tony tried and put as much fire as he could safely, and then we just built off that and added a lot more. Even when it’s just a little bit of real fire, the lighting and interaction can’t be simulated, so I think it was a success in terms of blending that practical with the visual.”
    —Nordin Rahhali, VFX Supervisor

    The film opens with an elaborate, large-scale set piece involving the 400-foot-high Skyview Tower restaurant – and its collapse. Drone footage was digitized to create a 3D asset for the LED wall so the time of day could be changed as needed.

    “The set that the directors wanted was very large,” says Nordin Rahhali, VFX Supervisor. “We had limited space options in stages given the scale and the footprint of the actual restaurant that they wanted. It was the first set piece, the first big thing we shot, so we had to get it all ready and going right off the bat. We built a bigger volume for our needs, including an LED wall that we built the assets for.”

    “We were outside Vancouver at Bridge Studios in Burnaby. The custom-built LED volume was a little over 200 feet in length,” states Christian Sebaldt, ASC, the movie’s DP. The volume was 98 feet in diameter and 24 feet tall. Rahhali explains, “Pixomondo was the vendor that we contracted to come in and build the volume. They also built the asset that went on the LED wall, so they were part of our filming team and production shoot. Subsequently, they were also the main vendor doing post, which was by design. By having them design and take care of the asset during production, we were able to leverage their assets, tools and builds for some of the post VFX.” Rahhali adds, “It was really important to make sure we had days with the volume team and with Christian and his camera team ahead of the shoot so we could dial it in.”

    Built at Bridge Studios in Burnaby outside Vancouver, the custom-built LED volume for events at the Skyview restaurant was over 200 feet long, 98 feet wide and 24 feet tall. Extensive previs with Digital Domain was done to advance key shots.

    Zach Lipovsky and Adam Stein directed Final Destination Bloodlines for New Line Cinema, distributed by Warner Bros., in which chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated death at some point. Pixomondo was the lead VFX vendor, followed by FOLKS VFX. Picture Shop also contributed. There were around 800 VFX shots. Tony Lazarowich was the Special Effects Supervisor.

    “The Skyview restaurant involved building a massive set that was fire retardant, which meant the construction took longer than normal because they had to build it with certain materials and coat it with certain things because, obviously, it serves for the set piece. As it’s falling into chaos, a lot of that fire was practical. I really jived with what Christian and the directors wanted and how Tony likes to work – to augment as much real practical stuff as possible,” Rahhali remarks. “I knew we couldn’t put the whole thing on fire, but Tony tried and put as much fire as he could safely, and then we just built off that and added a lot more. Even when it’s just a little bit of real fire, the lighting and interaction can’t be simulated, so I think it was a success in terms of blending that practical with the visual.”

    The Skyview restaurant required building a massive set that was fire retardant. Construction on the set took longer because it had to be built and coated with special materials. As the Skyview restaurant falls into chaos, much of the fire was practical.

    “We got all the Vancouver skyline so we could rebuild our version of the city, which was based a little on the Vancouver footprint. So, we used all that to build a digital recreation of a city that was in line with what the directors wanted, which was a coastal city somewhere in the States that doesn’t necessarily have to be Vancouver or Seattle, but it looks a little like the Pacific Northwest.”
    —Christian Sebaldt, ASC, Director of Photography

    For drone shots, the team utilized a custom heavy-lift drone with three RED Komodo Digital Cinema cameras “giving us almost 180 degrees with overlap that we would then stitch in post and have a ridiculous amount of resolution off these three cameras,” Sebaldt states. “The other drone we used was a DJI Inspire 3, which was also very good. And we flew these drones up at the height. We flew them at different times of day. We flew full 360s, and we also used them for photogrammetry. We got all the Vancouver skyline so we could rebuild our version of the city, which was based a little on the Vancouver footprint. So, we used all that to build a digital recreation of a city that was in line with what the directors wanted, which was a coastal city somewhere in the States that doesn’t necessarily have to be Vancouver or Seattle, but it looks a little like the Pacific Northwest.” Rahhali adds, “All of this allowed us to figure out what we were going to shoot. We had the stage build, and we had the drone footage that we then digitized and created a 3D asset to go on the wall so we could change the times of day.”

    Pixomondo built the volume and the asset that went on the LED wall for the Skyview sequence. They were also the main vendor during post. FOLKS VFX and Picture Shop contributed.

    “We did extensive previs with Digital Domain,” Rahhali explains. “That was important because we knew the key shots that the directors wanted. With a combination of those key shots, we then kind of reverse-engineered while we did techvis off the previs and worked with Christian and the art department so we would have proper flexibility with the set to be able to pull off some of these shots. Some of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paul as he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.”

    Some shots required the Skyview’s ceiling to be lifted and partially removed to get a crane to shoot Paul Campbell as he’s about to fall.

    The character Iris lived in a fortified house, isolating herself methodically to avoid the Grim Reaper. Rahhali comments, “That was a beautiful location in the GVRD, very cold. It was a long, hard shoot, because it was all nights. It was just this beautiful pocket out in the middle of the mountains. We in visual effects didn’t do a ton other than a couple of clean-ups of the big establishing shots when you see them pull up to the compound. We had to clean up small roads we wanted to make look like one road and make the road look like dirt.” There were flames involved. Sebaldt says, “The explosion was unbelievably big. We had eight cameras on it at night and shot it at high speed, and we’re all going ‘Whoa.’” Rahhali notes, “There was some clean-up, but the explosion was 100% practical. Our Special Effects Supervisor, Tony, went to town on that. He blew up the whole house, and it looked spectacular.”

    The tattoo shop piercing scene is one of the most talked-about sequences in the movie, where a dangling chain from a ceiling fan attaches itself to the septum nose piercing of Erik Campbell and drags him toward a raging fire. Rahhali observes, “That was very Final Destination and a great Rube Goldberg build-up event. Richard was great. He was tied up on a stunt line for most of it, balancing on top of furniture. All of that was him doing it for real with a stunt line.” Some effects solutions can be surprisingly simple. Rahhali continues, “Our producer came up with a great gag for the septum ring.” Richard’s nose was connected with just a nose plug that went inside his nostrils. “All that tugging and everything that you’re seeing was real. For weeks and weeks, we were all trying to figure out how to do it without it being a big visual effects thing. ‘How are we gonna pull his nose for real?’ Craig said, ‘I have these things I use to help me open up my nose and you can’t really see them.’ They built it off of that, and it looked great.”

    Filmmakers spent weeks figuring out how to execute the harrowing tattoo shop scene. A dangling chain from a ceiling fan attaches itself to the septum nose ring of Erik Campbell – with the actor’s nose being tugged by the chain connected to a nose plug that went inside his nostrils.

    “Some of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paul as he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.”
    —Nordin Rahhali, VFX Supervisor

    Most of the fire in the tattoo parlor was practical. “There are some fire bars and stuff that you’re seeing in there from SFX and the big pool of fire on the wide shots.” Sebaldt adds, “That was a lot of fun to shoot because it’s so insane when he’s dancing and balancing on all this stuff – we were laughing and laughing. We were convinced that this was going to be the best scene in the movie up to that moment.” Rahhali says, “They used the scene wholesale for the trailer. It went viral – people were taking out their septum rings.” Erik survives the parlor blaze only to meet his fate in a hospital when he is pulled by a wheelchair into an out-of-control MRI machine at its highest magnetic level. Rahhali comments, “That is a good combination of a bunch of different departments. Our Stunt Coordinator, Simon Burnett, came up with this hard pull-wire line when Erik flies and hits the MRI. That’s a real stunt with a double, and he hit hard. All the other shots are all CG wheelchairs because the directors wanted to art-direct how the crumpling metal was snapping and bending to show pressure on him as his body starts going into the MRI.”

    To augment the believability that comes with reality, the directors aimed to capture as much practically as possible, then VFX Supervisor Nordin Rahhali and his team built on that result.

    A train derailment concludes the film after Stefani and her brother, Charlie, realize they are still on death’s list. A train goes off the tracks, and logs from one of the cars fly through the air and kill them. “That one was special because it’s a hard sequence and was also shot quite late, so we didn’t have a lot of time. We went back to Vancouver and shot the actual street, and we shot our actors performing. They fell onto stunt pads, and the moment they get touched by the logs, it turns into CG as it was the only way to pull that off – and the train, of course. We had to add all that. The destruction of the houses and everything was done in visual effects.”

    Erik survives the tattoo parlor blaze only to meet his fate in a hospital when he is crushed by a wheelchair while being pulled into an out-of-control MRI machine.

    Erik appears about to be run over by a delivery truck at the corner of 21A Ave. and 132A St., but he’s not – at least not then. The truck is actually on the opposite side of the road, and the person being run over is Howard.

    A rolling penny plays a major part in the catastrophic chain reactions and seems to be a character itself. “The magic penny was a mix from two vendors, Pixomondo and FOLKS; both had penny shots,” Rahhali says. “All the bouncing pennies you see going through the vents and hitting the fan blade are all FOLKS. The bouncing penny at the end as a lady takes it out of her purse, that goes down the ramp and into the rail – that’s FOLKS. The big explosion shots in the Skyview with the penny slowing down after the kid throws it are all Pixomondo shots. It was a mix. We took a little time to find that balance between readability and believability.”

    Approximately 800 VFX shots were required for Final Destination Bloodlines. Chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated Death at some point in the Final Destination films.

    From left: Kaitlyn Santa Juana as Stefani Reyes, director Adam Stein, director Zach Lipovsky and Gabrielle Rose as Iris.

    Rahhali adds, “The film is a great collaboration of departments. Good visual effects are always a good combination of special effects, makeup effects and cinematography; it’s all the planning of all the pieces coming together. For a film of this size, I’m really proud of the work. I think we punched above our weight class, and it looks quite good.”
    WWW.VFXVOICE.COM
    AN EXPLOSIVE MIX OF SFX AND VFX IGNITES FINAL DESTINATION BLOODLINES
    By CHRIS McGOWAN Images courtesy of Warner Bros. Pictures. Final Destination Bloodlines, the sixth installment in the graphic horror series, kicks off with the film’s biggest challenge – deploying an elaborate, large-scale set piece involving the 400-foot-high Skyview Tower restaurant. While there in 1968, young Iris Campbell (Brec Bassinger) has a premonition about the Skyview burning, cracking, crumbling and collapsing. Then, when she sees these events actually starting to happen around her, she intervenes and causes an evacuation of the tower, thus thwarting death’s design and saving many lives. Years later, her granddaughter, Stefani Reyes (Kaitlyn Santa Juana), inherits the vision of the destruction that could have occurred and realizes death is still coming for the survivors. “I knew we couldn’t put the whole [Skyview restaurant] on fire, but Tony [Lazarowich, Special Effects Supervisor] tried and put as much fire as he could safely and then we just built off that [in VFX] and added a lot more. Even when it’s just a little bit of real fire, the lighting and interaction can’t be simulated, so I think it was a success in terms of blending that practical with the visual.” —Nordin Rahhali, VFX Supervisor The film opens with an elaborate, large-scale set piece involving the 400-foot-high Skyview Tower restaurant – and its collapse. Drone footage was digitized to create a 3D asset for the LED wall so the time of day could be changed as needed. “The set that the directors wanted was very large,” says Nordin Rahhali, VFX Supervisor. “We had limited space options in stages given the scale and the footprint of the actual restaurant that they wanted. It was the first set piece, the first big thing we shot, so we had to get it all ready and going right off the bat. We built a bigger volume for our needs, including an LED wall that we built the assets for.” “We were outside Vancouver at Bridge Studios in Burnaby. 
The custom-built LED volume was a little over 200 feet in length,” states Christian Sebaldt, ASC, the movie’s DP. The volume was 98 feet in diameter and 24 feet tall. Rahhali explains, “Pixomondo was the vendor that we contracted to come in and build the volume. They also built the asset that went on the LED wall, so they were part of our filming team and production shoot. Subsequently, they were also the main vendor doing post, which was by design. By having them design and take care of the asset during production, we were able to leverage their assets, tools and builds for some of the post VFX.” Rahhali adds, “It was really important to make sure we had days with the volume team and with Christian and his camera team ahead of the shoot so we could dial it in.” Built at Bridge Studios in Burnaby outside Vancouver, the custom-built LED volume for events at the Skyview restaurant was over 200 feet long, 98 feet wide and 24 feet tall. Extensive previs with Digital Domain was done to advance key shots. (Photo: Eric Milner) Zach Lipovsky and Adam Stein directed Final Destination Bloodlines for New Line Cinema, distributed by Warner Bros., in which chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated death at some point. Pixomondo was the lead VFX vendor, followed by FOLKS VFX. Picture Shop also contributed. There were around 800 VFX shots. Tony Lazarowich was the Special Effects Supervisor. “The Skyview restaurant involved building a massive set [that] was fire retardant, which meant the construction took longer than normal because they had to build it with certain materials and coat it with certain things because, obviously, it serves for the set piece. As it’s falling into chaos, a lot of that fire was practical. I really jived with what Christian and directors wanted and how Tony likes to work – to augment as much real practical stuff as possible,” Rahhali remarks. 
“I knew we couldn’t put the whole thing on fire, but Tony tried and put as much fire as he could safely, and then we just built off that [in VFX] and added a lot more. Even when it’s just a little bit of real fire, the lighting and interaction can’t be simulated, so I think it was a success in terms of blending that practical with the visual.” The Skyview restaurant required building a massive set that was fire retardant. Construction on the set took longer because it had to be built and coated with special materials. As the Skyview restaurant falls into chaos, much of the fire was practical. (Photo: Eric Milner) “We got all the Vancouver skyline [with drones] so we could rebuild our version of the city, which was based a little on the Vancouver footprint. So, we used all that to build a digital recreation of a city that was in line with what the directors wanted, which was a coastal city somewhere in the States that doesn’t necessarily have to be Vancouver or Seattle, but it looks a little like the Pacific Northwest.” —Christian Sebaldt, ASC, Director of Photography For drone shots, the team utilized a custom heavy-lift drone with three RED Komodo Digital Cinema cameras “giving us almost 180 degrees with overlap that we would then stitch in post and have a ridiculous amount of resolution off these three cameras,” Sebaldt states. “The other drone we used was a DJI Inspire 3, which was also very good. And we flew these drones up at the height [we needed]. We flew them at different times of day. We flew full 360s, and we also used them for photogrammetry. We got all the Vancouver skyline so we could rebuild our version of the city, which was based a little on the Vancouver footprint. 
So, we used all that to build a digital recreation of a city that was in line with what the directors wanted, which was a coastal city somewhere in the States that doesn’t necessarily have to be Vancouver or Seattle, but it looks a little like the Pacific Northwest.” Rahhali adds, “All of this allowed us to figure out what we were going to shoot. We had the stage build, and we had the drone footage that we then digitized and created a 3D asset to go on the wall [so] we could change the times of day.” Pixomondo built the volume and the asset that went on the LED wall for the Skyview sequence. They were also the main vendor during post. FOLKS VFX and Picture Shop contributed. (Photo: Eric Milner) “We did extensive previs with Digital Domain,” Rahhali explains. “That was important because we knew the key shots that the directors wanted. With a combination of those key shots, we then kind of reverse-engineered [them] while we did techvis off the previs and worked with Christian and the art department so we would have proper flexibility with the set to be able to pull off some of these shots. [For example,] some of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paul [Max Lloyd-Jones] as he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.” Some shots required the Skyview’s ceiling to be lifted and partially removed to get a crane to shoot Paul Campbell (Max Lloyd-Jones) as he’s about to fall. The character Iris lived in a fortified house, isolating herself methodically to avoid the Grim Reaper. 
Rahhali comments, “That was a beautiful location [in] GVRD [Greater Vancouver], very cold. It was a long, hard shoot, because it was all nights. It was just this beautiful pocket out in the middle of the mountains. We in visual effects didn’t do a ton other than a couple of clean-ups of the big establishing shots when you see them pull up to the compound. We had to clean up small roads we wanted to make look like one road and make the road look like dirt.” There were flames involved. Sebaldt says, “The explosion [of Iris’s home] was unbelievably big. We had eight cameras on it at night and shot it at high speed, and we’re all going ‘Whoa.’” Rahhali notes, “There was some clean-up, but the explosion was 100% practical. Our Special Effects Supervisor, Tony, went to town on that. He blew up the whole house, and it looked spectacular.” The tattoo shop piercing scene is one of the most talked-about sequences in the movie, where a dangling chain from a ceiling fan attaches itself to the septum nose piercing of Erik Campbell (Richard Harmon) and drags him toward a raging fire. Rahhali observes, “That was very Final Destination and a great Rube Goldberg build-up event. Richard was great. He was tied up on a stunt line for most of it, balancing on top of furniture. All of that was him doing it for real with a stunt line.” Some effects solutions can be surprisingly simple. Rahhali continues, “Our producer [Craig Perry] came up with a great gag [for the] septum ring.” Richard’s nose was connected with just a nose plug that went inside his nostrils. “All that tugging and everything that you’re seeing was real. For weeks and weeks, we were all trying to figure out how to do it without it being a big visual effects thing. 
‘How are we gonna pull his nose for real?’ Craig said, ‘I have these things I use to help me open up my nose and you can’t really see them.’ They built it off of that, and it looked great.” Filmmakers spent weeks figuring out how to execute the harrowing tattoo shop scene. A dangling chain from a ceiling fan attaches itself to the septum nose ring of Erik Campbell (Richard Harmon) – with the actor’s nose being tugged by the chain connected to a nose plug that went inside his nostrils. “[S]ome of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paul [Campbell] as he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.” —Nordin Rahhali, VFX Supervisor Most of the fire in the tattoo parlor was practical. “There are some fire bars and stuff that you’re seeing in there from SFX and the big pool of fire on the wide shots.” Sebaldt adds, “That was a lot of fun to shoot because it’s so insane when he’s dancing and balancing on all this stuff – we were laughing and laughing. We were convinced that this was going to be the best scene in the movie up to that moment.” Rahhali says, “They used the scene wholesale for the trailer. It went viral – people were taking out their septum rings.” Erik survives the parlor blaze only to meet his fate in a hospital when he is pulled by a wheelchair into an out-of-control MRI machine at its highest magnetic level. Rahhali comments, “That is a good combination of a bunch of different departments. Our Stunt Coordinator, Simon Burnett, came up with this hard pull-wire line [for] when Erik flies and hits the MRI. 
That’s a real stunt with a double, and he hit hard. All the other shots are all CG wheelchairs because the directors wanted to art-direct how the crumpling metal was snapping and bending to show pressure on him as his body starts going into the MRI.” To augment the believability that comes with reality, the directors aimed to capture as much practically as possible, then VFX Supervisor Nordin Rahhali and his team built on that result. (Photo: Eric Milner) A train derailment concludes the film after Stefani and her brother, Charlie, realize they are still on death’s list. A train goes off the tracks, and logs from one of the cars fly through the air and kill them. “That one was special because it’s a hard sequence and was also shot quite late, so we didn’t have a lot of time. We went back to Vancouver and shot the actual street, and we shot our actors performing. They fell onto stunt pads, and the moment they get touched by the logs, it turns into CG as it was the only way to pull that off and the train of course. We had to add all that. The destruction of the houses and everything was done in visual effects.” Erik survives the tattoo parlor blaze only to meet his fate in a hospital when he is crushed by a wheelchair while being pulled into an out-of-control MRI machine. Erik (Richard Harmon) appears about to be run over by a delivery truck at the corner of 21A Ave. and 132A St., but he’s not – at least not then. The truck is actually on the opposite side of the road, and the person being run over is Howard. A rolling penny plays a major part in the catastrophic chain reactions and seems to be a character itself. “The magic penny was a mix from two vendors, Pixomondo and FOLKS; both had penny shots,” Rahhali says. “All the bouncing pennies you see going through the vents and hitting the fan blade are all FOLKS. The bouncing penny at the end as a lady takes it out of her purse, that goes down the ramp and into the rail – that’s FOLKS. 
The big explosion shots in the Skyview with the penny slowing down after the kid throws it [off the deck] are all Pixomondo shots. It was a mix. We took a little time to find that balance between readability and believability.” Approximately 800 VFX shots were required for Final Destination Bloodlines. (Photo: Eric Milner) Chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated Death at some point in the Final Destination films. From left: Kaitlyn Santa Juana as Stefani Reyes, director Adam Stein, director Zach Lipovsky and Gabrielle Rose as Iris. (Photo: Eric Milner) Rahhali adds, “The film is a great collaboration of departments. Good visual effects are always a good combination of special effects, makeup effects and cinematography; it’s all the planning of all the pieces coming together. For a film of this size, I’m really proud of the work. I think we punched above our weight class, and it looks quite good.”
  • Epic Games to rebrand RealityCapture as RealityScan 2.0


    Epic Games is rebranding RealityCapture, its professional desktop photogrammetry software, as RealityScan. RealityScan 2.0, due in the “coming weeks”, will unify the desktop application with the existing RealityScan, Epic Games’ free 3D scanning app for iOS and Android devices.
    The update will also introduce new features including AI-based mask generation, support for aerial Lidar data, and new visual tools for troubleshooting scan quality.
    A desktop photogrammetry tool for games, VFX, visualization and urban planning

    First released in 2016, RealityCapture generates accurate triangle-based meshes of real-world objects, from people and props to environments. Its core photogrammetry toolset, for generating 3D meshes from sets of source images, is augmented by support for laser scan data.
    The software includes features aimed at aerial surveying and urban planning, but is also used in the entertainment industry to generate assets for use in games and VFX.
    RealityCapture was acquired by Epic Games in 2021, and last year the software was made available free to artists and studios with revenue under $1 million/year.
    Now rebranded as RealityScan to unify it with the existing mobile app

    RealityCapture 2.0 – or rather, RealityScan 2.0 – is a change of branding, with the desktop application taking its new name and logo from Epic Games’ existing mobile scanning app. First released in 2022, RealityScan was originally pitched as a way to make RealityCapture’s functionality accessible to hobbyists as well as pros.
    It’s a pure photogrammetry tool, turning photos captured on a mobile phone or tablet into textured 3D models for use in AR, game development or general 3D work.
    RealityScan 2.0: AI masking, new Quality Analysis Tool, and support for aerial Lidar data

    New features in RealityCapture 2.0 will include AI-powered masking, with the software automatically identifying and masking out the background of the source images. The change should remove the need to generate masks manually, either in RealityCapture itself or an external DCC app.
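The article doesn’t describe how Epic’s AI masking works internally. As a purely hypothetical sketch of what background masking does for photogrammetry – excluding background pixels so only the subject contributes to reconstruction – here is the effect mimicked with a trivial brightness threshold standing in for an AI segmentation model (all names and values here are illustrative):

```python
# Hypothetical illustration only: a binary mask marking foreground pixels.
# Real AI masking uses a learned segmentation model, not a threshold.

def make_mask(image: list[list[int]], background_threshold: int = 40) -> list[list[int]]:
    """Return a binary mask: 1 = keep pixel (foreground), 0 = masked out."""
    return [[1 if px > background_threshold else 0 for px in row] for row in image]

# A tiny 3x3 grayscale image with a dark background and a bright object:
image = [
    [10, 200, 12],
    [15, 220, 210],
    [9, 11, 13],
]
print(make_mask(image))  # → [[0, 1, 0], [0, 1, 1], [0, 0, 0]]
```

Downstream, a photogrammetry solver would simply ignore pixels where the mask is 0 when detecting and matching features.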
    In addition, the default settings have been updated to improve alignment of source images, particularly when scanning objects with smooth surfaces and few surface features.
    To help troubleshoot scans, a new Quality Analysis Tool displays heatmaps showing parts of the scan where more images may be needed to reconstruct the source object accurately.
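The announcement doesn’t say how the Quality Analysis Tool computes its heatmaps. One plausible (hypothetical) proxy for the idea it visualizes is per-point image coverage: count how many source images observe each reconstructed point, and flag under-observed regions as needing more photos. A minimal sketch, with all data and names invented for illustration:

```python
# Hypothetical sketch: each reconstructed point records which source images
# it was matched in; points seen by few images are likely low-quality.
observations = {
    "p0": ["img_01", "img_02", "img_05"],
    "p1": ["img_02"],
    "p2": ["img_01", "img_03", "img_04", "img_05"],
}

MIN_VIEWS = 3  # illustrative threshold below which a point needs more coverage

def coverage(obs: dict) -> dict:
    """Map each point to the number of source images that observe it."""
    return {point: len(imgs) for point, imgs in obs.items()}

flagged = [p for p, n in coverage(observations).items() if n < MIN_VIEWS]
print(flagged)  # → ['p1']
```

A heatmap would then color the scan by these counts, so sparse areas stand out visually.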
    The update will also introduce support for aerial Lidar data, which may be used alongside aerial photography and terrestrial data to reconstruct environments more accurately.
    No information yet on how the new features break down between desktop and mobile

    It isn’t clear which of those new features will be included in the mobile app, although it will presumably also be updated to version 2.0 at the same time, since Epic Games’ blog post announcing the changes describes its aim as to “unify the desktop and mobile versions”. We’ve contacted Epic for more information, and will update if we hear back.
    Price, system requirements and release date

    RealityScan 2.0 is due in the “coming weeks”. Epic Games hasn’t announced an exact release date, or any changes to price or system requirements. The current version of the desktop software, RealityCapture 1.5, is available for Windows 7+ and Windows Server 2008+. It’s CUDA-based, so you need a CUDA 3.0-capable NVIDIA GPU.
    The desktop software is free to artists and studios with revenue under $1 million/year. For larger studios, subscriptions cost $1,250/seat/year.
    The current version of the mobile app, RealityScan 1.6, is compatible with Android 7.0+, iOS 16.0+ and iPadOS 16.0+. It’s free, including for commercial use.
    By default, its EULA gives Epic Games the right to use your scan data to train products and services, but you can opt out in the in-app settings.
    Read Epic Games’ blog post announcing that it is rebranding RealityCapture as RealityScan
    Read more about RealityCapture and RealityScan on the product website

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • 108-year-old submarine wreck seen in stunning detail in new footage

    Photogrammetric reconstruction of the submarine USS F-1 on the seafloor west of San Diego, California. CREDIT: Image by Zoe Daheron, ©Woods Hole Oceanographic Institution.


    In 1917, two US submarines collided off the coast of San Diego and the submarine USS F-1 sank to the bottom of the Pacific Ocean with 19 crew members aboard. The wreck, discovered in 1975, represents the US Naval Submarine Force’s first wartime submarine loss. Now, researchers from Woods Hole Oceanographic Institution have captured new footage of the 1,300-foot-deep underwater archaeological site.
    “They were technical dives requiring specialized expertise and equipment,” Anna Michel, a co-lead of the expedition and chief scientist at the National Deep Submergence Facility, said in a statement. “We were careful and methodical in surveying these historical sites so that we could share these stunning images, while also maintaining the reverence these sites deserve.”

    The high-definition imaging and mapping of the USS F-1 took place during a deep-sea training and engineering mission in February and March. The mission aimed to train future submersible pilots and test the human-occupied vehicle Alvin and autonomous underwater vehicle Sentry. 
    The team captured never-before-seen images and videos and conducted a sonar survey, which essentially consists of mapping a region by shooting sound waves at it and registering the echo. Imaging specialists combined the 2D images into a 3D model of the wreck—a technique called photogrammetry. Photogrammetry yields measurements not just of the submarine but of the marine life that has claimed the vessel as its own over the past century. 
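The echo-ranging idea behind a sonar survey can be sketched in a few lines: distance is the speed of sound in water times half the round-trip time of the pulse. The speed and timing values below are illustrative, not figures from the expedition:

```python
# Sonar ranging: a pulse travels to the target and back, so halve the trip.
SPEED_OF_SOUND_SEAWATER_M_S = 1500.0  # typical value; varies with depth, temperature and salinity

def echo_distance_m(round_trip_s: float) -> float:
    """Estimate distance (meters) from the round-trip echo time (seconds)."""
    return SPEED_OF_SOUND_SEAWATER_M_S * round_trip_s / 2.0

# A wreck roughly 1,300 ft (~396 m) down returns an echo in about half a second:
print(round(echo_distance_m(0.528), 1))  # → 396.0
```

Sweeping such pulses across the seafloor, and recording the return time at each heading, is what builds up the sonar map described above.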
    Photogrammetric reconstruction of the submarine USS F-1 showing the sub’s stern and propeller. CREDIT: Image by Zoe Daheron, ©Woods Hole Oceanographic Institution.
    “As a Navy veteran, making this dive—together with another Navy veteran and a Navy historian—was a solemn privilege,” said Office of Naval Research Program Officer Rob Sparrock, who was in Alvin when it went down to the wreck. “There was time to contemplate the risks that all mariners, past and present, face. It also reminded me of the importance of these training dives, which leverage the knowledge from past dives, lessons learned and sound engineering.”
    The researchers also investigated a Navy torpedo bomber training aircraft that went down in the region in 1950. After the dives, they held a remembrance ceremony aboard the research vessel Atlantis during which a bell rang once for each of the crew members lost in 1917. 
    “History and archaeology are all about people and we felt it was important to read their names aloud,” said Naval History and Heritage Command Underwater Archaeologist Brad Krueger, who also dove in Alvin. “The Navy has a solemn responsibility to ensure the legacies of its lost Sailors are remembered.”
    108-year-old submarine wreck seen in stunning detail in new footage
  • Senior Environment Artist – Unannounced Game | Irvine, CA at Blizzard Entertainment

    Blizzard Entertainment - Irvine, California 92618, United States of America
    Team Name: Unannounced Project
    Job Title: Senior Environment Artist – Unannounced Game | Irvine, CA
    Requisition ID: R025360

    Job Description:
    At Blizzard, we craft genre-defining games and legendary worlds for all to share. Through unparalleled creativity and storytelling, we create immersive universes and iconic characters that are beloved across platforms, borders, backgrounds, and generations - only made possible by building a work environment that nurtures the artistry of game development and unleashes the aspirations of our people.

    We are looking for a Senior Environment Artist to help us craft a new, unannounced game for Blizzard. As a key member of the development team, you will be responsible for delivering high-quality environment artwork through artistic and technical knowledge. You understand how to effectively build, integrate, and optimize a variety of environment asset types as well as how to support the gameplay experience. You are comfortable operating in a dynamic environment, navigating fluid deadlines and changing priorities with purpose.

    This role is anticipated to be a hybrid work position, with some work on-site and some work-from-home. The potential home studio for this role is Irvine, CA.

    Responsibilities:
    Priorities can often change in a fast-paced environment like ours, so this role includes, but is not limited to, the following responsibilities:
    - Working closely with the art director, collaborate with designers, artists, code, materials, VFX, and lighting departments to help deliver game environments to a world-class standard.
    - Use your knowledge of environment art creation to deliver work from iterative prototyping to final execution.
    - Create artifacts (content creation briefs, workflow techniques) to guide the work of internal and external artists.
    - Remain current on industry trends, including advancements in commercial game engines and workflow efficiencies.
    - Be comfortable touching many aspects of environment work, including asset creation, texture work, material creation, composition and set dressing, foliage, geology, roads, and terrain.
    - Proactively identify risks and propose solutions within the area of environment art creation.

    Minimum Requirements
    Experience:
    - 7+ years of experience in AAA environment art.
    - Proven experience, having shipped multiple major titles in a relevant role.
    - Experience working with a AAA game engine, with a strong understanding of implementation and integration.
    - Knowledge of many aspects of environment art creation, including strong proficiency in industry-standard 3D workflows.
    - A portfolio, with examples and detailed breakdowns, showcasing your practical work.
    - A strong understanding of authoring PBR content.
    - Experience collaborating to develop better workflows and technology.

    Key Attributes:
    - A team-oriented approach to your work, understanding the importance of collaboration and partnering with all departments.
    - The ability to inspire others with an enthusiastic approach to game development and improving core skills.
    - A self-motivated ability to push initiatives forward.

    Extra Points:
    - Experience working through early development R&D phases of a project.
    - A solid understanding of game art creation outside of environment art (e.g. lighting, VFX, characters, animation, rendering).
    - An interest in photography and cinema.
    - Experience creating and handling photogrammetry data.
    - Expertise in 3D composition, form, balance, and color theory.

    Your Platform
    Best known for iconic video game universes including Warcraft®, Overwatch®, Diablo®, and StarCraft®, Blizzard Entertainment, Inc. (www.blizzard.com), a division of Activision Blizzard, which was acquired by Microsoft (NASDAQ: MSFT), is a premier developer and publisher of entertainment experiences. Blizzard Entertainment has created some of the industry’s most critically acclaimed and genre-defining games over the last 30 years, with a track record that includes multiple Game of the Year awards. Blizzard Entertainment engages tens of millions of players around the world with titles available on PC via Battle.net®, Xbox, PlayStation, Nintendo Switch, iOS, and Android.

    Our World
    Activision Blizzard, Inc., is one of the world's largest and most successful interactive entertainment companies and is at the intersection of media, technology, and entertainment. We are home to some of the most beloved entertainment franchises, including Call of Duty®, World of Warcraft®, Overwatch®, Diablo®, Candy Crush™, and Bubble Witch™. Our combined entertainment network delights hundreds of millions of monthly active users in 196 countries, making us the largest gaming network on the planet!

    Our ability to build immersive and innovative worlds is only enhanced by diverse teams working in an inclusive environment. We aspire to have a culture where everyone can thrive in order to connect and engage the world through epic entertainment. We provide a suite of benefits that promote physical, emotional, and financial well-being for ‘Every World’ - we’ve got our employees covered!

    The videogame industry, and therefore our business, is fast-paced and will continue to evolve. As such, the duties and responsibilities of this role may be changed as directed by the Company at any time to promote and support our business and relationships with industry partners.

    We love hearing from anyone who is enthusiastic about changing the games industry. Not sure you meet all qualifications? Let us decide! Research shows that women and members of other under-represented groups tend not to apply to jobs when they think they may not meet every qualification, when, in fact, they often do! We are committed to creating a diverse and inclusive environment and strongly encourage you to apply.

    We are committed to working with and providing reasonable assistance to individuals with physical and mental disabilities. If you are a disabled individual requiring an accommodation to apply for an open position, please email your request to accommodationrequests@activisionblizzard.com. General employment questions cannot be accepted or processed here. Thank you for your interest.

    We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity, age, marital status, veteran status, or disability status, among other characteristics.

    Rewards
    Subject to eligibility requirements, the Company offers comprehensive benefits including:
    - Medical, dental, vision, health savings account or health reimbursement account, healthcare spending accounts, dependent care spending accounts, life and AD&D insurance, and disability insurance;
    - 401(k) with Company match, tuition reimbursement, and charitable donation matching;
    - Paid holidays and vacation, paid sick time, floating holidays, compassion and bereavement leaves, and parental leave;
    - Mental health & wellbeing programs, fitness programs, free and discounted games, and a variety of other voluntary benefit programs like supplemental life & disability, legal service, ID protection, rental insurance, and others;
    - If the Company requires that you move geographic locations for the job, you may also be eligible for relocation assistance.

    Eligibility to participate in these benefits may vary for part-time and temporary full-time employees and interns with the Company. You can learn more by visiting https://www.benefitsforeveryworld.com/.

    In the U.S., the standard base pay range for this role is $80,800.00 - $149,400.00 Annual. These values reflect the expected base pay range of new hires across all U.S. locations. Ultimately, your specific range and offer will be based on several factors, including relevant experience, performance, and work location. Your Talent Professional can share this role’s range details for your local geography during the hiring process. In addition to a competitive base pay, employees in this role may be eligible for incentive compensation. Incentive compensation is not guaranteed. While we strive to provide competitive offers to successful candidates, new hire compensation is negotiable.

    #senior #environment #artist #unannounced #game
    Senior Environment Artist – Unannounced Game | Irvine, CA at Blizzard Entertainment
    Senior Environment Artist – Unannounced Game | Irvine, CABlizzard EntertainmentIrvine California 92618 United States of America50 minutes agoApplyTeam Name:Unannounced ProjectJob Title:Senior Environment Artist – Unannounced Game | Irvine, CARequisition ID:R025360Job Description:At Blizzard, we craft genre-defining games and legendary worlds for all to share. Through unparalleled creativity and storytelling, we create immersive universes and iconic characters that are beloved across platforms, borders, backgrounds, and generations - only made possible by building a work environment that nurtures the artistry of game development and unleashes the aspirations of our people.We are looking for a Senior Environment Artist to help us craft a new, unannounced game for Blizzard. As a key member of the development team, you will be responsible for delivering high-quality environment artwork through artistic and technical knowledge. You understand how to effectively build, integrate, and optimize a variety of environment asset types as well as how to support the gameplay experience. You are comfortable operating in a dynamic environment, navigating fluid deadlines and changing priorities with purpose.This role is anticipated to be a hybrid work position, with some work on-site and some work-from-home. 
The potential home studio for this role is Irvine, CA.Responsibilities:Priorities can often change in a fast-paced environment like ours, so this role includes, but is not limited to, the following responsibilities:Working closely with the art director, collaborate with designers, artists, code, materials, VFX and lighting departments to help deliver game environments to a world-class standard .Use your knowledge of environment art creation to deliver work from iterative prototyping to final execution.Create artifactsto guide the work of internal and external artists.Remain current on industry trends including advancement in commercial game engines and workflow efficiencies.Be comfortable touching many aspects of environment work including asset creation, texture work, material creation, composition and set dressing, foliage, geology, roads, and terrain.Proactively identify risks and propose solutions within the area of environment art creation.Minimum RequirementsExperience7+ years of experience in AAA environment art.Proven experience, having shipped multiple major titles in a relevant role.Experience working with a AAA game engine with strong understanding of implementation and integration.Knowledge of many aspects of environment art creation including strong proficiency in industry-standard 3D workflows.A portfolio, with examples and detailed breakdowns, showcasing your practical work.Strong understanding of authoring PBR content.Experience collaborating to develop better workflows and technology.Key AttributesA team-oriented approach to your work, understanding the importance of collaboration, and partnering with all departments.The ability to Inspire others with an enthusiastic approach to game development and improving core skills.Self-motivated ability to push initiatives forward.Extra PointsExperienceExperience working through early development R&D phases of a project is a plus.Solid understanding of game art creation outside of environment art.An interest 
in photography and cinema.Experience creating and handling photogrammetry data.Expertise in 3D composition, form, balance, and color theory.Your PlatformBest known for iconic video game universes including Warcraft®, Overwatch®, Diablo®, and StarCraft®, Blizzard Entertainment, Inc., a division of Activision Blizzard, which was acquired by Microsoft, is a premier developer and publisher of entertainment experiences. Blizzard Entertainment has created some of the industry’s most critically acclaimed and genre-defining games over the last 30 years, with a track record that includes multiple Game of the Year awards. Blizzard Entertainment engages tens of millions of players around the world with titles available on PC via Battle.net®, Xbox, PlayStation, Nintendo Switch, iOS, and Android.Our WorldActivision Blizzard, Inc., is one of the world's largest and most successful interactive entertainment companies and is at the intersection of media, technology and entertainment. We are home to some of the most beloved entertainment franchises including Call of Duty®, World of Warcraft®, Overwatch®, Diablo®, Candy Crush ™ and Bubble Witch™. Our combined entertainment network delights hundreds of millions of monthly active users in 196 countries, making us the largest gaming network on the planet!Our ability to build immersive and innovative worlds is only enhanced by diverse teams working in an inclusive environment . We aspire to have a culture where everyone can thrive in order to connect and engage the world through epic entertainment. We provide a suite of benefits that promote physical, emotional and financial well-being for ‘Every World’ - we’ve got our employees covered!The videogame industry and therefore our business is fast-paced and will continue to evolve. 
As such, the duties and responsibilities of this role may be changed as directed by the Company at any time to promote and support our business and relationships with industry partners.We love hearing from anyone who is enthusiastic about changing the games industry. Not sure you meet all qualifications ? Let us decide! Research shows that women and members of other under-represented groups tend to not apply to jobs when they think they may not meet every qualification, when, in fact, they often do! We are committed to creating a diverse and inclusive environment and strongly encourage you to apply.We are committed to working with and providing reasonable assistance to individuals with physical and mental disabilities. If you are a disabled individual requiring an accommodation to apply for an open position, please email your request to accommodationrequests@activisionblizzard.com . General employment questions cannot be accepted or processed here. Thank you for your interest.We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity, age, marital status, veteran status, or disability status, among other characteristics.RewardsWe provide a suite of benefits that promote physical, emotional and financial well-being for ‘Every World’ - we’ve got our employees covered! 
Subject to eligibility requirements, the Company offers comprehensive benefits including:Medical, dental, vision, health savings account or health reimbursement account, healthcare spending accounts, dependent care spending accounts, life and AD&D insurance, disability insurance;401with Company match, tuition reimbursement, charitable donation matching;Paid holidays and vacation, paid sick time, floating holidays, compassion and bereavement leaves, parental leave;Mental health & wellbeing programs, fitness programs, free and discounted games, and a variety of other voluntary benefit programs like supplemental life & disability, legal service, ID protection, rental insurance, and others;If the Company requires that you move geographic locations for the job, then you may also be eligible for relocation assistance.Eligibility to participate in these benefits may vary for part time and temporary full-time employees and interns with the Company. You can learn more by visiting / .In the U.S., the standard base pay range for this role is - Annual. These values reflect the expected base pay range of new hires across all U.S. locations. Ultimately, your specific range and offer will be based on several factors, including relevant experience, performance, and work location. Your Talent Professional can share this role’s range details for your local geography during the hiring process. In addition to a competitive base pay, employees in this role may be eligible for incentive compensation. Incentive compensation is not guaranteed. While we strive to provide competitive offers to successful candidates, new hire compensation is negotiable. Create Your Profile — Game companies can contact you with their relevant job openings. Apply #senior #environment #artist #unannounced #game
    Senior Environment Artist – Unannounced Game | Irvine, CA at Blizzard Entertainment
    Senior Environment Artist – Unannounced Game | Irvine, CABlizzard EntertainmentIrvine California 92618 United States of America50 minutes agoApplyTeam Name:Unannounced ProjectJob Title:Senior Environment Artist – Unannounced Game | Irvine, CARequisition ID:R025360Job Description:At Blizzard, we craft genre-defining games and legendary worlds for all to share. Through unparalleled creativity and storytelling, we create immersive universes and iconic characters that are beloved across platforms, borders, backgrounds, and generations - only made possible by building a work environment that nurtures the artistry of game development and unleashes the aspirations of our people.We are looking for a Senior Environment Artist to help us craft a new, unannounced game for Blizzard. As a key member of the development team, you will be responsible for delivering high-quality environment artwork through artistic and technical knowledge. You understand how to effectively build, integrate, and optimize a variety of environment asset types as well as how to support the gameplay experience. You are comfortable operating in a dynamic environment, navigating fluid deadlines and changing priorities with purpose.This role is anticipated to be a hybrid work position, with some work on-site and some work-from-home. 
The potential home studio for this role is Irvine, CA.Responsibilities:Priorities can often change in a fast-paced environment like ours, so this role includes, but is not limited to, the following responsibilities:Working closely with the art director, collaborate with designers, artists, code, materials, VFX and lighting departments to help deliver game environments to a world-class standard .Use your knowledge of environment art creation to deliver work from iterative prototyping to final execution.Create artifacts (content creation briefs, workflow techniques) to guide the work of internal and external artists.Remain current on industry trends including advancement in commercial game engines and workflow efficiencies.Be comfortable touching many aspects of environment work including asset creation, texture work, material creation, composition and set dressing, foliage, geology, roads, and terrain.Proactively identify risks and propose solutions within the area of environment art creation.Minimum RequirementsExperience7+ years of experience in AAA environment art.Proven experience, having shipped multiple major titles in a relevant role.Experience working with a AAA game engine with strong understanding of implementation and integration.Knowledge of many aspects of environment art creation including strong proficiency in industry-standard 3D workflows.A portfolio, with examples and detailed breakdowns, showcasing your practical work.Strong understanding of authoring PBR content.Experience collaborating to develop better workflows and technology.Key AttributesA team-oriented approach to your work, understanding the importance of collaboration, and partnering with all departments.The ability to Inspire others with an enthusiastic approach to game development and improving core skills.Self-motivated ability to push initiatives forward.Extra PointsExperienceExperience working through early development R&D phases of a project is a plus.Solid understanding of game art 
creation outside of environment art ( e.g. lighting, VFX, characters, animation, rendering).An interest in photography and cinema.Experience creating and handling photogrammetry data.Expertise in 3D composition, form, balance, and color theory.Your PlatformBest known for iconic video game universes including Warcraft®, Overwatch®, Diablo®, and StarCraft®, Blizzard Entertainment, Inc. ( www.blizzard.com ), a division of Activision Blizzard, which was acquired by Microsoft (NASDAQ: MSFT), is a premier developer and publisher of entertainment experiences. Blizzard Entertainment has created some of the industry’s most critically acclaimed and genre-defining games over the last 30 years, with a track record that includes multiple Game of the Year awards. Blizzard Entertainment engages tens of millions of players around the world with titles available on PC via Battle.net®, Xbox, PlayStation, Nintendo Switch, iOS, and Android.Our WorldActivision Blizzard, Inc., is one of the world's largest and most successful interactive entertainment companies and is at the intersection of media, technology and entertainment. We are home to some of the most beloved entertainment franchises including Call of Duty®, World of Warcraft®, Overwatch®, Diablo®, Candy Crush ™ and Bubble Witch™. Our combined entertainment network delights hundreds of millions of monthly active users in 196 countries, making us the largest gaming network on the planet!Our ability to build immersive and innovative worlds is only enhanced by diverse teams working in an inclusive environment . We aspire to have a culture where everyone can thrive in order to connect and engage the world through epic entertainment. We provide a suite of benefits that promote physical, emotional and financial well-being for ‘Every World’ - we’ve got our employees covered!The videogame industry and therefore our business is fast-paced and will continue to evolve. 
As such, the duties and responsibilities of this role may be changed as directed by the Company at any time to promote and support our business and relationships with industry partners.We love hearing from anyone who is enthusiastic about changing the games industry. Not sure you meet all qualifications ? Let us decide! Research shows that women and members of other under-represented groups tend to not apply to jobs when they think they may not meet every qualification, when, in fact, they often do! We are committed to creating a diverse and inclusive environment and strongly encourage you to apply.We are committed to working with and providing reasonable assistance to individuals with physical and mental disabilities. If you are a disabled individual requiring an accommodation to apply for an open position, please email your request to accommodationrequests@activisionblizzard.com . General employment questions cannot be accepted or processed here. Thank you for your interest.We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, gender identity, age, marital status, veteran status, or disability status, among other characteristics.RewardsWe provide a suite of benefits that promote physical, emotional and financial well-being for ‘Every World’ - we’ve got our employees covered! 
Subject to eligibility requirements, the Company offers comprehensive benefits including:Medical, dental, vision, health savings account or health reimbursement account, healthcare spending accounts, dependent care spending accounts, life and AD&D insurance, disability insurance;401(k) with Company match, tuition reimbursement, charitable donation matching;Paid holidays and vacation, paid sick time, floating holidays, compassion and bereavement leaves, parental leave;Mental health & wellbeing programs, fitness programs, free and discounted games, and a variety of other voluntary benefit programs like supplemental life & disability, legal service, ID protection, rental insurance, and others;If the Company requires that you move geographic locations for the job, then you may also be eligible for relocation assistance.Eligibility to participate in these benefits may vary for part time and temporary full-time employees and interns with the Company. You can learn more by visiting https://www.benefitsforeveryworld.com/ .In the U.S., the standard base pay range for this role is $80,800.00 - $149,400.00 Annual. These values reflect the expected base pay range of new hires across all U.S. locations. Ultimately, your specific range and offer will be based on several factors, including relevant experience, performance, and work location. Your Talent Professional can share this role’s range details for your local geography during the hiring process. In addition to a competitive base pay, employees in this role may be eligible for incentive compensation. Incentive compensation is not guaranteed. While we strive to provide competitive offers to successful candidates, new hire compensation is negotiable. Create Your Profile — Game companies can contact you with their relevant job openings. Apply
  • Senior Environment Artist - World at Activision

    As a member of the project, you'll help take the studio and our game from great to even better – with a focus on inclusivity, mutual respect (for ourselves and our players), thoughtfulness, empathy, professionalism, and collaboration.

    You'll help us by…
    - Collaborating with level designers and the art director to discuss and understand game intentions and vision
    - Taking ownership of a particular part of the game (i.e. a large part of the environment)
    - Participating in the ideation of locations/levels by performing research (collecting written descriptions and visual references)
    - Crafting compelling visual narratives that reinforce and enhance the game's setting, story, and gameplay elements
    - Assessing level scope with your immediate supervisor to establish an achievable production schedule
    - Building the rough map of the level with the level designer to show the preliminary graphic intentions and gameplay, and ensuring they are approved
    - Setting up placeholders and submitting prop and material requests to internal or OS teams, then following up on the progress and quality of the work during its development
    - Creating a variety of 3D models (architecture, foliage, props) from photo references and concept art
    - Building (set dressing, editing, optimizing) realistic, detailed 3D environments from photo reference and concept
    - Assisting in troubleshooting artistic and technical issues

    You're awesome because you have…
    - A solid portfolio demonstrating focus and passion for 3D environment and prop creation
    - At least 8 years of experience creating environment or prop art in AAA game development, with at least one AAA title connected to your name
    - Excellent artistic and design skills, including a keen eye for composition, color theory, and visual storytelling
    - Proficiency in industry-standard 3D modeling and texturing software (e.g., Maya, 3ds Max, Blender, ZBrush, Substance Painter/Designer)
    - Experience working in modern game engines, with an awareness of game budgets, limits, and memory constraints
    - The ability to consistently resolve issues from a visual, production, and technical perspective
    - A basic ability to communicate visually using traditional media
    - Effective communication and collaboration skills, with the ability to work well in a team environment
    - The ability to adapt to evolving project needs and work under tight deadlines

    Bonus points for…
    - Experience working in or developing procedural workflows via tools like Houdini or UPS
    - Experience with photogrammetry or other scanning techniques and workflows

    Not sure you meet all the qualifications? Let us decide! Research shows that women and members of other under-represented groups tend not to apply to jobs when they think they may not meet every qualification when, in fact, they often do. At Activision, we are committed to creating a diverse and inclusive environment and strongly encourage you to apply.

    Our World
    At Activision, we strive to create the most iconic brands in gaming and entertainment. We're driven by our mission to deliver unrivaled gaming experiences for the world to enjoy, together. We are home to some of the most beloved entertainment franchises, including Call of Duty®, Crash Bandicoot™, Tony Hawk's™ Pro Skater™, and Guitar Hero®. As a leading worldwide developer, publisher and distributor of interactive entertainment and products, our "press start" is simple: delight hundreds of millions of players around the world with innovative, fun, thrilling, and engaging entertainment experiences.

    We're not just looking back at our decades-long legacy; we're forging ahead to keep advancing gameplay with some of the most popular titles and sophisticated technology in the world. We have bold ambitions to create the most inclusive company, as we know our success comes from the passionate, creative, and diverse teams within our organization.

    We're in the business of delivering fun and unforgettable entertainment for our player community to enjoy. And our future opportunities have never been greater – this could be your opportunity to level up. Ready to Activate Your Future?