• Hungry Bacteria Hunt Their Neighbors With Tiny, Poison-Tipped Harpoons

    Starving bacteria (cyan) use a microscopic harpoon—called the Type VI secretion system—to stab and kill neighboring cells (magenta). The prey burst, turning spherical and leaking nutrients, which the killers then use to survive and grow. (Image Credit: Glen D'Souza/ASU/Screen shot from video)
    Bacteria are bad neighbors. And we’re not talking noisy, never-take-out-the-trash bad neighbors. We’re talking has-a-harpoon-gun-and-points-it-at-you bad neighbors. According to a new study in Science, some bacteria hunt nearby bacterial species when they’re hungry. Using a special weapon system called the Type VI Secretion System (T6SS), these bacteria shoot, spill, and then absorb the nutrients from the microbes they harpoon.
    “The punchline is: When things get tough, you eat your neighbors,” said Glen D’Souza, a study author and an assistant professor at Arizona State University, according to a press release. “We’ve known bacteria kill each other, that’s textbook. But what we’re seeing is that it’s not just important that the bacteria have weapons to kill, but they are controlling when they use those weapons specifically for situations to eat others where they can’t grow themselves.”
    According to the study authors, the research doesn’t just have implications for bacterial neighborhoods; it also has implications for human health and medicine. By harnessing these bacterial weapons, it may be possible to build better targeted antibiotics, designed to overcome antibiotic resistance.
    Ruthless Bacteria Use Harpoons
    Researchers have long known that some bacteria can be ruthless, using weapons like the T6SS to clear out their competition. A nasty tool, the T6SS is essentially a tiny harpoon gun with a poison-tipped needle. When a bacterium shoots the weapon into another bacterium from a separate species, the needle pierces the microbe without killing it. Then it injects toxins that cause the microbe’s internal nutrients to spill out.
    Until now, researchers thought that this weapon simply helped bacteria eliminate their competition for space and for food. But after watching bacteria use the T6SS to attack their neighbors when food was scarce, the study authors concluded that these tiny harpooners use the weapon not only to remove rivals, but also to consume their competitors’ leaked nutrients.
    “Watching these cells in action really drives home how resourceful bacteria can be,” said Astrid Stubbusch, another study author and a researcher who worked on the study while at ETH Zurich, according to the press release. “By slowly releasing nutrients from their neighbors, they maximize their nutrient harvesting when every molecule counts.”
    Absorbing Food From Neighbors
    To show that the bacteria used this system to eat when there was no food around, the study authors compared their attacks in both nutrient-rich and nutrient-poor environments. When supplied with ample resources, the bacteria used their harpoons to kill their neighbors quickly, with the released nutrients leaking out and dissolving immediately. But when resources were few and far between, they used their harpoons to kill their neighbors slowly, with the nutrients seeping out and sticking around.
    “This difference in dissolution time could mean that the killer cells load their spears with different toxins,” D’Souza said in another press release. While one toxin could eliminate the competition for space and for food when nutrients are available, another could create a food source, allowing bacteria to “absorb as many nutrients as possible” when sustenance is in short supply.
    All of this makes the weapon system more than ruthless; it is also smart, and important to some species’ survival. When bacteria with an intact T6SS were put in an environment without food, they survived on nutrients spilled from their neighbors. But when bacteria whose T6SS had been genetically disabled were placed in the same environment, they died, because their ability to find food in their neighbors had been “turned off.”
    Harnessing Bacterial Harpoons
    According to the study authors, the T6SS is widely used by bacteria, both in and outside the lab. “It’s present in many different environments,” D’Souza said in one of the press releases. “It’s operational and happening in nature, from the oceans to the human gut.”
    The study authors add that their research could change the way we think about bacteria and could help in our fight against antibiotic resistance. In fact, the T6SS could one day serve as a foundation for targeted drug delivery systems, which could mitigate the development of broader bacterial resistance to antibiotics. Before that can happen, however, researchers have to learn more about bacterial harpoons, and about when and how bacteria use them, both to beat and eat their neighbors.
    Sam Walters is a journalist covering archaeology, paleontology, ecology, and evolution for Discover, along with an assortment of other topics. Before joining the Discover team as an assistant editor in 2022, Sam studied journalism at Northwestern University in Evanston, Illinois.
  • Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’

    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One.
    By Jay Stobie
    Visual effects supervisor John Knoll (right) confers with modelmakers Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).
    Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact (1996) and Rogue One: A Star Wars Story (2016) propelled their respective franchises to new heights. While Star Trek Generations (1994) welcomed Captain Jean-Luc Picard’s (Patrick Stewart) crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk (William Shatner). Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope (1977), it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story (2018), The Mandalorian (2019-23), Andor (2022-25), Ahsoka (2023), The Acolyte (2024), and more.
    The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Now ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif.
    A final frame from the Battle of Scarif in Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
    A Context for Conflict
    In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design.
    On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon their space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. The unsanctioned mission to Scarif with Jyn Erso (Felicity Jones) and Cassian Andor (Diego Luna), and the sudden need to take down the planet’s shield gate, propel the Rebel Alliance fleet into rushing to their rescue with everything from their flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival.
    From Physical to Digital
    By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models (many of which were built by ILM) for its features was gradually giving way to innovative computer graphics (CG) models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001.
    Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, the studio considered filming physical miniatures for certain ship-related shots in the latter film. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com.
    However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.”
    John Knoll (second from left) confers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).
    Legendary Lineages
    In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.”
    Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet.
    While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from [equipment vendor] VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.”
    The U.S.S. Enterprise-E in Star Trek: First Contact (Credit: Paramount).
    Familiar Foes
    To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation (1987) and Star Trek: Deep Space Nine (1993), creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin.
    As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. It was as accurate as it was possible to be as a reproduction of the original model.”
    Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back, respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.”
    A final frame from Rogue One: A Star Wars Story.
    Forming Up the Fleets
    In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics.
    Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs, live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples. These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’s personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography…
    Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized.
    Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story.
    Tough Little Ships
    The Federation and Rebel Alliance each deployed “tough little ships” in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!
    Exploration and Hope
    The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire.
    The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope?

    Jay Stobie is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
    #looking #back #two #classics #ilm
    Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’
    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One. By Jay Stobie Visual effects supervisor John Knollconfers with modelmakers Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact. Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contactand Rogue One: A Star Wars Storypropelled their respective franchises to new heights. While Star Trek Generationswelcomed Captain Jean-Luc Picard’screw to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk. Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope, it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story, The Mandalorian, Andor, Ahsoka, The Acolyte, and more. The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Currently, ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif. A final frame from the Battle of Scarif in Rogue One: A Star Wars Story. A Context for Conflict In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design. On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon their space takes its defenses by surprise. 
Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. The unsanctioned mission to Scarif with Jyn Ersoand Cassian Andorand the sudden need to take down the planet’s shield gate propels the Rebel Alliance fleet into rushing to their rescue with everything from their flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival. From Physical to Digital By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical modelsfor its features was gradually giving way to innovative computer graphicsmodels, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001. Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, they considered filming physical miniatures for certain ship-related shots in the latter film. ILM considered filming physical miniatures for certain ship-related shots in Rogue One. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com. However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.” John Knollconfers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact. Legendary Lineages In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. 
ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.” Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet. While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got fromVER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.” The U.S.S. Enterprise-E in Star Trek: First Contact. Familiar Foes To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generationand Star Trek: Deep Space Nine, creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin. As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. 
Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. It was as accurate as it was possible to be as a reproduction of the original model.” Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back, respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.” A final frame from Rogue One: A Star Wars Story. Forming Up the Fleets In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics. Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs, live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples. These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’spersonal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. 
And, while we’re on the subject of intricate starship maneuvers and space-based choreography… Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized. Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story. Tough Little Ships The Federation and Rebel Alliance each deployed “tough little ships”in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001! Exploration and Hope The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire. The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. 
There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope? – Jay Stobieis a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy. #looking #back #two #classics #ilm
    Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’
    www.ilm.com
    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One. By Jay Stobie Visual effects supervisor John Knoll (right) confers with modelmakers Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM). Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact (1996) and Rogue One: A Star Wars Story (2016) propelled their respective franchises to new heights. While Star Trek Generations (1994) welcomed Captain Jean-Luc Picard’s (Patrick Stewart) crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk (William Shatner). Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope (1977), it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story (2018), The Mandalorian (2019-23), Andor (2022-25), Ahsoka (2023), The Acolyte (2024), and more. The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Currently, ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif. A final frame from the Battle of Scarif in Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm). A Context for Conflict In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design. On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. 
The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon its space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. The unsanctioned mission to Scarif with Jyn Erso (Felicity Jones) and Cassian Andor (Diego Luna) and the sudden need to take down the planet’s shield gate propels the Rebel Alliance fleet into rushing to their rescue with everything from their flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival.

From Physical to Digital

By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models (many of which were built by ILM) for its features was gradually giving way to innovative computer graphics (CG) models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001.

Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, the studio still considered filming physical miniatures for certain ship-related shots in the latter film. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com. However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.”

John Knoll (second from left) confers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).

Legendary Lineages

In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic.
Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.”

Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet.

While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from [equipment vendor] VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.”

The U.S.S. Enterprise-E in Star Trek: First Contact (Credit: Paramount).

Familiar Foes

To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation (1987) and Star Trek: Deep Space Nine (1993), creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin.
As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. It was as accurate as it was possible to be as a reproduction of the original model.”

Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back (1980), respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.”

A final frame from Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).

Forming Up the Fleets

In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics.

Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs (the MC75 cruiser Profundity and U-wings), live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples (Nebulon-B frigates, X-wings, Y-wings, and more). These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight.
As Princess Leia Organa’s (Carrie Fisher and Ingvild Deila) personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography…

Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized.

Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).

Tough Little Ships

The Federation and Rebel Alliance each deployed “tough little ships” (an endearing description Commander William T. Riker [Jonathan Frakes] bestowed upon the U.S.S. Defiant in First Contact) in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!

Exploration and Hope

The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire. The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff.
While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope? – Jay Stobie (he/him) is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
  • Alibaba Qwen Team Releases Qwen3-Embedding and Qwen3-Reranker Series – Redefining Multilingual Embedding and Ranking Standards

    www.marktechpost.com
    Text embedding and reranking are foundational to modern information retrieval systems, powering applications such as semantic search, recommendation systems, and retrieval-augmented generation (RAG). However, current approaches often face key challenges—particularly in achieving both high multilingual fidelity and task adaptability without relying on proprietary APIs. Existing models frequently fall short in scenarios requiring nuanced semantic understanding across multiple languages or domain-specific tasks like code retrieval and instruction following. Moreover, most open-source models either lack scale or flexibility, while commercial APIs remain costly and closed.
    Qwen3-Embedding and Qwen3-Reranker: A New Standard for Open-Source Embedding
    Alibaba’s Qwen Team has unveiled the Qwen3-Embedding and Qwen3-Reranker Series—models that set a new benchmark in multilingual text embedding and relevance ranking. Built on the Qwen3 foundation models, the series includes variants in 0.6B, 4B, and 8B parameter sizes and supports a wide range of languages (119 in total), making it one of the most versatile and performant open-source offerings to date. These models are now open-sourced under the Apache 2.0 license on Hugging Face, GitHub, and ModelScope, and are also accessible via Alibaba Cloud APIs.
    These models are optimized for use cases such as semantic retrieval, classification, RAG, sentiment analysis, and code search—providing a strong alternative to existing solutions like Gemini Embedding and OpenAI’s embedding APIs.

    Technical Architecture
    Qwen3-Embedding models adopt a dense transformer-based architecture with causal attention, producing embeddings by extracting the hidden state corresponding to the [EOS] token. Instruction-awareness is a key feature: input queries are formatted as {instruction} {query}<|endoftext|>, enabling task-conditioned embeddings. The reranker models are trained with a binary classification format, judging document-query relevance in an instruction-guided manner using a token likelihood-based scoring function.
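    To make that recipe concrete, here is a minimal sketch of instruction-formatted, last-token embedding extraction through the standard Hugging Face transformers API. The repo id, instruction string, and pooling details are illustrative assumptions, not the team's published code.

```python
# Hedged sketch: last-token ([EOS]) pooling with instruction-formatted queries.
# MODEL_ID is an assumed checkpoint name; adjust to the released repo.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "Qwen/Qwen3-Embedding-0.6B"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, padding_side="left")
model = AutoModel.from_pretrained(MODEL_ID).eval()

def embed(texts, instruction=""):
    # "{instruction} {query}<|endoftext|>" — the format described above;
    # documents are typically embedded without an instruction prefix.
    prompts = [f"{instruction} {t}<|endoftext|>".strip() for t in texts]
    batch = tokenizer(prompts, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # (batch, seq_len, dim)
    # With left padding, the final position holds the end-of-text state.
    return F.normalize(hidden[:, -1], p=2, dim=1)

docs = embed(["NEXRAD radars detect storm rotation.",
              "SLERP interpolates between model checkpoints."])
query = embed(["how do radars see tornadoes?"],
              instruction="Given a web search query, retrieve relevant passages:")
print(query @ docs.T)  # cosine similarities; higher = more relevant
```

    Because the embeddings are L2-normalized, a plain dot product yields cosine similarity, which is all a simple retrieval loop needs.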

    The models are trained using a robust multi-stage training pipeline:

    Large-scale weak supervision: 150M synthetic training pairs generated using Qwen3-32B, covering retrieval, classification, STS, and bitext mining across languages and tasks.
    Supervised fine-tuning: 12M high-quality data pairs are selected using cosine similarity (>0.7), fine-tuning performance in downstream applications.
    Model merging: Spherical linear interpolation (SLERP) of multiple fine-tuned checkpoints ensures robustness and generalization (see the sketch after this list).
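    The write-up names spherical linear interpolation but not the exact merge recipe, so the following toy sketch shows only the core operation: interpolating along the great circle between two checkpoints' flattened parameters, with a linear fallback when they are nearly parallel.

```python
# Toy SLERP merge over two fine-tuned checkpoints (the team's actual
# weights and checkpoint count are not specified in the article).
import torch

def slerp(w1: torch.Tensor, w2: torch.Tensor, t: float) -> torch.Tensor:
    """Interpolate along the great circle between two weight tensors."""
    v1, v2 = w1.flatten().float(), w2.flatten().float()
    cos = torch.clamp(
        torch.dot(v1, v2) / (v1.norm() * v2.norm() + 1e-8), -1.0, 1.0)
    omega = torch.acos(cos)        # angle between the parameter vectors
    if omega < 1e-4:               # nearly parallel: fall back to LERP
        out = (1 - t) * v1 + t * v2
    else:
        so = torch.sin(omega)
        out = (torch.sin((1 - t) * omega) / so) * v1 \
            + (torch.sin(t * omega) / so) * v2
    return out.view_as(w1).to(w1.dtype)

def merge_state_dicts(sd_a: dict, sd_b: dict, t: float = 0.5) -> dict:
    # Apply SLERP parameter-by-parameter across two checkpoints.
    return {name: slerp(sd_a[name], sd_b[name], t) for name in sd_a}

# merged = merge_state_dicts(torch.load("ckpt_a.pt"), torch.load("ckpt_b.pt"))
# ("ckpt_a.pt" / "ckpt_b.pt" are hypothetical filenames.)
```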

    This synthetic data generation pipeline enables control over data quality, language diversity, task difficulty, and more—resulting in a high degree of coverage and relevance in low-resource settings.
    Performance Benchmarks and Insights
    The Qwen3-Embedding and Qwen3-Reranker series demonstrate strong empirical performance across several multilingual benchmarks.

    On MMTEB (216 tasks across 250+ languages), Qwen3-Embedding-8B achieves a mean task score of 70.58, surpassing Gemini and GTE-Qwen2 series.
    On MTEB (English v2): Qwen3-Embedding-8B reaches 75.22, outperforming other open models including NV-Embed-v2 and GritLM-7B.
    On MTEB-Code: Qwen3-Embedding-8B leads with 80.68, excelling in applications like code retrieval and Stack Overflow QA.

    For reranking:

    Qwen3-Reranker-0.6B already outperforms Jina and BGE rerankers.
    Qwen3-Reranker-8B achieves 81.22 on MTEB-Code and 72.94 on MMTEB-R, marking state-of-the-art performance.
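    The token-likelihood scoring mentioned earlier can be sketched as a binary yes/no judgment read off the next-token logits. The repo id and the judge prompt template below are assumptions for illustration, not the published template.

```python
# Hedged sketch of instruction-guided, token-likelihood reranking:
# score a (query, document) pair by P("yes") under a yes/no judgment prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen3-Reranker-0.6B"  # assumed checkpoint name

tok = AutoTokenizer.from_pretrained(MODEL_ID)
lm = AutoModelForCausalLM.from_pretrained(MODEL_ID).eval()

def relevance(query: str, doc: str) -> float:
    prompt = ("Judge whether the Document answers the Query. "
              f"Answer yes or no.\nQuery: {query}\nDocument: {doc}\nAnswer:")
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = lm(**ids).logits[0, -1]  # logits for the next token
    # Single-token ids for the two judgments; exact token boundaries vary
    # by vocabulary, so real usage should verify the tokenization.
    yes_id = tok.encode(" yes", add_special_tokens=False)[0]
    no_id = tok.encode(" no", add_special_tokens=False)[0]
    p = torch.softmax(logits[[yes_id, no_id]], dim=0)
    return p[0].item()  # P("yes") as the relevance score

print(relevance("how do radars see tornadoes?",
                "Doppler radar measures the velocity of hydrometeors."))
```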

    Ablation studies confirm the necessity of each training stage. Removing synthetic pretraining or model merging led to significant performance drops (up to 6 points on MMTEB), emphasizing their contributions.
    Conclusion
    Alibaba’s Qwen3-Embedding and Qwen3-Reranker Series present a robust, open, and scalable solution to multilingual and instruction-aware semantic representation. With strong empirical results across MTEB, MMTEB, and MTEB-Code, these models bridge the gap between proprietary APIs and open-source accessibility. Their thoughtful training design—leveraging high-quality synthetic data, instruction-tuning, and model merging—positions them as ideal candidates for enterprise applications in search, retrieval, and RAG pipelines. By open-sourcing these models, the Qwen team not only pushes the boundaries of language understanding but also empowers the broader community to innovate on top of a solid foundation.

    Check out the Paper, Technical details, Qwen3-Embedding and Qwen3-Reranker. All credit for this research goes to the researchers of this project.
    Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views.
  • Why isn’t an atom’s nucleus round?

    www.livescience.com
    The nuclei of atoms are often portrayed as round in textbooks, but it turns out they're rarely spherical.
  • How Doppler Radar Lets Meteorologists Predict Weather and Save Lives

    www.scientificamerican.com
    May 30, 2025 | 6 min read

Inside the Lifesaving Power of Doppler Weather Radar

Doppler radar is one of the most revolutionary and lifesaving tools of modern meteorology, which has experts worried about outages because of recent staffing cuts and conspiracy theories. By Andrea Thompson, edited by Dean Visser (Photo: Mfotophile/Getty Images)

Outside every National Weather Service (NWS) office around the U.S. stands what looks like an enormous white soccer ball, perched atop metal scaffolding several stories high. These somewhat plain spheres look as ho-hum as a town water tower, but tucked inside each is one of modern meteorology’s most revolutionary and lifesaving tools: Doppler radar.

The national network of 160 high-resolution radars, installed in 1988 and updated in 2012, sends out microwave pulses that bounce off raindrops or other precipitation to help forecasters see what is falling and how much—providing crucial early information about events ranging from flash floods to blizzards. And the network is especially irreplaceable when it comes to spotting tornadoes; it has substantially lengthened warning times and reduced deaths. Doppler radar has “really revolutionized how we’ve been able to issue warnings,” says Ryan Hanrahan, chief meteorologist of the NBC Connecticut StormTracker team.

But now meteorologists and emergency managers are increasingly worried about what might happen if any of these radars go offline, whether because of cuts to the NWS made by the Trump administration or threats from groups that espouse conspiracy theories about the radars being used to control the weather. Losing radar capabilities would “take us back in time by four decades,” says Jana Houser, a tornado researcher at the Ohio State University. If they go down, “there’s no way we’re going to be effective at storm warnings.”

How Doppler radars work

The NWS installations form a network called the Next Generation Weather Radar, or NEXRAD. Inside each giant white sphere is a device that looks like a larger version of a home satellite TV dish, with a transmitter that emits pulses in the microwave region of the electromagnetic spectrum. Those pulses bounce off raindrops, snowflakes, hailstones—what meteorologists collectively call hydrometeors—and back to the dish antenna. (The pulses also sometimes bounce off bats, birds and even moving trains, which yield characteristic radar patterns that experts can usually identify.)

[Infographic credit: Amanda Montañez]

The power of the returning signals lets experts create a picture of size, shape and intensity of any precipitation—and this is what you see on a phone app’s radar map or a TV broadcast. But NEXRAD can do much, much more than show how hard it’s raining. Within its sphere, each unit rotates and scans up and down through the sky, helping forecasters see what is happening at multiple levels of a storm system. These vertical profiles can show, for example, whether a tornado is forming or a storm is creating a downburst—a rapid downward blast of wind. “Doppler radar basically allows us to see in the clouds,” Hanrahan says.

And then there’s the “Doppler” part itself. The name refers to a phenomenon that’s familiar to many, thanks to the electromagnetic waves’ acoustic counterpart.
And then there’s the “Doppler” part itself. The name refers to a phenomenon that’s familiar to many, thanks to the electromagnetic waves’ acoustic counterpart. We’ve all experienced this, often most obviously when we hear an emergency vehicle siren pass nearby: the pitch increases as the vehicle gets closer and decreases as it moves away. Similarly, the returning radar bounce from a rain droplet or piece of tornadic debris that is moving toward the emitter will have a shorter wavelength than the pulse that was sent out, and the signal from an object moving away from the radar will have a longer wavelength. This allows the radar to efficiently distinguish the tight circulation of a tornado.

These two images show how dual polarization helps NWS forecasters detect a tornado that is producing damage. The left image shows how the Doppler radar can detect rotation: between the two yellow arrows, the red color indicates outbound wind, while the green color indicates inbound wind, relative to the location of the radar. The right image shows how dual-polarization information helps detect debris picked up by the tornado. (Credit: NOAA)

The nation’s radar system was upgraded in 2012 to include what is called dual polarization. This means the signal has both vertically and horizontally oriented wavelengths, providing information about precipitation in more than one dimension. “A drizzle droplet is almost perfectly spherical, so it returns the same amount of power in the horizontal and in the vertical,” Hanrahan says, whereas giant drops look almost like “hamburger buns” and so send back more power in the horizontal than the vertical.

Are Doppler radars dangerous? Can they affect the weather?

Doppler radars do not pose any danger to people, wildlife or structures—and they cannot affect the weather. Along the electromagnetic spectrum, it is the portions with shorter wavelengths, such as gamma rays and ultraviolet radiation, that can readily damage the human body, because their wavelengths are the right size to interact with and damage DNA or our cells. Doppler radars, by contrast, emit pulses in wavelengths about the size of a baseball.

Being hit by extremely concentrated microwave radiation could be harmful; this is why microwave ovens have mesh screens that keep the rays from escaping. Similarly, you wouldn’t want to stand directly in front of a radar microwave beam. Military radar technicians found this out years ago when working on radars under operation, University of California, Los Angeles, climate scientist Daniel Swain said during one of his regular YouTube talks. They “had experiences like the candy bar in their pocket instantly melting and then feeling their skin getting really hot,” he said.

Similar to how a microwave oven works, when the microwave signal from a radar hits a hydrometeor, the water molecules vibrate and so generate heat because of friction and reradiate some of the received energy, says Cynthia Fay, who serves as a focal point for the National Weather Service’s Radar Operations Center. But “microwave radiation is really not very powerful, and the whole point is that if you stand more than a couple dozen feet away from the dome it’s not even really going to affect your body, let alone the global atmosphere,” Swain adds.

At the radar’s antenna, the average power is about 23.5 megawatts (MW), Fay says. (A weak or moderate thunderstorm may generate about 18 MW in about an hour.) But the energy from the radar signal dissipates very rapidly with distance: at just one kilometer from the radar, the power is 0.0000019 MW, and at the radar’s maximum range of 460 kilometers, it is 8.8 × 10⁻¹² MW, Fay says.
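The relationships in the last few paragraphs are compact enough to check directly. Below is a minimal Python sketch, not NWS code: the 10 cm wavelength is an assumption consistent with “about the size of a baseball” (NEXRAD operates in the S band), and the power figures are the ones Fay quotes above.

    import math

    C = 299_792_458.0  # speed of light (m/s)

    def echo_range_km(round_trip_s: float) -> float:
        """Distance to a target from the pulse's round trip: out and back, so divide by 2."""
        return C * round_trip_s / 2.0 / 1000.0

    def radial_velocity_ms(doppler_shift_hz: float, wavelength_m: float = 0.10) -> float:
        """Speed toward or away from the radar from the echo's Doppler shift.
        The factor of 2 appears because the shift happens on the way out and back."""
        return doppler_shift_hz * wavelength_m / 2.0

    def zdr_db(power_horizontal: float, power_vertical: float) -> float:
        """Dual-polarization differential reflectivity: ~0 dB for spherical drizzle,
        positive for flattened 'hamburger bun' drops that return more horizontal power."""
        return 10.0 * math.log10(power_horizontal / power_vertical)

    def power_at_range_mw(p_ref_mw: float, r_ref_km: float, r_km: float) -> float:
        """Inverse-square spreading of the beam's power with distance."""
        return p_ref_mw * (r_ref_km / r_km) ** 2

    print(echo_range_km(1e-3))                    # 1 ms round trip -> target ~150 km away
    print(radial_velocity_ms(500.0))              # 500 Hz shift    -> 25 m/s radial speed
    print(power_at_range_mw(1.9e-6, 1.0, 460.0))  # ~9.0e-12 MW, close to Fay's 8.8e-12 MW

The last line shows that the 1 km and 460 km figures quoted above are consistent with plain inverse-square spreading of the beam.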
“Once you’re miles away, it’s just really not a dangerous amount” of energy, Swain said in his video.

A supercell thunderstorm that produced an F4 tornado near Meriden, KS, in May 1960, as seen from the WSR-3 radar in Topeka (left), and a supercell thunderstorm that produced an EF5 tornado in Moore, OK, in May 2013, as seen from a modern Doppler weather radar near Oklahoma City (right). (Credit: NOAA)

And Doppler radars spend most of their time listening for returns. According to the NWS, for every hour of operation, a radar may spend as little as seven seconds sending out pulses.

The idea that Doppler radar can control or affect the weather is “a long-standing conspiracy [theory] that has existed really for decades but has kind of accelerated in recent years,” Swain said in his video. It has resurfaced recently with threats to the National Oceanic and Atmospheric Administration radar system from an antigovernment militia group, as first reported by CNN. The Washington Post reported that the group’s founder said its members were carrying out “attack simulations” on sites in order to later destroy the radars, which the group believes are “weather weapons,” according to an internal NOAA e-mail. NOAA has advised radar technicians at the NWS’s offices to exercise caution and work in teams when going out to service radars—and to notify local law enforcement of any suspicious activity.

“NOAA is aware of recent threats against NEXRAD weather radar sites and is working with local and other authorities in monitoring the situation closely,” an NWS spokesperson wrote in response to a request for comment from Scientific American.

What happens if weather radars go offline?

NOAA’s radars have been on duty 24 hours a day, seven days a week and 365 days a year since 1988 (with brief downtimes for maintenance and upgrades). “It’s amazing what workhorses these radars have been,” Hanrahan says.

The image on the left shows a reflectivity radar image of a supercell thunderstorm that produced several tornadoes on April 19, 2023, near Oklahoma City, OK; the hook shape present often indicates rotation within the storm. The image on the right shows velocity information that corresponds to the reflectivity image: very strong inbound winds (green colors) are next to very strong outbound winds (bright red/yellow colors), and this very strong inbound/outbound “couplet” indicates the very strong rotation of a tornado. (Credit: NOAA)

But the radars do require periodic maintenance because of all the large moving parts needed to operate them. And with Trump administration cuts to NOAA staffing and freezes on some spending, “we just got rid of a lot of the radar maintenance technicians, and we got rid of the budget to repair a lot of these sites,” Swain said in his video. “Most of these are functioning fine right now. The question is: What happens once they go down, once they need a repair?”

It is this outage possibility that most worries weather experts, particularly if the breakdowns occur during any kind of severe weather. “Radars are key instruments in issuing tornado warnings,” the Ohio State University’s Houser says. “If a radar goes down, we’re basically down as to what the larger picture is.”

And for much of the country—particularly in the West—there is little to no overlap in the areas that each radar covers, meaning other sites would not be able to step in if a neighboring radar is out.
Hanrahan says the information provided by the radars is irreplaceable, and the 2012 upgrades mean “we don’t even need to have eyes on a tornado now to know that it’s happening. It’s something that I think we take for granted now.”
  • CASTING A BLACK MIRROR ON USS CALLISTER: INTO INFINITY
    www.vfxvoice.com

    By TREVOR HOGG

    Images courtesy of Netflix.

    Unlike North America, where episodes tend to be no longer than an hour, it is not uncommon in Britain to have feature-length episodes, which explains why the seasons are shorter. Season 7 of Black Mirror has six episodes, and the first sequel for the Netflix anthology series that explores the dark side of technology has a run time of 90 minutes. “USS Callister: Into Infinity” comes eight years after “USS Callister” won four Emmys as part of Season 4, and it expands the tale, in which digital clones illegally constructed from human DNA struggle to survive in a multiplayer online video game environment. Returning creative talent includes filmmaker Toby Haynes, writers Charlie Brooker and William Bridges, and cast members Cristin Milioti, Jimmi Simpson, Osy Ikhile, Milanka Brooks, Paul Raymond and Jesse Plemons. Stepping into the Star Trek-meets-The Twilight Zone proceedings for the first time is VFX Supervisor James MacLachlan, who previously handled the digital augmentation for Ted Lasso.

    “[For the planet where the digital clone of James Walton is found] … We got on a train and went to the middle of Anglesey [island in Wales] to a copper mine. The copper mine was absolutely stunning. … You’re a good 50 meters down, and there were little tunnels and caves where over the years things have been mined and stopped. … It was shot there, and we augmented some of it to help sell the fact that it wasn’t Earth. We put in these big beautiful arches of rock, Saturn-like planets up in the sky, a couple of moons, and clean-up of giveaways.”
    —James MacLachlan, Visual Effects Supervisor

    Taking advantage of the reflective quality of the bridge set was the LED wall utilized for the main viewscreen.

    Dealing with a sequel to a critically-acclaimed episode was not a daunting task. “It’s almost like I have a cheat code for what we need to do, which I quite like because there’s a language from the previous show, so we have a certain number of outlines and guidelines,” MacLachlan states. “But because this was set beyond where the previous one was, it’s a different kind of aesthetic. I didn’t feel the pressure.” Few assets could be reused outright. “We were lucky that the company that previously did the USS Callister ship packaged it out neatly for us, and we were able to take that model; however, it doesn’t fit in pipelines anymore in the same way with the layering and materials. It was different visual effects vendors as well. Union VFX was smashing out all our new ships, planets and the Heart of Infinity. There was a significant amount of resources put into new content.” Old props were helpful. “The Metallica ship that shows up in this episode is actually the Valdack ship turned backwards, upside down, re-textured and re-modeled off a prop I happened to wander past and saw in Charlie Brooker’s and Jessica Rhoades’ office,” MacLachlan notes.

    Greenscreens were placed outside of the set windows for the USS Callister.

    “USS Callister: Into Infinity” required 669 visual effects shots while the other five episodes totaled 912. “Josie Henwood, the Visual Effects Producer, sat down with a calculator and did an amazing job of making sure that the budget distribution was well-weighted for each of the scripts,” MacLachlan remarks. “We shot this one third and posted it all the way to the end, so it overlapped a lot with some of the others. It was almost an advantage because we could work out where we were at with the major numbers and balance things out. It was a huge benefit that Toby had directed ‘USS Callister’. We had conversations about how we could approach the visual effects and make sure they sat within the budget and timeframe.” Working across the series were Crafty Apes, Jam VFX, Jellyfish Pictures, Magic Lab, One of Us, Stargate Studios, Terraform Studios, Union VFX, and Bigtooth Studios. “We had a spectrum of vendors that were brilliant and weighted so Union VFX took the heavy load on ‘USS Callister: Into Infinity,’ One of Us on ‘Eulogy’ and Jam VFX on ‘Hotel Reverie’ while the other vendors were used for all the shows.”

    “[W]e had a matte painter at Territory Studio create some generic space looks, like exteriors of planets, in pre-production. We gave those to Union VFX who animated them so the stars gently drifted and the planets would slowly rotate. Everything in that set was chrome, so no matter where the camera was pointing, when we went to hyperspace or outside planets or in space, there were all of these beautiful reflections all over the surfaces of the USS Callister. What I did not anticipate is when the actors came onto the set not knowing it was going to be an LED wall. Their reaction was enough to say that we had made the right choice.”
    —James MacLachlan, Visual Effects Supervisor

    Miranda Jones looked after the production design and constructed a number of practical sets for the different sections of the USS Callister.

    A clever visual effect was deployed when a digital clone of Robert Daly (Jesse Plemons) is in his garage crafting a new world for Infinity, which transforms from a horizontal landscape into a spherical planetary form. “A lot of it was based off current UI when you use your phone and scroll,” MacLachlan remarks. “It is weighted and slows down through an exponential curve, so we tried to do that with the rotational values. We also looked at people using HoloLenses and Minority Report with those gestural moments. It has a language that a number of people are comfortable with, and we have gotten there with AR.” Union VFX spent a lot of time working on the transition. “They had three levels of detail for each of the moments. We had the mountain range and talked about the Himalayas. Union VFX had these moments where they animated between the different sizes and scales of each of these models. The final one is a wrap and reveal to the sphere, so it’s like you’re scaling down and out of the moment, then it folds out from itself. It was really nice.”
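    The “weighted, slows down through an exponential curve” behavior MacLachlan cites is a standard easing pattern from touch interfaces. A minimal Python sketch of the idea; the decay rate, frame rate and flick speed below are arbitrary illustration values, not production numbers:

        import math

        def settle_rotation(flick_deg_per_s: float, decay_per_s: float = 3.0,
                            dt: float = 1.0 / 24.0, steps: int = 48) -> list[float]:
            """Angles over time for a 'flicked' rotation that coasts to a stop,
            like scroll inertia on a phone: the velocity decays exponentially."""
            angle, velocity, trace = 0.0, flick_deg_per_s, []
            for _ in range(steps):
                velocity *= math.exp(-decay_per_s * dt)  # the exponential slow-down
                angle += velocity * dt
                trace.append(angle)
            return trace

        # A fast flick (360 deg/s) eases out over two seconds of 24 fps frames.
        print(settle_rotation(360.0)[-1])  # final resting angle, roughly 112 degrees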

    For safety reasons, weapons were digitally thrown. “We had a 3D prop printed for the shuriken and were able to get that out in front of the camera onstage,” MacLachlan explains. “Then we decided to have it stand out more, so as [the Infinity Player] throws it, it intentionally lights up. On set we couldn’t throw anything at Cristin, so some tracking markers were put on her top where it needed to land. Then we did that in CGI. When she is pulling it off her chest with her hand, the shuriken is all CGI. Because of the shape of the shuriken, we were able to have it poke through the fingers and be visible, so it worked well. Cristin did a convincing job of yanking the shuriken out. We added some blood and increased the size of the wound on her top, which we had to do for a couple of other scenes because blood goes dark when it’s dry, so it needed to be made redder.” Nanette Cole (Cristin Milioti) throws a ceremonial knife that hits Robert Daly directly in the head. “That was a crazy one. We had the full prop on the shelf in the beginning that she picks up and throws. The art department made a second one with a cutout section that was mounted to his head. Lucy Cain [Makeup & Hair Designer] and I constructed a cage of hair clips and wire to hold it onto his head. Beyond that, we put tracking markers on his forehead, and we were able to add all of the blood. What we didn’t want to do was have too much blood and then have to remove it later. The decision was made to do the blood in post because you don’t want to be redressing it if you’re doing two or three takes; that can take a lot of time out of production.”

    “USS Callister: Into Infinity” required 669 visual effects shots.

    A digital clone of Robert Daly placed inside the game engine is responsible for creating the vast worlds found inside of Infinity.

    “We had a 3D prop printed for the shuriken [hidden hand weapon]… Then we decided to have it stand out more, so as [the Infinity Player] throws it, it intentionally lights up. On set we couldn’t throw anything at Cristin, so some tracking markers were put on her top where it needed to land. Then we did that in CGI. When she is pulling it off her chest with her hand, the shuriken is all CGI. Because of the shape of the shuriken, we were able to have it poke through the fingers and be visible… Cristin did a convincing job of yanking the shuriken out.”
    —James MacLachlan, Visual Effects Supervisor

    A cross between 2001: A Space Odyssey and Cast Away is the otherworldly planet where the digital clone of James Walton (Jimmi Simpson) is found. “We got on a train and went to the middle of Anglesey [island in Wales] to a copper mine,” MacLachlan recounts. “The copper mine was absolutely stunning. It’s not as saturated. You’re a good 50 meters down, and there were little tunnels and caves where over the years things have been mined and stopped. We found moments that worked for the different areas. It was shot there, and we augmented some of it to help sell the fact that it wasn’t Earth. We put in these big beautiful arches of rock, Saturn-like planets up in the sky, a couple of moons, and clean-up of giveaways.”

    The blue teleportation ring was practically built and digitally enhanced.

    Set pieces were LiDAR scanned. “What was interesting about the ice planet [was that] the art department built these amazing structures in the foreground and beyond that we had white drapes the whole way around, which fell off into darkness beautifully and naturally because of where the light was pulled by Stephan Pehrsson [Cinematographer],” MacLachlan states. “On top of that, there was the special effects department, which was wafting in a lot of atmospherics. Some of the atmospherics were in-camera and others were augmented to even it out and boost it in places to help the situation. We did add foreground snow. There is a big crane shot in the beginning where Unreal Engine assisted in generating some material. Then we did matte painting and set extensions beyond that to create a larger scale and cool rock shapes that were on an angle.” The jungle setting was an actual location. “That’s Black Park [in England], and because of the time of year, there are a lot of protected plants. We had a couple of moments where we weren’t allowed to walk in certain places. There is one big stunt where Nanette steps on a mine, and it explodes her back against a tree. That was a protected tree, so the whole thing was wrapped in this giant stunt mat while the stunt woman got thrown across it. Areas would be filled in with dressed plants to help the foreground, but we got most of the background in-camera. There were bits of clean-up where we spotted crew or trucks.”

    Large-scale and distinct rock shapes were placed at an angle to give the ice planet more of an alien quality.

    An exterior space shot of the USS Callister that is entirely CG.

    Twin versions of Nanette Cole and James Walton appear within the same frame. “Literally, we used every trick in the book the whole way through,” MacLachlan says. “Stephan and I went to see a motion control company that had a motion control camera on a TechnoDolly. Stephan could put it on his shoulder and record a move on a 20-foot crane. Once Stephan had done that first take, he would step away, then the motion control guys would do the same move again. You get this handheld feel through motion control rather than plotting two points and having it mechanical. You get a wide of a scene of clone Nanette in a chair and real Nanette standing in white, and you’ll notice the two Waltons in the background interacting with one another. Those shots were done on this motion control rig. We had motion control where we could plot points to make it feel like a tracking dolly. Then we also had our cameraman doing handheld moves pushing in and repeating himself. We had a wonderful double for Cristin who was excellent at mirroring what she was achieving, and they would switch and swap. You would have a shoulder or hair in the foreground in front of you, but then we would also stitch plates together that were handheld.”
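    In software terms, the record-and-repeat trick boils down to sampling the camera’s pose throughout the operator’s handheld take and replaying exactly the same samples on the next pass, so the two plates line up for compositing the twins. A toy Python sketch of that idea; the class and field names are invented for illustration and are not the TechnoDolly’s real interface:

        from dataclasses import dataclass, field

        @dataclass
        class Pose:
            t: float                                   # seconds since the take began
            position: tuple[float, float, float]       # crane-space x, y, z
            pan_tilt_roll: tuple[float, float, float]  # camera orientation, degrees

        @dataclass
        class MotionControlRig:
            """Take 1 is sampled from the operator; take 2 repeats it mechanically."""
            recorded: list[Pose] = field(default_factory=list)

            def record(self, pose: Pose) -> None:
                self.recorded.append(pose)  # called every sample during the live take

            def replay(self):
                # Step through the identical poses so both plates match frame for frame.
                yield from self.recorded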

    The USS Callister approaches the game engine situated at the Heart of Infinity.

    A homage to the fighter cockpit shots featured in the Star Wars franchise.

    USS Callister flies into the game engine while pursued by other Infinity players.

    A major story point is that the game engine is made to look complex but is in fact a façade.

    A copper mine served as the location for the planet where the digital clone of James Walton (Jimmi Simpson) is found.

    Principal photography for the jungle planet took place at Black Park in England.

    The blue skin of Elena Tulaska (Milanka Brooks) was achieved with practical makeup.

    Assisting the lighting were some cool tools such as the teleportation ring. “We had this beautiful two-meter blue ring that we were able to put on the ground and light up as people step into it,” MacLachlan remarks. “You get these lovely reflections on their visors, helmets and kits. Then we augmented the blue ring in visual effects where it was replaced with more refined edging and lighting effects that stream up from it, which assisted with the integration with the teleportation effect because of their blue cyan tones.” Virtual production was utilized for the main viewscreen located on the bridge of the USS Callister. “In terms of reflections, the biggest boon for us in visual effects was the LED wall. The last time they did the big screen in the USS Callister was a greenscreen. We got a small version of an LED screen when the set was being built and did some tests. Then we had a matte painter at Territory Studio create some generic space looks, like exteriors of planets, in pre-production. We gave those to Union VFX who animated them so the stars gently drifted and the planets would slowly rotate. Everything in that set was chrome, so no matter where the camera was pointing, when we went to hyperspace or outside planets or in space, there were all of these beautiful reflections all over the surfaces of the USS Callister. What I did not anticipate is when the actors came onto the set not knowing it was going to be an LED wall. Their reaction was enough to say that we had made the right choice.”
  • Diving into the ocean with golf ball-inspired vehicles is what scientists are working on
    www.neowin.net

    By Sayan Sen (Neowin) · @ssc_combater007 · May 24, 2025 16:24 EDT

    Image by Kindel Media via Pexels

    Researchers at the University of Michigan have come up with a new idea that could make underwater and aerial vehicles move more smoothly and efficiently. Their inspiration? The dimples on a golf ball.
    Golf balls fly farther than smooth ones because their dimples cut down on pressure drag—basically, the force that slows things down when moving through air or water. The researchers applied this concept to a new spherical prototype with dimples that can be adjusted. They tested its performance in a wind tunnel.
    “A dynamically programmable outer skin on an underwater vehicle could drastically reduce drag while eliminating the need for protruding appendages like fins or rudders for maneuvering,” said Anchal Sareen, an assistant professor at U-M. “By actively adjusting its surface texture, the vehicle could achieve precise maneuverability with enhanced efficiency and control.”

    This could be useful for things like ocean exploration, mapping, and gathering environmental data. The prototype is made by stretching a thin latex layer over a hollow sphere filled with tiny holes. When a vacuum pump is turned on, the latex gets pulled in, forming dimples. Turning off the pump makes the sphere smooth again.

    To measure how well the dimples reduced drag, researchers placed the sphere inside a three-meter-long wind tunnel, holding it in place with a thin rod. They changed the wind speed and adjusted the depth of the dimples. A load cell recorded the aerodynamic forces, while high-speed cameras tracked airflow patterns.
    The results showed that shallow dimples worked better at high wind speeds, while deeper dimples were more effective at lower speeds. Adjusting dimple depth helped cut drag by up to 50% compared to a smooth sphere.
    “The adaptive skin setup is able to notice changes in the speed of the incoming air and adjust dimples accordingly to maintain drag reductions,” said Rodrigo Vilumbrales-Garcia, a postdoctoral research fellow at U-M. “Applying this concept to underwater vehicles would reduce both drag and fuel consumption.”
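    That adaptive behavior amounts to a small feedback loop: measure the incoming flow speed, then set the vacuum so the dimples are deeper at low speeds and shallower at high speeds, matching the trend reported above. A minimal Python sketch; every name and numeric value here is a placeholder for illustration, not a figure from the study:

        def dimple_depth_mm(speed_ms: float, v_min: float = 5.0, v_max: float = 25.0,
                            max_depth_mm: float = 4.0) -> float:
            """Deeper dimples at low speed, shallower at high speed (clamped linear map)."""
            v = min(max(speed_ms, v_min), v_max)
            return max_depth_mm * (v_max - v) / (v_max - v_min)

        def control_step(measured_speed_ms: float, set_vacuum_depth_mm) -> None:
            """One loop iteration: read the flow sensor, command the vacuum pump."""
            set_vacuum_depth_mm(dimple_depth_mm(measured_speed_ms))

        # Example: at 10 m/s this placeholder map commands ~3 mm deep dimples.
        control_step(10.0, lambda depth: print(f"set dimples to {depth:.1f} mm"))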
    The researchers also discovered that the textured surface could generate lift, a force that helps steer the sphere. By activating dimples on only one side, they caused the air to flow differently, creating a force that pushed the sphere in a specific direction.
    Tests showed that, with the right dimple depth, the sphere could generate lift forces up to 80% of the drag force. This effect was similar to the Magnus effect, which typically requires constant rotation.
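    For a sense of scale, the standard drag equation, F = ½ρv²C_dA, puts numbers on those percentages. In the Python sketch below the drag coefficient, sphere size and speed are assumptions chosen for illustration; only the 50% and 80% ratios come from the article:

        import math

        RHO_AIR = 1.225  # air density at sea level, kg/m^3

        def drag_newtons(cd: float, speed_ms: float, diameter_m: float) -> float:
            """Pressure drag on a sphere: F = 1/2 * rho * v^2 * Cd * A."""
            area = math.pi * (diameter_m / 2.0) ** 2
            return 0.5 * RHO_AIR * speed_ms ** 2 * cd * area

        # Assumed: a 10 cm sphere at 15 m/s with a smooth-sphere Cd of 0.5.
        smooth = drag_newtons(0.5, 15.0, 0.10)
        dimpled = 0.5 * smooth  # the reported "up to 50%" drag reduction
        lift = 0.8 * dimpled    # one-sided dimples: lift "up to 80% of the drag force"
        print(f"smooth {smooth:.2f} N, dimpled {dimpled:.2f} N, steering lift {lift:.2f} N")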
    “I was surprised that such a simple approach could produce results comparable to the Magnus effect,” said Putu Brahmanda Sudarsana, a graduate student at U-M.
    Looking ahead, Sareen hopes to collaborate with other experts to improve this technology. “This smart dynamic skin technology could be a game-changer for unmanned aerial and underwater vehicles, offering a lightweight, energy-efficient, and highly responsive alternative to traditional jointed control surfaces,” she said.
    Source: University of Michigan, AIP Publishing
    This article was generated with some help from AI and reviewed by an editor.

    #diving #into #ocean #with #golf
    Diving into the ocean with golf ball-inspired vehicles is what scientists are working on
    When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works. Diving into the ocean with golf ball-inspired vehicles is what scientists are working on Sayan Sen Neowin @ssc_combater007 · May 24, 2025 16:24 EDT Image by Kindel Media via PexelsResearchers at the University of Michigan have come up with a new idea that could make underwater and aerial vehicles move more smoothly and efficiently. Their inspiration? The dimples on a golf ball. Golf balls fly farther than smooth ones because their dimples cut down on pressure drag—basically, the force that slows things down when moving through air or water. The researchers applied this concept to a new spherical prototype with dimples that can be adjusted. They tested its performance in a wind tunnel. “A dynamically programmable outer skin on an underwater vehicle could drastically reduce drag while eliminating the need for protruding appendages like fins or rudders for maneuvering,” said Anchal Sareen, an assistant professor at U-M. “By actively adjusting its surface texture, the vehicle could achieve precise maneuverability with enhanced efficiency and control.” This could be useful for things like ocean exploration, mapping, and gathering environmental data. The prototype is made by stretching a thin latex layer over a hollow sphere filled with tiny holes. When a vacuum pump is turned on, the latex gets pulled in, forming dimples. Turning off the pump makes the sphere smooth again. To measure how well the dimples reduced drag, researchers placed the sphere inside a three-meter-long wind tunnel, holding it in place with a thin rod. They changed the wind speed and adjusted the depth of the dimples. A load cell recorded the aerodynamic forces, while high-speed cameras tracked airflow patterns. The results showed that shallow dimples worked better at high wind speeds, while deeper dimples were more effective at lower speeds. Adjusting dimple depth helped cut drag by up to 50% compared to a smooth sphere. “The adaptive skin setup is able to notice changes in the speed of the incoming air and adjust dimples accordingly to maintain drag reductions,” said Rodrigo Vilumbrales-Garcia, a postdoctoral research fellow at U-M. “Applying this concept to underwater vehicles would reduce both drag and fuel consumption.” The researchers also discovered that the textured surface could generate lift, a force that helps steer the sphere. By activating dimples on only one side, they caused the air to flow differently, creating a force that pushed the sphere in a specific direction. Tests showed that, with the right dimple depth, the sphere could generate lift forces up to 80% of the drag force. This effect was similar to the Magnus effect, which typically requires constant rotation. “I was surprised that such a simple approach could produce results comparable to the Magnus effect,” said Putu Brahmanda Sudarsana, a graduate student at U-M. Looking ahead, Sareen hopes to collaborate with other experts to improve this technology. “This smart dynamic skin technology could be a game-changer for unmanned aerial and underwater vehicles, offering a lightweight, energy-efficient, and highly responsive alternative to traditional jointed control surfaces,” she said. Source: University of Michigan, AIP Publishing This article was generated with some help from AI and reviewed by an editor. Tags Report a problem with article Follow @NeowinFeed #diving #into #ocean #with #golf
    Diving into the ocean with golf ball-inspired vehicles is what scientists are working on
    www.neowin.net
    Sayan Sen · May 24, 2025 16:24 EDT
    Image: Kindel Media via Pexels

    Researchers at the University of Michigan have come up with a new idea that could make underwater and aerial vehicles move more smoothly and efficiently. Their inspiration? The dimples on a golf ball.

    Golf balls fly farther than smooth balls because their dimples cut down on pressure drag, the force that slows an object moving through air or water. The researchers applied this concept to a spherical prototype whose dimples can be adjusted on demand, and tested its performance in a wind tunnel.

    "A dynamically programmable outer skin on an underwater vehicle could drastically reduce drag while eliminating the need for protruding appendages like fins or rudders for maneuvering," said Anchal Sareen, an assistant professor at U-M. "By actively adjusting its surface texture, the vehicle could achieve precise maneuverability with enhanced efficiency and control." This could be useful for applications such as ocean exploration, mapping, and gathering environmental data.

    The prototype is made by stretching a thin latex layer over a hollow sphere perforated with tiny holes. When a vacuum pump is switched on, the latex is pulled inward, forming dimples; switching the pump off makes the sphere smooth again.

    To measure how well the dimples reduced drag, the researchers mounted the sphere on a thin rod inside a three-meter-long wind tunnel, varying both the wind speed and the depth of the dimples. A load cell recorded the aerodynamic forces while high-speed cameras tracked airflow patterns. The results showed that shallow dimples worked better at high wind speeds, while deeper dimples were more effective at lower speeds. Adjusting dimple depth cut drag by up to 50 percent compared to a smooth sphere.

    "The adaptive skin setup is able to notice changes in the speed of the incoming air and adjust dimples accordingly to maintain drag reductions," said Rodrigo Vilumbrales-Garcia, a postdoctoral research fellow at U-M. "Applying this concept to underwater vehicles would reduce both drag and fuel consumption."

    The researchers also discovered that the textured surface could generate lift, a force that can steer the sphere. By activating dimples on only one side, they made the air flow asymmetrically around the sphere, creating a force that pushed it in a specific direction. Tests showed that, with the right dimple depth, the sphere could generate lift forces up to 80 percent of the drag force, an effect similar to the Magnus effect, which normally requires constant rotation. "I was surprised that such a simple approach could produce results comparable to the Magnus effect," said Putu Brahmanda Sudarsana, a graduate student at U-M.

    Looking ahead, Sareen hopes to collaborate with other experts to improve the technology. "This smart dynamic skin technology could be a game-changer for unmanned aerial and underwater vehicles, offering a lightweight, energy-efficient, and highly responsive alternative to traditional jointed control surfaces," she said.

    Source: University of Michigan, AIP Publishing
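    To make the reported trend concrete, here is a minimal sketch of the drag arithmetic behind the comparison, using the standard pressure-drag formula F_d = ½ρv²C_dA. The sphere size, the dimpled drag coefficients, and the speed threshold in pick_dimple_depth are illustrative assumptions, not values from the study; only the smooth-sphere coefficient (about 0.5) is textbook.

    # Illustrative Python sketch. Assumed values are marked; only the trend
    # (deep dimples at low speed, shallow at high speed, up to ~50% less drag)
    # comes from the reporting above.
    import math

    RHO_AIR = 1.225                        # air density, kg/m^3
    DIAMETER = 0.1                         # sphere diameter, m (assumed)
    AREA = math.pi * (DIAMETER / 2) ** 2   # frontal area, m^2

    # Drag coefficients: ~0.5 for a smooth sphere is textbook; the dimpled
    # values are placeholders consistent with "up to 50% less drag".
    C_D = {"smooth": 0.50, "deep": 0.30, "shallow": 0.25}

    def drag_force(c_d, speed):
        """Pressure drag: F_d = 0.5 * rho * v^2 * C_d * A."""
        return 0.5 * RHO_AIR * speed ** 2 * c_d * AREA

    def pick_dimple_depth(speed):
        """Hypothetical control rule mirroring the reported trend:
        deeper dimples at low speed, shallower at high speed."""
        return "deep" if speed < 10.0 else "shallow"  # 10 m/s cutoff assumed

    for v in (5.0, 20.0):                  # example wind speeds, m/s
        depth = pick_dimple_depth(v)
        smooth = drag_force(C_D["smooth"], v)
        dimpled = drag_force(C_D[depth], v)
        print(f"v={v:4.1f} m/s  {depth:>7} dimples: {dimpled:.3f} N "
              f"vs {smooth:.3f} N smooth "
              f"({100 * (1 - dimpled / smooth):.0f}% reduction)")

    The same arithmetic explains the steering result: if one-sided dimples produce a lift force of up to 80 percent of the drag force, asymmetric surface texture alone can substitute for the rotation the Magnus effect would otherwise require.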
  • Ricoh is finally making a GR IV camera, and it’s coming in the fall

    www.theverge.com
    In a pretty barebones press release accompanied by a couple of pictures and detailed specs, Ricoh surprise-announced that its long-awaited GR IV camera will launch this fall. The GR IV will adhere closely to the design of the GR III from 2018: it will continue to use an autofocusing 28mm-equivalent f/2.8 lens and only a rear LCD for composing photos and videos, with no electronic or optical viewfinder available.

    The GR IV's exterior looks very similar to the GR III / GR IIIx, with an oval-shaped shutter button, on / off switch, and mode dial up top, and a smattering of rear controls to the right of its LCD. Its buttons look redesigned, removing the spinning dial from around the four-way directional pad. And its adjustment thumb wheel, labeled "ADJ," looks like it may be a fully turning dial instead of just a back-and-forth toggle that moves left or right. (I may be wishcasting that last part, because I think the thumb toggle on the GR III is annoying and fiddly.)

    Image: Ricoh

    What's known for certain based on the spec list is that the GR IV retains the built-in ND filter of the GR III, but it slightly ups the resolution of its large APS-C sensor from 24 megapixels to 26. It will also have a higher ISO range that reaches 204,800 at its maximum setting, and five-axis stabilization instead of three-axis. The GR IV's lens may have the same focal length and maximum aperture as previous generations, but it's a new seven-element design in a new arrangement utilizing an additional aspherical element that should yield better corrections. The upcoming camera will also have face and eye detection for its autofocus tracking, and 53GB of usable built-in storage. Onboard storage is great, and it's much more than the GR III's 2GB, but the GR IV is also downsizing from full-size SD cards to microSD.

    While there isn't a price yet, Ricoh has confirmed the GR IV is expected to release in the autumn of 2025, with a variant featuring a Highlight Diffusion Filter (HDF) to come "after winter 2025." The announcement also notes that the GR III is scheduled to be discontinued in July, while the GR IIIx continues "for the time being."

    The Ricoh GR cameras have carved out a niche among street photographers who value their super compact size and fairly affordable prices compared to a Fujifilm X100 or Leica Q. As cool and fun as I thought the just-announced Fujifilm X Half might be, the GR IV has instantly become my most anticipated camera of 2025.