• Hello everyone! Today I am thrilled to share with you an incredible project that is bringing our architectural heritage back to life: **Latidos de Piedra**!

    In the beautiful town of Torredonjimeno, in Jaén, the IES Santo Reino secondary school has launched this inspiring initiative, which uses 3D printing to reconstruct historic structures that have disappeared. Imagine a world where we can bring our cultural heritage back to life through technology! It is as if every stone had a story to tell, and this project lets us listen to them again.

    What is truly fascinating is that **Latidos de Piedra** does not merely restore buildings; it connects generations, letting young people learn and take an active part in preserving their history. The project is a real breath of fresh air for our heritage, and it shows how creative and innovative we can be with the modern tools at our disposal!

    Every 3D-printed piece is like a heartbeat, pulsing with the energy of the artisans and students working together to revive our past. Think of the impact this has on our community! It embodies a spirit of unity and collaboration, and it reminds us how deeply our shared history binds us together.

    So, dear friends, whether you are passionate about architecture, a history enthusiast, or simply curious to discover new ideas, I invite you to join me in supporting this extraordinary project. Let's share this wonderful initiative so that everyone can hear the **Latidos de Piedra** resonate in our hearts!

    Together, we can create a future where our heritage is not only preserved but celebrated! Never forget: every small action counts and can lead to great change. So what are you waiting for to make your voice heard?

    #LatidosDePiedra #ArchitecturalHeritage #3DPrinting #Innovation #Inspiration
    Latidos de Piedra, the project that recreates architectural heritage with 3D printing
    In Torredonjimeno, Jaén, the IES Santo Reino school has developed a project called "Latidos de Piedra" ("Heartbeats of Stone"). This initiative uses 3D printing as a key tool to recover the municipality's vanished historical heritage.
  • Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’

    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One.
    By Jay Stobie
    Visual effects supervisor John Knoll (right) confers with modelmakers Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).
    Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact (1996) and Rogue One: A Star Wars Story (2016) propelled their respective franchises to new heights. While Star Trek Generations (1994) welcomed Captain Jean-Luc Picard’s (Patrick Stewart) crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk (William Shatner). Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope (1977), it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story (2018), The Mandalorian (2019-23), Andor (2022-25), Ahsoka (2023), The Acolyte (2024), and more.
    The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Currently ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif.
    A final frame from the Battle of Scarif in Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
    A Context for Conflict
    In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design.
    On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well known to the Federation, but the sudden intrusion upon its space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. The unsanctioned mission to Scarif by Jyn Erso (Felicity Jones) and Cassian Andor (Diego Luna), and the sudden need to take down the planet’s shield gate, propel the Rebel Alliance fleet into rushing to the rescue with everything from its flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival.
    From Physical to Digital
    By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models (many of which were built by ILM) for its features was gradually giving way to innovative computer graphics (CG) models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001.
    Despite the technological leaps ILM pioneered in the two decades between First Contact and Rogue One, the studio still considered filming physical miniatures for certain ship-related shots in the latter film. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com.
    However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.”
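    To picture the digital kit-bashing workflow Knoll describes – scan kit parts once, then reuse them as a library of surface detail – here is a minimal sketch of a placement pass over a hull panel. The `GreeblePart` type, the part names, and the placement rules are hypothetical illustration, not ILM's actual pipeline.

    ```python
    import random
    from dataclasses import dataclass

    @dataclass
    class GreeblePart:
        """One scanned model-kit piece from the CG parts library."""
        name: str
        footprint: float  # approximate width in scene units

    def scatter_greebles(panel_size, parts, density, seed=42):
        """Fill a flat hull panel with randomly chosen, randomly placed parts.

        panel_size: (width, height) of the hull region to detail.
        parts: list of GreeblePart from the scanned-kit library.
        density: target part count per unit of panel area.
        Returns (part, x, y, rotation_degrees) placements for the modeler.
        """
        rng = random.Random(seed)  # deterministic, so layouts are repeatable
        width, height = panel_size
        placements = []
        for _ in range(int(width * height * density)):
            part = rng.choice(parts)
            x = rng.uniform(0.0, max(0.0, width - part.footprint))
            y = rng.uniform(0.0, max(0.0, height - part.footprint))
            rot = rng.choice([0, 90, 180, 270])  # kit pieces read best axis-aligned
            placements.append((part, x, y, rot))
        return placements

    library = [GreeblePart("tank_hatch", 0.4), GreeblePart("engine_vent", 0.7)]
    for part, x, y, rot in scatter_greebles((10.0, 4.0), library, density=0.5):
        print(f"{part.name} at ({x:.2f}, {y:.2f}), rotated {rot} deg")
    ```

    In production the placements would instance the actual scanned meshes onto the ship's surface; the point of the sketch is only that a small scanned library plus simple placement rules can generate the dense, motion-control-style detail Knoll mentions.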
    John Knoll (second from left) confers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).
    Legendary Lineages
    In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.”
    Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet.
    While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from [equipment vendor] VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.”
    The U.S.S. Enterprise-E in Star Trek: First Contact (Credit: Paramount).
    Familiar Foes
    To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation (1987) and Star Trek: Deep Space Nine (1993), creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin.
    As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. It was as accurate as it was possible to be as a reproduction of the original model.”
    Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back (1980), respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.”
    A final frame from Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
    Forming Up the Fleets
    In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics.
    Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs (the MC75 cruiser Profundity and U-wings), live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples (Nebulon-B frigates, X-wings, Y-wings, and more). These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’s (Carrie Fisher and Ingvild Deila) personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography…
    Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized.
    Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
    Tough Little Ships
    The Federation and Rebel Alliance each deployed “tough little ships” (an endearing description Commander William T. Riker [Jonathan Frakes] bestowed upon the U.S.S. Defiant in First Contact) in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!
    Exploration and Hope
    The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire.
    The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope?

    Jay Stobie (he/him) is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
    #looking #back #two #classics #ilm
    Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’
    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One. By Jay Stobie Visual effects supervisor John Knollconfers with modelmakers Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact. Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contactand Rogue One: A Star Wars Storypropelled their respective franchises to new heights. While Star Trek Generationswelcomed Captain Jean-Luc Picard’screw to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk. Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope, it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story, The Mandalorian, Andor, Ahsoka, The Acolyte, and more. The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Currently, ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif. A final frame from the Battle of Scarif in Rogue One: A Star Wars Story. A Context for Conflict In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design. On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon their space takes its defenses by surprise. 
Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. The unsanctioned mission to Scarif with Jyn Ersoand Cassian Andorand the sudden need to take down the planet’s shield gate propels the Rebel Alliance fleet into rushing to their rescue with everything from their flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival. From Physical to Digital By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical modelsfor its features was gradually giving way to innovative computer graphicsmodels, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001. Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, they considered filming physical miniatures for certain ship-related shots in the latter film. ILM considered filming physical miniatures for certain ship-related shots in Rogue One. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com. However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.” John Knollconfers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact. Legendary Lineages In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. 
ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.” Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet. While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got fromVER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.” The U.S.S. Enterprise-E in Star Trek: First Contact. Familiar Foes To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generationand Star Trek: Deep Space Nine, creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin. As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. 
Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. It was as accurate as it was possible to be as a reproduction of the original model.” Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back, respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.” A final frame from Rogue One: A Star Wars Story. Forming Up the Fleets In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics. Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs, live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples. These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’spersonal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. 
And, while we’re on the subject of intricate starship maneuvers and space-based choreography… Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized. Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story. Tough Little Ships The Federation and Rebel Alliance each deployed “tough little ships”in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001! Exploration and Hope The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire. The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. 
There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope? – Jay Stobieis a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy. #looking #back #two #classics #ilm
    WWW.ILM.COM
    Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’
    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One. By Jay Stobie Visual effects supervisor John Knoll (right) confers with modelmakers Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM). Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact (1996) and Rogue One: A Star Wars Story (2016) propelled their respective franchises to new heights. While Star Trek Generations (1994) welcomed Captain Jean-Luc Picard’s (Patrick Stewart) crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk (William Shatner). Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope (1977), it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story (2018), The Mandalorian (2019-23), Andor (2022-25), Ahsoka (2023), The Acolyte (2024), and more. The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Currently, ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif. A final frame from the Battle of Scarif in Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm). A Context for Conflict In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design. On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. 
The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon their space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. The unsanctioned mission to Scarif with Jyn Erso (Felicity Jones) and Cassian Andor (Diego Luna) and the sudden need to take down the planet’s shield gate propels the Rebel Alliance fleet into rushing to their rescue with everything from their flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival. From Physical to Digital By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models (many of which were built by ILM) for its features was gradually giving way to innovative computer graphics (CG) models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001. Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, they considered filming physical miniatures for certain ship-related shots in the latter film. ILM considered filming physical miniatures for certain ship-related shots in Rogue One. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com. However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.” John Knoll (second from left) confers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM). Legendary Lineages In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. 
Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.” Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet. While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from [equipment vendor] VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.” The U.S.S. Enterprise-E in Star Trek: First Contact (Credit: Paramount). Familiar Foes To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation (1987) and Star Trek: Deep Space Nine (1993), creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin. 
As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. It was as accurate as it was possible to be as a reproduction of the original model.” Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back (1980), respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.” A final frame from Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm). Forming Up the Fleets In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics. Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs (the MC75 cruiser Profundity and U-wings), live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples (Nebulon-B frigates, X-wings, Y-wings, and more). These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. 
As Princess Leia Organa’s (Carrie Fisher and Ingvild Deila) personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography… Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized. Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm). Tough Little Ships The Federation and Rebel Alliance each deployed “tough little ships” (an endearing description Commander William T. Riker [Jonathan Frakes] bestowed upon the U.S.S. Defiant in First Contact) in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001! Exploration and Hope The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire. The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. 
While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope?

– Jay Stobie (he/him) is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
  • Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm


    When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development.
    What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute. 
    As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention.
    Engineering around constraints
    DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement.
    While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well.
    This efficiency-first mindset yielded incredibly impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. According to reports, the final training run for DeepSeek’s V3 predecessor cost a mere $6 million — which was described by former Tesla AI scientist Andrej Karpathy as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent $500 million training its recent “Orion” model, DeepSeek achieved superior benchmark results for just $5.6 million — less than 1.2% of OpenAI’s investment.
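    As a quick sanity check on those numbers (both are press-reported estimates, not audited costs), the cited ratio does hold up:

    ```python
    # Back-of-the-envelope check of the cost ratio reported above.
    # Both inputs are press-reported estimates, not audited figures.
    openai_orion_cost = 500_000_000  # reported OpenAI "Orion" training spend, USD
    deepseek_cost = 5_600_000        # reported DeepSeek training cost, USD

    print(f"{deepseek_cost / openai_orion_cost:.2%}")  # 1.12% -- under the 1.2% cited
    ```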
    If you get starry-eyed believing these incredible results were achieved even as DeepSeek was at a severe disadvantage based on its inability to access advanced AI chips, I hate to tell you, but that narrative isn’t entirely accurate (even though it makes a good story). Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development.
    That means that the chips DeepSeek had access to were not poor quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running their large model efficiently.
    This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing.
    Pragmatism over process
    Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process.
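    For readers unfamiliar with the mechanics, distillation in its textbook form (Hinton et al., 2015) trains a smaller "student" model to match the softened output distribution of a larger "teacher." The sketch below is a minimal, generic illustration of that loss function, not DeepSeek's actual training code; all shapes and hyperparameters are illustrative:

    ```python
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        """Textbook knowledge-distillation loss: KL divergence between the
        teacher's softened output distribution and the student's.
        Illustrative only -- not DeepSeek's training code."""
        soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
        log_student = F.log_softmax(student_logits / temperature, dim=-1)
        # The T^2 factor keeps gradient magnitudes comparable across temperatures.
        return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2

    # Toy usage: random logits stand in for real model outputs.
    student_logits = torch.randn(8, 32000)  # (batch, vocab_size)
    teacher_logits = torch.randn(8, 32000)
    loss = distillation_loss(student_logits, teacher_logits)
    ```

    In practice the student also trains on ordinary task labels; the distillation term is blended in so the student inherits the teacher's behavior cheaply.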
    The effective use of synthetic data is a key differentiator. Synthetic data can be very effective when it comes to training large models, but you have to be careful; some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture of experts (MoE) architectures like DeepSeek’s tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content.
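    To make that architectural contrast concrete: an MoE layer routes each token through only a few specialist sub-networks chosen by a small router, rather than through one dense feed-forward block. The toy layer below shows the bare top-k routing pattern only; it is a hedged sketch, not DeepSeek's implementation, which adds load balancing, capacity limits, and heavy parallelism:

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyMoELayer(nn.Module):
        """Minimal top-k mixture-of-experts layer, for illustration only."""
        def __init__(self, dim=64, num_experts=4, top_k=2):
            super().__init__()
            self.router = nn.Linear(dim, num_experts)  # scores each token per expert
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
                for _ in range(num_experts)
            )
            self.top_k = top_k

        def forward(self, x):  # x: (num_tokens, dim)
            weights, chosen = self.router(x).topk(self.top_k, dim=-1)
            weights = F.softmax(weights, dim=-1)
            out = torch.zeros_like(x)
            # Each token passes through only its top-k experts (sparse compute).
            for slot in range(self.top_k):
                for idx, expert in enumerate(self.experts):
                    mask = chosen[:, slot] == idx
                    if mask.any():
                        out[mask] += weights[mask, slot, None] * expert(x[mask])
            return out

    y = TinyMoELayer()(torch.randn(10, 64))  # ten tokens through the sparse layer
    ```

    A dense layer, by contrast, runs every token through the full feed-forward stack, which is the early-Llama-style design the paragraph above contrasts with.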
    This architectural sensitivity matters because synthetic data introduces different patterns and distributions compared to real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge. This can lead to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations. 
    Still, DeepSeek’s engineering teams reportedly designed their model architecture specifically with synthetic data integration in mind from the earliest planning stages. This allowed the company to leverage the cost benefits of synthetic data without sacrificing performance.
    Market reverberations
    Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders.
    Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019. This is a pretty notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard. Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI. 
    With OpenAI reportedly spending $7 to $8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending $7 billion or $8 billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change.
    This economic reality prompted OpenAI to pursue a massive $40 billion funding round that valued the company at an unprecedented $300 billion. But even with a war chest of funds at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s.
    Beyond model training
    Another significant trend accelerated by DeepSeek is the shift toward “test-time compute” (TTC). As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training.
    To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning” (SPCT). This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real time, comparing responses against core rules and quality standards.
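    DeepSeek's published work describes its own training pipeline; purely as a conceptual stand-in, the loop below sketches the general "generate principles, then judge against them" pattern that SPCT builds on. The `generate` callable and the prompt wording are hypothetical placeholders, not DeepSeek's API or prompts:

    ```python
    # Conceptual sketch of a self-principled critique loop. Assumes some
    # text-generation function generate(prompt: str) -> str is available;
    # everything here is a placeholder, not DeepSeek's actual system.

    def self_principled_critique(question: str, answer: str, generate) -> str:
        # Step 1: the model drafts its own judging principles for this task.
        principles = generate(
            "List the criteria a careful judge should use to evaluate an answer "
            f"to the following question:\n{question}"
        )
        # Step 2: a "judge" pass critiques the answer against those principles.
        return generate(
            f"Principles:\n{principles}\n\nQuestion: {question}\nAnswer: {answer}\n\n"
            "Critique the answer against each principle and give a 1-10 score."
        )
    ```

    The resulting critique (or its score) can then be used to rank candidate answers at inference time, spending extra test-time compute instead of simply training a bigger model.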
    The development is part of a movement towards autonomous self-evaluation and improvement in AI systems in which models use inference time to improve results, rather than simply making models larger during training. DeepSeek calls its system “DeepSeek-GRM” (generalist reward modeling). But, as with its model distillation approach, this could be considered a mix of promise and risk.
    For example, if the AI develops its own judging criteria, there’s a risk those principles diverge from human values, ethics or context. The rules could end up being overly rigid or biased, optimizing for style over substance and/or reinforcing incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned. It’s a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion — which feeds into a bigger concern: Should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted.
    At the same time, this approach is gaining traction, as again DeepSeek builds on the body of work of others (think OpenAI’s “critique and revise” methods, Anthropic’s constitutional AI or research on self-rewarding agents) to create what is likely the first full-stack application of SPCT in a commercial effort.
    This could mark a powerful shift in AI autonomy, but there still is a need for rigorous auditing, transparency and safeguards. It’s not just about models getting smarter, but that they remain aligned, interpretable, and trustworthy as they begin critiquing themselves without human guardrails.
    Moving into the future
    So, taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power generation capacity. 
    Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately $80 billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market.
    Meta has also responded.
    With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to materials, DeepSeek was forced to blaze a new trail.
    Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules — whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. It’s what we learn from one another and how we respond that will be worth watching.
    Jae Lee is CEO and co-founder of TwelveLabs.

  • Revival's First Five Minutes Feature the Dead Coming Back to Life in a Surprising Way, People on Fire, and More

    IGN Live was able to exclusively reveal the first five minutes of SYFY's upcoming adaptation of Revival, and we also had the chance to speak to the series' co-creator and showrunner, Aaron B. Koontz, about why this show about the dead coming back to life in a surprising way will be well worth a watch.

    Revival is set to debut on SYFY on June 12 and is based on the Harvey Award-nominated comic from Tim Seeley and Mike Norton that ran for 47 issues from 2012-2017. In our exclusive clip you can watch below, we are introduced to this world on 'Revival Day,' which is the day the dead rise. However, these aren't zombies; the undead are very much the same as they were when they were alive.

    These few minutes are very much the same as the opening of the comic, and Koontz shared why that was such a great thing.

    "It's one of the first scenes in the comic, and we were like, this hooked us," Koontz said. "I thought it was really cool and had no idea where this was going to go. And I will say, without giving things away, this scene is also more than just a scene. You'll see, after watching later episodes, that these first few minutes were hiding so much more than you realized."

    One of the big moments from the clip is one of the undead being cremated at Randy's Crematorium in Wausau, Wisconsin, and trying to break free and then running around on fire. That was all practical and not some CG wizardry.

    "I wanted to be ambitious," Koontz said. "I wanted to set people on fire and I didn't want to do CG. We were also in such a small town where there weren't a ton of extra ambulances and we felt really bad because so many medical personnel were there. I remember thinking, 'I hope nothing bad happens in the town tonight because they're all sitting on our set!'"

    And of course, we got to spend some time with Randy himself, who is played by Graeme Barrett of Divorced Dads and Court of Chaos fame. Koontz actually found him on Instagram and thought he "fits the vibe of what we want juxtaposed against the serious business."

    "I think one of my favorite things in the script was like, okay, but these are not our main characters," Koontz added. "So, how do you get them right to the main characters and what's there? And I love that Randy saying it's not my fault is also similar to the very first line you hear Wayne Cypress say. And you're immediately in a fight with Wayne and Dana, and so you're seeing and feeling the dynamic of the Cypress family, which is important because the Cypress family is the heart of this whole show."

    So, all in all, Revival isn't meant to be a "super serious scary thing." Instead it aims to have scary moments, melodramatic moments, and also a lot of fun moments. Koontz said it's a mix between Mare of Easttown and Fargo, filled with weird, quirky characters, full-on horror thriller elements, and the good parts of the zombie tropes with a twist that makes it unique.

    Revival will premiere on SYFY on June 12 and will be available on Peacock the following week. The series stars Melanie Scrofano (Wynonna Earp), Romy Weltman (Backstage), David James Elliott (JAG), Andy McQueen (Mrs. Davis), Steven Ogg (The Walking Dead), Phil Brooks aka CM Punk (Mayans M.C.), Gia Sandhu (A Simple Favor), Katharine King So (The Recruit), Maia Jae (In the Dark), Nathan Dales (Letterkenny), Mark Little (Doomlands), Glen Gould (Tulsa King), Lara Jean Chorostecki (Nightmare Alley), and Conrad Coates (Fargo).

    The series is based on the Harvey Award-nominated 2010s comic title of the same name from writer Tim Seeley and artist Mike Norton, which ran for 47 issues from 2012-2017 from Image Comics.

    For more, check out our exclusive trailer reveal for Revival and everything else happening at IGN Live.

    Adam Bankhurst is a writer for IGN. You can follow him on X/Twitter @AdamBankhurst, Instagram, and TikTok, and listen to his show, Talking Disney Magic.
  • ACSA announces new JAE issue, edited by Rafael Longoria and Michelangelo Sabatino, after Palestine edition fallout

    It has been three months since ACSA canceled the Fall 2025 Journal of Architectural Education (JAE) issue about Palestine and fired its interim executive editor, McClain Clutter. In response, the JAE editorial board resigned in protest.
    ACSA subsequently put out a solicitation for services to honor its contract with Taylor & Francis, which publishes JAE, and it sought new theme editors to publish an alternative Winter 2025 issue. ACSA also hired Maverick Publishing Specialists to conduct an independent review of JAE and ACSA editorial policy and practices related to the terminated 79.2 Palestine issue.

    University of Houston’s Rafael Longoria and Michelangelo Sabatino of the Illinois Institute of Technology are the Winter 2025 JAE 79:2 editors. Their theme for the issue? “Educating Civic Architects.”
    The new call for papers situates itself in our “increasingly complex economic, environmental, and political reality.”
    Theme editors seek “contributions that explore the full range of expressions of civic architecture and community design—past, present, and future.” The word “civic” is repeated throughout the open call, which echoes the Trump administration’s recent mandate for “beautiful federal civic architecture.” 
    The open call asks:
    “How might educating civic-minded architects help inspire and guide the profession? How do architecture schools foster a culture of collaboration with community and city leaders? How can design research inform the evolving role that civic-minded architects can play? Beyond the design studio, what role should the teaching of history, theory, professional practice, policy, technology, and other disciplines play in educating civic architects?”
    The open call cites Luiz Paulo Conde, an architect who became the mayor of Rio de Janeiro; former Peruvian architect-turned-president Fernando Belaúnde Terry; and Jaime Lerner, an architect-turned-mayor-turned-governor from Brazil as cases to emulate.
    Terry organized PREVI, an ambitious social housing competition in Lima in the 1970s, but he also launched a settler-colonial campaign in Peru’s Indigenous territories. (Terry’s effort was outlined in his 1965 book Peru’s Own Conquest.) Italian mayors Giulio Carlo Argan and Massimo Cacciari, of Rome and Venice, respectively, were other examples of aesthete politicians JAE cited.

    Domestic examples are also offered, like Joseph P. Riley Jr., former mayor of Charleston, South Carolina; Harvey Gantt, former mayor of Charlotte, North Carolina; and Maurice Cox, who was mayor of Charlottesville, Virginia, before going on to leadership roles in Detroit and Chicago city government.
    Practicing architects are cited, like Richard Rogers and Johanna Hurme of 5468796 Architecture. The editors also invite reflections on postmodernism as it led to “increased attention on contextual design, vernacular architecture, and perhaps more significantly a reinvigorated interest in urban design.”
    ACSA executive director Michael Monti told AN: “We solicited proposals from a number of potential editorial teams. The interim editorial team for the Fall 2025 issue was selected through a committee of the Board of Directors. They authored the theme and the Call for Papers.”
    “The organization continues with the next steps for the journal that we communicated to our membership in the spring. This includes an external assessment of decisions, processes, and structures related to JAE and ACSA,” Monti added. “We are also convening a special committee to provide guidance on broader threats and issues facing our member schools. Those steps will inform the appointment of a new Executive Editor and Editorial Board in the upcoming months as well as the direction of the journal.”

    Disclosure: The author previously responded to the JAE’s call for papers for its now-canceled issue on Palestine.
  • Netflix Tudum 2025: Everything Announced

Netflix Tudum 2025 has begun, promising a ton of exciting details about the most-anticipated shows and movies heading to the streamer, including Wake Up Dead Man: A Knives Out Mystery’s release date and Squid Game’s Season 3 trailer. There will be plenty of announcements during the Netflix Tudum 2025 livestream, and we’ll be gathering all the big news right here as it happens, so stay tuned and refresh often!
    Wake Up Dead Man: A Knives Out Mystery Release Date Revealed
    Rian Johnson’s Wake Up Dead Man: A Knives Out Mystery’s latest teaser trailer not only revealed more about Benoit Blanc’s latest adventure, it also shared that the film will arrive on Netflix on December 12, 2025. We don’t know much about this new mystery yet, but Blanc himself has described it as his “most dangerous case yet.” What we do know is that Daniel Craig’s Blanc will be joined by Josh O’Connor, Glenn Close, Josh Brolin, Mila Kunis, Jeremy Renner, Kerry Washington, Andrew Scott, Cailee Spaeny, Daryl McCormack, and Thomas Haden Church.
    Squid Game Season 3 Trailer Teases the Final Games
    Squid Game Season 3 is set to debut on Netflix on June 27, and Tudum shared a new trailer that showcases what these final games have in store for Lee Jung-jae’s Gi-hun and more. “The new season will focus on what Gi-hun can and will do after all his efforts fail,” series creator Hwang Dong-hyuk said. “He is in utter despair after losing everything and watching all his efforts go in vain. The story then takes an interesting turn, questioning whether Gi-hun can overcome his shame and rise again to prove that values of humanity — like conscience and kindness — can exist in the arena.”
    Guillermo del Toro’s Frankenstein Gets a Teaser Trailer That Shows Off Oscar Isaac’s Victor Frankenstein and the ‘Misbegotten Creature He’s Created’
    Academy Award winner Guillermo del Toro’s Frankenstein, an adaptation of Mary Shelley’s iconic novel, got a new teaser trailer that shows off Oscar Isaac’s Victor Frankenstein and the “misbegotten creature” (Jacob Elordi) he’s created. Alongside a glimpse at the film, which will be released in November, fans of del Toro’s work will note plenty of familiar imagery in the new teaser, from Isaac’s Victor standing on a decaying staircase holding a candelabra (see: Crimson Peak) to a blood-red angelic figure surrounded by flames (see: the Angel of Death in Hellboy II: The Golden Army, the blue Wood Sprite and the sphinxlike Death in Pinocchio, and even the Faun in Pan’s Labyrinth).
    One Piece Season 2 Trailer Reveals the First Look at Tony Tony Chopper
    The latest trailer for Season 2 of One Piece has arrived, giving us our first look at Tony Tony Chopper, voiced by Mikaela Hoover (Beef, Guardians of the Galaxy Vol. 3, and Superman). For those unfamiliar, Chopper is a blue-nosed reindeer-boy hybrid who can treat various illnesses and wants to travel the world curing every disease that pops up. “What excited me about playing Chopper is the tug of war between his standoffishness and his huge heart,” Hoover told Tudum. “He tries so hard to hide his emotions and put on a tough exterior, but underneath, he’s a big softy, and his love can’t help but come out.”
    “I believe there is a little Chopper in all of us,” she adds. “We all want to be loved and accepted. We go to great lengths to keep the people that we love safe. There’s a purity to his nature that reminds us of what’s good in the world.”
    Developing...
  • Carlos el Rojo: Visual Illustration & Creative Sparks

    05/27 — 2025

    by abduzeedo

    Explore Carlos el Rojo's "Visual Games" — a masterclass in illustration, wit, and conceptual design. Discover his unique approach to visual storytelling.
    Hey, design enthusiasts! Today, let's dive into the captivating world of Carlos el Rojo, an illustrator whose work truly embodies the spirit of playful yet profound visual communication. His series, aptly titled "Visual Games," offers a fresh perspective on how ideas can take shape with minimal fuss but maximum impact.
    Carlos el Rojo, a celebrated illustrator, approaches his creations as "small visual ideas, quick to execute, but with a twist". These aren't your typical commissioned pieces. Instead, they are a testament to his dedication to "training the eye and saying something meaningful with as little as possible". This philosophy resonates deeply with anyone passionate about concise and powerful illustration.
    The Spark of Inspiration
    The journey into "Visual Games" began for Carlos after a transformative workshop with Javier Jaén. This experience, as he describes, "helped me unlock something". It fostered a habit of embracing ideas without overthinking, allowing for a fluid, organic creative process. This approach is something many designers can relate to—the freedom to just go with it when inspiration strikes.
    His work often carries layers of meaning. Sometimes it is ironic, sometimes poetic, and occasionally, it might even be a little uncomfortable. Yet, each piece shares a common thread: "they're made with curiosity and without pretense". This genuine curiosity is what makes his illustration so compelling.
    Each "game" is a lesson in visual identity and concise storytelling. They are not just pretty pictures; they are small puzzles, contradictions, or metaphors that challenge the viewer to think beyond the obvious. This approach to illustration is a masterclass in making every element count.
    The Power of Play
    Carlos el Rojo's "Visual Games" serve as a potent reminder that design, at its core, is about problem-solving and communication. By embracing curiosity and letting ideas flow, he creates work that is both engaging and thought-provoking. His illustrations are a testament to the power of a simple idea executed with precision and a touch of genius.
    For more inspiration and to explore his full portfolio, be sure to visit Carlos el Rojo's official website: carloselrojo.com and his Instagram.
  • Just 6 days left — ready for some unfiltered AI truths at TechCrunch Sessions: AI?

    June 5 is almost here — bringing real, unfiltered AI conversations… and higher ticket prices. Lock in your savings now.
Register now to save $300 on your TechCrunch Sessions: AI pass — and get 50% off for your +1. Don’t wait for rates to spike when event doors open.
    Join us at UC Berkeley’s Zellerbach Hall — the one-day epicenter for next-gen AI insights, big questions, and actionable ideas from the builders, thinkers, and investors shaping the future.

    What’s on the agenda? A few highlights:

    The Frontier of AI: Fireside with Anthropic’s Jared Kaplan
    From Seed to Series C: What VCs Want from AI Founders
    How Founders Can Build on Existing Foundation Models
    Launching Against the Giants: Winning Against Incumbents
    Hard Talk on AI Ethics & Safety

    Explore the full agenda here.
    Hear from AI’s power players:
    Main stage and breakout sessions are packed with tactical insight and bold vision from leaders like:

    Iliana Quinonez, Director, Customer Engineering Google Cloud Startups, Google Cloud
    Hao Sang, Startups Lead, OpenAI
    Jae Lee, CEO, Twelve Labs
    Kanu Gulati, Partner, Khosla Ventures
    Kordel France, Principal AI Engineer, Toyota
    Logan Kilpatrick, Senior Product Manager, DeepMind
    Oliver Cameron, CEO, Odyssey
    …and many more — see the full speaker list

Some of the many AI pioneers leading main stage and breakout sessions at TechCrunch Sessions: AI, taking place on June 5 at UC Berkeley’s Zellerbach Hall. Image Credits: TechCrunch
    Don’t just listen — Connect
    Whether you’re pitching your AI startup, swapping ideas with fellow builders, or just getting started, networking at TC Sessions: AI is smarter — thanks to the Braindate app. Use the app to match on topics, meet face-to-face, and make meaningful connections with people who care about what you care about.
And when the event’s a wrap? The conversations keep flowing at Side Events.

TechCrunch event
    Save now through June 4 for TechCrunch Sessions: AI
    Save $300 on your ticket to TC Sessions: AI—and get 50% off a second. Hear from leaders at OpenAI, Anthropic, Khosla Ventures, and more during a full day of expert insights, hands-on workshops, and high-impact networking. These low-rate deals disappear when the doors open on June 5.

    Exhibit at TechCrunch Sessions: AI
    Secure your spot at TC Sessions: AI and show 1,200+ decision-makers what you’ve built — without the big spend. Available through May 9 or while tables last.

    Berkeley, CA
    |
    June 5

    REGISTER NOW

    The real AI talk starts in 6 days — Are you in?
    This isn’t another hype-filled AI event. It’s where the noise drops out — and the real conversations begin.
    Only 6 days left to lock in your low ticket rate for TechCrunch Sessions: AI. Don’t sit this one out — save on your pass, and get 50% off for your +1.
    Register now before prices jump at the door, and be part of the conversations actually shaping the future of AI.
    Think you know AI? Win a bigger discount here
Interested in a deeper discount? Participate in our AI trivia for a chance to purchase a ticket at $200 and receive a second ticket for free.
  • Pentagram’s galloping horse logo steers TwelveLabs rebrand

    Pentagram partners Jody Hudson-Powell and Luke Powell have created a dynamic equine identity for AI video company TwelveLabs.
    Based between San Francisco and Seoul, TwelveLabs describes itself as “the world’s most powerful video intelligence platform.”
    Unlike generative video tools which help users create videos from scratch, TwelveLabs uses AI analysis to help people understand their existing videos at a very granular level, which makes them more searchable.
    Co-founder and CEO Jae Lee explains that communicating this difference – between video generation and video understanding – was at the heart of their work with Pentagram.
    “In the middle of last year our models were improving pretty rapidly, and we thought we needed to up our game in terms of our storytelling, why we matter, and to match the design, the tone, and the messaging to our ambition,” he says.
Lee describes the previous branding as “straight out of Silicon Valley”; the company chose Hudson-Powell and his team for their tech-savvy design practice.
    In creating a new identity, it was important not to be “lumped in” with other generative AI video companies, Lee says, but also to differentiate themselves from other video analysis tools.
    “Our competitors essentially do frame-by-frame analysis, but we look at it temporally,” lead product designer Sean Barclay explains. “That’s what differentiates us, and we wanted to convey that secret sauce.”
    “On the first call, they had me at temporal reasoning,” Hudson-Powell laughs.
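Neither TwelveLabs nor Pentagram detail the underlying models here, but the frame-by-frame versus temporal distinction Barclay describes is easy to illustrate. In the sketch below every name and number is a hypothetical stand-in: random vectors play the role of a real video encoder’s per-frame embeddings, and “temporal” search simply pools sliding windows of frames into segment vectors before scoring, so an event that unfolds over seconds can outrank any single frame.

```python
# Hypothetical illustration of frame-by-frame vs. temporal video search.
# None of this reflects TwelveLabs' actual models or API; the embeddings
# are random stand-ins for whatever a real video encoder would produce.
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    """Cosine similarity between each row of matrix `a` and vector `b`."""
    return (a @ b) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b) + 1e-9)

# Stand-in per-frame embeddings for a 300-frame video, 256-dim each.
frame_embs = rng.normal(size=(300, 256))
query_emb = rng.normal(size=256)  # stand-in embedding of a text query

# Frame-by-frame: score each frame independently; the best hit is one frame.
frame_scores = cosine(frame_embs, query_emb)
best_frame = int(frame_scores.argmax())

# Temporal: mean-pool sliding windows of frames into segment embeddings,
# so a multi-second event is scored as one unit rather than frame by frame.
window, stride = 48, 16
segments = np.stack([
    frame_embs[s:s + window].mean(axis=0)
    for s in range(0, len(frame_embs) - window + 1, stride)
])
seg_scores = cosine(segments, query_emb)
best_seg = int(seg_scores.argmax())

print(f"frame-by-frame hit: frame {best_frame}")
print(f"temporal hit: frames {best_seg * stride}-{best_seg * stride + window}")
```

In a production system the pooling would be learned rather than a plain mean, but the search structure, scoring spans of time instead of isolated frames, is the point of the distinction.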
    His team had to avoid the visual cliches AI tools tend to embrace – “it’s a very noisy category with lots of sparkles.”  But they also had to capture and communicate TwelveLabs’ offering in a way that was accessible and exciting, but not dumbed down.
    “We had a distinct stream of work that wasn’t strategic or creative – it was just understanding the technology,” Hudson-Powell says. “We kept asking them, could we imagine your technology to look something like this? Or this?
    “We were trying to put some kind of conceptual apparatus around the technology, to see if we could find a visual communication language that we could start to build on.”
    “Jody was very good at pulling out those threads about what video looks like in our brains,” Lee says.
Pentagram’s Luke Powell and Jody Hudson-Powell’s new identity palette for TwelveLabs
    The Pentagram team homed in on the core idea of “video as volume” rather than a timeline, and they built a series of thread-based diagrams to help explain how it works. This visual motif could be scaled across the touchpoints, from product pages to sales and branding.
    “You get this graphic stretch, so you’re speaking to different audiences with the same concept,” Hudson-Powell explains.
    The horse logo was grounded in what Hudson-Powell calls TwelveLabs’ existing “lore” – Lee says they were inspired by Eadweard Muybridge’s famous 1887 animation of a horse, and he likes the metaphor of a user as a jockey steering their technology.
    The logo – which has 12 layers in a nod to the company’s name – is often used in motion, galloping across a screen.
    “We worked a lot of animation into the identity,” Hudson-Powell says. “Animation can be quite frivolous, but we did it really intentionally. The logo gives you this feeling of perpetual motion, this rhythm at the heart of the brand, which is really important.”


The team chose the typeface Milling for its combination of “technicality and soft edges,” and the visual identity uses the LCH colour system, which represents colour in a way closer to how our eyes perceive it than RGB does.
    “You can match any two colours and they’ll be harmonious, which you don’t get with RGB,” Hudson-Powell says. “We can find infinite combinations.”
There are also three colour subsets for TwelveLabs’ three key features: pink-purple for search, orange-yellow for generate, and green-blue for embed.
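To make that harmony claim concrete, here is a minimal, self-contained sketch of why equal-lightness, equal-chroma LCH palettes read as matched: holding L and C fixed while rotating hue yields colours of equal perceived brightness and saturation, which rotating raw RGB channels does not guarantee. It uses standard CIE Lab/D65 colorimetry to convert LCH values to sRGB hex codes; the specific L and C values are arbitrary, and there is no suggestion this is the tooling Pentagram used.

```python
# Minimal LCH -> sRGB sketch. Standard CIE Lab math with a D65 white point;
# the chosen L/C values are arbitrary examples, not Pentagram's palette.
import math

def lch_to_srgb(L, C, H):
    """Convert CIE LCh(ab) to an sRGB hex string (D65 white point)."""
    # LCh -> Lab: hue is an angle around the a/b plane.
    a = C * math.cos(math.radians(H))
    b = C * math.sin(math.radians(H))
    # Lab -> XYZ, using the standard inverse companding function.
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    eps, kappa = 216 / 24389, 24389 / 27
    finv = lambda t: t**3 if t**3 > eps else (116 * t - 16) / kappa
    X, Y, Z = 0.95047 * finv(fx), 1.0 * finv(fy), 1.08883 * finv(fz)
    # XYZ -> linear sRGB, then clamp and gamma-encode each channel.
    r = 3.2404542 * X - 1.5371385 * Y - 0.4985314 * Z
    g = -0.9692660 * X + 1.8760108 * Y + 0.0415560 * Z
    bl = 0.0556434 * X - 0.2040259 * Y + 1.0572252 * Z
    def encode(c):
        c = max(0.0, min(1.0, c))
        c = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
        return round(c * 255)
    return "#{:02x}{:02x}{:02x}".format(encode(r), encode(g), encode(bl))

# Equal L and C, hue rotated in steps: every pair reads as harmonious.
palette = [lch_to_srgb(L=70, C=40, H=h) for h in range(0, 360, 45)]
print(palette)
```

The same idea now works directly in a stylesheet, since CSS supports `lch()` natively, e.g. `color: lch(70% 40 220);`.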
Pentagram’s Luke Powell and Jody Hudson-Powell’s new colour palette for TwelveLabs
    Lee says the new identity has resonated with investors, employees and most importantly, customers.
    “It’s given them this confidence that they’re working with not only a super-technical team, but also a team that cares deeply about video,” he says. “So we can communicate with our science community, but also with the people who are building the content we love consuming. There’s a duality which feels really connected.”
    Barclay agrees, and adds that it helps people grasp what TwelveLabs does – and what it might do for them – more quickly.
    “It’s definitely improved our website tremendously in terms of telling a better story,” he says. “Before it took a lot of time to comprehend what TwelveLabs is, and what we’re offering. We have definitely shortened that.”
Pentagram’s Luke Powell and Jody Hudson-Powell’s new logo, icons, and identity palette for TwelveLabs