• Audio Localization Gear Built On The Cheap

    Most humans with two ears have a pretty good sense of directional hearing. However, you can build equipment to localize audio sources, too. That’s precisely what [Sam], [Ezra], and [Ari] …read more
    HACKADAY.COM
  • Meet Cucumber, The Robot Dog

    Robots can look like all sorts of things, but they’re often more fun if you make them look like some kind of charming animal. That’s precisely what [Ananya], [Laurence] and …read more
    HACKADAY.COM
  • HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE

    By TREVOR HOGG

    Images courtesy of Warner Bros. Pictures.

    Rather than a world constructed around photorealistic pixels, a video game created by Markus Persson has taken the boxier 3D voxel route, which has become its signature aesthetic, and sparked an international phenomenon that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess in creating the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise under the direction of Production VFX Supervisor Dan Lemmon.

    “[A]s the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.”
    —Talia Finlayson, Creative Technologist, Disguise

    Interior and exterior environments had to be created, such as the shop owned by Steve (Jack Black).

    “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” The project provided new opportunities. “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”

    Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.”

    A virtual exploration of Steve’s shop in Midport Village.

    Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. “Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.”

    “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”
    —Laura Bell, Creative Technologist, Disguise

    Among the buildings that had to be created for Midport Village was Steve’s (Jack Black) Lava Chicken Shack.

    Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.” At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.”

    Flexibility was critical. “A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.” Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younis [VAD Art Director] adapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!”

    A virtual study and final still of the cast members standing outside of the Lava Chicken Shack.

    “We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.”
    —Talia Finlayson, Creative Technologist, Disguise

    The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks. “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.”

    Virtually conceptualizing the layout of Midport Village.

    Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.” Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay George [VP Tech] and I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.”

    An example of the virtual and final version of the Woodland Mansion.

    “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.”
    —Laura Bell, Creative Technologist, Disguise

    Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.”

    Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment.

    Doing a virtual scale study of the Mountainside.

    Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.”

    Piglins cause mayhem during the Wingsuit Chase.

    Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods.

    “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols [VAD Supervisor], Pat Younis, Jake Tuck [Unreal Artist] and Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.”
    WWW.VFXVOICE.COM
  • XR headset, Samsung, Project Moohan, virtual reality, availability, new technologies, rumors, Android, launch, innovation

    ---

    ## Introduction

    In a world where technology evolves at breakneck speed, every new announcement sparks wild hope. Samsung's XR headset, known as Project Moohan, is one of the most eagerly awaited products of recent years. But just as the rumors about its availability are firming up, a shadow of despair hangs over the enthusi...
    Samsung XR Headset Availability: Between Hope and Despair
  • It is truly unacceptable to see the 2025 Annecy Festival unfold under a blazing sun while MIFA (the International Animation Film Market) seems mired in a total absence of transparency and concrete figures. How can anyone speak of a "slight rise" in attendance without providing precise data? It shows a lack of respect for the professionals and festival-goers who travel here to discover animated works and exchange ideas.

    The inflated rhetoric and broken promises of certain players such as TeamTO and TAT are simply revolting. We hear talk of revival and charm, but where are the tangible results? Ambitious projects must translate into concrete achievements, not hollow slogans that merely mask a far darker reality. MIFA should be a place of innovation and reflection, not a mere parade of illusions.

    And what about the organization itself? The conferences, though numerous, often lack substance. We are bored to death while the real questions, the ones that could move the industry forward, go unanswered. Instead of tackling the underlying problems, such as diversity and inclusion in the animation sector, the preference is for inflated figures and appearances. It is a genuine betrayal of those who are passionate about animation and hope to have their voices heard.

    We live in an era where technology is evolving at breakneck speed, and animation stands at a crossroads. Yet here in Annecy, it feels as if we are treading water, lost in empty speeches and screenings that serve only to flatter the egos of a privileged few. It is high time MIFA took responsibility and started acting in a responsible, proactive way.

    Why not offer a real platform for young talent? Why not encourage serious discussions about the sector's challenges instead of merely flattering the established industries? It is time to question the status quo, shake up old habits and genuinely ask what "success" means for a festival that claims to be avant-garde.

    In short, the Annecy Festival and MIFA need to wake up. It is unacceptable to keep passing off hollow speeches as progress. Animation enthusiasts deserve better than that. They deserve a festival that truly represents them and takes concrete steps to support the future of animation.

    #FestivalAnnecy #MIFA2025 #Animation #TeamTO #TAT
    Annecy, Day 3: TeamTO revives, TAT charms, MIFA raises questions
    The 2025 Annecy Festival continues under a blazing sun. Festival-goers are turning out in great numbers for screenings and conferences, and overall attendance is slightly up on last year.
  • Stanford Doctors Invent Device That Appears to Be Able to Save Tons of Stroke Patients Before They Die

    Image by Andrew Brodhead

    Researchers have developed a novel device that literally spins away the clots that block blood flow to the brain and cause strokes. As Stanford explains in a blurb, the novel milli-spinner device may be able to save the lives of patients who experience "ischemic stroke" from brain stem clotting.

    Traditional clot removal, a process known as thrombectomy, generally uses a catheter that either vacuums up the blood blockage or uses a wire mesh to ensnare it — a procedure that's as rough and imprecise as it sounds. Conventional thrombectomy has a very low efficacy rate because of this imprecision, and the procedure can result in pieces of the clot breaking off and moving to more difficult-to-reach regions. Thrombectomy via milli-spinner also enters the brain with a catheter, but instead of using a normal vacuum device, it employs a spinning tube outfitted with fins and slits that can suck up the clot much more meticulously.

    Stanford neuroimaging expert Jeremy Heit, who also coauthored a new paper about the device in the journal Nature, explained in the school's press release that the efficacy of the milli-spinner is "unbelievable." "For most cases, we’re more than doubling the efficacy of current technology, and for the toughest clots — which we’re only removing about 11 percent of the time with current devices — we’re getting the artery open on the first try 90 percent of the time," Heit said. "This is a sea-change technology that will drastically improve our ability to help people."

    Renee Zhao, the senior author of the Nature paper, who teaches mechanical engineering at Stanford and creates what she calls "millirobots," said that conventional thrombectomies just aren't cutting it. "With existing technology, there’s no way to reduce the size of the clot," Zhao said. "They rely on deforming and rupturing the clot to remove it." "What’s unique about the milli-spinner is that it applies compression and shear forces to shrink the entire clot," she continued, "dramatically reducing the volume without causing rupture."

    Indeed, as the team discovered, the device can cut and vacuum a clot down to as little as five percent of its original size. "It works so well, for a wide range of clot compositions and sizes," Zhao said. "Even for tough... clots, which are impossible to treat with current technologies, our milli-spinner can treat them using this simple yet powerful mechanics concept to densify the fibrin network and shrink the clot."

    Though its main experimental use case is brain clot removal, Zhao is excited about its other uses, too. "We’re exploring other biomedical applications for the milli-spinner design, and even possibilities beyond medicine," the engineer said. "There are some very exciting opportunities ahead."

    More on brains: The Microplastics in Your Brain May Be Causing Mental Health Issues
    FUTURISM.COM
    Stanford Doctors Invent Device That Appears to Be Able to Save Tons of Stroke Patients Before They Die
    Image by Andrew BrodheadResearchers have developed a novel device that literally spins away the clots that block blood flow to the brain and cause strokes.As Stanford explains in a blurb, the novel milli-spinner device may be able to save the lives of patients who experience "ischemic stroke" from brain stem clotting.Traditional clot removal, a process known as thrombectomy, generally uses a catheter that either vacuums up the blood blockage or uses a wire mesh to ensnare it — a procedure that's as rough and imprecise as it sounds. Conventional thrombectomy has a very low efficacy rate because of this imprecision, and the procedure can result in pieces of the clot breaking off and moving to more difficult-to-reach regions.Thrombectomy via milli-spinner also enters the brain with a catheter, but instead of using a normal vacuum device, it employs a spinning tube outfitted with fins and slits that can suck up the clot much more meticulously.Stanford neuroimaging expert Jeremy Heit, who also coauthored a new paper about the device in the journal Nature, explained in the school's press release that the efficacy of the milli-spinner is "unbelievable.""For most cases, we’re more than doubling the efficacy of current technology, and for the toughest clots — which we’re only removing about 11 percent of the time with current devices — we’re getting the artery open on the first try 90 percent of the time," Heit said. "This is a sea-change technology that will drastically improve our ability to help people."Renee Zhao, the senior author of the Nature paper who teaches mechanical engineering at Stanford and creates what she calls "millirobots," said that conventional thrombectomies just aren't cutting it."With existing technology, there’s no way to reduce the size of the clot," Zhao said. "They rely on deforming and rupturing the clot to remove it.""What’s unique about the milli-spinner is that it applies compression and shear forces to shrink the entire clot," she continued, "dramatically reducing the volume without causing rupture."Indeed, as the team discovered, the device can cut and vacuum up to five percent of its original size."It works so well, for a wide range of clot compositions and sizes," Zhao said. "Even for tough... clots, which are impossible to treat with current technologies, our milli-spinner can treat them using this simple yet powerful mechanics concept to densify the fibrin network and shrink the clot."Though its main experimental use case is brain clot removal, Zhao is excited about its other uses, too."We’re exploring other biomedical applications for the milli-spinner design, and even possibilities beyond medicine," the engineer said. "There are some very exciting opportunities ahead."More on brains: The Microplastics in Your Brain May Be Causing Mental Health IssuesShare This Article
    2 Comments 0 Shares
  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization, presented at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state of the art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view. (A minimal sketch of this pooling idea appears after this list.)
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose. (A generic sketch of this alignment step also follows below.)
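    To make the height-aware pooling step more concrete, here is a minimal sketch, assuming a lifted 3D feature volume and a learned attention over the vertical axis. The module name, tensor shapes, and the single-convolution scoring head are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch (assumed names/shapes): pool a lifted ground-view feature
# volume over the height axis with learned attention instead of flattening it.
import torch
import torch.nn as nn

class HeightAttnPool(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # one logit per (height, x, y) cell, computed from its feature vector
        self.score = nn.Conv3d(channels, 1, kernel_size=1)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (B, C, Z, X, Y) features lifted from the ground image into 3D
        attn = torch.softmax(self.score(volume), dim=2)  # weights over height Z
        return (attn * volume).sum(dim=2)                # BEV map: (B, C, X, Y)

bev = HeightAttnPool(64)(torch.randn(1, 64, 8, 100, 100))
print(bev.shape)  # torch.Size([1, 64, 100, 100])
```

    The design choice mirrored here is that each map cell decides for itself which height slice (road marking, facade, or roofline) contributes its descriptor to the BEV representation.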

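    For the final step, the following is a small, generic sketch of weighted 2D Procrustes (Kabsch) alignment recovering a 3-DoF (x, y, yaw) transform from matched point pairs. The function name and the confidence-weighting scheme are placeholders; this is textbook geometry under stated assumptions, not the authors' exact solver.

```python
# Generic weighted 2D Procrustes/Kabsch alignment (illustrative, not FG2's code).
import numpy as np

def procrustes_2d(ground_pts, aerial_pts, weights):
    """Find R (2x2 rotation) and t (2,) such that aerial ~= R @ ground + t."""
    w = weights / weights.sum()
    mu_g = (w[:, None] * ground_pts).sum(axis=0)      # weighted centroids
    mu_a = (w[:, None] * aerial_pts).sum(axis=0)
    Gc, Ac = ground_pts - mu_g, aerial_pts - mu_a
    H = (w[:, None] * Gc).T @ Ac                       # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_a - R @ mu_g
    return R, t, np.arctan2(R[1, 0], R[0, 0])          # rotation, shift, yaw

# Toy check: points rotated by 30 degrees and shifted by (2, -1) are recovered.
rng = np.random.default_rng(0)
g = rng.normal(size=(6, 2))
th = np.deg2rad(30.0)
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
a = g @ R_true.T + np.array([2.0, -1.0])
R, t, yaw = procrustes_2d(g, a, np.ones(6))
print(np.round(np.rad2deg(yaw), 2), np.round(t, 2))   # ~30.0 [ 2. -1.]
```

    In FG2 the weights would come from the match confidences, so a few high-confidence correspondences (a crosswalk corner, a facade edge) can dominate the pose estimate.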
    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project.
    WWW.MARKTECHPOST.COM
    0 Comments 0 Shares
  • Keep an eye on Planet of Lana 2 — the first one was a secret gem of 2023

    May 2023 was kind of a big deal. A little ol’ game called The Legend of Zelda: Tears of the Kingdom (ring any bells?) was released, and everyone was playing it; Tears sold almost 20 million copies in under two months. However, it wasn’t the only game that came out that month. While it may not have generated as much buzz at the time, Planet of Lana is one of 2023’s best indies — and it’s getting a sequel next year.

    Planet of Lana is a cinematic puzzle-platformer. You play as Lana as she tries to rescue her best friend and fellow villagers after they were taken by mechanical alien beings. She’s accompanied by a little cat-like creature named Mui (because any game is made better by having a cat in it). Together, they outwit the alien robots in various puzzles on their way to rescuing the villagers.

    The puzzles aren’t too difficult, but they still provide a welcome challenge; some require precise execution lest the alien robots grab Lana too. Danger lurks everywhere, as there are also native predators vying to get a bite out of Lana and her void of a cat companion. Mui is often at the center of solving environmental puzzles, which rely on a dash of stealth, to get around those dangerous creatures.

    Planet of Lana’s art style is immediately eye-catching; its palette of soft, inviting colors contrasts with the comparatively dark storyline. Lana and Mui travel through the grassy plains surrounding her village, an underground cave, and through a desert. The visuals are bested only by Planet of Lana’s music, which is both chill and powerful in parts.

    Of course, all ends well — this is a game starring a child and an alien cat, after all. Nothing bad was really going to happen to them. Or at least, that was certainly the case in the first game, but the trailer for Planet of Lana 2: Children of the Leaf ends with a shot of poor Mui lying in some sort of hospital bed or perhaps at a research station. Lana looks on, and her worry is palpable in the frame.

    But Planet of Lana 2 won’t come out until 2026, so I don’t want to spend too much time worrying about the little dude. The cat’s fine (Right? Right?). What’s not fine, however, is Lana’s village and her people. In the trailer for the second game, we see more alien robots trying to zap her and her friend, and a young villager falls into a faint.

    Children of the Leaf is certainly upping the stakes and widening its scope. Ships from outer space zoom through a lush forest, and we get exciting shots of Lana hopping from ship to ship. Lana also travels across various environments, including a gorgeous underwater level, and rides on the back of one of the alien robots from the first game.

    I’m very excited to see how the lore of Planet of Lana expands with its sequel, and I can’t wait to tag along for another journey with Lana and Mui when Planet of Lana 2: Children of the Leaf launches in 2026. You can check out the first game on Nintendo Switch, PS4, PS5, Xbox One, Xbox Series X, and Windows PC.
    WWW.POLYGON.COM
    0 Comments 0 Shares
  • Editorial Design: '100 Beste Plakate 24' Showcase

    06/12 — 2025

    by abduzeedo

    Explore "100 Beste Plakate 24," a stunning yearbook by Tristesse and Slanted Publishers. Dive into cutting-edge editorial design and visual identity.
    Design enthusiasts, get ready to dive into the latest from the German-speaking design scene. The "100 Beste Plakate 24" yearbook offers a compelling showcase of contemporary graphic design. It's more than just a collection; it's a deep exploration of visual identity and editorial design.
    This yearbook, published by Slanted Publishers and edited by 100 beste Plakate e. V. and Fons Hickmann, is a testament to the power of impactful poster design. The design studio Tristesse from Basel took the reins for the overall concept, delivering a fresh and cheeky aesthetic that makes the "100 best posters" feel like leading actors on a vibrant stage. Their in-house approach to layout, typography, and photography truly shines.
    Unpacking the Visuals
    The book's format (17×24 cm) and 256 pages allow for large-format images, providing ample space to appreciate each poster's intricate details. It includes detailed credits, content descriptions, and creation contexts. This commitment to detail in the editorial design elevates the reading experience.
    One notable example within the yearbook is the "To-Do: Diplome 24" poster campaign by Atelier HKB. Designed under Marco Matti's project management, this series features twelve motifs for the Bern University of the Arts graduation events. These posters highlight effective graphic design and visual communication. Another standout is the "Rettungsplakate" by klotz-studio für gestaltung. These "rescue posters," printed on actual rescue blankets, address homelessness in Germany. The raw, impactful visual approach paired with a tangible medium demonstrates powerful design with a purpose.
    Beyond the Imagery
    Beyond the stunning visuals, the yearbook offers insightful essays and interviews on current poster design trends. The introductory section features jury members, their works, and statements on the selection process, alongside forewords from the association president and jury chair. This editorial content offers valuable context and insights into the evolving landscape of graphic design.
    The book’s concept playfully questions the seriousness and benevolence of the honorary certificates awarded to the winning designers. This subtle irony adds a unique layer to the publication, transforming it from a mere compilation into a thoughtful commentary on the design world itself. It's an inspiring showcase of the cutting edge of contemporary graphic design.
    The Art of Editorial Design
    "100 Beste Plakate 24" is a prime example of exceptional editorial design. It's not just about compiling images; it's about curating a narrative. The precise layout, thoughtful typography choices, and the deliberate flow of content all contribute to a cohesive and engaging experience. This book highlights how editorial design can transform a collection of works into a compelling story, inviting readers to delve deeper into each piece.
    The attention to detail, from the softcover with flaps to the thread-stitching and hot-foil embossing, speaks volumes about the dedication to craftsmanship. This is where illustration, graphic design, and branding converge to create a truly immersive experience.
    Final Thoughts
    This yearbook is a must-have for anyone passionate about graphic design and visual identity. It offers a fresh perspective on contemporary poster design, highlighting both aesthetic excellence and social relevance. The detailed insights into the design process and the designers' intentions make it an invaluable resource. Pick up a copy and see how impactful design can be.
    You can learn more about this incredible work and acquire your copy at slanted.de/product/100-beste-plakate-24.
    Editorial design artifacts

    ABDUZEEDO.COM
    0 Comments 0 Shares
  • ‘Cattle Crisis’ Scrambles Between Shots to Collect Powerful Cows

    Cattle Crisis sees aliens making off with our precious cows, so you’ll need to gun those ships down to rescue our beloved bovine buddies.

    The game only features a single level, but you’d best believe I have not been able to complete it yet as it makes the most of its brief playtime. Enemies come at you hard and fast, each covering the screen in various bullet patterns that overlap with only a little room to spare. Thankfully it’s just enough to scoot your ship to safety most of the time if you’re precise, but I am unfortunately not particularly precise with my controller movements the more frantic things get on the screen. Even with generous checkpoints and a single level, my shaky hands don’t make this easy.

    Like most spaceship shooters, there are some extra-powerful boosts you can put to smart use. As you down ships, you will release the kidnapped cows that are inside some of them. Collecting these cows increases your Hyper Bar, and once that’s halfway full, you can shift into Hyper Mode and really blast your enemies. Taking a hit in this mode won’t kill you, but you will drop out of Hyper Mode (and it’s smarter to end it early by hitting the Hyper Mode button again to launch a screen-clearing bomb). You can earn way more cows (and a higher score) by shooting foes point-blank, so it’s up to you how much risk you want to take to get into Hyper Mode faster and crank up that score.
    Cattle Crisis is short but offers a great deal of challenge and possibilities for high scores through careful risk-taking. Just the same, it offers a handful of checkpoints you can start from (even though it’s just one stage) if you feel your shooter skills aren’t amazing (like mine). It’s sharp, sounds great, and handles well, creating a nice bite-sized shooter package.

    Cattle Crisis is available now on itch.io (and you can try it out in your browser).
    INDIEGAMESPLUS.COM
    0 Comments 0 Shares