• Hey VR enthusiasts! Are you ready to dive into the exhilarating world of Virtual Reality? Keeping up with the latest news is essential, and I've got just the thing for you! Check out the article on the "Top 8 Best VR News Sites" that will keep you informed and inspired!

    Staying updated not only fuels your passion but also opens up new horizons in the VR universe! Let's embrace the future together and explore the endless possibilities that await us. Remember, knowledge is power!

    #VirtualReality #VRNews #StayInformed #TechLovers #FutureIsNow
    The Best VR News Sites: Top 8
    Virtual reality enthusiasts like to stay informed about the latest developments, and finding a reliable source is […] The article The Best VR News Sites: Top 8 was published on REALITE-VIRTUELLE.COM.
  • What on earth is going on with the VFX in Netflix's "The Snow Sister"? Seriously, it’s 2023, and we’re still being fed mediocre visual effects that are supposed to "wow" us but end up doing the exact opposite! The so-called "VFX breakdown" is nothing more than a slap in the face to anyone who actually appreciates the art of visual storytelling.

    Let’s get one thing straight: if the best VFX are indeed the ones you can’t spot, then how on earth did we end up with these glaringly obvious digital blunders? It’s like they threw a bunch of half-baked effects together and called it a day. Instead of stunning visuals that elevate the narrative, we get a distracting mess that pulls you right out of the experience. Who are they kidding?

    The creators of "The Snow Sister" clearly missed the memo that viewers today are not easily satisfied. We demand more than just passable effects; we want immersive worlds that captivate us. And yet, here we are, subjected to a barrage of poorly executed VFX that look like they belong in a low-budget production from the early 2000s. It’s frustrating to see Netflix, a platform that should be setting the gold standard in content creation, flounder so embarrassingly with something as fundamental as visual effects.

    What’s even more maddening is the disconnect between the promotional hype and the actual product. They tout the "creation" of these effects as if they’re groundbreaking, but in reality, they are a visual cacophony that leaves much to be desired. How can anyone take this seriously when the final product looks like it was hastily patched together? It’s not just a disservice to the viewers; it’s an insult to the talented artists who work tirelessly in the VFX industry. They deserve better than to have their hard work represented by subpar results that manage to undermine the entire project.

    Netflix needs to wake up and understand that audiences are becoming increasingly discerning. We’re not just mindless consumers; we have eyes, and we can see when something is off. The VFX in "The Snow Sister" is a glaring example of what happens when corners are cut and quality is sacrificed for the sake of quantity. We expect innovation, creativity, and, above all, professionalism. Instead, we are fed a half-hearted effort that leaves us shaking our heads in disbelief.

    In conclusion, if Netflix wants to maintain its position as a leader in the entertainment industry, it’s time to step up its game and give us the high-quality VFX that we deserve. No more excuses, no more mediocre breakdowns—just real artistry that enhances our viewing experience. Let’s hold them accountable and demand better!

    #VFX #Netflix #TheSnowSister #VisualEffects #EntertainmentIndustry
    VFX breakdown: Netflix's The Snow Sister
    Enjoy seeing how the VFX in The Snow Sister were created. As always, the best VFX are the ones you can't spot! Source
    A review of the Silk & Snow S&S organic latex mattress. Honestly, it's a mattress, and it's much like any other. It's supposed to be soft as a cloud, which is a plus, I suppose. The promise of comfort is there, but only for people who sleep alone. So if you like sharing your space, this isn't the best option.

    The mattress is made from organic materials, which seems to be fashionable right now. That's nice for those who care about the environment, but it doesn't change the fact that I feel a little tired just talking about it. Reviewers say it's incredibly comfortable, but I didn't really feel that magic. Maybe I'm just too used to my old mattress, which knows me better.

    There are plenty of mattresses on the market, and this one doesn't really stand out, even with its "organic" label. I suppose for someone who sleeps alone, it's an option. But for me, it's just another mattress that does the job. The comfort is there, but I don't know whether that's enough to make it a must-buy.

    In the end, the Silk & Snow S&S is a mattress that has its qualities, but there's no real excitement in talking about it. I'd rather just stick with my old mattress, even if it isn't as "soft as a cloud." Maybe one day I'll be in the mood to try something new, but today is not that day.

    #Matelas #SilkEtSnow #Confort #Dormir #Organiques
    Silk & Snow S&S Organic Mattress Review: Soft as a Cloud
    Silk & Snow’s got an organic mattress in its lineup, and it’s amazingly comfortable—but only for solo sleepers.
  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose.
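    The final step of the pipeline above, Procrustes alignment, is a standard closed-form solve. As an illustration only (a generic 2D Kabsch/Procrustes sketch with made-up point arrays, not the authors' implementation), here is how a 3-DoF pose can be recovered from a set of matched BEV points:

    ```python
    import numpy as np

    def procrustes_2d(ground_pts, aerial_pts):
        """Recover a 3-DoF pose (yaw, translation) that maps matched
        ground-view BEV points onto their aerial-view counterparts.

        ground_pts, aerial_pts: (N, 2) arrays of corresponding 2D points.
        Returns (yaw_radians, translation_vector).
        """
        # Center both point sets on their centroids.
        g_mean = ground_pts.mean(axis=0)
        a_mean = aerial_pts.mean(axis=0)
        G = ground_pts - g_mean
        A = aerial_pts - a_mean

        # SVD of the cross-covariance gives the optimal rotation.
        H = G.T @ A
        U, _, Vt = np.linalg.svd(H)
        # Guard against reflections (det = -1 solutions).
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, d]) @ U.T

        # Translation follows from aligning the centroids.
        t = a_mean - R @ g_mean
        yaw = np.arctan2(R[1, 0], R[0, 0])
        return yaw, t
    ```

    In the paper's setting the inputs would be the sparse, high-confidence matches selected by the network, and the recovered yaw and translation constitute the camera's pose on the aerial map.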

    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 100k+ ML SubReddit and Subscribe to our Newsletter.
    Jean-Marc Mommessin is an AI business executive. He leads and accelerates growth for AI-powered solutions and started a computer vision company in 2006. He is a recognized speaker at AI conferences and holds an MBA from Stanford.
    #epfl #researchers #unveil #fg2 #cvpr
  • Twinning, Creepers and more VFX covered in the ‘Mickey 17’ issue

    Issue #32 of befores & afters magazine is now out in PRINT and DIGITAL! It’s a deep dive into the visual effects of Bong Joon Ho’s Mickey 17, starring Robert Pattinson.
    The film contains creatures, spacecraft, snow-filled landscapes and several scenes where actor Robert Pattinson appears as two ‘expendable’ clone characters—Mickey 17 and Mickey 18—on screen at the same time.

    The new issue explores this twinning work, as well as going into detail on the creatures and environment visual effects largely orchestrated by DNEG, Framestore, Rising Sun Pictures and Turncoat Pictures.

    You can grab the issue in PRINT from Amazon, or as a DIGITAL EDITION on Patreon.
    Remember, you can also subscribe to the DIGITAL EDITION as a tier on the Patreon and get a new issue every time one is released.

    Hope you enjoy the latest issue!
Here are the links to the various Amazon stores:
USA: https://www.amazon.com/dp/B0FCYRV86J
UK: https://www.amazon.co.uk/dp/B0FCYRV86J
Canada: https://www.amazon.ca/dp/B0FCYRV86J
Germany: https://www.amazon.de/dp/B0FCYRV86J
France: https://www.amazon.fr/dp/B0FCYRV86J
Spain: https://www.amazon.es/dp/B0FCYRV86J
Italy: https://www.amazon.it/dp/B0FCYRV86J
Australia: https://www.amazon.com.au/dp/B0FCYRV86J
Japan: https://www.amazon.co.jp/dp/B0FCYRV86J
Sweden: https://www.amazon.se/dp/B0FCYRV86J
Poland: https://www.amazon.pl/dp/B0FCYRV86J
Netherlands: https://www.amazon.nl/dp/B0FCYRV86J
    The post Twinning, Creepers and more VFX covered in the ‘Mickey 17’ issue appeared first on befores & afters.
  • NVIDIA TensorRT Boosts Stable Diffusion 3.5 Performance on NVIDIA GeForce RTX and RTX PRO GPUs

    Generative AI has reshaped how people create, imagine and interact with digital content.
    As AI models continue to grow in capability and complexity, they require more VRAM, or video random access memory. The base Stable Diffusion 3.5 Large model, for example, uses over 18GB of VRAM — limiting the number of systems that can run it well.
    By applying quantization to the model, noncritical layers can be removed or run with lower precision. NVIDIA GeForce RTX 40 Series and the Ada Lovelace generation of NVIDIA RTX PRO GPUs support FP8 quantization to help run these quantized models, and the latest-generation NVIDIA Blackwell GPUs also add support for FP4.
NVIDIA collaborated with Stability AI to quantize its latest model, Stable Diffusion (SD) 3.5 Large, to FP8, reducing VRAM consumption by 40%. Further optimizations to SD3.5 Large and Medium with the NVIDIA TensorRT software development kit (SDK) double performance.
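The memory savings follow directly from the bit widths: an 8-bit format stores one byte per weight versus two for FP16/BF16, and 18GB reduced by 40% lands at roughly the 11GB quoted later in the article. A toy sketch below uses symmetric int8 codes as a stand-in for FP8; it illustrates the idea only and is not NVIDIA's TensorRT calibration pipeline:

```python
import numpy as np

# Illustrative per-tensor 8-bit quantization: one byte per weight plus a
# single FP32 scale, versus two bytes per weight in FP16. Int8 codes stand
# in for FP8 here; real TensorRT calibration targets FP8 formats on-GPU.
def fake_quantize(w: np.ndarray):
    scale = float(np.abs(w).max()) / 127.0   # symmetric range [-127, 127]
    q = np.round(w / scale).astype(np.int8)  # 1 byte per weight
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

# Back-of-envelope for the quoted figures: 18 GB * (1 - 0.40) ≈ 10.8 GB,
# matching the ~11 GB requirement cited for the FP8 model.
```

Noncritical layers tolerate this precision loss well, which is why quantization can roughly halve the weight footprint with similar output quality.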
    In addition, TensorRT has been reimagined for RTX AI PCs, combining its industry-leading performance with just-in-time, on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. TensorRT for RTX is now available as a standalone SDK for developers.
    RTX-Accelerated AI
    NVIDIA and Stability AI are boosting the performance and reducing the VRAM requirements of Stable Diffusion 3.5, one of the world’s most popular AI image models. With NVIDIA TensorRT acceleration and quantization, users can now generate and edit images faster and more efficiently on NVIDIA RTX GPUs.
Stable Diffusion 3.5 quantized to FP8 (right) generates images in half the time with similar quality as FP16 (left). Prompt: A serene mountain lake at sunrise, crystal clear water reflecting snow-capped peaks, lush pine trees along the shore, soft morning mist, photorealistic, vibrant colors, high resolution.
    To address the VRAM limitations of SD3.5 Large, the model was quantized with TensorRT to FP8, reducing the VRAM requirement by 40% to 11GB. This means five GeForce RTX 50 Series GPUs can run the model from memory instead of just one.
    SD3.5 Large and Medium models were also optimized with TensorRT, an AI backend for taking full advantage of Tensor Cores. TensorRT optimizes a model’s weights and graph — the instructions on how to run a model — specifically for RTX GPUs.
FP8 TensorRT delivers a 2.3x performance boost on SD3.5 Large compared with running the original model in BF16 PyTorch, while using 40% less memory. For SD3.5 Medium, BF16 TensorRT provides a 1.7x performance increase over BF16 PyTorch.
    The optimized models are now available on Stability AI’s Hugging Face page.
    NVIDIA and Stability AI are also collaborating to release SD3.5 as an NVIDIA NIM microservice, making it easier for creators and developers to access and deploy the model for a wide range of applications. The NIM microservice is expected to be released in July.
    TensorRT for RTX SDK Released
    Announced at Microsoft Build — and already available as part of the new Windows ML framework in preview — TensorRT for RTX is now available as a standalone SDK for developers.
    Previously, developers needed to pre-generate and package TensorRT engines for each class of GPU — a process that would yield GPU-specific optimizations but required significant time.
    With the new version of TensorRT, developers can create a generic TensorRT engine that’s optimized on device in seconds. This JIT compilation approach can be done in the background during installation or when they first use the feature.
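The generic-engine-plus-JIT idea boils down to a build-on-first-use cache keyed by model and device. The sketch below is conceptual; `build_engine`, `get_engine`, and the cache layout are invented for illustration and are not the TensorRT for RTX API:

```python
import hashlib
from pathlib import Path

CACHE_DIR = Path("engine_cache")  # hypothetical on-device cache location

def build_engine(model_graph: str, gpu_name: str) -> bytes:
    # Stand-in for the expensive device-specific optimization pass
    # that the real SDK performs on first use.
    return f"optimized[{model_graph}]@{gpu_name}".encode()

def get_engine(model_graph: str, gpu_name: str) -> bytes:
    """Return a device-specific engine, JIT-building and caching on first use."""
    CACHE_DIR.mkdir(exist_ok=True)
    key = hashlib.sha256(f"{model_graph}:{gpu_name}".encode()).hexdigest()
    path = CACHE_DIR / f"{key}.engine"
    if path.exists():
        return path.read_bytes()          # warm start: reuse the cached build
    engine = build_engine(model_graph, gpu_name)  # cold start: compile now
    path.write_bytes(engine)
    return engine
```

Running the cold-start build during installation, or in the background on first use, is what hides the compilation cost from the end user.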
    The easy-to-integrate SDK is now 8x smaller and can be invoked through Windows ML — Microsoft’s new AI inference backend in Windows. Developers can download the new standalone SDK from the NVIDIA Developer page or test it in the Windows ML preview.
    For more details, read this NVIDIA technical blog and this Microsoft Build recap.
    Join NVIDIA at GTC Paris
    At NVIDIA GTC Paris at VivaTech — Europe’s biggest startup and tech event — NVIDIA founder and CEO Jensen Huang yesterday delivered a keynote address on the latest breakthroughs in cloud AI infrastructure, agentic AI and physical AI. Watch a replay.
    GTC Paris runs through Thursday, June 12, with hands-on demos and sessions led by industry leaders. Whether attending in person or joining online, there’s still plenty to explore at the event.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
  • NOOBS ARE COMING (Demo) [Free] [Action] [Windows] [Linux]

SirCozyCrow (5 hours ago): The soundtrack is PEAK! I loved playing this, and my partner, who normally doesn't play games like this one, had a good time as well. I enjoyed the learning curve and I can't wait to play the harder difficulties. Here's a video I made; my partner jumped in for a few minutes as well.
(anonymous): so fun
Drew.a.Chain (1 day ago): Very addictive!
Trashpanda119 (1 day ago): Love the playstyle and the art style, definitely fun to play, plus the music is the cherry on top.
AhoOppai (1 day ago): Really fun game, can't wait for the full game.
Din Xavier coding (1 day ago): I chose the laser eye. How do I turn the attack around? Can I even do that?
overboy (1 day ago): Hey, the laser eye gets a random direction at the start of each wave; it's one of the specificities of this attack ;)
Fort Kenmei (1 day ago): Gameplay and critique ;)
overboy (1 day ago): Thanks a lot for the awesome video and the feedback! :)
TLGaby (2 days ago): Just to let you know, browser progress keeps getting reset.
overboy (1 day ago): Thanks for the report! Could it be due to some of your browser settings? Unfortunately, browser-based games can't always guarantee reliable local saves due to how browsers handle storage. To avoid this in the future, I recommend trying the downloadable version of the demo; it provides a more stable environment for saving progress. :)
(anonymous): epic.
oleekconder (2 days ago): Very nice. Spent a couple of hours easy =) UPD: And some more.
MaximusR (3 days ago): It's a game I already played back when it had fewer things, and now that it's updated I'd like to record it again. [translated from Spanish]
(anonymous): EPIC, love the spiders ♥
nineGardens (3 days ago): Okay so... tried out a few things, and some dev suggestions to report: Bigfoot is such a cool idea, and running around at that speed with all THAT going on just gave me motion sickness. Summoner is hysterical fun. All hail spiders. Tomatoes are pretty fun too. The Adept is so cool in theory, but once you have the right build it's a bit of a "standing still simulator." Also, if you have totems or other turrets, there's very much the question each round of "Will my circle spawn NEAR the totems, or far from them?" I kind of wonder if the mage circle should fizzle out after 20 seconds and appear somewhere else. Just something to give a bit more dynamism, and to make the original spawn point less critical. Added thoughts: Watering psychotic tomatoes feels great. Being a malevolent spider with 8 arms feels amazing; very good and natural. "Orbital" is one of the greatest and most fun abilities in the game; I would take this even without the damage boost. Lots of fun, but also very silly. Good job.
dave9999 (3 days ago): With some Size you can kick the totems around to reposition them towards your circle, and it benefits them too. The Adept can choose the wand at the start, and with it you have no sustain problem with whatever build you want to set up.
nineGardens (3 days ago): Oh damn, only just found out you can kick the totems! Okay, yeah, in that case all is well. Or at least... I still think a moving circle could be cool, but the fact that you can move your totems over to where the circle is makes things much better.
(anonymous): Just get enough Amount + Size and they hit everything; Bounce is overkill.
(anonymous): Lost track of time, 10 hours in and still hooked. Absolutely love it! Can't wait for the full release.
DriftedVoid (4 days ago): Pretty good!
Indyot (4 days ago): It's a pretty addictive game, congrats! I lowkey missed a bit of satisfaction on the weapons though.
(anonymous): Congrats on the game! I really like the weapons that you interact with, which gives it a fun spin.
1Soultaken (4 days ago): Anyone know good combos for the items?
dave9999 (4 days ago): Lasers plus Amount plus Adept, with some Arcane for basic damage. Totems plus Amount plus Bounce plus Adept, with optional Size and Arcane; you can stand still in the end. All shovels with Crit and Strength: their extra souls help you snowball hard, probably the most straightforward and stable, very good build. You can beat the game with nearly anything (it's well balanced), but this one is very strong and easy. Soul Flask and extra chests are near-always must-picks; the high-Luck ones give you better items. The free reroll is a must-pick. Lightning Dagger is somewhat unique in that it can carry you the entire early game even if you don't get enough element damage.
dave9999 (8 days ago): Underestimated totems.
limey (8 days ago): I like how you made MULTITUDES of updates on this, so as soon as I check my feed it's just this.
dave9999 (8 days ago): My best run so far. Is there a hidden mechanic that makes weapons you already have more likely to drop?
overboy (8 days ago): Lmao, awesome, looks like a really fun build to play! Yeah, shop RNG uses a lot of hidden tricks to help you find relevant attacks, while still allowing unrelated ones to appear. That way, you can discover unique builds and experiment freely!
overboy (8 days ago): Thank you so much for the incredible reception of the web demo on Itch, and to everyone who wishlisted the game! Many of the changes, along with much more to come in future updates, are directly based on your feedback here and on the game's Discord.

    I’m also excited to announce that the game will release on Steam on 8 July 2025!
Demo - Update 35
- Singleplayer UI: the Level Up Upgrade phase and Chest Pickup phase UI now display the items and attacks inventories
- Singleplayer Shop: subtle animation while selecting a Buy button
- Many balancing tweaks
- Balancing: nerfed Life Steal in various ways
- Balancing: nerfed Knockback in various ways
- Balancing: too many items enhancing Max HP were in the Demo, which made it easier to stack HP and survive because of the higher ratio of items providing HP
- Added a short grace period during which the player can still pick up Souls even after they're slurped by the Soul Portal
- Fine-tuned the color of some weapons to improve visibility
- Balancing: Ballistas no longer double their projectiles based on Amount
- If the player's HP is full and Max HP > 20, the player can't be one-shot
- Bugfix: the in-game achievement pop-up could be displayed below other UI elements when it should always be above everything else
- Potential bugfix for a rare multiplayer shop bug where player 2's shop sections weren't displayed at all
- Reworked the save system in preparation for upcoming features
xHELLO_WORLDx (10 days ago): Congrats on the game!
elijah_ap (10 days ago): Love the art style, upgrades, controls, etc. Balance might be the only thing off about this. If you were to add anything, I would want to see more variety in the stages, similar to Vampire Survivors. Otherwise, really great.
overboy: Thank you so much! I'll keep working on the balance with each update, and I appreciate the suggestion on stage variety!
Netsmile (10 days ago): Torch IV has a problem rounding numbers in the stats hover-over display. Other levels of torches work.
overboy (10 days ago): Thanks, I'll fix this displayed rounding-number issue soon!
Skeppartorsk (10 days ago): For now I'd say it's fun, but lacking a bit in balance. I absolutely suck at Brotato-likes but find this one easy, so it's probably undertuned as far as difficulty is concerned. The power and availability of HP and regen items makes you just not care if you get hit. Add the relatively strong armor on top, and you're too tanky for anything to feasibly ever kill you.
overboy (10 days ago): Thanks for the feedback! Sounds like tanky builds might be a bit too forgiving right now; I'll do some balancing changes.
Skeppartorsk (9 days ago): Life steal has similar issues too. There's also the standard issue with knockback in these kinds of games: the lack of any enemy resistance or diminishing returns means it's way too easy to get enough knockback that enemies cannot touch you anymore. Ranged attacks are too few and far between to worry about with the current levels of sustain, meaning you can just Stand Still and Kill way too reliably. Edit: Lategame with 6x Wands I'm getting so much screen shake it's triggering simulation sickness. It was due to having Pierce + Bounce: the screen shake from my projectiles bouncing off the edge of the map.
overboy (8 days ago): Thanks for your feedback, it will help with the game's balancing! For now I try to avoid diminishing returns by design, to make sure each feature and stat is super easy to understand; I dislike it when a roguelike gets too opaque, and I prefer that the player fully and easily understands each of their choices, but yeah, that requires finding a good balance! In future updates, Life Steal will become harder to get and Knockback will be capped at lower maximum applied values. Regarding the overall difficulty, the full version has 3 extra difficulty levels, and based on feedback from beta testers, the balance between the 5 difficulty modes seems close to what I'm aiming for. There is already an option to disable screen shake ;) Edit: Would you be interested in joining the beta test of the full game? If so, please join the Discord and ping me in a DM ;)
Skeppartorsk (8 days ago): I did notice that you could turn off screen shake entirely, but admittedly a lot of the visceral feel of the combat goes away when you fully disable it. It's just that with too many Leeroy/knockback/bouncing projectiles, it reaches the point where simulation sickness sets in. I wish there were something like an intensity setting, or a way to cap how often a screen shake can be triggered. I agree on the opaqueness concern, but I was thinking of something more akin to how CC diminishing returns work in WoW: 1st hit = full value, 2nd hit within 10s = half value, 3rd hit = 1/4 value, then 10s of immunity before it resets. That way you still get knockback when you pick knockback, but you can't just permanently nail enemies against the wall. Edit: Also, there's a wording issue with how multiple pentagrams work. If you have the Adept pentagram and the item pentagram, the wording is "when you stand inside a pentagram", but the item one gives the 20% damage ONLY and the Adept one gives the Adept bonuses ONLY. The wording implies that both pentagrams should give the Adept bonus AND the 20% damage bonus. Edit 2: I'd suggest reformatting Grimorius's tooltip so that the -10% armor is above the "on level up" portion. The indentation difference between the +1% speed and -10% armor is small enough that I read it as losing 10% armor on every level up.
overboy (8 days ago): Thanks a lot for the interesting insights! I nerfed HP, Life Steal and Knockback using various techniques in the last update, along with many other changes. I just tested Pentagram/Adept and it works as expected: the two effects stack correctly, as the wording implied. I reformatted Grimorius's tooltip as you suggested ;)
Bad Piggy (11 days ago): Very cool in its current state. I love how much it emphasises movement, like how some active abilities need to be grabbed from around the arena to use them. That said, I think enemy projectiles could honestly stand out more; I could hardly see them at times in all the chaos. Still, I think this is a pretty solid base right now, and as always, you have a beautiful visual style, though I feel the game suffers a little from how busy it can get. Great stuff so far though.
overboy: Thanks, Bad Piggy! Really glad you're enjoying the mechanics. I appreciate the feedback on projectile visibility and how busy things can get; I'll definitely look into ways to improve those aspects. Really grateful for the kind words and thoughtful feedback!
LeoLohandro (11 days ago): A copy of Brotato, but still fun.
overboy (11 days ago): Hey, thanks a lot! Yes, this game is a Brotato-like with many twists and new mechanics, such as: equippable boss patterns, minion summoning, growing plant minions with a watering can, Amount and Size stats, physics-based weapons (like chained spikeballs), kickable stuff, a playable character merge feature, and dozens and dozens of unique effects. I'm aiming for something like The Binding of Isaac meets Brotato: a deep, replayable experience full of chaotic synergies and wild builds that feel totally unique each run, with all the "being a boss" fantasy and humor deeply included in the mechanics and content :)
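The WoW-style diminishing-returns scheme discussed in the thread (full value on the first hit, half on the second, a quarter on the third, then immunity until 10 seconds pass without a hit) is simple to prototype. A minimal sketch; the class and method names are invented for illustration, not taken from the game:

```python
# Sketch of per-enemy diminishing returns for knockback, as suggested in
# the thread: 1st hit full value, 2nd half, 3rd quarter, then immunity,
# resetting once 10 seconds elapse since the last application.
class KnockbackDR:
    WINDOW = 10.0                # seconds without a hit before reset
    FACTORS = [1.0, 0.5, 0.25]   # 4th+ hit inside the window: immune

    def __init__(self):
        self.hits = 0
        self.last_hit = float("-inf")

    def apply(self, base_knockback: float, now: float) -> float:
        if now - self.last_hit > self.WINDOW:
            self.hits = 0        # window expired: back to full value
        self.last_hit = now
        if self.hits < len(self.FACTORS):
            factor = self.FACTORS[self.hits]
        else:
            factor = 0.0         # immune until the window resets
        self.hits += 1
        return base_knockback * factor
```

Each enemy would carry its own tracker, so picking knockback still pays off on fresh targets while perma-pinning a single enemy against the wall stops working.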
    #noobs #are #coming #demo #free
    NOOBS ARE COMING (Demo) [Free] [Action] [Windows] [Linux]
    SirCozyCrow, 5 hours ago: The soundtrack is peak! I loved playing this, and my partner, who normally doesn't play games like this one, had a good time as well. I enjoyed the learning curve and I can't wait to play the harder difficulties. Here's a video I made; my partner jumped in for a few minutes as well.
    (anonymous): So fun.
    Drew.a.Chain, 1 day ago: Very addictive!
    Trashpanda119, 1 day ago: Love the playstyle and the art style, definitely fun to play, plus the music is the cherry on top.
    AhoOppai, 1 day ago: Really fun game, can't wait for the full game.
    Din Xavier coding, 1 day ago: I chose the laser eye. How do I turn the attack around? Can I even do that?
    overboy, 1 day ago: Hey, the laser eye gets a random direction at the start of each wave; it's one of the specificities of this attack ;)
    Fort Kenmei, 1 day ago: Gameplay and critique ;)
    overboy, 1 day ago: Thanks a lot for the awesome video and the feedback! :)
    TLGaby, 2 days ago: Just so you know, browser progress keeps getting reset.
    overboy, 1 day ago: Thanks for the report! Could it be due to some of your browser settings? Unfortunately, browser-based games can't always guarantee reliable local saves due to how browsers handle storage. To avoid this in the future, I recommend trying the downloadable version of the demo; it provides a more stable environment for saving progress. :)
    (anonymous): Epic.
    oleekconder, 2 days ago: Very nice. Spent a couple hours, easy =) Update: and some more.
    MaximusR, 3 days ago (translated from Spanish): It's a game I already played back when it had fewer things, and now that it's updated I'd like to record it again.
    (anonymous): EPIC
    (anonymous): Love the spiders ♥
    nineGardens, 3 days ago: Okay, so... I tried out a few things, and have some dev suggestions to report. Bigfoot is such a cool idea, but running around at that speed with all THAT going on just gave me motion sickness. Summoner is hysterical fun. All hail spiders. Tomatoes are pretty fun too. The Adept is so cool in theory, but once you have the right build it's a bit of a "standing still simulator". Also, if you have totems or other turrets, there's very much the question each round of "Will my circle spawn NEAR the totems (instant win), or far from them (oh no)?" I kind of wonder if the mage circle should fizzle out after 20 seconds and appear somewhere else, just something to give a bit more dynamism and make the original spawn point less critical. Added thoughts: watering psychotic tomatoes feels great. Being a malevolent spider with 8 arms feels amazing, very good and natural. "Orbital" is one of the greatest and most fun abilities in the game; I would take it even without the damage boost. Lots of fun, but also very silly. Good job.
    dave9999, 3 days ago: With some Size you can kick the totems around to reposition them toward your circle, and it benefits them too. The Adept can choose the wand at the start, and with it you have no sustain problem, whatever build you want to set up.
    nineGardens, 3 days ago: Oh damn, I only just found out you can kick the totems! Okay, in that case all is well. Or at least... I still think a moving circle could be cool, but the fact that you can move your totems over to where the circle is makes things much better.
    (anonymous): Just get enough Amount + Size and they hit everything; Bounce is overkill.
    (anonymous): Lost track of time, 10 hours in and still hooked. Absolutely love it! Can't wait for the full release.
    DriftedVoid, 4 days ago: Pretty good!
    Indyot, 4 days ago: It's a pretty addictive game, congrats! I lowkey missed a bit of satisfaction on the weapons though.
    (anonymous): Congrats on the game! I really like the weapons that you interact with (i.e. the spike ball), which gives it a fun spin.
    1Soultaken, 4 days ago: Anyone know good combos for the items? (I just pick randomly.)
    dave9999, 4 days ago: A few that work for me:
    - Lasers plus Amount + Adept, with some Arcane for base damage (it's unstable to set up, and only Overboy starts with one).
    - Totems + Amount + Bounce + Adept, optionally Size and Arcane; you can stand still in the end.
    - All shovels with Crit and Strength; their extra souls help you snowball hard and easy. Probably the most straightforward and stable build. You can beat the game with nearly anything, it's well balanced, but this one is very strong and easy (I realized in the end that all the Size was wasted on this).
    - Soul Flask and More Chests are near-always must-picks; the high-Luck ones give you better items. The free reroll is a must-pick. Lightning Dagger is somewhat unique in that it can carry you the entire early game even if you don't get enough Element damage. (I understand that the more gimmicky things like pets and kickables give the game versatility, but to min-max they are not that competitive.)
    dave9999, 8 days ago: Underestimated totems.
    limey, 8 days ago: I like how you've made multitudes of updates on this, so as soon as I check my feed it's just this.
    dave9999, 8 days ago: My best run so far. Is there a hidden mechanic that makes weapons you already have more likely to drop?
    overboy, 8 days ago: Lmao, awesome, that looks like a really fun build to play! Yeah, shop RNG uses a lot of hidden tricks to help you find relevant attacks, while still allowing unrelated ones to appear. That way, you can discover unique builds and experiment freely!
    overboy, 8 days ago: Thank you so much for the incredible reception of the web demo on Itch, and to everyone who wishlisted the game! Many of the changes, along with much more to come in future updates, are directly based on your feedback here and on the game's Discord. I'm also excited to announce that the game will release on Steam on 8 July 2025!
    Demo - Update 35 (06 June 2025)
    - Singleplayer UI: the Level Up Upgrade Phase and Chest Pickup Phase UI now display the item and attack inventories (useful, for example, to check the scaling of currently equipped attacks).
    - Singleplayer Shop: subtle animation while selecting a Buy button.
    - Many balancing tweaks.
    - Balancing: nerfed Life Steal in various ways (lower values gained from items).
    - Balancing: nerfed Knockback in various ways (lower values gained, higher item rarity, lower maximum applied value).
    - Balancing: too many items enhancing Max HP were in the demo, so it was easier to stack HP and survive due to the higher ratio of items providing HP.
    - Added a short grace period during which the player can still pick up Souls even after they're slurped by the Soul Portal.
    - Fine-tuned the color of some weapons to improve visibility.
    - Balancing: Ballistas no longer double their projectiles based on Amount (only the number of ballistas scales with Amount).
    - If the player's HP is full and Max HP > 20, the player can't be one-shot.
    - Bugfix: the in-game achievement pop-up could be displayed below other UI elements when it should always be above everything else.
    - Potential bugfix for a rare multiplayer shop bug where player 2's shop sections weren't displayed at all.
    - Reworked the save system in preparation for upcoming features.
    xHELLO_WORLDx, 10 days ago: Congrats on the game.
    dave9999, 10 days ago.
    elijah_ap, 10 days ago: Love the art style, upgrades, controls, etc. Balance might be the only thing off about this. If you were to add anything, I would want to see more variety in the stages, similar to Vampire Survivors. Otherwise, really great.
    overboy: Thank you so much! I'll keep working on the balance with each update, and I appreciate the suggestion on stage variety!
    Netsmile, 10 days ago: Torch IV has a number-rounding problem in the stats hover display. Other torch levels work.
    overboy, 10 days ago: Thanks, I'll fix this displayed-number rounding issue soon!
    Skeppartorsk, 10 days ago: For now I'd say it's fun, but lacking a bit in balance. I absolutely suck at Brotato-likes but find this one easy, so it's probably undertuned as far as difficulty is concerned. The power and availability of HP and regen items makes you just not care if you get hit. Add the relatively strong armor on top, and you're too tanky for anything to feasibly ever kill you.
    overboy, 10 days ago: Thanks for the feedback! Sounds like tanky builds might be a bit too forgiving right now; I'll make some balancing changes.
    Skeppartorsk, 9 days ago: Life Steal has similar issues too. There's also the standard issue with knockback in these kinds of games: the lack of any enemy resistance or diminishing returns means it's way too easy to get enough knockback that enemies cannot touch you anymore. Ranged attacks are too few and far between to worry about with the current levels of sustain, meaning you can just stand still and kill way too reliably. Edit: late game with 6x Wands I'm getting so much screen shake it's triggering simulation sickness. It was due to having Pierce + Bounce; the shake came from my projectiles bouncing off the edge of the map.
    overboy, 8 days ago: Thanks for your feedback, it will help with the game's balancing! For now I try to avoid diminishing returns by design, to keep each feature and stat super easy to understand; I dislike it when roguelikes get too opaque, and I prefer that players fully and easily understand each of their choices. But yes, that requires finding a good balance! In future updates, Life Steal will become harder to get, and Knockback will be capped at lower maximum applied values. Regarding overall difficulty, the full version has 3 extra difficulty levels, and based on feedback from beta testers, the balance between the 5 difficulty modes seems close to what I'm aiming for (minus some issues like the ones you pointed out, and of course some balancing required on specific builds and items). There is already an option to disable screen shake ;) Edit: would you be interested in joining the beta test of the full game? If so, please join the Discord and ping me in a DM ;)
    Skeppartorsk, 8 days ago: I did notice that you can turn off screen shake entirely, but admittedly a lot of the visceral feel of the combat goes away when you fully disable it. When you have too many Leeroy/knockback/bouncing projectiles, though, it reaches the point where simulation sickness sets in. I wish there were something like an intensity setting, or a way to cap how often a screen shake can be triggered. I agree on the opacity point, but I was thinking of something akin to how CC diminishing returns work in WoW: 1st hit = full value, 2nd hit within 10 s = half value, 3rd hit = 1/4 value, then 10 s of immunity before it resets. That way you still get knockback when you pick knockback, but you can't just permanently nail enemies against the wall. Edit: there's also a wording issue (or a bug) with how multiple pentagrams work. If you have the Adept pentagram and the item pentagram, the wording is "when you stand inside a pentagram", but the item one gives the 20% damage ONLY and the Adept one gives the Adept bonuses ONLY. The wording implies that both pentagrams should give the Adept bonus AND the 20% damage bonus. Edit 2: I'd suggest reformatting the Grimorius tooltip so that the -10% armor is above the "on level up" portion. The indentation difference between the +1% speed and the -10% armor is small enough that I read it as losing 10% armor on every level up.
    overboy, 8 days ago: Thanks a lot for the interesting insights! I nerfed HP, Life Steal, and Knockback using various techniques in the last update, along with many other changes. I just tested Pentagram/Adept and it works as expected: the two effects stack correctly, as the wording implies. I reformatted the Grimorius tooltip as you suggested ;)
    Bad Piggy, 11 days ago: Very cool in its current state. I love how much it emphasises movement, like how some active abilities need to be grabbed from around the arena to use them. That said, I think enemy projectiles could stand out more; I could hardly see them at times in all the chaos. Still, this is a pretty solid base right now, and as always you have a beautiful visual style, though I feel the game suffers a little from how busy it can get. Great stuff so far though.
    overboy: Thanks Bad Piggy! Really glad you're enjoying the mechanics. I appreciate the feedback on projectile visibility and on how busy things can get; I'll definitely look into ways to improve those aspects. Really grateful for the kind words and thoughtful feedback!
    LeoLohandro, 11 days ago: A copy of Brotato :), but still fun.
    overboy, 11 days ago: Hey, thanks a lot! Yes, this game is a Brotato-like with many twists and new mechanics, such as:
    - Equippable boss patterns (active skills you can trigger by picking up orbs on the map)
    - Minion summoning
    - Growing plant minions with a watering can
    - Amount and Size stats
    - Physics-based weapons, like chained spikeballs
    - Kickable stuff (you can even play soccer with your minions or other co-op players)
    - A playable-character merge feature (get the effects of 2 or more different characters at the same time)
    - Dozens and dozens of unique effects (turning enemies into sheep, or golden statues, or both?)
    I'm aiming for something like The Binding of Isaac meets Brotato: a deep, replayable experience full of chaotic synergies and wild builds that feel totally unique each run, with the "being a boss" fantasy and humor deeply embedded in the mechanics and content :)
    0 Comments 0 Shares
  • Gardenful / TAOA

    Gardenful / TAOA
    © Tao Lei
    Landscape Architecture • Beijing, China

    Architects: TAOA
    Area: 227 m²
    Year: 2024
    Photographs: Tao Lei
    Text description provided by the architects. This is an urban garden built for private use. In this small corner of the city, I hoped to fill the whole garden with abundant nature. The site is an open space in a villa compound, surrounded by a cluster of the European-style single-family villas typical of Chinese real estate. Modern buildings, with their complete facilities, largely satisfy the requirements of indoor temperature and humidity comfort, but a building also draws a hard climate boundary that cuts off the connection between indoors and outdoors, and with it the continuity of nature and life.

    The project is not simply defined as either a garden or a building; too narrow a definition would only constrain the imagination. The aim is to establish a place that can hold a piece of real nature, give people shelter, and be walked through. The original intention of the design is to build a quiet place where one can be alone, a semi-indoor, semi-outdoor space that leads enclosed life back outdoors and into nature.

    The square site in the middle of the garden is a relatively independent space. The roof above provides comfortable shelter, while an opening at its center exposes the sky, so sunshine, rain, and snow play out here. On the ground below, trees and vegetation from the mountains are introduced, preserving their most primitive wildness. Held within this refined urban space and its abstract geometric order, that wildness naturally sheds the roughness of raw nature.

    To the north, a spatial transformation is made on both sides: through the stairway and the upward pull of the roof, the narrow auxiliary garden is extended. It has no roof and is therefore bright, maintaining a light-and-shade relationship different from that of the central garden, and it is filled with rocks and plants transplanted from the mountains.

    The structure of the garden is thin, dense synthetic bamboo; the crossed combination of dense members partitions the space like a bamboo fence, forming a soft boundary. The interior is lined with wooden panels, and the exterior is clad with thin, crisp aluminum panels. A "bridge" of stone slabs passes through the different spaces, sometimes standing between the bamboo structures, sometimes crossing the rocks, so that walking along it moves between order and wildness.

    Nature is difficult to measure, and because of its rich, ever-changing qualities it lends richness to spaces. Large trees, rocks, and small flowers and plants were brought from the mountains, avoiding artificial nursery plants as far as possible. The garden's structure places nature within a geometric order, tempering its wild roughness, yet the details of nature can still be discovered and its released life force unconsciously perceived. These fragments of nature are real and wild, and should not lose their vitality and richness through artificial transplantation. The superposition of wild abundance and modern geometric space makes the garden alive, with elegance and decency.

    This nature stands apart from the high-density urban space, becoming an independent world that shields out the noise of the city. Its elements are integrated into a continuous, integral "pavilion" and "corridor" that carry the family's outdoor life. While sheltering from wind and rain, the four eaves also create a relationship of light and dark space, with the bright natural center highlighted as the focus of life. From any angle one sees a picture of hierarchy and order, a real fragment of nature built into a new context by geometric order. The richness of nature is thus more easily perceived, and its changes play out constantly in daily life, visible throughout the year.

    Project location: Beijing, China (for reference only; it may indicate the city/country but not the exact address)
    Office: TAOA
    Published on June 15, 2025. Cite: "Gardenful / TAOA" 15 Jun 2025. ArchDaily. ISSN 0719-8884.
    #gardenful #taoa
    Gardenful / TAOA
    © Tao Lei. Landscape Architecture, Beijing, China. Architects: TAOA. Area: 227 m². Year: 2024. Photographs: Tao Lei.
    Text description provided by the architects. This is an urban garden built for private use. In this small corner of the city, I hoped to fill the whole garden with abundant nature. The site is an open space in a villa compound, surrounded by a cluster of European-style single-family villas typical of Chinese real estate. Modern buildings, with their complete facilities, easily meet the requirements of indoor temperature and humidity comfort, but they also draw a hard climate boundary, cutting off the connection between indoor and outdoor and, with it, the continuity of nature and life.
    The project resists a simple definition as either a garden or a building; too narrow a definition would only limit the imagination. The aim is simply to establish a place that can hold a piece of real nature, give people shelter, and let them walk within it. The original intention of the design was to build a quiet place for being alone, a semi-indoor, semi-outdoor space that leads enclosed life back outdoors and into nature.
    The square site in the middle of the garden is a relatively independent space. Its sheltered top provides comfort and coziness, while an opening at the center exposes the sky, where sunshine, rain and snow play out. On the ground beneath it, trees and vegetation from the mountains are introduced, preserving their most primitive wildness. Holding that wildness within this exquisite urban space, and within its abstract geometric order, naturally tempers the raw character of the original nature.
    On both sides to the north, a spatial transformation extends the narrow auxiliary garden through a stairway and the upward pull of the roof. This auxiliary garden has no roof and is therefore bright, maintaining a light-and-shade relationship distinct from the central garden, and it is filled with rocks and plants transplanted from the mountains.
    The structure of the garden is thin, dense synthetic bamboo; the cross combination of its dense members partitions the space like a bamboo fence, forming a soft boundary. The interior is lined with wooden panels, while the exterior is covered with thin, crisp aluminum panels. A "bridge" made of stone panels passes through the different spaces, sometimes standing between the bamboo structures, sometimes crossing the rocks, moving between order and wildness.
    Nature is difficult to measure, and its rich, ever-changing qualities lend richness to the spaces. From the mountains come large trees, rocks, and small flowers and plants, avoiding artificial nursery stock as far as possible. The geometric order of the garden's structure tempers nature's wild sense, so that its details can be discovered and the life force it releases can be unconsciously perceived. These fragments of nature are real and wild, and should not lose their vitality and richness through artificial transplantation. The superposition of wild abundance and modern geometric space keeps the garden alive with elegance and decency.
    This nature stands apart from the high-density urban space, becoming an independent world that shields out the noise of the city. The parts are integrated into a continuous, integral "pavilion" and "corridor" that carry the family's outdoor life. While sheltering from wind and rain, the four eaves also create a relationship between light and dark space; the bright middle highlights nature and becomes the center of life. From any angle one sees a picture of hierarchy and order: a real fragment of nature, set into a new context by geometric order. The richness of nature is thus more easily perceived, and its changes play out constantly in daily life, visible throughout the year.
    Project location: Beijing, China (location to be used only as a reference; it may indicate city/country but not the exact address). About this office: TAOA. Published on June 15, 2025. Cite: "Gardenful / TAOA" 15 Jun 2025. ArchDaily. <https://www.archdaily.com/1028408/gardenful-taoa> ISSN 0719-8884.
  • This Marvel Rivals hero became unstoppable after the controversial change

    You can trust VideoGamer. Our team of gaming experts spend hours testing and reviewing the latest games, to ensure you're reading the most comprehensive guide possible. Rest assured, all imagery and advice is unique and original. Check out how we test and review games here

    Season 2.5 of Marvel Rivals has completely shaken up the competitive meta. Ultron, a new Strategist, has performed much better than the community had expected him to, while Peni Parker has had the highest win rate ever since the season was released. Many other heroes have been changed, but one in particular has surpassed all expectations.
    In this article, we will dive deeper into Marvel Rivals win rates in Season 2.5. The situation has drastically changed over the past month, and some of the balance changes have turned out to be the right move.
    Marvel Rivals Season 2.5 has skyrocketed Jeff’s win rate
    With the release of Season 2.5, NetEase introduced a major rework for Jeff the Land Shark. The fan-favorite Strategist went through ground-breaking changes, which forced players to change their playstyle. Initially, the community believed that Jeff was nerfed to the ground. However, this Marvel Rivals rework has turned out to be the right move.
    So far this season, Jeff the Land Shark holds a 46.39% win rate. While this is certainly not impressive, it actually surpasses the win rates of Invisible Woman (45.54%) and Luna Snow (45.28%), both of whom are considered amazing healers.
    Jeff’s win rate has been much better in Marvel Rivals Season 2.5. Image by VideoGamer
    In addition to this, Jeff is the hero with the largest win rate increase in comparison to Season 2, as it increased by 4.35%. The only other hero with a comparable change is Peni Parker, whose win rate has jumped by 4.09%.
    Jeff is still far from a top-tier character in the game. However, there is no denying that he’s now a viable pick who can keep his teammates alive while also eliminating the enemy team. With Marvel Rivals Season 3 dropping in a few weeks, we expect even more balance changes to come out. While NetEase may buff Jeff even more, he seems to have reached a balanced “sweet spot” that doesn’t require immediate adjustments.

    Marvel Rivals

    Platform:
    macOS, PC, PlayStation 5, Xbox Series S, Xbox Series X

    Genre:
    Fighting, Shooter

  • DISCOVERING ELIO

    By TREVOR HOGG

    Images courtesy of Pixar.

    The character design of Glordon is based on a tardigrade, which is a microscopic water bear.

    Rather than treating the unknown as something to be feared, Pixar has decided to do some wish fulfillment with Elio, in which a lonely adolescent astrophile gets abducted by aliens and is mistaken for the leader of Earth. Originally conceived and directed by Adrian Molina, the coming-of-age science fiction adventure was shepherded by Domee Shi and Madeline Sharafian, who had previously worked together on Turning Red.
    “Space is often seen as dark, mysterious and scary, but there is also so much hope, wonder and curiosity,” notes Shi, director of Elio. “It’s like anything is ‘out there.’ Elio captures how a lot of us feel at different points of our lives, when we were kids like him, or even now wanting to be off of this current planet because it’s just too much. For Elio, it’s a rescue. I feel that there’s something so universal about that feeling of wanting to be taken away and taken care of. To know that you’re not alone and somebody chose you and picked you up.”
    There is a stark contrast between how Earth and the alien world, known as the Communiverse, are portrayed. “The more we worked with the animators on Glordon and Helix, they began to realize that Domee and I respond positively when those characters are exaggerated, made cute, round and chubby,” states Sharafian, director of Elio. “That automatically started to differentiate the way the Earth and space feel.” A certain question had to be answered when designing the United Nations-inspired Communiverse. “It was coming from a place of this lonely kid who feels like no one wants him on Earth,” Shi explains. “What would be heaven and paradise for him? The Communiverse was built around that idea.” A sense of belonging is an important theme. “It’s also inspired by Adrian Molina’s backstory, and our backstories too, of going to animation college,” Sharafian remarks. “For the first time, we said, ‘This is where everybody like me is!’”

    Green is the thematic color for Elio.

    Visual effects are an important storytelling tool. “Especially for our movie, which is about this boy going to this crazy, incredible world of the Communiverse,” Shi observes. “It has to be dazzling and look spectacular on the big screen and feel like paradise. Elio is such a visual feast, and you do feel like, ‘I want to stay here no matter what. I can’t believe that this place even exists.’ Visual effects are a powerful tool to help you feel what the characters are feeling.” A wishlist became a reality for the directors. “Claudia Chung Sanii gave Domee and me carte blanche for wish fulfillment for ourselves,” Sharafian remarks. “What do you want Elio’s outfit in space to look like? It was a difficult costume, but now when we watch the movie, we’re all so proud of it. Elio looks fabulous, and he’s so happy to be wearing that outfit. Who would want to take that off?”

    The Communiverse was meant to feel like a place that a child would love to visit and explore.

    Methodology rather than technology went through the biggest change for the production. “The Communiverse is super complex and has lots of moving pieces. But there’s not much CG can’t do anymore,” notes Claudia Chung Sanii. “Elemental did effects characters. We did long curly hair, dresses, capes, water and fire. What we hadn’t done before was be a part of that design process. How do we get lighting into layout? How do we see the shaders in animation in layout? The tools department was working on a software called Luna which does that. I went to the tools department and asked, ‘Can I play around with it?’ They were like, ‘Okay. But it’s not ready yet.’ Tools will basically be bringing RenderMan and an interactive lighting workflow to the pipeline across all of these DCCs. Because we light in Katana, you can’t get back upstream. The conceit that we were dipping our toe in on Elio was, ‘Whatever you do in lighting, anyone on the pipeline can see it.’”

    The influence of microscopic forms and macro photography grounded the Communiverse in natural phenomena.

    The variety in the Communiverse is a contrast to the regimented world on the military base.

    There were no departmental borders, in particular with cinematography. “We had our layout and lighting DPs start on the same day. Derek Williams wouldn’t shoot anything without Jordan Rempel, our lighting DP, seeing it,” Sanii states. “Jordan would drop in lighting and start doing key lighting as Derek’s team was laying out. It wasn’t like you had to hit the render button, wait for the render to come up and go, ‘Oh, my god, it’s dark! I didn’t know that it was nighttime.’” A new term was adopted. “Meredith Hom and I pulled the entire crew and leadership into this mental concept that we called the ‘college project.’ For some of us, college was a time when we didn’t have titles and crafts. You begged, borrowed and stole to hit that deadline. So much of our world has become linear in our process that I wanted to break that down to, ‘No. We’re all working together. The scope of this film is too large for us to wait for each other to finish our piece. If this person is slammed, fine. Figure out a different idea to do it with what tools you have.’”

    Directors Domee Shi and Madeline Sharafian are drawn to chubby, exaggerated and cute characters.

    Forgoing the word ‘no’ led to the technology breaking down. “I remember times when crowds is dressing all of the aliens and, because of forgetting to constrain it to the Communiverse, they all show up at the origin, and you’re going, ‘Why is there a whole party going on over there?’” Sanii laughs. “On Elio, it was always forward. There were no rules about locking things down or not installing over the weekend. It was always like, ‘Put it all in, and we’ll deal with it on Monday.’ There would be some funny stuff. We never QC’d something before walking it into the room. Everyone saw how the sausage was made. It was fun and not fun for Harley Jessup because sometimes there would be a big thing in the middle screen, and he would say, ‘Is that finished?’ There was no way we could get through this film if we kept trying to fix the thing that broke.”

    An aerial image of Elio as he attempts to get abducted by aliens.

    Part of the design of the Communiverse was inspired by Chinese puzzle balls.

    A former visual effects art director at ILM, Harley Jessup found his previous experiences on projects like Innerspace to be helpful on Elio. “I liked that the directors wanted to build on the effects films from the 1980s and early 1990s,” reflects Jessup. “I was there and part of that. It was fun to look back. At the time, the techniques were all practical, matte paintings and miniatures, which are fun to work with, but without the safety net of CG. One thing Dennis Muren was keen on was how people see things like the natural phenomena you might see in microscopic or macro photography. We were using that. I was looking at the mothership of Close Encounters of the Third Kind, which Dennis shot when he was a young artist. It was nice to be able to bring all of that history to this film.”
    Earth was impacted by a comment made by Pete Docter. “He said, ‘The military base should feel like a parking lot,’” Jessup reveals. “You should know why Elio wants to be anywhere else. And the Communiverse needs to be inviting. We built a lot of contrast into those two worlds. The brutalist architecture on the military base, with its hard edges and heavy horizontal forms close to the earth, needed to be harsh but beautiful in its own way, so we tried for that. The Communiverse would be in contrast and be all curves, translucent surfaces and stained-glass backlit effects. Things were wide open about what it could be because each of the aliens is from a different climate and gravity. There are some buildings that are actually upside down on it, and the whole thing is rotating inside like clockwork. It is hopefully an appealing, fun world. It’s not a dystopian outer space.”

    Exploring various facial expressions for Elio.

    A tough character to get right was Aunt Olga, who struggles to be the guardian of her nephew.

    Character designs of Elio and Glordon, showing them interacting with each other.

    Architecture was devised to reflect the desired tone for scenes. “In the Grand Assembly Hall where each alien has a desk and booth, the booth is shaped like an eyelid that can close or open,” Jessup explains. “It increases the feeling that they’re evaluating and observing Elio and each of the candidates that have come to join the Communiverse.” A couple of iconic cinematic franchises were avoided for aesthetic reasons. “As much as I love Star Wars and Star Trek, we wanted to be different from those kinds of aliens that are often more humanoid.” Ooooo was the first alien to be designed. “We did Ooooo in collaboration with the effects team, which was small at that time. She was described as a liquid supercomputer. We actually used the wireframe that was turning up and asked, what if it ended up being this network of little lights that are moving around and can express how much she was thinking? Ooooo is Elio’s guide to the Communiverse; her body would deform, so she could become a big screen or reach out and pluck things. Ooooo has an ability like an amoeba to stretch.”
    Flexibility is important when figuring out shot design. “On Elio, we provided the layout department with a rudimentary version of our environments,” states David Luoh, Sets Supervisor. “It might be simple geometry. We’re not worried necessarily about shading, color and material yet. Things are roughly in place but also built in a way that is flexible. As they’re sorting out the camera and testing out staging, they can move elements of the set around. Maybe this architectural piece needs to be shifted or larger or smaller. There was a variation on what was typically expected of set deliveries of environments to our layout department. That bar was lowered to give the layout department something to work with sooner and also with more flexibility. From their work we get context as to how we partner with our art and design department to build and finalize those environments.”

    Regional biomes known as disks are part of the Communiverse. “There are aquatic, lush forest, snow and ice, and hot lava disks,” Luoh remarks. “The hot disk is grounded in the desert, volcanic rock and lava, while for the lush disk we looked at interesting plant life found in the world around us.” The Communiverse is a complex geometric form. “We wanted these natural arrangements of alien districts, and that was all happening on this twisting and curving terrain in a way that made traditional dressing approaches clunky. Oftentimes, you’re putting something on the ground or mounted, and the ground is always facing upward. But if you have to dress the wall or ceiling, it becomes a lot more difficult to manipulate and place on something with that dynamic and shape. You have stuff that casts light, is see-through and shifting over time. Ooooo is a living character that looks like electronic circuitry that is constantly moving, and we also have that element in the walls, floors and bubble transport that carry the characters around.”
    Sets were adjusted throughout the production. “We try to anticipate situations that might come up,” Luoh states. “What if we have a series of shots where you’re getting closer and closer to the Communiverse and you have to bridge the distance between your hero and set extension background? There is a partnership with story, but certainly with our layout camera staging department. As we see shots come out of their work, we know where we need to spend the time to figure out, are we going to see the distant hills in this way? We’re not going to build it until we know because it can be labor-intensive. There is a responsiveness to what we are starting to see as shots get made.” Combining the familiar into something unfamiliar was a process. “There was this curation of being inspired by existing alien sci-fi depictions, but also reaching back into biological phenomena or interesting material because we wanted to ground a lot of those visual elements and ideas in something that people could intuitively grasp on to, even if they were combined or arranged in a way that is surprising, strange and delightful.”
    #discovering #elio
    DISCOVERING ELIO
    By TREVOR HOGG Images courtesy of Pixar. The character design of Glordon is based on a tardigrade, which is a microscopic water bear. Rather than look at the unknown as something to be feared, Pixar has decided to do some wish fulfillment with Elio, where a lonely adolescent astrophile gets abducted by aliens and is mistaken as the leader of Earth. Originally conceived and directed by Adrian Molina, the coming-of-age science fiction adventure was shepherded by Domee Shi and Madeline Sharafian, who had previously worked together on Turning Red. “Space is often seen as dark, mysterious and scary, but there is also so much hope, wonder and curiosity,” notes Shi, director of Elio. “It’s like anything is ‘out there.’ Elio captures how a lot of us feel at different points of our lives, when we were kids like him, or even now wanting to be off of this current planet because it’s just too much. For Elio, it’s a rescue. I feel that there’s something so universal about that feeling of wanting to be taken away and taken care of. To know that you’re not alone and somebody chose you and picked you up.” The character design of Glordon is based on a tardigrade, which is a microscopic water bear. There is a stark contrast between how Earth and the alien world, known as the Communiverse, are portrayed. “The more we worked with the animators on Glordon and Helix, they began to realize that Domee and I respond positively when thosecharacters are exaggerated, made cute, round and chubby,” states Sharafian, director of Elio. “That automatically started to differentiate the way the Earth and space feel.” A certain question had to be answered when designing the United Nations-inspired Communiverse. “It was coming from a place of this lonely kid who feels like no one wants him on Earth,” Shi explains. “What would be heaven and paradise for him? The Communiverse was built around that idea.” A sense of belonging is an important theme. 
“It’s also inspired by Adrian Molina’s backstory, and our backstories too, of going to animation college,” Sharafian remarks. “For the first time, we said, ‘This is where everybody like me is!’” Green is the thematic color for Elio. Visual effects are an important storytelling tool. “Especially, for our movie, which is about this boy going to this crazy incredible world of the Communiverse,” Shi observes. “It has to be dazzling and look spectacular on the big screen and feel like paradise. Elio is such a visual feast, and you do feel like, ‘I want to stay here no matter what. I can’t believe that this place even exists.’ Visual effects are a powerful tool to help you feel what the characters are feeling.” A wishlist became a reality for the directors. “Claudia Chung Saniigave Domee and me carte blanche for wish fulfillment for ourselves,” Sharafian remarks. “What do you want Elio’s outfit in space to look like? It was a difficult costume, but now when we watch the movie, we’re all so proud of it. Elio looks fabulous, and he’s so happy to be wearing that outfit. Who would want to take that off?” The Communiverse was meant to feel like a place that a child would love to visit and explore. Methodology rather than technology went through the biggest change for the production. “The Communiverse is super complex and has lots of moving pieces. But there’s not much CG can’t do anymore,” notes Claudia Chung Sanii. “Elemental did effects characters. We did long curly hair, dresses, capes, water and fire. What we hadn’t done before was be a part of that design process. How do we get lighting into layout? How do we see the shaders in animation in layout? The tools department was working on a software called Luna which does that. I went to the tools department and asked, ‘Can I play around with it?’ They were like, ‘Okay. But it’s not ready yet.’ Tools will basically be bringing RenderMan and an interactive lighting workflow to the pipeline across all of these DCCs. 
Because we light in Katana, you can’t get back upstream. The conceit that we were dipping our toe in on Elio was, ‘Whatever you do in lighting, anyone on the pipeline can see it.’” The influence of microscopic forms and macro photography grounded the Communiverse in natural phenomena. The variety in the Communiverse is a contrast to the regimented world on the military base. There were no departmental borders, in particular with cinematography. “We had our layout and lighting DPs start on the same day. Derek Williams wouldn’t shoot anything without Jordan Rempel, our lighting DP, seeing it,” Sanii states. “Jordan would drop in lighting and start doing key lighting as Derek’s team was laying out. It wasn’t like you had to hit the render button, wait for the render to come up and go, ‘Oh, my god, it’s dark! I didn’t know that it was nighttime.’” A new term was adopted. “Meredith Hom [Production Manager] and I pulled the entire crew and leadership into this mental concept that we called the ‘college project.’ For some of us, college was a time when we didn’t have titles and crafts. You begged, borrowed and stole to hit that deadline. So much of our world has become linear in our process that I wanted to break that down to, ‘No. We’re all working together. The scope of this film is too large for us to wait for each other to finish our piece. If this person is slammed, fine. Figure out a different idea to do it with what tools you have.’” Directors Domee Shi and Madeline Sharafian are drawn to chubby, exaggerated and cute characters. Forgoing the word ‘no’ led to the technology breaking down. “I remember times when crowds [department] is dressing all of the aliens and because of forgetting to constrain it to the Communiverse, they all show up at the origin, and you’re going, ‘Why is there a whole party going on over there?’” Sanii laughs. “On Elio, it was always forward. There were no rules about locking things down or not installing over the weekend. 
It was always like, ‘Put it all in, and we’ll deal with it on Monday.’ There would be some funny stuff. We never QC’d something before walking it into the room. Everyone saw how the sausage was made. It was fun and not fun for Harley Jessup [Production Designer] because sometimes there would be a big thing in the middle screen, and he would say, ‘Is that finished?’ There was no way we could get through this film if we kept trying to fix the thing that broke.” An aerial image of Elio as he attempts to get abducted by aliens. Part of the design of the Communiverse was inspired by Chinese puzzle balls. A former visual effects art director at ILM, Harley Jessup found his previous experiences on projects like Innerspace to be helpful on Elio. “I liked that the directors wanted to build on the effects films from the 1980s and early 1990s,” reflects Jessup. “I was there and part of that. It was fun to look back. At the time, the techniques were all practical, matte paintings and miniatures, which are fun to work with, but without the safety net of CG. One thing Dennis Muren [VES] was keen on was how people see things like the natural phenomenon you might see in a microscopic or macro photography form. We were using that. I was looking at the mothership of Close Encounters of the Third Kind, which Dennis shot when he was a young artist. It was nice to be able to bring all of that history to this film.” Earth was impacted by a comment made by Pete Docter (CCO, Pixar). “He said, ‘The military base should feel like a parking lot,’” Jessup reveals. “You should know why Elio wants to be anywhere else. And the Communiverse needs to be inviting. We built a lot of contrast into those two worlds. The brutalist architecture on the military base, with its hard edges and heavy horizontal forms close to the earth, needed to be harsh but beautiful in its own way, so we tried for that. The Communiverse would be in contrast and be all curves, translucent surfaces and stained-glass backlit effects. 
Things were wide open about what it could be because each of the aliens is from a different climate and gravity. There are some buildings that are actually upside down on it, and the whole thing is rotating inside like clockwork. It is hopefully an appealing, fun world. It’s not a dystopian outer space.” Exploring various facial expressions for Elio. A tough character to get right was Aunt Olga, who struggles to be the guardian of her nephew. Character designs of Elio and Glordon, which show them interacting with each other. Architecture was devised to reflect the desired tone for scenes. “In the Grand Assembly Hall where each alien has a desk and booth, the booth is shaped like an eyelid that can close or open,” Jessup explains. “It increases the feeling that they’re evaluating and observing Elio and each of the candidates that have come to join the Communiverse.” A couple of iconic cinematic franchises were avoided for aesthetic reasons. “As much as I love Star Wars and Star Trek, we wanted to be different from those kinds of aliens that are often more humanoid.” Ooooo was the first alien to be designed. “We did Ooooo in collaboration with the effects team, which was small at that time. She was described as a liquid supercomputer. We actually used the wireframe that was turning up and asked, what if it ended up being this network of little lights that are moving around and can express how much she was thinking? Ooooo is Elio’s guide to the Communiverse; her body would deform, so she could become a big screen or reach out and pluck things. Ooooo has an ability like an amoeba to stretch.” Flexibility is important when figuring out shot design. “On Elio, we provided the layout department with a rudimentary version of our environments,” states David Luoh, Sets Supervisor. “It might be simple geometry. We’re not worried necessarily about shading, color and material yet. Things are roughly in place but also built in a way that is flexible. 
As they’re sorting out the camera and testing out staging, they can move elements of the set around. Maybe this architectural piece needs to be shifted or larger or smaller. There was a variation on what was typically expected of set deliveries of environments to our layout department. That bar was lowered to give the layout department something to work with sooner and also with more flexibility. From their work we get context as to how we partner with our art and design department to build and finalize those environments.” Regional biomes known as disks are part of the Communiverse. “There are aquatic, lush forest, snow and ice, and hot lava disks,” Luoh remarks. “The hot disk is grounded in the desert, volcanic rock and lava, while for the lush disk we looked at interesting plant life found in the world around us.” The Communiverse is a complex geometric form. “We wanted these natural arrangements of alien districts, and that was all happening on this twisting and curving terrain in a way that made traditional dressing approaches clunky. Oftentimes, you’re putting something on the ground or mounted, and the ground is always facing upward. But if you have to dress the wall or ceiling, it becomes a lot more difficult to manipulate and place on something with that dynamic and shape. You have stuff that casts light, is see-through and shifting over time. Ooooo is a living character that looks like electronic circuitry that is constantly moving, and we also have that element in the walls, floors and bubble transport that carry the characters around.” Sets were adjusted throughout the production. “We try to anticipate situations that might come up,” Luoh states. “What if we have a series of shots where you’re getting closer and closer to the Communiverse and you have to bridge the distance between your hero and set extension background? There is a partnership with story, but certainly with our layout camera staging department. 
As we see shots come out of their work, we know where we need to spend the time to figure out, are we going to see the distant hills in this way? We’re not going to build it until we know because it can be labor-intensive. There is a responsiveness to what we are starting to see as shots get made.” Combining the familiar into something unfamiliar was a process. “There was this curation of being inspired by existing alien sci-fi depictions, but also reaching back into biological phenomena or interesting material because we wanted to ground a lot of those visual elements and ideas in something that people could intuitively grasp on to, even if they were combined or arranged in a way that is surprising, strange and delightful.”
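Luoh's point about dressing the Communiverse's twisting terrain, where the assumption that "the ground is always facing upward" breaks down, reduces to a well-known graphics problem: orienting each prop by the local surface normal rather than by world up. As a minimal, hypothetical sketch (plain Python, not Pixar's actual tooling), a placement frame for a prop on a floor, wall or ceiling can be built from the normal alone:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def frame_from_normal(normal, up=(0.0, 1.0, 0.0)):
    """Build an orthonormal basis whose vertical axis is the surface normal.

    The same dressing code then works on floors, walls and upside-down
    buildings: only the normal fed in changes.
    """
    n = normalize(normal)
    # If the normal is nearly parallel to the reference up vector,
    # switch references to avoid a degenerate cross product.
    if abs(n[0] * up[0] + n[1] * up[1] + n[2] * up[2]) > 0.999:
        up = (1.0, 0.0, 0.0)
    tangent = normalize(cross(up, n))
    bitangent = cross(n, tangent)
    return tangent, n, bitangent

# Dressing a wall: the prop's "up" becomes the wall normal.
t, n, b = frame_from_normal((0.0, 0.0, 1.0))
```

The three returned vectors form the columns of a rotation matrix for the prop's transform; a set-dressing tool would combine this with the sampled surface position and a random spin about the normal.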
    WWW.VFXVOICE.COM
    DISCOVERING ELIO