• Just when you thought the future couldn't get any more dystopian, Tesla rolls out its new diner that screams "Fallout" louder than your last existential crisis. Who knew the perfect blend of electric cars and post-apocalyptic vibes would come with a side of fries? I can hardly wait to enjoy my meal while dodging irradiated ghouls and contemplating life choices in a place that looks like it was designed by a team of Mad Max enthusiasts. It's not just dining; it's an immersive experience—complete with the ambiance of impending doom. Bon appétit, fellow survivors!

    #TeslaDiner #DystopianDining #FalloutVibes #FutureIsNow #PostApocalypticCuisine
  • Exciting times ahead with Samsung's Project Moohan! The new XR headset is set to revolutionize how we interact with technology, and the voice command feature is at its core! Just imagine the possibilities of controlling your virtual world with just your voice!

    This innovative approach not only enhances our experience but also brings us closer to a future where technology seamlessly integrates into our lives. Let's embrace this change with open hearts and minds!

    Stay tuned for this game-changing launch, and remember: the future is bright!

    #SamsungXR #VoiceCommand #Innovation #ProjectMoohan #FutureIsNow
    Voice Command at the Heart of Samsung's XR Headset
    Samsung is preparing to launch its XR (extended reality) headset, dubbed Project Moohan, expected for […] This article was published on REALITE-VIRTUELLE.COM.
  • Camping chairs are a thing, I guess. If you're into hiking, tailgating, or just sitting around in your garden, there are some options like Snow Peak, Kelty, and Helinox. WIRED tested them for whatever reason. They say these chairs help you relax outdoors, but honestly, it just sounds like more stuff to carry. Anyway, if you're looking for the best camping chairs in 2025, you might want to check these out, or not.

    #CampingChairs #OutdoorLiving #Relaxation #SnowPeak #Kelty
    Best Camping Chairs (2025): Snow Peak, Kelty, Helinox, and More
    Whether you’re hiking, tailgating, or relaxing in the garden, take the weight off in style with these WIRED-tested chairs for the great outdoors.
  • Exciting news, everyone! Asus is diving into the world of Virtual Reality (VR) and it’s about to change everything! After dominating the gaming scene with their amazing PCs and ROG products, they are now set to conquer the VR universe. Imagine the immersive experiences and endless possibilities that await us!

    This is just the beginning of a thrilling journey into a new dimension of gaming and technology. Let's embrace this innovation together and look forward to a future filled with adventure and creativity!

    Stay tuned for more updates and get ready to explore what Asus has in store for us in VR!

    #AsusVR #VirtualReality #GamingInnovation #FutureIsNow #ROG
    Asus Enters VR: What to Expect?
    After conquering the gaming world with its PCs and ROG products, Asus is taking on […] This article was published on REALITE-VIRTUELLE.COM.
  • What on earth is going on with the VFX in Netflix's "The Snow Sister"? Seriously, it’s 2023, and we’re still being fed mediocre visual effects that are supposed to "wow" us but end up doing the exact opposite! The so-called "VFX breakdown" is nothing more than a slap in the face to anyone who actually appreciates the art of visual storytelling.

    Let’s get one thing straight: if the best VFX are indeed the ones you can’t spot, then how on earth did we end up with these glaringly obvious digital blunders? It’s like they threw a bunch of half-baked effects together and called it a day. Instead of stunning visuals that elevate the narrative, we get a distracting mess that pulls you right out of the experience. Who are they kidding?

    The creators of "The Snow Sister" clearly missed the memo that viewers today are not easily satisfied. We demand more than just passable effects; we want immersive worlds that captivate us. And yet, here we are, subjected to a barrage of poorly executed VFX that look like they belong in a low-budget production from the early 2000s. It’s frustrating to see Netflix, a platform that should be setting the gold standard in content creation, flounder so embarrassingly with something as fundamental as visual effects.

    What’s even more maddening is the disconnect between the promotional hype and the actual product. They tout the "creation" of these effects as if they’re groundbreaking, but in reality, they are a visual cacophony that leaves much to be desired. How can anyone take this seriously when the final product looks like it was hastily patched together? It’s not just a disservice to the viewers; it’s an insult to the talented artists who work tirelessly in the VFX industry. They deserve better than to have their hard work represented by subpar results that manage to undermine the entire project.

    Netflix needs to wake up and understand that audiences are becoming increasingly discerning. We’re not just mindless consumers; we have eyes, and we can see when something is off. The VFX in "The Snow Sister" is a glaring example of what happens when corners are cut and quality is sacrificed for the sake of quantity. We expect innovation, creativity, and, above all, professionalism. Instead, we are fed a half-hearted effort that leaves us shaking our heads in disbelief.

    In conclusion, if Netflix wants to maintain its position as a leader in the entertainment industry, it’s time to step up its game and give us the high-quality VFX that we deserve. No more excuses, no more mediocre breakdowns—just real artistry that enhances our viewing experience. Let’s hold them accountable and demand better!

    #VFX #Netflix #TheSnowSister #VisualEffects #EntertainmentIndustry
    VFX Breakdown: Netflix's The Snow Sister
    Enjoy seeing how the VFX in The Snow Sister were created. As always, the best VFX are the ones you can't spot! Source
  • A review of the Silk & Snow S&S organic latex mattress. Honestly, it's a mattress, and it's much like any other. It's supposed to be soft as a cloud, which is a good point, I suppose. The promise of comfort is there, but only for people who sleep alone. So if you like sharing your space, it's not the best option.

    The mattress is made of organic materials, which seems to be fashionable right now. That's nice for people who care about the environment, but it doesn't change the fact that I feel a little tired just talking about it. Reviewers say it's incredibly comfortable, but I didn't really feel that magic. Maybe I'm just too used to my old mattress, which knows me better.

    There are plenty of mattresses on the market, and this one doesn't really stand out, even with its "organic" label. I suppose that for someone who sleeps alone, it's an option. But for me, it's just another mattress that does the job. The comfort is there, but I don't know if that's enough to make it an essential choice.

    In the end, the Silk & Snow S&S is a mattress with its qualities, but there's no real excitement in talking about it. I'd rather just stick with my old mattress, even if it isn't as "soft as a cloud." Maybe one day I'll be in the mood to try something new, but today is not that day.

    #Matelas #SilkEtSnow #Confort #Dormir #Organiques
    Silk & Snow S&S Organic Mattress Review: Soft as a Cloud
    Silk & Snow’s got an organic mattress in its lineup, and it’s amazingly comfortable—but only for solo sleepers.
  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state of the art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose.
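    The final step above, Procrustes alignment, has a closed-form solution for the rigid 2D case. Here is a minimal NumPy sketch of how matched ground/aerial point pairs could yield the 3-DoF (x, y, yaw) pose. The function name and the equal weighting of all matches are illustrative assumptions for this sketch, not the authors' code; the actual model presumably weights matches by confidence.

    ```python
    import numpy as np

    def procrustes_2d(ground_pts, aerial_pts):
        """Closed-form rigid alignment (Kabsch/Procrustes) of matched 2D points.

        ground_pts, aerial_pts: (N, 2) arrays of corresponding points.
        Returns (R, t, yaw) such that aerial_pts ≈ ground_pts @ R.T + t.
        """
        mu_g = ground_pts.mean(axis=0)
        mu_a = aerial_pts.mean(axis=0)
        # Cross-covariance of the centered point sets
        H = (ground_pts - mu_g).T @ (aerial_pts - mu_a)
        U, _, Vt = np.linalg.svd(H)
        # Sign correction keeps R a proper rotation (det = +1, no reflection)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        t = mu_a - R @ mu_g                  # translation: x, y
        yaw = np.arctan2(R[1, 0], R[0, 0])   # heading angle in radians
        return R, t, yaw
    ```

    Feeding in synthetic matches rotated by a known angle and shifted by a known offset recovers exactly that pose; in the paper's setting, the point pairs would come from the model's most confident cross-view feature matches.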

    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project.
    Jean-marc Mommessin is an AI business executive who leads growth for AI-powered solutions and started a computer vision company in 2006. He is a recognized speaker at AI conferences and holds an MBA from Stanford.
    #epfl #researchers #unveil #fg2 #cvpr
    EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments
    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausannein Switzerland have introduced a groundbreaking new method for visual localization during CVPR 2025 Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerialimage. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset. Key Takeaways: Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task. Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map. Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models. Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. 
It achieves this using only the final camera pose as a supervisory signal. Challenge: Seeing the World from Two Different Angles The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-Viewbut are often limited to the ground plane, ignoring crucial vertical structures like buildings. FG2: Matching Fine-Grained Features The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map. Here’s a breakdown of their innovative pipeline: Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment. Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the verticaldimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view. Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. 
It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoFpose. Unprecedented Performance and Interpretability The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research. Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremenly valuable for building trust in safety-critical autonomous systems. “A Clearer Path” for Autonomous Navigation The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them. Check out the Paper. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 100k+ ML SubReddit and Subscribe to our Newsletter. 
Jean-marc MommessinJean-marc is a successful AI business executive .He leads and accelerates growth for AI powered solutions and started a computer vision company in 2006. He is a recognized speaker at AI conferences and has an MBA from Stanford.Jean-marc Mommessinhttps://www.marktechpost.com/author/jean-marc0000677/AI-Generated Ad Created with Google’s Veo3 Airs During NBA Finals, Slashing Production Costs by 95%Jean-marc Mommessinhttps://www.marktechpost.com/author/jean-marc0000677/Highlighted at CVPR 2025: Google DeepMind’s ‘Motion Prompting’ Paper Unlocks Granular Video ControlJean-marc Mommessinhttps://www.marktechpost.com/author/jean-marc0000677/Snowflake Charts New AI Territory: Cortex AISQL & Snowflake Intelligence Poised to Reshape Data AnalyticsJean-marc Mommessinhttps://www.marktechpost.com/author/jean-marc0000677/Exclusive Talk: Joey Conway of NVIDIA on Llama Nemotron Ultra and Open Source Models #epfl #researchers #unveil #fg2 #cvpr
    WWW.MARKTECHPOST.COM
    EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments
    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently.

    Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland introduced a groundbreaking new method for visual localization at CVPR 2025. Their paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach demonstrated a remarkable 28% reduction in mean localization error compared to the previous state of the art on a challenging public dataset.

    Key Takeaways:
    - Superior accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    - Human-like intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features, like curbs, crosswalks, and buildings, between a ground-level photo and an aerial map.
    - Enhanced interpretability: The method allows researchers to see what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    - Weakly supervised learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    The Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this abstract approach doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map. Here’s a breakdown of their pipeline:
    1. Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered on the camera. This creates a 3D representation of the immediate environment.
    2. Smart pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection step is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    3. Feature matching and pose estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose.

    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization on the KITTI dataset, a staple of autonomous driving research.
    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.

    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.
    Check out the Paper. All credit for this research goes to the researchers of this project.
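The Procrustes alignment step in the pipeline above has a compact closed form in 2D. As a rough illustrative sketch (plain Python, not the authors' implementation; the function and variable names are invented), a least-squares rigid alignment of matched point sets recovers the 3-DoF pose like this:

```python
import math

def procrustes_2d(ground_pts, aerial_pts):
    """Least-squares rigid alignment (yaw + translation) of matched 2D points.

    Returns (yaw, tx, ty) such that rotating ground_pts by yaw and then
    translating by (tx, ty) best overlays them onto aerial_pts.
    Assumes at least two non-coincident correspondences.
    """
    n = len(ground_pts)
    gx = sum(p[0] for p in ground_pts) / n   # ground centroid
    gy = sum(p[1] for p in ground_pts) / n
    ax = sum(p[0] for p in aerial_pts) / n   # aerial centroid
    ay = sum(p[1] for p in aerial_pts) / n

    # Dot and cross sums of the centered correspondences give the yaw.
    s = c = 0.0
    for (px, py), (qx, qy) in zip(ground_pts, aerial_pts):
        ux, uy = px - gx, py - gy
        vx, vy = qx - ax, qy - ay
        s += ux * vx + uy * vy   # dot term  ~ cos(yaw)
        c += ux * vy - uy * vx   # cross term ~ sin(yaw)
    yaw = math.atan2(c, s)

    # Translation maps the rotated ground centroid onto the aerial centroid.
    cos_y, sin_y = math.cos(yaw), math.sin(yaw)
    tx = ax - (cos_y * gx - sin_y * gy)
    ty = ay - (sin_y * gx + cos_y * gy)
    return yaw, tx, ty
```

Given the sparse set of confident matches, the rotation falls out of the dot/cross sums and the translation out of the centroids, which is why the pose estimate is cheap once good correspondences exist.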
    Jean-marc Mommessin is an AI business executive. He leads and accelerates growth for AI-powered solutions and started a computer vision company in 2006. He is a recognized speaker at AI conferences and has an MBA from Stanford.
  • Twinning, Creepers and more VFX covered in the ‘Mickey 17’ issue

    Issue #32 of befores & afters magazine is now out in PRINT and DIGITAL! It’s a deep dive into the visual effects of Bong Joon Ho’s Mickey 17, starring Robert Pattinson.
    The film contains creatures (the Creepers), spacecraft, snow-filled landscapes and several scenes where actor Robert Pattinson appears as two ‘expendable’ clone characters—Mickey 17 and Mickey 18—on screen at the same time.

    The new issue explores this twinning work, as well as going into detail on the creatures and environment visual effects largely orchestrated by DNEG, Framestore, Rising Sun Pictures and Turncoat Pictures.

    You can grab the issue in PRINT from Amazon, or as a DIGITAL EDITION on Patreon.
    Remember, you can also subscribe to the DIGITAL EDITION as a tier on the Patreon and get a new issue every time one is released.

    Hope you enjoy the latest issue!
    Here are the links to the various Amazon stores:
    USA: https://www.amazon.com/dp/B0FCYRV86J
    UK: https://www.amazon.co.uk/dp/B0FCYRV86J
    Canada: https://www.amazon.ca/dp/B0FCYRV86J
    Germany: https://www.amazon.de/dp/B0FCYRV86J
    France: https://www.amazon.fr/dp/B0FCYRV86J
    Spain: https://www.amazon.es/dp/B0FCYRV86J
    Italy: https://www.amazon.it/dp/B0FCYRV86J
    Australia: https://www.amazon.com.au/dp/B0FCYRV86J
    Japan: https://www.amazon.co.jp/dp/B0FCYRV86J
    Sweden: https://www.amazon.se/dp/B0FCYRV86J
    Poland: https://www.amazon.pl/dp/B0FCYRV86J
    Netherlands: https://www.amazon.nl/dp/B0FCYRV86J
    The post Twinning, Creepers and more VFX covered in the ‘Mickey 17’ issue appeared first on befores & afters.
  • NVIDIA TensorRT Boosts Stable Diffusion 3.5 Performance on NVIDIA GeForce RTX and RTX PRO GPUs

    Generative AI has reshaped how people create, imagine and interact with digital content.
    As AI models continue to grow in capability and complexity, they require more VRAM, or video random access memory. The base Stable Diffusion 3.5 Large model, for example, uses over 18GB of VRAM — limiting the number of systems that can run it well.
    By applying quantization to the model, noncritical layers can be removed or run with lower precision. NVIDIA GeForce RTX 40 Series and the Ada Lovelace generation of NVIDIA RTX PRO GPUs support FP8 quantization to help run these quantized models, and the latest-generation NVIDIA Blackwell GPUs also add support for FP4.
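As a rough illustration of what quantization does to stored weights (this is a toy integer-grid quantizer, not NVIDIA's actual FP8 pipeline, which snaps values onto an 8-bit floating-point grid instead; the function name is invented):

```python
def quantize_dequantize(weights, levels=127):
    """Toy symmetric per-tensor quantization.

    Each weight is snapped to an integer multiple of a shared scale, so it
    can be stored in 8 bits instead of 16, roughly halving memory use.
    Real FP8 uses an 8-bit floating-point format rather than this uniform
    integer grid, but the scale-then-round idea is similar.
    """
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / levels if max_abs else 1.0  # size of one grid step
    quantized = [round(w / scale) for w in weights]  # what 8-bit storage holds
    return [q * scale for q in quantized]            # dequantized view at runtime
```

The per-tensor scale bounds the rounding error to half a grid step, which is why "noncritical" layers can tolerate the lower precision with little visible quality loss.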
    NVIDIA collaborated with Stability AI to quantize its latest model, Stable Diffusion (SD) 3.5 Large, to FP8, reducing VRAM consumption by 40%. Further optimizations to SD3.5 Large and Medium with the NVIDIA TensorRT software development kit (SDK) double performance.
    In addition, TensorRT has been reimagined for RTX AI PCs, combining its industry-leading performance with just-in-time (JIT), on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. TensorRT for RTX is now available as a standalone SDK for developers.
    RTX-Accelerated AI
    NVIDIA and Stability AI are boosting the performance and reducing the VRAM requirements of Stable Diffusion 3.5, one of the world’s most popular AI image models. With NVIDIA TensorRT acceleration and quantization, users can now generate and edit images faster and more efficiently on NVIDIA RTX GPUs.
    Stable Diffusion 3.5 quantized to FP8 (right) generates images in half the time with similar quality as FP16 (left). Prompt: A serene mountain lake at sunrise, crystal clear water reflecting snow-capped peaks, lush pine trees along the shore, soft morning mist, photorealistic, vibrant colors, high resolution.
    To address the VRAM limitations of SD3.5 Large, the model was quantized with TensorRT to FP8, reducing the VRAM requirement by 40% to 11GB. This means five GeForce RTX 50 Series GPUs can run the model from memory instead of just one.
    SD3.5 Large and Medium models were also optimized with TensorRT, an AI backend for taking full advantage of Tensor Cores. TensorRT optimizes a model’s weights and graph — the instructions on how to run a model — specifically for RTX GPUs.
    FP8 TensorRT boosts SD3.5 Large performance by 2.3x vs. BF16 PyTorch, with 40% less memory use. For SD3.5 Medium, BF16 TensorRT delivers a 1.7x speedup.
    Combined, FP8 TensorRT delivers a 2.3x performance boost on SD3.5 Large compared with running the original models in BF16 PyTorch, while using 40% less memory. And in SD3.5 Medium, BF16 TensorRT provides a 1.7x performance increase compared with BF16 PyTorch.
    The optimized models are now available on Stability AI’s Hugging Face page.
    NVIDIA and Stability AI are also collaborating to release SD3.5 as an NVIDIA NIM microservice, making it easier for creators and developers to access and deploy the model for a wide range of applications. The NIM microservice is expected to be released in July.
    TensorRT for RTX SDK Released
    Announced at Microsoft Build — and already available as part of the new Windows ML framework in preview — TensorRT for RTX is now available as a standalone SDK for developers.
    Previously, developers needed to pre-generate and package TensorRT engines for each class of GPU — a process that would yield GPU-specific optimizations but required significant time.
    With the new version of TensorRT, developers can create a generic TensorRT engine that’s optimized on device in seconds. This JIT compilation approach can be done in the background during installation or when they first use the feature.
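The build-once, reuse-thereafter idea behind this JIT approach can be sketched generically (a conceptual illustration only, not the TensorRT for RTX API; `get_engine` and `build_fn` are made-up names):

```python
# Conceptual sketch of just-in-time, on-device engine building: ship one
# generic model, specialize it for the local GPU on first use, and cache
# the result so later runs skip the slow step.
# (Illustrative only; this is NOT the actual TensorRT for RTX API.)

_engine_cache = {}

def get_engine(model_id, device_id, build_fn):
    """Return the engine for (model, device), building it at most once."""
    key = (model_id, device_id)
    if key not in _engine_cache:
        # Slow step: optimize the generic model for this specific GPU.
        _engine_cache[key] = build_fn(model_id, device_id)
    return _engine_cache[key]
```

The same pattern explains why the work can happen "in the background during installation": the cache entry just has to exist before the feature is first used.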
    The easy-to-integrate SDK is now 8x smaller and can be invoked through Windows ML — Microsoft’s new AI inference backend in Windows. Developers can download the new standalone SDK from the NVIDIA Developer page or test it in the Windows ML preview.
    For more details, read this NVIDIA technical blog and this Microsoft Build recap.
    Join NVIDIA at GTC Paris
    At NVIDIA GTC Paris at VivaTech — Europe’s biggest startup and tech event — NVIDIA founder and CEO Jensen Huang yesterday delivered a keynote address on the latest breakthroughs in cloud AI infrastructure, agentic AI and physical AI. Watch a replay.
    GTC Paris runs through Thursday, June 12, with hands-on demos and sessions led by industry leaders. Whether attending in person or joining online, there’s still plenty to explore at the event.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
  • NOOBS ARE COMING (Demo) [Free] [Action] [Windows] [Linux]

    SirCozyCrow (5 hours ago): The soundtrack is PEAK! I loved playing this, and my partner, who normally doesn't play games like this one, had a good time as well. I enjoyed the learning curve and I can't wait to play the harder difficulties. Here's a video I made; my partner jumped in for a few minutes as well.
    (author unknown): so fun
    Drew.a.Chain (1 day ago): Very addictive!
    Trashpanda119 (1 day ago): Love the playstyle and the art style, definitely fun to play, plus the music is the cherry on top.
    AhoOppai (1 day ago): Really fun game, can't wait for the full game.
    Din Xavier coding (1 day ago): I chose the laser eye. How do I turn the attack around? Can I even do that?
    overboy (1 day ago): Hey, the laser eye gets a random direction at the start of each wave, it's one of the specificities of this attack ;)
    Fort Kenmei (1 day ago): Gameplay and Critique ;)
    overboy (1 day ago): Thanks a lot for the awesome video and the feedback! :)
    TLGaby (2 days ago): Just to let you know, browser progress keeps getting reset.
    overboy (1 day ago): Thanks for the report! Could it be due to some of your browser settings? Unfortunately, browser-based games can't always guarantee reliable local saves due to how browsers handle storage. To avoid this in the future, I recommend trying the downloadable version of the demo; it provides a more stable environment for saving progress. :)
    (author unknown): epic.
    oleekconder (2 days ago): Very nice. Spent a couple hours easy =) UPD: And some more.
    MaximusR (3 days ago): [translated from Spanish] It's a game I already played back when it had fewer things, and now that it's updated I'd like to record it again.
    (author unknown): EPIC. Love the spiders ♥
    nineGardens (3 days ago): Okay so... tried out a few things, and some dev suggestions to report:
    - Bigfoot is such a cool idea, and running around at that speed with, like... all THAT going on just gave me motion sickness.
    - Summoner is hysterical fun. All hail spiders. Tomatoes are pretty fun too.
    - The Adept is so cool in theory, but... once you have the right build it's a bit of a "standing still simulator". Also, if you have totems or other turrets, there's very much the question each round of "Will my circle spawn NEAR the totems, or far from them?" I kind of wonder if the mage circle should, like... fizzle out after 20 seconds and appear somewhere else. Just... something to give a bit more dynamism, and to make the original spawn point less critical.
    Added thoughts:
    - Watering psychotic tomatoes feels great.
    - Being a malevolent spider with 8 arms feels amazing. Feels very good and natural.
    - "Orbital" is one of the greatest and most fun abilities in the game. I would take this even without the damage boost.
    Lots of fun, but also very silly. Good job.
    dave9999 (3 days ago): With some size you can kick the totems around to reposition them towards your circle, and it benefits them too. The Adept can choose the wand at the start, and with it you have no sustain problem anyway, whatever build you want to set up.
    nineGardens (3 days ago): Oh damn, only just found out you can kick the totems! Okay, yeah, in this case all is well. Or at least... I still think a moving circle could be cool, but the fact that you can move your totems over to where the circle is makes things much better.
    (author unknown): Just get enough amount + size and they hit everything; bounce is overkill.
    (author unknown): Lost track of time, 10 hours in and still hooked. Absolutely love it! Can't wait for the full release.
    DriftedVoid (4 days ago): Pretty good!
    Indyot (4 days ago): It's a pretty addictive game, congrats! I lowkey missed a bit of satisfaction on the weapons though.
    (author unknown): Congrats on the game! I really like the weapons that you interact with, which gives it a fun spin.
    1Soultaken (4 days ago): Anyone know good combos for the items?
    dave9999 (4 days ago): Lasers plus amount plus Adept, some arcane for basic damage. Totems plus amount, bounce and Adept, optional size and arcane: you can stand still in the end. All shovels with crit and strength: their extra souls help you snowball hard and easy, probably the most straightforward and stable build. You can beat the game with nearly anything, it's well balanced, but this one is very strong and easy. Soul flask and more chests are near-always must-picks; the high-luck ones give you better items. The free reroll is a must-pick. The lightning dagger is somewhat unique, as it can carry you the entire early game even if you do not get enough element damage.
    dave9999 (8 days ago): Underestimated totems.
    limey (8 days ago): I like how you made, like, MULTITUDES of updates on this, so, like, as soon as I check my feed it's just this.
    dave9999 (8 days ago): My best run so far. Is there a hidden mechanic that makes weapons you have more likely to drop?
    overboy (8 days ago): Lmao, awesome, looks like a really fun build to play! Yeah, Shop RNG uses a lot of hidden tricks to help you find relevant attacks, while still allowing unrelated ones to appear. That way, you can discover unique builds and experiment freely!
    overboy (8 days ago): Thank you so much for the incredible reception of the web demo on Itch, and to everyone who wishlisted the game! Many of the changes, along with much more to come in future updates, are directly based on your feedback here and on the game's Discord.

    I'm also excited to announce that the game will release on Steam on 8 July 2025!
    Demo - Update 35
    - Singleplayer UI: Level Up Upgrade Phase and Chest Pickup Phase UI now display the items and attacks inventories
    - Singleplayer Shop: subtle animation while selecting a Buy Button
    - Many balancing tweaks
    - Balancing: nerfed Life Steal in various ways
    - Balancing: nerfed Knockback in various ways
    - Balancing: too many items enhancing HP Max were in the Demo, meaning it was easier to get a lot of HP and survive due to the higher ratio of items providing HP
    - Added a subtle duration during which the player can still pick up Souls even if they're slurped by the Soul Portal
    - Fine-tuned the color of some weapons to improve visibility
    - Balancing: Ballistas don't double their projectiles based on amount anymore
    - If the player's HP is full and HP Max > 20, the player can't be one-shot
    - Bugfix: the in-game achievement pop-up could be displayed below other UI elements when it should always be above everything else
    - Potential bugfix for a rare Multiplayer shop bug where player 2's shop sections weren't displayed at all
    - Reworked the save system in preparation for upcoming features
    xHELLO_WORLDx (10 days ago): Congrats on the game!
    dave9999 (10 days ago):
    elijah_ap (10 days ago): Love the art style, upgrades, controls, etc. Balance might be the only thing off about this. If you were to add anything, I would want to see more variety in the stages, similar to Vampire Survivor. Otherwise, really great.
    overboy: Thank you so much! I'll keep working on the balance with each update, and I appreciate the suggestion on stage variety!
    Netsmile (10 days ago): Torch IV has a problem rounding numbers in the stats hover-over display. Other levels of torches work.
    overboy (10 days ago): Thanks, I'll fix this displayed rounding-number issue soon!
    Skeppartorsk (10 days ago): For now I'd say it's fun, but lacking a bit in balance. I absolutely suck at brotatolikes but find this one easy, so it's probably undertuned as far as difficulty is concerned. The power and availability of HP and regen items make you just literally not care if you get hit. Then the relatively strong armor on top, and you're just too tanky for anything to feasibly ever kill you.
    overboy (10 days ago): Thanks for the feedback! Sounds like tanky builds might be a bit too forgiving right now; I'll do some balancing changes.
    Skeppartorsk (9 days ago): Life steal has similar issues too. There's also the standard issue with knockback in these kinds of games: the lack of any enemy resistance/diminishing returns means it's way too easy to get enough knockback that enemies cannot touch you anymore. Ranged attacks are too few and far between to worry about with the current levels of sustain, meaning you can just Stand Still and Kill way too reliably.
    Edit: Late game with 6x Wands I'm getting so much screen shake it's triggering simulation sickness. It was due to having Pierce + Bounce: the screen shake from my projectiles bouncing off the edge of the map.
    overboy (8 days ago): Thanks for your feedback, it will help with the game balancing! For now I try to avoid diminishing returns by design, to make sure each feature and stat is super easy to understand, because I dislike it when a roguelike gets too opaque. I prefer that the player fully and easily understands each of their choices, but yeah, that involves finding a good balance! In future updates, Life Steal will become harder to get, and Knockback will be capped at lower maximum applied values. Regarding the overall difficulty, the full version has 3 extra levels of difficulty, and based on feedback from beta testers, the balance between the 5 difficulty modes seems to be close to what I'm aiming for. There is already an option to disable screen shake ;) Edit: Would you be interested in joining the beta test of the full game? If so, please join the Discord and ping me in DM ;)
    Skeppartorsk (8 days ago): I did notice that you could turn off screen shake entirely, but admittedly a lot of the visceral feel of the combat goes away when you fully disable it. But when you have too many Leeroy/knockback/bouncing projectiles, it just reaches the point where simulation sickness sets in. I wish there was something like an intensity setting, or a way to cap how often a screen shake can get triggered.
    I agree on the opaque thing. But I was thinking more of something akin to how CC diminishing returns works in WoW, where 1st hit = full value, 2nd hit within 10s = half value, 3rd hit = 1/4 value, then 10s of immunity before it resets. That way you still get knockback when you pick knockback, but you can't just perma-nail enemies against the wall.
    Edit: Also there's a wording issuewith how multiple pentagrams work. If you have adept pentagram and the item pentagram the wording is "when you stand inside a pentagram" But the item one gives the 20% damage ONLY and the adept one gives the adept bonuses ONLY. The wording would mean that both pentagrams should give adept bonus AND 20% damage bonus.Edit2: I'd suggest reformatting Grimorius tooltip so that the -10% armor is above the "on level up"portion. The indentation difference between the +1% speed and -10% armor is small enough that I read it as losing 10% armor on every level up.Replyoverboy8 days agoThanks a lot for the interesting insights!I nerfed HP, Lifesteal and Knockback using various techniques in the last update, along with many other changes.Just tested Pentagram/Adept and it works as expected: the 2 effects stack correctly as the wording impliedI reformatted Grimorius tooltip as you suggested ;)ReplyView more in threadBad Piggy11 days agoVery cool in it's current state. I love how much it really emphasises movement like how some active abilities need to be grabbed from around the arena to do themThat said, I think enemy projectiles could honestly stand out more. I could hardly see them at times in all the chaos.Still, I think this is a pretty solid base right now, and as always, you have a beautiful visual style, though I feel like the game suffers a little from how busy it can get. Great stuff so far thoughReplyThanks Bad Piggy! Really glad you’re enjoying the mechanics. I appreciate the feedback on projectile visibility and how busy things can get. I’ll definitely look into ways to improve those aspects. Really grateful for the kind words and thoughtful feedback!ReplyLeoLohandro11 days agoA copy of the brotato), but still fun.Replyoverboy11 days agoHey thanks a lot! 
Yes this game is a Brotato-like with many twists and new innovative mechanics, such as:- Equippable Boss Patterns- Minion Summoning- Growing Plant Minions with a watercan- Amount and Size stats - Physics-Based Weapons – like chained spikeballs- Kickable stuff- Playable character merge feature- Dozens and dozens of unique effectsI'm aiming for something like The Binding of Isaac meets Brotato — a deep, replayable experience full of chaotic synergies and wild builds that feel totally unique each run, with all the "being a boss fantasy and humor" deeply included in the mechanics and content :)Reply
    #noobs #are #coming #demo #free
    NOOBS ARE COMING (Demo) [Free] [Action] [Windows] [Linux]
    SirCozyCrow5 hours agoThe sound track is PEAK! I loved playing this, and my partner who normally doesn't play games like this one had a good time as well. I enjoyed the learning curve and I can't wait to play the harder difficulties.Here's a video I made, my partner jumped in for a few minutes as well.Replyso funReplyDrew.a.Chain1 day agoVery addictive!ReplyTrashpanda1191 day agolove the playstyle and the art style definitly fun to play plus the music is the cherry on topReplyAhoOppai1 day agoreally fun game cant wait for the full gameReplyDin Xavier coding1 day agoI chose the laser eye. How do I turn the attack around? Can I even do that?Replyoverboy1 day agoHey, the laser eye gets a random direction at the start of each wave, it's one of the specificities of this attack ;)ReplyFort Kenmei1 day agoGameplay and Critique ;)Replyoverboy1 day agoThanks a lot for the awesome video and the feedback! :)ReplyTLGaby2 days agoJust to know browser progress keep getting reset.Replyoverboy1 day agoThanks for the report! Could it be due to some of your browser settings?Unfortunately, browser-based games can't always guarantee reliable local saves due to how browsers handle storage.To avoid this in the future, I recommend trying the downloadable version of the demo,  it provides a more stable environment for saving progress. :)Replyepic.Replyoleekconder2 days agoVery nice. Spent couple hours easy=) UPD: And some moreReplyMaximusR3 days agoes un juego que ya jugue en su momento cuando tenias menos cosas y ahora que esta actualizado quisiera grabarlo otra vezReplyEPIClove the spiders ♥ReplynineGardens3 days agoOkay so.... tried out a few things, and some Dev suggestions to report: Bigfoot is such a cool idea, and running around at that speed with like.... all THAT going on just gave me motion sickness.Summoner is hysterical fun. All hail spiders. Tomatoe's are pretty fun too.The Adept is so cool in theory, but... 
once you have the right build is a bit of a "standing still simulator"  Also, if you have totoms or other turrets, there's very much the question each round of "Will my circle spawn NEAR the totoms , or far from them "   I kind of wonder if the mage circle should like... fizzle out after 20 seconds and appear somewhere else. Just... something to give a bit more dynamism, and to make the original spawn point less critical.Okay: added thoughts:Watering psycotic tomatoes feels great.Being a malevolent spider with 8 arms feels amazing. Feels very good and natural."Orbital" is one of the greatest and most fun abilities in the game.  I would take this even without the damage boost.Lots of fun, but also very silly. Good job.Replydave99993 days agowith some size you can kick the totems around to reposition them towards your circle, it benefits them too, adept can choose the wand at the start and with it you have no sustain problem anyway whatever build you want to set upReplynineGardens3 days agoOh damn- only just found out you can kick the totems!Okay, yeah in this case all is well. Or at least.... I still think a moving circle could be cool, but the fact that you can move your totems over to where the circle is makes things much better.Replyjust get enough amount+size and they hit everything, bounce is overkill ReplyLost track of time 10 hours in and still hooked. Absolutely love it! Can't wait for the full releaseReplyDriftedVoid4 days agoPretty good! ReplyIndyot4 days agoIt's a pretty addictive game, congrats! I lowkey missed a bit of satisfaction on the weapons though.ReplyCongrats on the game! 
I really like the weapons that you interact with which gives it a fun spin.Reply1Soultaken4 days agoAnyone know good combos for the items?Replydave99994 days agolasers plus amount+adept some arcane for basic dmgtotems +amount+ bounce+adept optional size and arcane you can stand still in the endall shovels with crit, strength their extra souls help you snowball hard and easy probably the most straightforward and stable very good build you can beat the game with nearly anything its well balanced but this one is very strong and easy soul flask, more chests are near always must pick, the high luck value ones give you better items the free reroll is a must pick, lightning dagger is somewhat unique as it  can carry you the entire early game even if you do not get enough element damageReplydave99998 days agounderestimated totems Replylimey8 days agoi like how you made like MULTITUDES of updates on this so like as soon as i check my feed its just thisReplydave99998 days agomy best run so far,  there s a hidden mechanic that  makes weapons  you have more likely to drop?Replyoverboy8 days agoLmao, awesome — looks like a really fun build to play! Yeah, Shop RNG uses a lot of hidden tricks to help you find relevant attacks, while still allowing unrelated ones to appear. That way, you can discover unique builds and experiment freely!Replyoverboy8 days agoThank you so much for the incredible reception of the web demo on Itch, and to everyone who wishlisted the game! Many of the changes—along with much more to come in future updates—are directly based on your feedback here and on the game’s Discord. I’m also excited to announce that the game will release on Steam on 8 July 2025! 
Demo - Update 35Singleplayer UI: Level Up Upgrade Phase and Chest Pickup Phase UI now display the items and attacks inventoriesSingleplayer Shop: subtle animation while selecting a Buy Button Many Balancing tweaks Balancing: nerfed Life Steal in various waysBalancing: nerfed Knockback in various waysBalancing: too much items enhancing HP Max were put in the Demo, this means it was easier to get a lot of HP and to survive in the Demo due to higher ratio of items providing HP Added a subtle duration during which the player can still pickup Souls even if they’re slurped by the Soul Portal Fine tuned the color of some weapons to improve the visibility Balancing: Ballista don’t double their projectiles based on amount anymoreIf Player HP is Full and HP Max > 20, the player can’t be one-shot Bugfix: in-game achievement pop up could be displayed below other UI elements while it should always be above everything else Potential Bugfix for a rare bug happening in Multiplayer shop where player2 Shop sections wasn’t displayed at allRework the save system in preparation for upcoming features ReplyxHELLO_WORLDx10 days agocontracts on the gameReplydave999910 days agoelijah_ap10 days agoLove the art style, upgrades, controls, etc. Balance might be the only thing off about this. If you were to add anything, I would want to see more variety in the stages, similar to Vampire Survivor. Otherwise- really great.ReplyThank you so much! I’ll keep working on the balance with each update, and I appreciate the suggestion on stage variety!ReplyNetsmile10 days agoTorch IV has a problem rounding numbers in the stats hover over display. Other levels of torches workReplyoverboy10 days agoThanks, I'll fix this displayed rounding number issue soon!ReplySkeppartorsk10 days agoFor now I'd say it's fun, but lacking a bit in balance. I absolutely suck at brotatolikes. But find this one easy, so it's probably undertuned as far as difficulty is concerned. 
The power and availability of HP and regen items, makes you just literally not care if you get hit. Then the relatively strong armor on top and you're just too tanky for anything to feasibly ever kill you.Replyoverboy10 days agoThanks for the feedback! Sounds like tanky builds might be a bit too forgiving right now, i'll do some balancing changesReplySkeppartorsk9 days agoLife steal has similar issues too. There's also the standard issue with knockback in these kinds of games. The lack of any enemy resistance/diminishing returns, means it's way too easy to get enough knockback that enemies cannot touch you anymore. Ranged attacks are too few and far between to worry about with the current levels of sustain. Meaning you can just Stand Still and Kill way too realiably. Edit: Lategame with 6x Wands I'm getting so much screen shake it's triggering simulation sickness. It was due to having Pierce + Bounce. The screen shake from my projectiles bouncing off the edge of the map.Replyoverboy8 days agothanks for your feedback, it will help for the game balancing!For now I try to avoid diminishing returns by design to make sure each feature and stat is super easy to understand because I dislike when roguelike gets too opaque, I prefer that the player fully and easily undestand each of its choices, but yeah that involves a good balance to find!In future updates, Life Steal will become harder to get, Knockback will be capped at lower maximum applied values.Regarding the overall difficulty, the full version has 3 extra level of difficulties, and based on some feedbacks i have from beta testers, the balance between the 5 difficulty modes seem to be close to what i'm aiming forThere is already an option to disable screenshakes ;)Edit: Would you be interested to join the beta-test of the full game? If so please join the Discord and ping me in DM ;)ReplySkeppartorsk8 days agoI did notice that you could turn off screen shake entirely. 
But admittedly a lot of the visceral feel of the combat goes away when you fully disable the screen shake. But when you have too many Leeroy/knockback projectiles/bouncing projectiles. It just reaches the point where simulation sickness sets in. Wish there was something like an intensity setting, or a way for it to cap out at how often a screen shake can get triggered. I agree on the opaque thing. But I was more thinking something akin to how CC Diminishing Returns works in WoW. Where 1st hit = full value, 2nd hit within 10s = half value, 3rd hit = 1/4 value. Then 10s of immunity before it resets. That way you still get knockback when you pick knockback. But you can't just perma nail enemies against the wall. Edit: Also there's a wording issuewith how multiple pentagrams work. If you have adept pentagram and the item pentagram the wording is "when you stand inside a pentagram" But the item one gives the 20% damage ONLY and the adept one gives the adept bonuses ONLY. The wording would mean that both pentagrams should give adept bonus AND 20% damage bonus.Edit2: I'd suggest reformatting Grimorius tooltip so that the -10% armor is above the "on level up"portion. The indentation difference between the +1% speed and -10% armor is small enough that I read it as losing 10% armor on every level up.Replyoverboy8 days agoThanks a lot for the interesting insights!I nerfed HP, Lifesteal and Knockback using various techniques in the last update, along with many other changes.Just tested Pentagram/Adept and it works as expected: the 2 effects stack correctly as the wording impliedI reformatted Grimorius tooltip as you suggested ;)ReplyView more in threadBad Piggy11 days agoVery cool in it's current state. I love how much it really emphasises movement like how some active abilities need to be grabbed from around the arena to do themThat said, I think enemy projectiles could honestly stand out more. 
I could hardly see them at times in all the chaos.Still, I think this is a pretty solid base right now, and as always, you have a beautiful visual style, though I feel like the game suffers a little from how busy it can get. Great stuff so far thoughReplyThanks Bad Piggy! Really glad you’re enjoying the mechanics. I appreciate the feedback on projectile visibility and how busy things can get. I’ll definitely look into ways to improve those aspects. Really grateful for the kind words and thoughtful feedback!ReplyLeoLohandro11 days agoA copy of the brotato), but still fun.Replyoverboy11 days agoHey thanks a lot! Yes this game is a Brotato-like with many twists and new innovative mechanics, such as:- Equippable Boss Patterns- Minion Summoning- Growing Plant Minions with a watercan- Amount and Size stats - Physics-Based Weapons – like chained spikeballs- Kickable stuff- Playable character merge feature- Dozens and dozens of unique effectsI'm aiming for something like The Binding of Isaac meets Brotato — a deep, replayable experience full of chaotic synergies and wild builds that feel totally unique each run, with all the "being a boss fantasy and humor" deeply included in the mechanics and content :)Reply #noobs #are #coming #demo #free
    OVERBOY.ITCH.IO
    NOOBS ARE COMING (Demo) [Free] [Action] [Windows] [Linux]
SirCozyCrow (5 hours ago):
The soundtrack is PEAK! I loved playing this, and my partner, who normally doesn't play games like this one, had a good time as well. I enjoyed the learning curve and I can't wait to play the harder difficulties. Here's a video I made; my partner jumped in for a few minutes as well.

Reply: so fun

Drew.a.Chain (1 day ago) (+1):
Very addictive!

Trashpanda119 (1 day ago) (+1):
Love the playstyle and the art style, definitely fun to play, plus the music is the cherry on top.

AhoOppai (1 day ago) (+1):
Really fun game, can't wait for the full game.

Din Xavier coding (1 day ago):
I chose the laser eye. How do I turn the attack around? Can I even do that?

overboy (1 day ago):
Hey, the laser eye gets a random direction at the start of each wave; it's one of the specificities of this attack ;)

Fort Kenmei (1 day ago):
Gameplay and Critique ;)

overboy (1 day ago) (+1):
Thanks a lot for the awesome video and the feedback! :)

TLGaby (2 days ago):
Just so you know, browser progress keeps getting reset.

overboy (1 day ago, 2 edits) (+1):
Thanks for the report! Could it be due to some of your browser settings? Unfortunately, browser-based games can't always guarantee reliable local saves due to how browsers handle storage. To avoid this in the future, I recommend trying the downloadable version of the demo; it provides a more stable environment for saving progress. :)

Reply: epic.

oleekconder (2 days ago, 1 edit) (+1):
Very nice. Spent a couple hours easy =) UPD: And some more.

MaximusR (3 days ago) [translated from Spanish]:
It's a game I already played back when it had fewer features, and now that it's updated I'd like to record it again.

Reply: EPIC

Reply: love the spiders ♥

nineGardens (3 days ago, 1 edit) (+2):
Okay so... tried out a few things, and some dev suggestions to report: Bigfoot is such a cool idea, and running around at that speed with all THAT going on just gave me motion sickness. Summoner is hysterical fun. All hail spiders. Tomatoes are pretty fun too. The Adept is so cool in theory, but once you have the right build it's a bit of a "standing still simulator". Also, if you have totems or other turrets, there's very much the question each round of "Will my circle spawn NEAR the totems (instant win), or far from them (oh no)?" I kind of wonder if the mage circle should fizzle out after 20 seconds and appear somewhere else; just something to give a bit more dynamism and make the original spawn point less critical.
Added thoughts: Watering psychotic tomatoes feels great. Being a malevolent spider with 8 arms feels amazing; very good and natural. "Orbital" is one of the greatest and most fun abilities in the game, and I would take it even without the damage boost. Lots of fun, but also very silly. Good job.

dave9999 (3 days ago):
With some Size you can kick the totems around to reposition them towards your circle; it benefits them too. The Adept can choose the wand at the start, and with it you have no sustain problem anyway, whatever build you want to set up.

nineGardens (3 days ago):
Oh damn, I only just found out you can kick the totems! Okay, in that case all is well. Or at least... I still think a moving circle could be cool, but the fact that you can move your totems over to where the circle is makes things much better.

Reply: Just get enough Amount + Size and they hit everything; Bounce is overkill.

Reply: Lost track of time, 10 hours in and still hooked. Absolutely love it! Can't wait for the full release.

DriftedVoid (4 days ago):
Pretty good!

Indyot (4 days ago):
It's a pretty addictive game, congrats! I lowkey missed a bit of satisfaction on the weapons though.

Reply: Congrats on the game! I really like the weapons that you interact with (i.e. the spike ball), which gives it a fun spin.

1Soultaken (4 days ago):
Anyone know good combos for the items? (I just pick randomly.)

dave9999 (4 days ago, 1 edit) (+2):
Lasers plus Amount + Adept, with some Arcane for basic damage (it's unstable to set up, and only Overboy starts with one). Totems + Amount + Bounce + Adept, optional Size and Arcane; you can stand still in the end. All shovels with Crit and Strength; their extra souls help you snowball hard and easy, probably the most straightforward and stable build (realized in the end that all the Size was wasted on this). You can beat the game with nearly anything, it's well balanced, but that one is very strong and easy. Soul Flask and More Chests are near-always must-picks; the high-Luck ones give you better items. The free reroll is a must-pick. Lightning Dagger is somewhat unique in that it can carry you the entire early game even if you don't get enough Element Damage. (I understand that the more gimmicky things like pets and kickables give the game versatility, but for min-maxing they're not that competitive.)

dave9999 (8 days ago):
Underestimated totems.

limey (8 days ago):
I like how you've made MULTITUDES of updates on this, so as soon as I check my feed it's just this.

dave9999 (8 days ago, 1 edit) (+1):
My best run so far. Is there a hidden mechanic that makes weapons you already have more likely to drop?

overboy (8 days ago) (+2):
Lmao, awesome — looks like a really fun build to play! Yeah, Shop RNG uses a lot of hidden tricks to help you find relevant attacks, while still allowing unrelated ones to appear. That way, you can discover unique builds and experiment freely!

overboy (8 days ago, 1 edit):
Thank you so much for the incredible reception of the web demo on Itch, and to everyone who wishlisted the game! Many of the changes, along with much more to come in future updates, are directly based on your feedback here and on the game's Discord. I'm also excited to announce that the game will release on Steam on 8 July 2025!
Demo - Update 35 (06 June 2025)
- Singleplayer UI: the Level Up Upgrade Phase and Chest Pickup Phase UI now display the items and attacks inventories (useful to check the scaling of currently equipped attacks, for example)
- Singleplayer Shop: subtle animation while selecting a Buy button
- Many balancing tweaks
- Balancing: nerfed Life Steal in various ways (lower values gained from items)
- Balancing: nerfed Knockback in various ways (lower values gained, higher item rarity, lower max applied value)
- Balancing: too many items enhancing HP Max were in the Demo, which made it easier to stack HP and survive due to the higher ratio of HP items
- Added a short grace period during which the player can still pick up Souls even after they're slurped by the Soul Portal
- Fine-tuned the color of some weapons to improve visibility
- Balancing: Ballistas no longer double their projectiles based on Amount (only the number of ballistas scales with Amount)
- If the player's HP is full and HP Max > 20, the player can't be one-shot
- Bugfix: the in-game achievement pop-up could be displayed below other UI elements when it should always be above everything else
- Potential bugfix for a rare Multiplayer shop bug where player 2's shop sections weren't displayed at all
- Reworked the save system in preparation for upcoming features

xHELLO_WORLDx (10 days ago):
contracts on the game

dave9999 (10 days ago):

elijah_ap (10 days ago):
Love the art style, upgrades, controls, etc. Balance might be the only thing off about this. If you were to add anything, I would want to see more variety in the stages, similar to Vampire Survivors. Otherwise, really great.

Reply: Thank you so much! I'll keep working on the balance with each update, and I appreciate the suggestion on stage variety!

Netsmile (10 days ago):
Torch IV has a problem rounding numbers in the stats hover-over display. Other levels of torches work.

overboy (10 days ago, 1 edit):
Thanks, I'll fix this displayed rounding-number issue soon!

Skeppartorsk (10 days ago):
For now I'd say it's fun, but lacking a bit in balance. I absolutely suck at Brotato-likes but find this one easy, so it's probably undertuned as far as difficulty is concerned. The power and availability of HP and regen items make you just not care if you get hit. Then with the relatively strong armor on top, you're too tanky for anything to feasibly ever kill you.

overboy (10 days ago, 1 edit) (+1):
Thanks for the feedback! Sounds like tanky builds might be a bit too forgiving right now; I'll do some balancing changes.

Skeppartorsk (9 days ago, 2 edits):
Life Steal has similar issues too. There's also the standard issue with knockback in these kinds of games: the lack of any enemy resistance/diminishing returns means it's way too easy to get enough knockback that enemies cannot touch you anymore. Ranged attacks are too few and far between to worry about with the current levels of sustain, meaning you can just Stand Still and Kill way too reliably.
Edit: Lategame with 6x Wands I'm getting so much screen shake it's triggering simulation sickness. It was due to having Pierce + Bounce.
The screen shake came from my projectiles bouncing off the edge of the map.

overboy (8 days ago, 2 edits) (+1):
Thanks for your feedback, it will help with the game balancing! For now I try to avoid diminishing returns by design, to make sure each feature and stat is super easy to understand: I dislike when roguelikes get too opaque, and I prefer that the player fully and easily understands each of their choices, but yeah, that requires finding a good balance! In future updates, Life Steal will become harder to get, and Knockback will be capped at lower maximum applied values. Regarding overall difficulty, the full version has 3 extra difficulty levels, and based on feedback from beta testers, the balance between the 5 difficulty modes seems close to what I'm aiming for (minus some issues like the ones you pointed out, and of course some balancing required on specific builds and items). There is already an option to disable screen shake ;)
Edit: Would you be interested in joining the beta test of the full game? If so, please join the Discord and ping me in DM ;)

Skeppartorsk (8 days ago, 4 edits):
I did notice that you can turn off screen shake entirely, but admittedly a lot of the visceral feel of the combat goes away when you fully disable it. When you have too many Leeroy/knockback/bouncing projectiles, it just reaches the point where simulation sickness sets in. I wish there were an intensity setting, or a way to cap how often a screen shake can get triggered.
I agree on the opacity point, but I was thinking of something more akin to how CC diminishing returns work in WoW: 1st hit = full value, 2nd hit within 10 s = half value, 3rd hit = 1/4 value, then 10 s of immunity before it resets. That way you still get knockback when you pick knockback, but you can't just permanently nail enemies against the wall.
Edit: Also, there's a wording issue (or a bug) with how multiple pentagrams work. If you have the Adept pentagram and the item pentagram, the wording is "when you stand inside a pentagram", but the item one gives the 20% damage ONLY and the Adept one gives the Adept bonuses ONLY. The wording implies that both pentagrams should give the Adept bonus AND the 20% damage bonus.
Edit 2: I'd suggest reformatting the Grimorius tooltip so that the -10% armor sits above the "on level up" portion. The indentation difference between the +1% speed and the -10% armor is small enough that I read it as losing 10% armor on every level up.

overboy (8 days ago):
Thanks a lot for the interesting insights! I nerfed HP, Life Steal and Knockback using various techniques in the last update, along with many other changes. I just tested Pentagram/Adept and it works as expected: the two effects stack correctly, as the wording implied. I also reformatted the Grimorius tooltip as you suggested ;)

View more in thread

Bad Piggy (11 days ago):
Very cool in its current state. I love how much it emphasises movement, like how some active abilities need to be grabbed from around the arena to use them. That said, I think enemy projectiles could honestly stand out more; I could hardly see them at times in all the chaos. Still, I think this is a pretty solid base right now, and as always, you have a beautiful visual style, though I feel the game suffers a little from how busy it can get. Great stuff so far though.

Reply: Thanks Bad Piggy! Really glad you're enjoying the mechanics. I appreciate the feedback on projectile visibility and how busy things can get; I'll definitely look into ways to improve those aspects. Really grateful for the kind words and thoughtful feedback!

LeoLohandro (11 days ago):
A copy of Brotato :), but still fun.

overboy (11 days ago, 2 edits) (+1):
Hey, thanks a lot! Yes, this game is a Brotato-like with many twists and new innovative mechanics, such as:
- Equippable Boss Patterns (active skills you can trigger by picking up orbs on the map)
- Minion Summoning
- Growing Plant Minions with a watering can
- Amount and Size stats
- Physics-Based Weapons, like chained spikeballs
- Kickable stuff (you can even play soccer with your minions or other co-op players)
- Playable character merge feature (get the effects of 2 or more different characters at the same time)
- Dozens and dozens of unique effects (turning enemies into Sheep, or Golden Statues, or both?)
I'm aiming for something like The Binding of Isaac meets Brotato: a deep, replayable experience full of chaotic synergies and wild builds that feel totally unique each run, with all the "being a boss" fantasy and humor deeply included in the mechanics and content :)
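Skeppartorsk's WoW-style diminishing-returns proposal above is concrete enough to sketch. Here is a minimal Python model of the described scheme, purely illustrative: the game is not written in Python, the class and method names are hypothetical, and the original description has the immunity last 10 s from the third hit, which is simplified here to a single 10 s window from the first hit.

```python
class KnockbackDR:
    """Per-enemy diminishing returns on knockback, as described:
    1st hit = full value, 2nd hit within the window = half,
    3rd hit = quarter, then immune until the window expires."""

    WINDOW = 10.0  # seconds

    def __init__(self):
        self.hits = 0
        self.window_start = None

    def apply(self, base_value, now):
        # Start a fresh window if none is active or the old one lapsed.
        if self.window_start is None or now - self.window_start >= self.WINDOW:
            self.hits = 0
            self.window_start = now
        self.hits += 1
        if self.hits == 1:
            return base_value        # full value
        if self.hits == 2:
            return base_value / 2    # half value
        if self.hits == 3:
            return base_value / 4    # quarter value
        return 0.0                   # immune until the window resets

dr = KnockbackDR()
print([dr.apply(100, t) for t in (0, 2, 4, 6, 12)])  # [100, 50.0, 25.0, 0.0, 100]
```

Each enemy would carry its own `KnockbackDR` instance, so stacking knockback still helps against fresh targets while no single enemy can be pinned to a wall indefinitely.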