  • Godforge developer Fateless has raised $14 million through community investment. Apparently the community had already helped with $6 million during the initial startup phase, and was also a 'major contributor' to the latest round. It all sounds a bit boring, doesn't it? Raise money, the end.

    #Godforge #Fateless #Investments #Community #Business
    www.gamedeveloper.com
    The community helped Fateless to raise funds of $6M during the initial startup phase, and was a 'major contributor' to the latest round.
  • Hey gamers! Have you noticed what's going on with video game disc cases? It's a real mystery, but don't worry, the magic of gaming keeps surprising us!

    In this adventure, Samoa Joe shares his impressions of the Battlefield 6 open beta, and we also have exciting news about the next Resident Evil movie! And that's not all! An indie RPG reminiscent of Persona just got a release date, and guess what? Spider-Man is back!

    Stay positive and get ready to dive into this incredible universe.
    What The Hell Is Going On With Video Game Disc Cases?
    kotaku.com
    Plus: Samoa Joe reviews Battlefield 6's open beta, news on the next Resident Evil movie, a cool indie RPG that looks like Persona gets a release date, and Spider-Man!
  • In the shadow of my thoughts, I lose myself. Dreams of a wonderful world fade away, like the glow of an ambrosia sky. Listening to the latest game developer podcast about "Ambrosia Sky" reminds me of the beauty of the stories we build, and yet I feel so alone. Creating something unique, like this fascinating game coming in 2025, only deepens my solitude. The passionate voices of Joel Burgess and Kait Tremblay resonate within me, but they cannot soothe this dull ache. Who cares about the dreams of a broken heart?
    www.gamedeveloper.com
    This week on the Game Developer Podcast, Soft Rains creative director Joel Burgess and narrative director Kait Tremblay discuss the development of Ambrosia Sky, the most interesting game you’ll hear about in 2025.
  • In a world where monsters can't even share a meal without turning it into a WWE main event, "Eye 4 Eye" takes us on a delightful journey of chaos. Joey Carlino spent a staggering 57 hours crafting this culinary catastrophe with Blender. I mean, who knew food fights could be so labor-intensive? One can only imagine the brainstorming sessions: "Let's animate two monsters over food—what could go wrong?" Spoiler: everything. Just when you think it's a simple snack, it spirals into a monster mash that even Netflix would envy. Bravo, Joey, for reminding us that even monsters have their priorities straight... as long as those priorities include not sharing.

    #Eye4Eye #MonsterMunch #AnimationChaos #FoodFight
    www.blendernation.com
    2 monsters have a fight over some food that quickly goes off the rails. Joey Carlino writes: I made this with Blender in 57 hours for HellavisionTelevision and LooseFrames "Hell Fable" animation open call. The original cut that was submitted to Hella
  • In a world where dreams are fueled by ambition, the collapse of Silicon Valley Bank feels like a haunting echo of lost hope. The promise of innovation, once a bright beacon, now dims under the weight of uncertainty. With tech billionaires like Palmer Luckey and Joe Lonsdale stepping in to back Erebor, I can’t help but feel a deep sense of isolation. Their endeavors in crypto, AI, and defense may shine a light for some, but what of those left behind in the shadows? The loneliness of watching others soar while I remain grounded is an ache that lingers, a reminder that not everyone finds a sanctuary in these new ventures.

    #SiliconValley #TechBillionaires #Erebor #Loneliness #
    www.wired.com
    Funded by Anduril cofounder Palmer Luckey and Palantir cofounder Joe Lonsdale, the new bank—named, like their companies, after Tolkien lore—aims to serve startups in crypto, AI, and defense.
  • Asus ROG Xbox Ally, ROG Xbox Ally X to Start Pre-Orders in August, Launch in October – Rumour

    A new report indicates that the ROG Xbox Ally will be priced at around €599, while the more powerful ROG Xbox Ally X will cost €899.

    Posted By Joelle Daniels | On 16th, Jun. 2025

    While Microsoft and Asus have unveiled the ROG Xbox Ally and ROG Xbox Ally X handheld gaming systems, the companies have yet to confirm prices or release dates for the two systems. While the announcement mentioned that they will launch later this year, a new report, courtesy of leaker Extas1s, indicates that pre-orders for both devices will open in August, with the launch following in October.

    As noted by Extas1s, the lower-powered ROG Xbox Ally is expected to be priced at around €599. The leaker claims to have corroborated the pricing details for the handheld with two different Europe-based retailers. The more powerful ROG Xbox Ally X, on the other hand, is expected to be priced at €899. This would put its pricing in line with Asus's own ROG Ally X.

    Previously, Asus senior manager of marketing content for gaming, Whitson Gordon, had revealed that pricing and power use were the two biggest reasons why the ROG Xbox Ally and ROG Xbox Ally X don't feature OLED displays. Instead, both systems will come equipped with 7-inch 1080p 120 Hz LCD displays with variable refresh rate (VRR) capabilities.

    "We did some R&D and prototyping with OLED, but it's still not where we want it to be when you factor VRR into the mix, and we aren't willing to give up VRR," said Gordon. "I'll draw that line in the sand right now. I am of the opinion that if a display doesn't have variable refresh rate, it's not a gaming display in the year 2025 as far as I'm concerned, right? That's a must-have feature, and OLED with VRR right now draws significantly more power than the LCD that we're currently using on the Ally, and it costs more."

    Explaining that the decision ultimately also came down to keeping pricing for both systems at reasonable levels, since buyers often get handheld gaming systems as secondary machines, Gordon noted that both handhelds would carry much higher price tags if OLED displays were used.

    "That's all I'll say about price," said Gordon. "You have to align your expectations with the market and what we're doing here. Adding 32GB, OLED, Z2 Extreme, and all of those extra bells and whistles would cost a lot more than the price bracket you guys are used to on the Ally, and the vast majority of users are not willing to pay that kind of price."

    Shortly after the announcement, Microsoft and Asus released a video in which the two companies discussed the various features of the ROG Xbox Ally and ROG Xbox Ally X. In the video, we also get to see an early hardware prototype of the handheld built inside a cardboard box.

    The ROG Xbox Ally runs on an AMD Ryzen Z2A chip and has 16 GB of LPDDR5X-6400 RAM and 512 GB of storage. The ROG Xbox Ally X runs on an AMD Ryzen Z2 Extreme chip and has 24 GB of LPDDR5X-8000 RAM and 1 TB of storage. Both systems run Windows.

  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization, presented at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV), but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF pose (planar position and heading).
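    The final step of the pipeline above, recovering a 3-DoF pose from matched 2D points, can be sketched in a few lines. The following is a minimal illustration of weighted orthogonal Procrustes (Kabsch) alignment, not the authors' implementation: the function name and the confidence-weighting scheme are assumptions for the sketch.

    ```python
    import numpy as np

    def procrustes_2d(ground_pts, aerial_pts, weights=None):
        """Estimate a 3-DoF rigid transform (rotation R, translation t)
        mapping ground-frame points onto matched aerial-frame points.

        ground_pts, aerial_pts: (N, 2) arrays of matched 2D points.
        weights: optional (N,) match confidences.
        """
        if weights is None:
            weights = np.ones(len(ground_pts))
        w = weights / weights.sum()

        # Weighted centroids of both point sets
        mu_g = (w[:, None] * ground_pts).sum(axis=0)
        mu_a = (w[:, None] * aerial_pts).sum(axis=0)

        # Cross-covariance of the centered point sets
        G = ground_pts - mu_g
        A = aerial_pts - mu_a
        H = (w[:, None] * G).T @ A

        # Optimal rotation via SVD (Kabsch solution),
        # with a determinant check to rule out reflections
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, d]) @ U.T

        t = mu_a - R @ mu_g                     # translation (x, y)
        theta = np.arctan2(R[1, 0], R[0, 0])    # heading angle
        return R, t, theta
    ```

    Given exact correspondences, this recovers the rotation and translation in closed form; in the paper's setting the weights would come from the match confidences, so that uncertain correspondences contribute less to the pose.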

    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project.
    EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments
    www.marktechpost.com
    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently.

    Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization, presented at CVPR 2025. Their paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, such as an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach demonstrates a remarkable 28% reduction in mean localization error compared to the previous state of the art on a challenging public dataset.

    Key Takeaways:
    - Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    - Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    - Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    - Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles

    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this abstract approach doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features

    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map. Here’s a breakdown of their pipeline:
    - Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered on the camera. This creates a 3D representation of the immediate environment.
    - Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
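The height-selection idea just described can be sketched as a softmax-weighted pooling along the vertical axis. This is a minimal NumPy illustration of the concept, not the authors' implementation; the array shapes and the source of the scoring logits (e.g. a small MLP) are assumptions:

```python
import numpy as np

def pool_height_softmax(feats, scores):
    """Collapse per-column 3D features to one BEV feature per ground cell
    by soft-selecting along the height axis.

    feats:  (H, N, C) array - features for H height bins at N BEV cells
    scores: (H, N) array    - per-bin selection logits
    """
    # Numerically stable softmax over the height dimension
    w = np.exp(scores - scores.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    # Weighted sum: each BEV cell keeps a blend dominated by its
    # highest-scoring height bin (road marking, facade, rooftop edge, ...)
    return (w[..., None] * feats).sum(axis=0)  # (N, C)
```

When one bin's logit dominates a column, that bin contributes nearly all of the cell's BEV feature, which is the mechanism that would let a facade's features pair with the corresponding rooftop in the aerial view.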
    - Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm, Procrustes alignment, to calculate the precise 3-DoF (x, y, and yaw) pose.

    Unprecedented Performance and Interpretability

    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization on the KITTI dataset, a staple in autonomous driving research. Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.

    “A Clearer Path” for Autonomous Navigation

    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project.
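The pose-estimation step of the pipeline above is classical: given matched 2D point pairs, Procrustes (Kabsch) alignment recovers the rotation and translation in closed form via an SVD. A minimal sketch follows; the confidence weighting is an assumption about how sparse match scores might be used, not a detail from the paper:

```python
import numpy as np

def procrustes_2d(ground_pts, aerial_pts, weights=None):
    """Closed-form 3-DoF (x, y, yaw) alignment of matched 2D points.

    Finds rotation R (2x2) and translation t (2,) minimizing the
    weighted sum of ||R @ g + t - a||^2 over matched pairs (g, a).
    """
    if weights is None:
        weights = np.ones(len(ground_pts))
    w = weights / weights.sum()
    # Confidence-weighted centroids
    mu_g = (w[:, None] * ground_pts).sum(axis=0)
    mu_a = (w[:, None] * aerial_pts).sum(axis=0)
    # Weighted cross-covariance of the centered point sets
    H = (ground_pts - mu_g).T @ (w[:, None] * (aerial_pts - mu_a))
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections: force det(R) = +1
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_a - R @ mu_g
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return R, t, yaw
```

Because the solution is closed-form and differentiable almost everywhere, a pose loss applied to its output can supervise the upstream feature matching, which is consistent with the weakly supervised training the article describes.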
    Also, feel free to follow us on Twitter and don’t forget to join our 100k+ ML SubReddit and subscribe to our Newsletter.

    Jean-marc Mommessin is an AI business executive. He leads and accelerates growth for AI-powered solutions and started a computer vision company in 2006. He is a recognized speaker at AI conferences and holds an MBA from Stanford.
  • Studio555 raises $4.6M to build playable app for interior design

    Studio555 announced today that it has raised €4 million, or about $4.6 million, in a seed funding round. It plans to put this funding towards creating a playable app, a game-like experience focused on interior design. HOF Capital and Failup Ventures led the round, with participation from the likes of Timo Soininen, co-founder of Small Giant Games; Mikko Kodisoja, co-founder of Supercell; and Riccardo Zacconi, co-founder of King.
    Studio555’s founders include entrepreneur Joel Roos, now the CEO; CTO Stina Larsson; and CPO Axel Ullberger. The latter two formerly worked at King on the development of Candy Crush Saga. According to the founders, the app in development combines interior design with the design and consumer appeal of games and social apps. Users can create and design personal spaces without needing any technical expertise.
    The team plans to launch the app next year and will put its seed funding towards product development and growing its team. Roos said in a statement, “At Studio555, we’re reimagining interior design as something anyone can explore: open-ended, playful, and personal. We’re building an experience we always wished existed: a space where creativity is hands-on, social, and free from rigid rules. This funding is a major step forward in setting an entirely new category for creative expression.”
    Investor Timo Soininen said in a statement, “Studio555 brings together top-tier gaming talent and design vision. This team has built global hits before, and now they’re applying that experience to something completely fresh – think Pinterest in 3D meets TikTok, but for interiors. I’m honored to support Joel and this team with their rare mix of creativity, technical competence, and focus on execution.”
    venturebeat.com
  • Those Investment Ads on Facebook Are Scams

    lifehacker.com
    Investment scams aren't anything new: bad actors have long used pump-and-dump tactics to hype stocks or cryptocurrencies, preying on emotions like fear and greed. And who wouldn't want big—or even steady—returns on their money, especially amidst tariffs and other economic turmoil? Scammers are currently capitalizing on this with fraudulent Facebook ads that lure users into handing over large sums of money. Here's how to spot these schemes and avoid falling victim.

    Investment scams on Meta platforms

    According to a group of 42 state attorneys general, the current fraudulent investment campaigns also have elements of impersonation scams. The scheme begins with ads on Facebook that feature prominent investors, including ARK Investment Management's Cathie Wood, CNBC's Joe Kernen, and Fundstrat's Tom Lee, along with other wealthy individuals like Warren Buffett and Elon Musk (none of whom have any actual affiliation with the ads). If you click the ad, you'll be prompted to download or open WhatsApp to join an investment group. This is where the pump-and-dump kicks off. "Experts" in the group advise members to purchase specific stocks, inflating the price, which the scammers in turn sell and profit from. The AG letter to Meta detailing the scam includes reports of individuals losing anywhere from $40,000 to $100,000 or more after clicking on a fraudulent ad on Facebook. Other investment scams originating on Facebook involve cybercriminals harvesting sensitive personal information via fraudulent investing platforms (also by spoofing celebrity endorsements).

    Investment scam red flags to watch for

    For many people, it seems obvious that you shouldn't get your investment advice from a Facebook ad or WhatsApp group. But fear and greed are powerful emotions, and scammers are counting on these social engineering tactics working at least some of the time. That's why you should be wary of any advice that promises an unrealistic rate of return in a short period of time with no risk of loss, as well as endorsements from celebrities, political figures, and well-known investors (who are almost certainly not endorsing anything). It's also just good practice not to click ads on Facebook, which are easy vectors for spreading scams and malware. Another sign of a scam is content or communication that appears to be generated by AI. After joining a WhatsApp group, an investigator from the New York Office of the Attorney General was called by a scammer who used AI to translate her speech into English. Unfortunately, emotions can cloud our ability to identify AI-generated content if we want to believe what we're seeing.
CGShares https://cgshares.com