• In a world saturated with images, I find myself lost in the shadows of disappointment. As Erik Christensen introduces the Scan Space Image Processor, a free tool for photo processing, I can't help but feel the weight of unmet expectations. It's hard to swallow that even the best tools like Lightroom can't always capture the essence of what we wish to preserve. Each click of the shutter resonates with a sense of longing, yet the final product often falls short. The loneliness of creation is a heavy burden, where every processed photo feels like a reminder of unfulfilled dreams.

    #Photogrammetry #ScanSpace #Loneliness #Heartbreak #Photography
    3dvf.com
    Erik Christensen launches Scan Space Image Processor, a free application for processing photos before they are used in a photogrammetry tool such as RealityScan, Metashape, or 3DF Zephyr. Better than Lightroom? He started from the…
  • In a world where connections fade like whispers in the wind, Google introduces its smart calling feature, woven with artificial intelligence to bridge the gaps. Yet, as I ponder this innovation, I can’t shake the feeling of loneliness that clings like a shadow. The idea of talking to a machine instead of a friend leaves me feeling more isolated, as if even technology cannot fill the void.

    Amidst the buzz of progress, I find myself yearning for genuine connection, for a voice that truly understands the weight of silence. The loneliness in this digital age feels heavier than ever.

    #Loneliness #AI #SmartCalling #Isolation #EmotionalJourney
    Google launches smart calling feature powered by artificial intelligence to serve users
    arabhardware.net
  • Laura Boráros Dances Between Dreams and Reality in a Surreal Short Film

    If you’ve ever had an upstairs neighbor, you’re probably familiar with the sounds of echoing footsteps, resonant laughing, glass breaking, and the muffled weight of too many voices speaking atop one another during a late-night gathering.

    In a short film titled “Snovník,” or “Dreamer,” Czech Republic-based filmmaker Laura Boráros introduces a bright red protagonist who can’t sleep because he’s unable to ignore the rowdiness resonating from above his bedroom ceiling. Taking matters into his own hands, he makes his way upstairs and knocks on his neighbor’s door—only to become engulfed by the fun himself after peering through a small keyhole.

    Boráros immerses the audience in a flurry of bold colors, painted and snipped into a mirage of shapes and scenes. Using stop-motion, the animation strikes a mechanical yet fluid tone, creating a surreal environment that accurately captures the experience of the very fever dream “Snovník” depicts.

    Watch the full film on Vimeo, and get a peek at the artist’s process on Instagram.

    Do stories and artists like this matter to you? Become a Colossal Member today and support independent arts publishing for as little as $7 per month.
    www.thisiscolossal.com
  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization, presented at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state of the art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose; a minimal sketch of this geometric step follows the list.
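
    To make the geometric tail of this pipeline concrete, the sketch below is a minimal NumPy illustration, not the authors' code: a softmax over the height axis stands in for the learned BEV pooling, and a weighted 2D Procrustes (Kabsch) solve recovers the 3-DoF pose, planar rotation plus translation, from matched point pairs. The function names, array shapes, and the hand-supplied confidence weights are illustrative assumptions.

    ```python
    import numpy as np

    def pool_vertical_features(feat_3d, conf_3d):
        """Collapse an (H, N, C) stack of per-height features into an (N, C) BEV plane.

        A softmax over the height axis plays the role of FG2's learned selection of
        the most informative vertical feature (road marking vs. roof edge, etc.);
        here the per-height confidences conf_3d of shape (H, N) are assumed given.
        """
        w = np.exp(conf_3d - conf_3d.max(axis=0, keepdims=True))
        w /= w.sum(axis=0, keepdims=True)
        return (w[..., None] * feat_3d).sum(axis=0)

    def procrustes_2d_pose(ground_pts, aerial_pts, weights):
        """Weighted 2D Procrustes (Kabsch) alignment.

        Finds the rotation R (yaw) and translation t that best map ground-frame
        points onto their matched aerial-map points, i.e. the 3-DoF pose.
        ground_pts, aerial_pts: (M, 2); weights: (M,) match confidences.
        """
        w = weights / weights.sum()
        mu_g = (w[:, None] * ground_pts).sum(axis=0)   # weighted centroids
        mu_a = (w[:, None] * aerial_pts).sum(axis=0)
        G, A = ground_pts - mu_g, aerial_pts - mu_a
        H = (w[:, None] * G).T @ A                     # weighted cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        t = mu_a - R @ mu_g
        return R, t

    # Toy check: three matched landmarks related by a known 30-degree yaw and offset.
    rng = np.random.default_rng(0)
    yaw = np.deg2rad(30.0)
    R_true = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
    t_true = np.array([4.0, -2.0])
    ground = rng.normal(size=(3, 2))
    aerial = ground @ R_true.T + t_true
    R_est, t_est = procrustes_2d_pose(ground, aerial, np.ones(3))
    print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
    ```

    In the actual model, the match weights come from the computed feature similarities rather than being supplied by hand, consistent with the article’s description of sampling only the most confident matches before the Procrustes step.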

    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.
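
    To put the headline number in context, the short snippet below (a generic illustration, not the paper’s evaluation code) computes a mean localization error as the average Euclidean distance between predicted and ground-truth positions; a 28% reduction means the new mean error is 0.72 times the previous one. The numeric values are hypothetical.

    ```python
    import numpy as np

    def mean_localization_error(pred_xy, gt_xy):
        # average Euclidean distance between predicted and true planar positions
        return np.linalg.norm(pred_xy - gt_xy, axis=1).mean()

    # hypothetical numbers, for illustration only (not results from the paper)
    prev_mean_error_m = 1.50                     # meters, prior state of the art
    fg2_mean_error_m = prev_mean_error_m * (1 - 0.28)
    print(fg2_mean_error_m)                      # 1.08 meters
    ```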

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project.
    www.marktechpost.com
  • Dispatch offers something new for superhero video games — engaging deskwork

    While we’ve had plenty of superhero games come out over the past decade and a half (and I’m always down for more), most have either been open-world adventures or fighting games. I’m as excited as anyone for the upcoming Marvel Tōkon and Invincible VS, but I’m also ready for a little something different. That’s where Dispatch from AdHoc Studio comes in.

    Dispatch is a game made for people who enjoy watching a rerun of The Office as a palate cleanser after the bloody battles of Invincible. So, me. You’re cast as Robert Robertson, the former superhero known as Mecha Man. He has to step away from frontline superheroics as the mech suit he relied on was destroyed in battle. Needing a job, he starts work at a dispatch center for superheroes, and the demo takes you through a small, 30-minute chunk of his first day.

    You’ll notice Dispatch’s crude humor early on. The first thing you can do in Dispatch is give a colleague a “bro fist” at a urinal, and the juvenile jokes don’t stop there. Middle school boys are going to love it, though I’d be lying if I said a few of the jokes didn’t get chuckles from me.

    Another of Robertson’s co-workers, who also used to be a superhero until his powers caused him to rapidly age, introduces Robertson’s team of misfit heroes, though that term should be used loosely. He notes they’re a “motley crew of dangerous fuck-ups” as Robertson examines their files, each with a mugshot and rap sheet. Robertson isn’t in charge of the Avengers — he’s leading a D-List Suicide Squad. The cast, however, is full of A-listers: Laura Bailey, Matthew Mercer, Aaron Paul, and Jeffrey Wright are among those lending their voices to Dispatch.

    Much like The Boys, Dispatch plays with the idea of the corporatization of superheroes (though without the satire of and parallels to modern-day politics). These heroes aren’t a lone Spider-Man swinging through Manhattan on patrol — they’re employees waiting for an assignment. Gameplay consists of matching the right (or perhaps “good enough”) hero to the job. Some assignments I saw in the demo included breaking up a robbery, catching a 12-year-old thief, and grabbing a kid’s balloon from a tree while also making sure the kid didn’t cry. Seeing as how one of your misfits is a literal bat man and another looks like a tiefling, you have to choose wisely.

    The real draw of Dispatch for me isn’t the point-and-click assignment gameplay, but rather the choice-based dialogue. It’s developed by AdHoc Studio, which was formed in 2018 by former developers who had worked on Telltale titles like The Wolf Among Us, The Walking Dead, and Tales from the Borderlands, and you can easily see the throughline from those titles to Dispatch. At various points, you have a limited time to select Robertson’s dialogue, and occasionally a pop-up saying a character “will remember that” appears. How much Robertson’s choices actually have consequences or influence his relationships with others remains to be seen, though I have no doubt those choices will be fun to make.

    After its reveal at The Game Awards six months ago, Dispatch will be coming to Windows PC and unspecified consoles sometime this year. You can check out its demo now on Steam.
    www.polygon.com
  • Can Sonic Racing: CrossWorlds Outrun Mario Kart World?

    Mario Kart World is one of the year's hottest games, but its pivot to an open-world setting, while peeling back kart customization options, opened a massive rift for Sonic Racing: CrossWorlds to drift into. And Sega is determined to do everything possible to make its kart racer the one to beat by including numerous guest characters and cross-platform multiplayer contests. I took Sonic Racing: CrossWorlds for a test drive at the Summer Game Fest, and it's a strong contender for racing game of the year.

    Sonic Racing: CrossWorlds' Deep Kart Customization

    The biggest difference between Sonic Racing: CrossWorlds and Mario Kart World is that Sega's title focuses on kart customization. I'm not just talking about colors and tires; CrossWorlds introduces Gadgets, add-ons that augment your car, giving your whip helpful abilities to bring into the race. Each ride has a license plate with six slots where you can slot your chosen Gadgets. A Gadget can take up one, two, or three slots, so the idea is to find a mix that pairs well with character traits. There's a surprising amount of depth for people who want to min/max their favorite anthropomorphic animal.

    I chose Sonic, a speed character, and added a Gadget that started him with two boosts (one slot), a Gadget that improved his speed while trailing an opponent (two slots), and a Gadget that improved acceleration (three slots). There were so many Gadgets that I could have easily spent my entire demo session building a car to match my playstyle. I envision people happily getting lost in the weeds before participating in their first race.

    Gameplay: This Ain't Mario Kart World

    Although it's not an open world like Mario Kart World, Sonic Racing: CrossWorlds injects a unique spin on traditional kart racing. The familiar trappings are all here, such as rings to boost your top speed. Each Grand Prix consists of three maps, but the gimmick at play is stage transitions. About a third of the way down a course, a giant ring-portal opens, presenting a new world and track (hence the name "CrossWorlds"). The shift in tone and terrain keeps the races fast-paced and unpredictable. I particularly liked how whoever is in first place can sometimes choose which CrossWorlds track to go down, controlling the tempo. With every race completion, you earn credits based on your performance that you can cash in for new car parts.

    In stark contrast to Mario Kart World, Sonic Racing: CrossWorlds is far more aggressive, even on lower difficulties. At the start of each grand prix, the game assigns you a rival—this is the character to beat, and the one who taunts you all match. Beat them all, and you can race high-powered Super variants. Just about everything caused you to lose rings: bumping into other players, the walls, and, of course, getting hit by items. The series' trademark rubberband AI is still in place, too. Even in the press demo, I wasn't safe from taking four items back to back and being knocked off the stage mere feet away from the finish line.

    The demo didn't include the new characters that debuted at the Summer Game Fest, but I studied the character screen to see who else could be coming to the game. Including the 12 Sonic characters available in the demo, I counted a whopping 64 character slots. They include Hatsune Miku (the ultra-popular Vocaloid), Joker (from Persona 5), Ichiban Kasuga (from Like a Dragon), and Steve (from Minecraft). However, I hope to see other classic Sega IPs, as in previous Sonic Racing titles.

    Platforms and Release Date

    Will Sega do what Nintendon't? I had an exhilarating time playing Sonic Racing: CrossWorlds, and I can't wait to see more wild track compositions. Sonic Racing: CrossWorlds will be available on Nintendo Switch, PC, PlayStation 4, PlayStation 5, Xbox One, and Xbox Series X/S on Sept. 25, 2025. A Nintendo Switch 2 version is planned for later in the year.
    me.pcmag.com
  • Komires: Matali Physics 6.9 Released

    We are pleased to announce the release of Matali Physics 6.9, the next significant step on the way to the seventh major version of the environment. Matali Physics 6.9 introduces a number of improvements and fixes to the Matali Physics Core, Matali Render, and Matali Games modules, and presents physics-driven, completely dynamic light sources, real-time object scaling with destruction, a lighting model simulating global illumination (GI) in some aspects, comprehensive support for Wayland on Linux, and more.

    Posted by komires on Jun 3rd, 2025
    What is Matali Physics?
    Matali Physics is an advanced, modern, multi-platform, high-performance 3d physics environment intended for games, VR, AR, physics-based simulations and robotics. Matali Physics consists of the advanced 3d physics engine Matali Physics Core and other physics-driven modules that all together provide comprehensive simulation of physical phenomena and physics-based modeling of both real and imaginary objects.
    What's new in version 6.9?

    Physics-driven, completely dynamic light sources. The introduced solution allows for processing hundreds of movable, long-range, shadow-casting light sources, where each source can be assigned logic that controls its behavior and changes light parameters, volumetric-effect parameters, and more;
    Real-time object scaling with destruction. All groups of physics objects and groups of physics objects with constraints may be subject to destruction process during real-time scaling, allowing group members to break off at different sizes;
    Lighting model simulating global illumination (GI) in some aspects. Based on our own research and development work, processed in real time, ready for dynamic scenes, fast on mobile devices, and not based on lightmaps, light probes, baked lights, etc.;
    Comprehensive support for Wayland on Linux. The latest version allows Matali Physics SDK users to create advanced, high-performance, physics-based, Vulkan-based games for modern Linux distributions where Wayland is the main display server protocol;
    Other improvements and fixes, the complete list of which is available on the History webpage.

    What platforms does Matali Physics support?

    Android
    Android TV
    *BSD
    iOS
    iPadOS
    Linux (distributions)
    macOS
    Steam Deck
    tvOS
    UWP (Desktop, Xbox Series X/S)
    Windows (Classic, GDK, Handheld consoles)

    What are the benefits of using Matali Physics?

    Physics simulation, graphics, sound and music integrated into one total multimedia solution where creating complex interactions and behaviors is common and relatively easy
    Composed of dedicated modules that do not require additional licences and fees
    Supports fully dynamic and destructible scenes
    Supports physics-based behavioral animations
    Supports physical AI, object motion and state change control
    Supports physics-based GUI
    Supports physics-based particle effects
    Supports multi-scene physics simulation and scene combining
    Supports physics-based photo mode
    Supports physics-driven sound
    Supports physics-driven music
    Supports debug visualization
    Fully serializable and deserializable
    Available for all major mobile, desktop and TV platforms
    New features on request
    Dedicated technical support
    Regular updates and fixes

    If you have questions related to the latest version and the use of Matali Physics environment as a game creation solution, please do not hesitate to contact us.
    www.indiedb.com
    0 Comments ·0 Shares ·0 Reviews
• Nike Introduces the Air Max 1000, its First Fully 3D Printed Sneaker

Global sportswear leader Nike is reportedly preparing to release the Air Max 1000 Oatmeal, its first fully 3D printed sneaker, with a launch tentatively scheduled for Summer 2025. While Nike has yet to confirm an official release date, industry sources suggest the debut may occur sometime between June and August. The retail price is expected to be approximately $210. This model marks a step in Nike’s exploration of additive manufacturing (AM), enabled through a collaboration with Zellerfeld, a German startup known for its work in fully 3D printed footwear.
    Building Buzz Online
    The “Oatmeal” colorway—a neutral blend of soft beige tones—has already attracted attention on social platforms like TikTok, Instagram, and X. In April, content creator Janelle C. Shuttlesworth described the shoes as “light as air” in a video preview. Sneaker-focused accounts such as JustFreshKicks and TikTok user @shoehefner5 have also offered early walkthroughs. Among fans, the nickname “Foamy Oat” has started to catch on.
    Nike’s 3D printed Air Max 1000 Oatmeal. Photo via Janelle C. Shuttlesworth.
    Before generating buzz online, the sneaker made a public appearance at ComplexCon Las Vegas in November 2024. There, its laceless, sculptural silhouette and smooth, seamless texture stood out—merging futuristic design with signature Air Max elements, such as the visible heel air unit.
    Reimagining the Air Max Legacy
Drawing inspiration from the original Air Max 1 (1987), the Air Max 1000 retains the iconic air cushion in the heel while reinventing the rest of the structure using 3D printing. The shoe’s upper and outsole are formed as a single, continuous piece, produced from ZellerFoam, a proprietary flexible material developed by Zellerfeld.
Zellerfeld’s fused filament fabrication (FFF) process enables varied material densities throughout the shoe—resulting in a firm, supportive sole paired with a lightweight, breathable upper. The laceless, slip-on design prioritizes ease of wear while reinforcing a sleek, minimalist aesthetic.
    Nike’s Chief Innovation Officer, John Hoke, emphasized the broader impact of the design, noting that the Air Max 1000 “opens up new creative possibilities” and achieves levels of precision and contouring not possible with traditional footwear manufacturing. He also pointed to the sustainability benefits of AM, which produces minimal waste by fabricating only the necessary components.
    Expansion of 3D Printed Footwear Technology
The Air Max 1000 joins a growing lineup of 3D printed footwear innovations from major brands. Gucci, the Italian luxury brand known for blending traditional craftsmanship with modern techniques, unveiled several Cub3d sneakers as part of its Spring Summer 2025 (SS25) collection. The brand developed Demetra, a material made from at least 70% plant-based ingredients, including viscose, wood pulp, and bio-based polyurethane. The bi-material sole combines an EVA-filled interior for cushioning and a TPU exterior, featuring an Interlocking G pattern that creates a 3D effect.
    Elsewhere, Syntilay, a footwear company combining artificial intelligence with 3D printing, launched a range of custom-fit slides. These slides are designed using AI-generated 3D models, starting with sketch-based concepts that are refined through AI platforms and then transformed into digital 3D designs. The company offers sizing adjustments based on smartphone foot scans, which are integrated into the manufacturing process.
Join our Additive Manufacturing Advantage (AMAA) event on July 10th, where AM leaders from Aerospace, Space, and Defense come together to share mission-critical insights. Online and free to attend. Secure your spot now.
Who won the 2024 3D Printing Industry Awards?
Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.
You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content.
    Featured image shows Nike’s 3D printed Air Max 1000 Oatmeal. Photo via Janelle C. Shuttlesworth.

    Paloma Duran
    Paloma Duran holds a BA in International Relations and an MA in Journalism. Specializing in writing, podcasting, and content and event creation, she works across politics, energy, mining, and technology. With a passion for global trends, Paloma is particularly interested in the impact of technology like 3D printing on shaping our future.
    #nike #introduces #air #max #its
Nike Introduces the Air Max 1000, its First Fully 3D Printed Sneaker
    3dprintingindustry.com
    0 Comments ·0 Shares ·0 Reviews
  • inZOI is on sale for the first time to celebrate the big June update

    A Discount, You Say?

    inZOI is on sale for the first time to celebrate the big June update
    If you recently felt an itch to check in on inZOI, but never actually bought it, Krafton has a deal for you.

    Image credit: Krafton

    News

    by Sherif Saed
    Contributing Editor

    Published on June 13, 2025

    It seems like things are getting exciting in the world of inZOI once again, after what felt like months of no comms and some patch delays. Earlier this week, the team behind the life sim finally announced a release date for its next update.
    What was initially billed as the May update missed that window by quite a margin, but was officially given a proper release date - in June - just a few days ago. Update v0.2.0 arrives today, and with it, a discount that could maybe tempt those who have yet to jump in.


    This is inZOI’s first-ever discount since its release back in March. It’s part of a larger sale for publisher Krafton, which also happens to be the company’s first publisher sale on Steam. 17 titles from the publisher’s catalogue are on sale from now until Thursday, June 26.
    Sale percentages vary, and in inZOI’s case, the discount is a bit meagre, slashing the price by just 10%. Now, the game is still in Early Access, and never had the price of a AAA title to begin with, so it’s not exactly the sort of thing to get a bigger 20-30% off - at least not quite yet.

    The latest inZOI patch introduces official mod support to the game in the form of ModKit, adds same-sex relationships, the ability for Zois to have - and adopt - children, and much more besides.
    Things are popping off elsewhere in the world of life sims and The Sims-likes, too. Paralives, a highly-anticipated, long-in-development Sims-like, recently set a release date for its Steam Early Access launch. The Sims 4 itself just dropped the first trailer for Enchanted by Nature, the game’s next expansion which is set to arrive in July.
    #inzoi #sale #first #time #celebrate
    inZOI is on sale for the first time to celebrate the big June update
    www.vg247.com
    0 Comments ·0 Shares ·0 Reviews
  • OThink-R1: A Dual-Mode Reasoning Framework to Cut Redundant Computation in LLMs

    The Inefficiency of Static Chain-of-Thought Reasoning in LRMs
Recent large reasoning models (LRMs) achieve top performance by using detailed chain-of-thought (CoT) reasoning to solve complex tasks. However, many of the simple tasks they handle could be solved by smaller models with fewer tokens, making such elaborate reasoning unnecessary. This echoes human thinking, where we use fast, intuitive responses for easy problems and slower, analytical thinking for complex ones. While LRMs mimic slow, logical reasoning, they generate significantly longer outputs, thereby increasing computational cost. Current methods for reducing reasoning steps lack flexibility, limiting models to a single fixed reasoning style. There is a growing need for adaptive reasoning that adjusts effort according to task difficulty.
    Limitations of Existing Training-Based and Training-Free Approaches
    Recent research on improving reasoning efficiency in LRMs can be categorized into two main areas: training-based and training-free methods. Training strategies often use reinforcement learning or fine-tuning to limit token usage or adjust reasoning depth, but they tend to follow fixed patterns without flexibility. Training-free approaches utilize prompt engineering or pattern detection to shorten outputs during inference; however, they also lack adaptability. More recent work focuses on variable-length reasoning, where models adjust reasoning depth based on task complexity. Others study “overthinking,” where models over-reason unnecessarily. However, few methods enable dynamic switching between quick and thorough reasoning—something this paper addresses directly. 
    Introducing OThink-R1: Dynamic Fast/Slow Reasoning Framework
Researchers from Zhejiang University and OPPO have developed OThink-R1, a new approach that enables LRMs to switch intelligently between fast and slow thinking, much like humans do. By analyzing reasoning patterns, they identified which steps are essential and which are redundant. With the help of another model acting as a judge, they trained LRMs to adapt their reasoning style to task complexity. Their method reduces unnecessary reasoning by over 23% without losing accuracy. Using a dual-reference loss function and curated fine-tuning datasets, OThink-R1 outperforms previous models in both efficiency and performance on various math and question-answering tasks.
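To make the judge-based pruning step more concrete, here is a minimal Python sketch of how chain-of-thought steps could be filtered into a training target. The `prune_reasoning` helper and the `judge` callable are hypothetical illustrations, not names from the released code, and the stand-in judge is a toy placeholder.

```python
# Minimal sketch (not the authors' code): filter chain-of-thought steps with a
# judge model and keep only the essential ones as the supervision target.
from typing import Callable, List

def prune_reasoning(question: str,
                    steps: List[str],
                    answer: str,
                    judge: Callable[[str, str], str]) -> str:
    """Keep steps the judge labels "essential"; if none survive, fall back to
    the bare answer, which plays the role of the fast-thinking target."""
    kept = [s for s in steps if judge(question, s) == "essential"]
    return "\n".join(kept + [answer]) if kept else answer

# Toy usage with a trivial stand-in judge (purely illustrative).
toy_judge = lambda q, step: "redundant" if step.startswith("Double-check") else "essential"
target = prune_reasoning(
    "What is 2 + 3?",
    ["Add the numbers: 2 + 3 = 5.", "Double-check by counting again."],
    "The answer is 5.",
    toy_judge,
)
print(target)
```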
    System Architecture: Reasoning Pruning and Dual-Reference Optimization
    The OThink-R1 framework helps LRMs dynamically switch between fast and slow thinking. First, it identifies when LRMs include unnecessary reasoning, like overexplaining or double-checking, versus when detailed steps are truly essential. Using this, it builds a curated training dataset by pruning redundant reasoning and retaining valuable logic. Then, during fine-tuning, a special loss function balances both reasoning styles. This dual-reference loss compares the model’s outputs with both fast and slow thinking variants, encouraging flexibility. As a result, OThink-R1 can adaptively choose the most efficient reasoning path for each problem while preserving accuracy and logical depth. 
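A rough way to picture the dual-reference objective is a supervised loss on the pruned target plus KL terms that keep the policy close to frozen fast- and slow-thinking reference distributions. The PyTorch sketch below is an illustration under those assumptions; the function name, the weights `alpha` and `beta`, and the KL direction are choices made here, not the paper's exact formulation.

```python
# Hedged sketch of a dual-reference training objective, not the authors' exact loss.
import torch
import torch.nn.functional as F

def dual_reference_loss(policy_logits: torch.Tensor,    # [batch, seq, vocab]
                        fast_ref_logits: torch.Tensor,  # [batch, seq, vocab], frozen
                        slow_ref_logits: torch.Tensor,  # [batch, seq, vocab], frozen
                        target_ids: torch.Tensor,       # [batch, seq]
                        alpha: float = 0.1,
                        beta: float = 0.1) -> torch.Tensor:
    # Token-level cross-entropy against the pruned supervision targets.
    ce = F.cross_entropy(policy_logits.transpose(1, 2), target_ids)
    log_policy = F.log_softmax(policy_logits, dim=-1)
    # F.kl_div(log_p, q) computes KL(q || p): divergence between each frozen
    # reference distribution and the policy (direction chosen for this sketch).
    kl_fast = F.kl_div(log_policy, F.softmax(fast_ref_logits.detach(), dim=-1),
                       reduction="batchmean")
    kl_slow = F.kl_div(log_policy, F.softmax(slow_ref_logits.detach(), dim=-1),
                       reduction="batchmean")
    return ce + alpha * kl_fast + beta * kl_slow

# Toy usage with random tensors just to show the shapes involved.
B, T, V = 2, 8, 50
loss = dual_reference_loss(torch.randn(B, T, V), torch.randn(B, T, V),
                           torch.randn(B, T, V), torch.randint(0, V, (B, T)))
```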

    Empirical Evaluation and Comparative Performance
    The OThink-R1 model was tested on simpler QA and math tasks to evaluate its ability to switch between fast and slow reasoning. Using datasets like OpenBookQA, CommonsenseQA, ASDIV, and GSM8K, the model demonstrated strong performance, generating fewer tokens while maintaining or improving accuracy. Compared to baselines such as NoThinking and DualFormer, OThink-R1 demonstrated a better balance between efficiency and effectiveness. Ablation studies confirmed the importance of pruning, KL constraints, and LLM-Judge in achieving optimal results. A case study illustrated that unnecessary reasoning can lead to overthinking and reduced accuracy, highlighting OThink-R1’s strength in adaptive reasoning. 
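For readers who want to reproduce this kind of comparison, the efficiency/accuracy trade-off can be tracked with a simple loop that records answer accuracy and the average number of generated tokens. The sketch below is generic and assumes hypothetical `generate` and `tokenize` callables standing in for whatever model and tokenizer are being measured.

```python
# Illustrative sketch (an assumption, not the paper's evaluation harness):
# compare models on accuracy and average generated-token count.
from typing import Callable, List, Tuple

def evaluate(generate: Callable[[str], str],
             tokenize: Callable[[str], List[str]],
             dataset: List[Tuple[str, str]]) -> Tuple[float, float]:
    """Return (accuracy, mean generated tokens) over (question, answer) pairs."""
    correct, total_tokens = 0, 0
    for question, answer in dataset:
        output = generate(question)
        correct += int(answer.strip() in output)
        total_tokens += len(tokenize(output))
    n = len(dataset)
    return correct / n, total_tokens / n

# Toy usage with whitespace tokenization and a dummy generator (assumptions).
acc, avg_tokens = evaluate(
    generate=lambda q: "The answer is 5",
    tokenize=str.split,
    dataset=[("What is 2 + 3?", "5")],
)
print(acc, avg_tokens)
```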

    Conclusion: Towards Scalable and Efficient Hybrid Reasoning Systems
    In conclusion, OThink-R1 is a large reasoning model that adaptively switches between fast and slow thinking modes to improve both efficiency and performance. It addresses the issue of unnecessarily complex reasoning in large models by analyzing and classifying reasoning steps as either essential or redundant. By pruning the redundant ones while maintaining logical accuracy, OThink-R1 reduces unnecessary computation. It also introduces a dual-reference KL-divergence loss to strengthen hybrid reasoning. Tested on math and QA tasks, it cuts down reasoning redundancy by 23% without sacrificing accuracy, showing promise for building more adaptive, scalable, and efficient AI reasoning systems in the future. 

    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 100k+ ML SubReddit and Subscribe to our Newsletter.
Sana Hassan
Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.
    #othinkr1 #dualmode #reasoning #framework #cut
    OThink-R1: A Dual-Mode Reasoning Framework to Cut Redundant Computation in LLMs
    www.marktechpost.com
    0 Comments ·0 Shares ·0 Reviews