• Sharpen the story – a design guide to start-ups’ pitch decks

    In early-stage start-ups, the pitch deck is often the first thing investors see. Sometimes, it’s the only thing. And yet, it rarely gets the same attention as the website or the socials. Most decks are pulled together last minute, with slides that feel rushed, messy, or just off.
    That’s where designers can really make a difference.
    The deck might seem like just another task, but it’s a chance to work on something strategic early on and help shape how the company is understood. It offers a rare opportunity to collaborate closely with copywriters, strategists and the founders to turn their vision into a clear and convincing story.
    Founders bring the vision, but more and more, design and brand teams are being asked to shape how that vision is told, and sold. So here are five handy things we’ve learned at SIDE ST for the next time you’re asked to design a deck.
    Think in context
    Designers stepping into pitch work should begin by understanding the full picture – who the deck is for, what outcomes it’s meant to drive and how it fits into the broader brand and business context. Their role isn’t just to make things look good, but to prioritise clarity over surface-level aesthetics.
    It’s about getting into the founders’ mindset, shaping visuals and copy around the message, and connecting with the intended audience. Every decision, from slide hierarchy to image selection, should reinforce the business goals behind the deck.
    Support the narrative
    Visuals are more subjective than words, and that’s exactly what gives them power. The right image can suggest an idea, reinforce a value, or subtly shift perception without a single word.
    Whether it’s hinting at accessibility, signalling innovation, or grounding the product in context, design plays a strategic role in how a company is understood. It gives designers the opportunity to take centre stage in the storytelling, steering perception through visual choices alone.
    But that influence works both ways. Used thoughtlessly, visuals can distort the story, suggesting the wrong market, implying a different stage of maturity, or confusing people about the product itself. When used with care, they become a powerful design tool to sharpen the narrative and spark interest from the very first slide.
    Keep it real
    Stock photos can be tempting. They’re high-quality and easy to drop in, especially when the real images a start-up has can be grainy, unfinished, or simply not there yet.
    But in early-stage pitch decks, they often work against your client. Instead of supporting the story, they flatten it, and rarely reflect the actual team, product, or context.
    This is your chance as a designer to lean into what’s real, even if it’s a bit rough. Designers can elevate even scrappy assets with thoughtful framing and treatment, turning rough imagery into a strength. In early-stage storytelling, “real” often resonates more than “perfect.”
    Pay attention to the format
    Even if you’re brought in just to design the deck, don’t treat it as a standalone piece. It’s often the first brand touchpoint investors will see, but it won’t be the last. They’ll go on to check the website, scroll through social posts, and form an impression based on how it all fits together.
    Early-stage startups might not have full brand guidelines in place yet, but that doesn’t mean there’s no need for consistency. In fact, it gives designers a unique opportunity to lay the foundation. A strong, thoughtful deck can help shape the early visual language and give the team something to build on as the brand grows.
    Before you hit export
    For designers, the deck isn’t just another deliverable. It’s an early tool that shapes and impacts investor perception, internal alignment and founder confidence. It’s a strategic design moment to influence the trajectory of a company before it’s fully formed.
    Designers who understand the pressure, pace and uncertainty founders face at this stage are better equipped to deliver work that resonates. This is about more than simply polishing slides: it’s about helping early-stage teams tell a sharper, more human story when it matters most.
    Maor Ofek is founder of SIDE ST, a brand consultancy that works mainly with start-ups. 
    WWW.DESIGNWEEK.CO.UK
  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a new method for visual localization, presented at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose.
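
    The “smart pooling” step above is a learned module inside the paper’s network, but the underlying idea can be sketched simply: score each height slice of the 3D features per BEV cell, then take a softmax-weighted combination over height. The shapes, names, and the use of plain softmax weighting here are our illustrative assumptions, not details taken from the paper.

    ```python
    import numpy as np

    def softmax(x, axis=0):
        # Numerically stable softmax.
        e = np.exp(x - x.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    def pool_to_bev(features, scores):
        """Collapse per-height features into a single BEV feature per cell.

        features: (H, N, C) array - H height slices, N BEV cells, C channels.
        scores:   (H, N) learned importance logits (here: any array we supply).
        Returns:  (N, C) BEV features, a convex combination over height, so the
        model can favour e.g. a rooftop edge over a ground marking per cell.
        """
        w = softmax(scores, axis=0)            # weights sum to 1 over height
        return (w[:, :, None] * features).sum(axis=0)
    ```

    With uniform logits this reduces to a plain average over height; with one strongly dominant logit per cell it approaches a hard selection of that height slice.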

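    The final step of the pipeline is classical geometry. A minimal NumPy sketch of weighted 2-D Procrustes alignment, which recovers a 3-DoF pose (x, y translation plus yaw) from matched point pairs via SVD, might look like this (variable names and the uniform-weight usage are ours, not the paper’s):

    ```python
    import numpy as np

    def procrustes_2d(ground_pts, aerial_pts, weights):
        """Weighted rigid alignment: find R, t minimizing
        sum_i w_i * ||R @ g_i + t - a_i||^2 for matched 2D points."""
        w = weights / weights.sum()
        mu_g = (w[:, None] * ground_pts).sum(axis=0)   # weighted centroids
        mu_a = (w[:, None] * aerial_pts).sum(axis=0)
        G = ground_pts - mu_g
        A = aerial_pts - mu_a
        # Weighted cross-covariance, then SVD for the optimal rotation.
        H = (w[:, None] * G).T @ A
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        t = mu_a - R @ mu_g
        yaw = np.arctan2(R[1, 0], R[0, 0])
        return R, t, yaw
    ```

    In the paper the weights would come from match confidences; with noise-free correspondences the closed-form solution recovers the pose exactly, which is what makes it attractive as a differentiable, interpretable final stage.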
    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 100k+ ML SubReddit and Subscribe to our Newsletter.
    Jean-marc Mommessin is an AI business executive. He leads and accelerates growth for AI-powered solutions and started a computer vision company in 2006. He is a recognized speaker at AI conferences and has an MBA from Stanford.
    WWW.MARKTECHPOST.COM
    EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments
    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization during CVPR 2025 Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset. Key Takeaways: Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task. Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map. Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models. Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. 
It achieves this using only the final camera pose as a supervisory signal. Challenge: Seeing the World from Two Different Angles The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings. FG2: Matching Fine-Grained Features The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map. Here’s a breakdown of their innovative pipeline: Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment. Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view. 
Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose. Unprecedented Performance and Interpretability The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research. Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremenly valuable for building trust in safety-critical autonomous systems. “A Clearer Path” for Autonomous Navigation The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them. Check out the Paper. All credit for this research goes to the researchers of this project. 
Also, feel free to follow us on Twitter and don’t forget to join our 100k+ ML SubReddit and Subscribe to our Newsletter.
Jean-marc Mommessin is an AI business executive. He leads and accelerates growth for AI-powered solutions and started a computer vision company in 2006. He is a recognized speaker at AI conferences and has an MBA from Stanford.
  • ByteDance Researchers Introduce DetailFlow: A 1D Coarse-to-Fine Autoregressive Framework for Faster, Token-Efficient Image Generation

    Autoregressive image generation has been shaped by advances in sequential modeling, originally seen in natural language processing. This field focuses on generating images one token at a time, similar to how sentences are constructed in language models. The appeal of this approach lies in its ability to maintain structural coherence across the image while allowing for high levels of control during the generation process. As researchers began to apply these techniques to visual data, they found that structured prediction not only preserved spatial integrity but also supported tasks like image manipulation and multimodal translation effectively.
    Despite these benefits, generating high-resolution images remains computationally expensive and slow. A primary issue is the number of tokens needed to represent complex visuals. Raster-scan methods that flatten 2D images into linear sequences require thousands of tokens for detailed images, resulting in long inference times and high memory consumption. Models like Infinity need over 10,000 tokens for a 1024×1024 image. This becomes unsustainable for real-time applications or when scaling to more extensive datasets. Reducing the token burden while preserving or improving output quality has become a pressing challenge.

    Efforts to mitigate token inflation have led to innovations like next-scale prediction seen in VAR and FlexVAR. These models create images by predicting progressively finer scales, which imitates the human tendency to sketch rough outlines before adding detail. However, they still rely on hundreds of tokens—680 in the case of VAR and FlexVAR for 256×256 images. Moreover, approaches like TiTok and FlexTok use 1D tokenization to compress spatial redundancy, but they often fail to scale efficiently. For example, FlexTok’s gFID increases from 1.9 at 32 tokens to 2.5 at 256 tokens, highlighting a degradation in output quality as the token count grows.
    Researchers from ByteDance introduced DetailFlow, a 1D autoregressive image generation framework. This method arranges token sequences from global to fine detail using a process called next-detail prediction. Unlike traditional 2D raster-scan or scale-based techniques, DetailFlow employs a 1D tokenizer trained on progressively degraded images. This design allows the model to prioritize foundational image structures before refining visual details. By mapping tokens directly to resolution levels, DetailFlow significantly reduces token requirements, enabling images to be generated in a semantically ordered, coarse-to-fine manner.

    The mechanism in DetailFlow centers on a 1D latent space where each token contributes incrementally more detail. Earlier tokens encode global features, while later tokens refine specific visual aspects. To train this, the researchers created a resolution mapping function that links token count to target resolution. During training, the model is exposed to images of varying quality levels and learns to predict progressively higher-resolution outputs as more tokens are introduced. It also implements parallel token prediction by grouping sequences and predicting entire sets at once. Since parallel prediction can introduce sampling errors, a self-correction mechanism was integrated. This system perturbs certain tokens during training and teaches subsequent tokens to compensate, ensuring that final images maintain structural and visual integrity.
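To make the token-to-resolution idea concrete, here is a hedged sketch of a coarse-to-fine decode schedule. The square-root mapping, the base values, and the group size below are illustrative placeholders, not DetailFlow's actual resolution mapping function; the point is only that each additional group of tokens is associated with a higher target resolution, so early tokens carry global structure and later ones add detail.

```python
import math

def tokens_to_resolution(n_tokens, base_tokens=128, base_res=256):
    """Illustrative mapping (not the paper's exact function): target
    resolution grows with the square root of the token count, so each
    extra token contributes incrementally finer detail."""
    return int(base_res * math.sqrt(n_tokens / base_tokens))

def coarse_to_fine_schedule(total_tokens=128, group=16):
    """Emit tokens in parallel groups; each group raises the target
    resolution, mimicking a coarse-to-fine generation order."""
    schedule = []
    for used in range(group, total_tokens + 1, group):
        schedule.append((used, tokens_to_resolution(used)))
    return schedule

for used, res in coarse_to_fine_schedule():
    print(f"{used:4d} tokens -> target {res}x{res}")
```

Running this prints a monotone schedule from a coarse target up to the full 256x256, which is the shape of the behavior the paper describes: decoding can stop early with fewer tokens and still yield a complete, lower-detail image.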
    The results from the experiments on the ImageNet 256×256 benchmark were noteworthy. DetailFlow achieved a gFID score of 2.96 using only 128 tokens, outperforming VAR at 3.3 and FlexVAR at 3.05, both of which used 680 tokens. Even more impressive, DetailFlow-64 reached a gFID of 2.62 using 512 tokens. In terms of speed, it delivered nearly double the inference rate of VAR and FlexVAR. A further ablation study confirmed that the self-correction training and semantic ordering of tokens substantially improved output quality. For example, enabling self-correction dropped the gFID from 4.11 to 3.68 in one setting. These metrics demonstrate both higher quality and faster generation compared to established models.

    By focusing on semantic structure and reducing redundancy, DetailFlow presents a viable solution to long-standing issues in autoregressive image generation. The method’s coarse-to-fine approach, efficient parallel decoding, and ability to self-correct highlight how architectural innovations can address performance and scalability limitations. Through their structured use of 1D tokens, the researchers from ByteDance have demonstrated a model that maintains high image fidelity while significantly reducing computational load, making it a valuable addition to image synthesis research.

    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 95k+ ML SubReddit and Subscribe to our Newsletter.
    Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields like biomaterials and biomedical science. With a strong background in Material Science, he is exploring new advancements and creating opportunities to contribute.
  • Inside The AI-Powered Modeling Agency Boom — And What Comes Next

    From lifelike avatars to automated fan interactions, AI is remaking digital modeling. But can tech scale intimacy — or will it erode the human spark behind the screen? (Image credit: Getty)
    The AI boom has been defined by unprecedented innovation across nearly every sector. From improving flight punctuality through AI-powered scheduling to detecting early markers of Alzheimer’s disease, AI is modifying how we live and work. And the advertising world isn’t left out.

    In March of this year, OpenAI’s GPT-4o sent the internet into a frenzy with its ability to generate Studio Ghibli-style images. The model produces realistic, emotionally nuanced visuals from a series of prompts — a feat that has led some to predict the demise of visual arts as we know them. While such conclusions may be premature, there’s growing belief among industry players that AI could transform how digital model agencies operate.

    That belief isn’t limited to one startup. A new class of AI-powered agencies — including FanPro, Lalaland.ai, Deep Agency and The Diigitals — is testing whether modeling can be automated without losing its creative edge. Some use AI to generate lifelike avatars. Others offer virtual photo studios, CRM — customer relationship management — integrations, or creator monetization tools. Together, they reflect a big shift in how digital modeling agencies think about labor, revenue and scale.

    FanPro — founded by Tyron Humphris in 2023 to help digital model agencies scale efficiently — offers a striking case study. Humphris said in an interview that the fully self-funded company reached million in revenue within its first 90 days and crossed eight figures by 2024, all while maintaining a lean team by automating nearly every process.

    As Humphris noted, “the companies that will lead this next decade won’t just be the ones with the best marketing or biggest budgets. They’ll be the ones who use AI, automation and systems thinking to scale with precision, all while staying lean and agile.”
    That explains the big bet that startups like FanPro are making — but how far can it really go? And why should digital model agencies care in the first place?
    Automation In Digital Model Agencies
    To understand how automation works in the digital modeling industry — a fast-rising corner of the creator economy — it helps to understand what it’s replacing. A typical digital model agency juggles five or more monetization platforms per creator — from OnlyFans and Fansly to TikTok and Instagram. But behind every viral post is a grind of scheduling, analytics, upselling, customer support and retention. The average agency may need 10 to 15 contractors to manage a roster of just a few high-performing creators.
    These agencies oversee a complex cycle: content creation, onboarding, audience engagement and sales funnel optimization, usually across several monetization platforms. According to Humphris, there’s often a misconception that running a digital model agency is just about posting pretty pictures. But in reality, he noted, it’s more. “It’s CRM, data science and psychology all wrapped in one. If AI can streamline even half of that, it’s a game-changer.”

    That claim reflects a growing pain point in the creator economy, where agencies swim in an ocean of tools in an attempt to monetize attention for creators while simultaneously managing marketing, sales and customer support. For context, a 2024 Clevertouch Consulting study revealed that 54% of marketers use more than 50 tools to manage operations — many stitched together with Zapier or manual workarounds.
    Tyron Humphris, founder of FanPro (Image credit: FanPro)
    But, according to Humphris, “no matter how strong your offer is, if you don’t have systems, processes and accountability built into the business, it’s going to collapse under pressure.”
    And that’s where AI steps in. Beyond handling routine tasks, large language models and automation stacks now allow agencies to scale operations while staying lean. With minimal human input, agencies can schedule posts, auto-respond to DMs, upsell subscriptions, track social analytics and manage retention flows. What once required a full team of marketers, virtual assistants and sales reps can now be executed by a few well-trained AI agents.
    FanPro claims that over 90% of its operations — from dynamic pricing to fan interactions — are now handled by automation. Likewise, Deep Agency allows creators to generate professional-grade photo shoots without booking a studio or hiring staff and Lalaland.ai helps fashion brands generate AI avatars to reduce production costs and increase diversity in representation.
    A Necessary Human Touch
    Still, not everyone is convinced that AI can capture the nuance of digital intimacy. Some experts have raised concerns that hyper-automation in creator-driven industries could flatten human expression into predictable engagement patterns, risking long-term user loyalty.
    A 2024 ContentGrip study of 1,000 consumers found 80% of respondents would likely switch brands that rely heavily on AI-generated emails, citing a loss of authenticity. Nearly half said such messages made them feel “less connected” to the brand.
    Humphris doesn’t disagree.
    “AI can do a lot, but it needs to be paired with someone who understands psychology,” he said. “We didn’t scale because we had the best tech. We scaled because we understood human behavior and built systems that respected it.”
    Humphris’ sentiment isn’t a mere anecdote but one rooted in research. For example, a recent study by Northeastern University showed that AI influencers often reduce brand trust — especially when users aren’t aware the content is AI-generated. The implication is clear: over-automating the wrong parts of human interaction can backfire.
    Automation doesn’t — and shouldn’t — mean that human input becomes obsolete. Rather, as many industry experts have noted, it will enhance efficiency but not replace empathy. While AI can process data at speed and generate alluring visuals, it cannot replicate human creativity or emotional intelligence. Neither does AI know the psychology of human behavior like humans do, a trait Humphris credits for their almost-instant success.
    What’s Working — And What’s Not
    Lalaland.ai and The Diigitals have earned praise for enhancing inclusivity, enabling brands to feature underrepresented body types, skin tones and styles. Meanwhile, FanPro focuses on building AI “growth engines” for agencies — full-stack systems that combine monetization tools, CRM and content flows.
    But not all reactions have been positive.
    In November 2024, fashion brand Mango faced backlash for its use of AI-generated models, which critics called “false advertising” and “a threat to real jobs.” The New York Post covered the fallout in detail, highlighting how ethical lines are still being drawn.
    As brands look to balance cost savings with authenticity, some have begun labeling AI-generated content more clearly — or embedding human oversight into workflows, rather than removing it.
    Despite offering an automation stack, FanPro itself wasn’t an immediate adopter of automation in its processes. But, as Humphris noted, embracing AI made all the difference for the company. “If we had adopted AI and automation earlier, we would’ve hit 8 figures much faster and with far less stress,” he noted.
    Automation In The New Era
    FanPro is a great example of how AI integration, when done the right way, could be a profitable venture for digital model agencies.
    Whether or not the company’s model becomes the blueprint for AI-first digital agencies, it’s clear that there’s a big shift in the creator economy, where automation isn’t only viewed as a time-saver, but also as a foundational pillar for businesses.
    As digital model agencies lean further into an AI-centric future, the bigger task is remembering what not to automate — the spark of human connection that built the industry in the first place.
    “In this new era of automation,” Humphris said, “the smartest agencies won’t just ask what AI can do. They’ll ask what it shouldn’t.”
As digital model agencies lean further into an AI-centric future, the bigger task is remembering what not to automate — the spark of human connection that built the industry in the first place. “In this new era of automation,” Humphris said, “the smartest agencies won’t just ask what AI can do. They’ll ask what it shouldn’t.” #inside #aipowered #modeling #agency #boom
    WWW.FORBES.COM
    Inside The AI-Powered Modeling Agency Boom — And What Comes Next
  • 6 "Unexpected Spots" You Should Tidy Up This Summer, According to Pro Organizers

    The hibernation haze of winter has finally cleared, which means you're probably waking up right now to all the clutter you've accumulated throughout the season—and you’re not alone. According to Anton Liakhov, an interior designer, organizer, and founder of Roomtery, spring and summer's first rays of sunshine have a direct effect on your psyche. The warm light, paired with blue skies and cool breezes, inspires a fresh start, a full social calendar, and the dire need to clean house. “It's like waking up to discover we're surrounded by items we no longer need,” Liakhov says. “That seasonal nudge gives us permission to let go and begin again for brighter, lighter living.” While things like gift-wrapping essentials, cold-weather accessories, seasonal decor, cozy candles, and throw blankets create a snug ambience, they also clutter your space. Spring, he says, triggers an awakening to it all.

    So, where do you start? Spring and summer are optimal times to tackle areas like entryways, mudrooms, and garages, as well as outdoor storage. If it’s warm enough to open the doors and work outside, “it’s time to sort through patio furniture, gardening equipment, and outdoor toys," says Liakhov. Of course, what you choose to toss is up to your discretion. The best things to declutter at any time of year are the things taking up physical and mental space. Still, if you’re looking for specific items, we asked three professional organizers which things you should part with to keep your house spick-and-span for the warm-weather months ahead.

    Holiday Wrapping Essentials

    The holiday season is still far off in the distance, so use this time as an opportunity to get rid of excess wrapping paper, tissue paper, gift boxes, and bags. "We accumulate more Christmas wrap than we'll ever need," says Liakhov. Save what you will realistically use, and toss or donate the rest.

    Worn-Out Outerwear

    Speaking of seasonal items to sift through, Shantae Duckworth, founder of Shantaeize Your Space, says spring is the perfect time to re-evaluate the winter coats hanging in your closet. “If you didn’t reach for it this winter, you probably won’t next year either,” Duckworth says. In other words, ditch the bulk. If you're tight on space, consider using vacuum compression bags to flatten your remaining outerwear and store it flat in the attic or basement until temperatures dip again.

    Spring/Summer Clothes You Don’t Wear

    One of the perks of an early summer clean is streamlining your seasonal closet before the heat really settles in. According to Nick Friedman, cofounder of College HUNKS Hauling Junk & Moving, last year’s untouched spring or summer clothes have no place in your closet or dresser drawers. You can tackle your closet by adhering to Friedman’s golden rule: if you didn’t wear it in the past 12 months, donate it. From there, Friedman recommends curating your wardrobe “like a seasonal menu; keep only what you reach for on display and use vacuum bags or rolling under-the-bed bins to store off-season items.”

    Unused Beauty Items

    In addition to clothing, Friedman says that beauty cabinets also require frequent decluttering, especially ahead of the summer months when you typically favor sunscreen and skincare products over makeup. “If your sunscreen has expired, it’s not doing you any favors,” he points out. Old SPF, makeup, and skincare products can go, and unused or unopened items can usually be donated. “Not only does this clear up bathroom clutter,” Friedman adds, “but it also protects your health.”

    Expired Pantry & Freezer Foods

    Liakhov and Duckworth agree: springtime is the time to clean out your pantry and freezer of any expired items, like aged spices, canned foods, and sauces. You can even take it one step further and recycle any containers that are past their prime, including those with cracked lids or warped bottoms.

    Seasonal Sports Equipment

    Winter sports aficionados, this one’s for you. Liakhov lists snowshoes, sleds, and ski helmets as equipment to evaluate before storing them away for the season. “If it's broken, hasn't been touched in years, or is no longer a part of your lifestyle, let it go before it gets another dust coating,” he adds. While you're at it, give your summer gear a once-over so you can replace anything broken or faulty before it all disappears from stores.
    WWW.HOUSEBEAUTIFUL.COM
  • RoadCraft Explained: Your Complete Guide to Building Roads

    The developers of SnowRunner have combined that game’s vehicular physics simulation with design elements from the building-simulator genre to bring us RoadCraft, a unique game that has you using an array of vehicles and construction machinery to do everything from clearing debris to rebuilding roads and laying cable. Road construction is a primary aspect of gameplay and involves multiple steps: resource collection, logistical transport, route planning, and the actual road building.

    While there is a very in-depth tutorial in-game that holds your hand every step of the way, there are a lot of nuances to road construction that you may not be aware of, and this RoadCraft guide has everything you need to know about those game mechanics.

    Scout Vehicle Selection

    The Scout is a critical vehicle for its scanning and winching capabilities, and while there are seven to choose from, only two are available initially. Of these first two Scouts, the Armiger Thunder IV should be your preferred choice due to its higher mobility and shorter wheelbase. Your eventual Scout of choice should be the Tuz 119 “Lynx”, which becomes available in the Deluge campaign for a price of nearly 25,000. The winch will come in handy as you clear debris to establish routes and scan for terrain and objects.

    Field Recovery Vehicles

    These vehicles act as mobile spawn points for your other utility vehicles, which is tremendously helpful in situations where you need multiple pieces of equipment at a given location. Drive one of these to a work site, such as a road construction objective, and you can spawn all task-related vehicles there at the cost of Recovery Tokens. The free KHAN Lo “Strannik” Field Service Vehicle will more than suffice for this purpose, while also being equipped with a winch for manual towing.

    Equipment Transporters

    In the absence of fuel tokens, vehicle haulers can also be used to manually deliver multiple vehicles to a work site. The Zikz 605E Heavy Equipment Transporter, at a cost of 25,000, will be your preferred choice and will serve you well throughout the campaign. However, the Step 39331 “Pike” Light Equipment Transporter can function just as well, since it can use both its flatbed and its winch to haul two vehicles at a time, with maximum tonnage capacity being the true limitation.

    Crane Trucks

    A great deal of what you will be doing in the game involves picking items up off the ground with a crane and placing them onto a flatbed for transportation. While you are provided with two separate vehicles to accomplish this, it is quite a tedious process, especially for solo players. This is where the Mule T1 Cargo Crane Truck, at a price of 27,000, is immediately useful; it will save you a great deal of time and effort across multiple instances in the campaign. Eventually you will purchase better crane vehicles, but the older ones will never lose their utility, as you can leave them in place at your various facilities to act as on-site loaders.

    In a pinch, Crane Trucks can serve as a winch vehicle for any situation that may arise.

    Road Construction

    Your AI convoys are going to get hung up on every little obstacle along their dirt path routes, and this is where good road building comes into play. Locate a quarry source for sand, and begin filling in the route with your loaded Dump Truck.

    Next, use your Dozer set to sand leveling or, better yet, your Roller to perform multiple passes and flatten the sand, with two passes being the absolute minimum. Proceed as slowly and carefully as you can on the second and subsequent passes, listening for the audio feedback while traversing the route: it sounds different when traveling over fully flattened ground than over slightly uneven terrain.

    A recommended method is to go down the center twice, once in each direction, in order to perfect the two ends. Then travel once along each side to spread the sand evenly, followed by a single final pass down the center again.

    Asphalt Paving

    Strictly speaking, asphalt paving is unnecessary in the vast majority of situations, unless required by mission objectives. However, if you do elect to pave all your roads, there are some important steps to take.

    The sand must be perfectly flattened, or else the paving machines will frequently snag on unseen obstacles. One way to mitigate this is to use the Paver while traveling in reverse. While this may seem odd, since the asphalt is deposited from the front and flattened by the rear, the game still allows it.

    An even better option is to hoist the Paver with a mobile Crane and float it low over the planned route, and then drive the Crane along the path instead, which is significantly faster and avoids physics bugs.

    Deploy the Roller next, and use the same leveling process as you do with sand: down the center once in each direction, then each side once, and one last time down the center again.

    Leveling Uneven Roads

    Failure to properly level the sand before laying down asphalt can leave significant bumps in your roads. You can still recover from this without resorting to your Dozer’s Asphalt Destruction grader mode: take your Roller out instead and perform multiple passes over the bump, and it will flatten out eventually.

    Plotting Routes For AI Convoys

    While creating routes for your transport vehicles to follow, be sure to set them along one side of the road rather than down the center. This mitigates head-on collisions between AI traffic traveling in opposite directions, as their pathing can be quite poor. Also avoid placing an excessive number of waypoints, since each one is interpreted as a direction change. While the vehicles will not get turned completely around, they can bug out and end up in an environmental hazard.
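    Keeping a route to one side of the road amounts to shifting each waypoint perpendicular to the direction of travel. The sketch below shows that idea with simple 2D coordinates; it is a hypothetical illustration, as RoadCraft does not expose its route data like this:

```python
import math

# Offset a polyline of waypoints to one side by a fixed lateral distance.
# Purely illustrative: coordinates and units here are made up.

def offset_route(waypoints, offset):
    """Shift each waypoint `offset` units to the right of the travel direction."""
    shifted = []
    for i, (x, y) in enumerate(waypoints):
        # Estimate the travel direction at this point from its neighbours.
        nx, ny = waypoints[min(i + 1, len(waypoints) - 1)]
        px, py = waypoints[max(i - 1, 0)]
        dx, dy = nx - px, ny - py
        length = math.hypot(dx, dy) or 1.0
        # Move along the right-hand perpendicular of the travel direction.
        shifted.append((x + offset * dy / length, y - offset * dx / length))
    return shifted

route = [(0, 0), (10, 0), (20, 0)]          # a straight stretch of road
print(offset_route(route, 2.0))             # the same route, 2 units to one side
```

Two convoys running opposite directions would each get their own offset side, which is exactly why the head-on collisions stop.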

    Be sure to delete the routes once you have completed the related objectives and collected all of the rewards, in order to maintain a clean infrastructure map.

    That is everything you need to know about constructing proper roads in RoadCraft.
    GAMINGBOLT.COM
    RoadCraft Explained: Your Complete Guide to Building Roads
  • Thrustmaster T598 + Hypercar Wheel review: a great value PC/PS5 sim racing wheel and pedals built on novel tech

    Direct axial drive impresses, despite limited software and a firmly mid stock wheel.

    Image credit: Digital Foundry

    Review

    by Will Judd
    Deputy Editor, Digital Foundry

    Published on June 1, 2025

    We've seen an explosion in the number of affordable direct drive racing wheels over the past couple of years, with Fanatec and Moza offering increasingly inexpensive options that still deliver the precise, quick and long-lasting force feedback that cheaper gear- or belt-driven wheels can't match.
    Now, Thrustmaster is intruding on that territory with the T598, a PlayStation/PC direct drive wheel, wheel base and pedal bundle that costs just £449. That's on a similar level to the PC-only £459 Moza R5 bundle and the €399 Fanatec CSL DD bundle, so how does the newcomer compare? And what's changed from the more expensive T818 we reviewed before?
    We've been testing the T598 - and the upgraded HyperCar wheel that's available as an optional extra - for weeks to find out. Our full review follows, so read on - or check out the quick links below to jump to what you're most interested in.


    Thrustmaster T598 wheel base review: direct axial drive vs traditional direct drive
    Interestingly, the T598 arguably comes with a more advanced DD motor than the more expensive T818 does. It uses a "direct axial drive" versus the standard "direct radial drive", where the magnets are aligned parallel to the wheel shaft rather than perpendicular. This ought to allow for more efficient torque generation, producing less waste heat, minimising precision-sapping magnetic interference and requiring less copper to produce. It also means the T598 can "overshoot" to deliver more than its rated 5Nm of constant torque for short periods.
    However, this design also requires a physically taller yet slimmer enclosure, potentially blocking the view forward and requiring a different bolt pattern to attach the base to your desk or sim racing cockpit - both of which are slight annoyances with the T598. Interestingly, you can also feel a slight vibration and hear a quiet crackling noise emanating from the T598 base while idle - something I haven't heard or felt with other direct drive motors, and which is reportedly inherent to this design.

    There's a lot going on inside this wheel base - including some genuine innovation. | Image credit: Thrustmaster/Digital Foundry

    Thrustmaster has written a pair of white papers to explain why its take on direct drive is better than what came before. | Image credit: Thrustmaster

    In terms of the force feedback itself, Thrustmaster have achieved something quite special here. In some titles with a good force feedback implementation - Assetto Corsa, Assetto Corsa Evo and F1 23 stood out to me here - the wheel feels great, with strong force feedback and plenty of detail. If you run up on a kerb or start to lose traction, you know about it right away and can take corrective action. I also appreciated how perfectly smooth the wheel feels when turning, without any cogging - the slightly jerky sensation common to low-end and mid-range direct drive motors, caused by slight attraction as you pass each magnet.
    However, balancing this, the wheel's force feedback feels a little less consistent than others I've tested from the likes of Fanatec or Moza at a similar price point, with some games like Project Cars 3 and Forza Motorsport feeling almost bereft of force feedback by comparison. You also have that slight vibration when the wheel is stationary, which is potentially more noticeable than the cogging sensation in traditional DD designs. The overshoot is also a mixed bag, as the sudden jump in torque can feel a little artificial in some scenarios - e.g. when you're warming your tyres by weaving in F1 before a safety car restart.
    I'd say that these positives and negatives largely cancel each other out, and you're left with force feedback that is good, way better than non-DD wheels, but not noticeably better than more common radial direct drive designs. Depending on the games you play, either DD style could be preferable. It'll be interesting to see if Thrustmaster are able to tune out some of these negative characteristics through firmware updates - or simply in later products using the same technology.

    Here's how the T598 looks IRL - from the wheel base itself to the default rim, the upgraded Hypercar wheel and the included dual pedals.

    Apart from the novel motor, the rest of the wheel base is fairly standard - there's a small display on the top for adjusting your settings and seeing in-game info like a rev counter, four large circular buttons, the usual Thrustmaster quick release lock for securing your wheel rim and a small button on the back to turn the wheel base on and off. There are connection options on the back too, for power, USB and other components like pedals or shifters.
    Weirdly, there's no ability to change settings in the PC Thrustmaster Panel app - it just says this functionality is "coming soon!" - so right now you can only use it for updating firmware, testing buttons and changing between profiles.

    "Coming soon!" starts to become a little less believable six months after the first reviews hit. | Image credit: Digital Foundry

    Instead, you'll be using the built-in screen for making changes, which works well enough but doesn't provide any allowance for extra information - so you'll be sticking to the four basic pre-made profiles, referring to the manual or checking suggested setups online rather than reading built-in tool tips.
    You still get access to the full whack of settings here, and of course this works well for PS5/PS4 users who wouldn't expect a software experience anyway, but PC users may be disappointed to learn that there's no intuitive software interface here. I found the Boosted Media YouTube review of the wheel base offered some good insight into which settings you're likely to want to change from their default values.
    Thrustmaster T598 Sportcar wheel review: a workable default option

    The Sportcar wheel rim looks good - but a plastic construction and relatively spartan controls make it "OK" at best.

    The "Sportcar" wheel provided in the bundle is a little less impressive-looking than the base itself, with a plasticky feel throughout and fairly mushy buttons - though the paddles are snappy enough and feel good to use. The usual PS-style face buttons are split into two clumps up top with L2 and R2, which is a bit odd, with four individual directional buttons in the lower left, start/select/PS in the lower middle and four configuration buttons in the lower right.
    Those configuration buttons require extra explanation, so here we go: the P button at the top swaps between four different pages, each indicated with a different colour LED, allowing the remaining three physical buttons to activate up to 12 different functions. There are no rotary encoders or other additional controls here, so PC players who prefer more complicated racing sims may feel a bit underserved by this clunky, cost-saving solution.
    The 815g wheel is at least sized reasonably, with a 300mm circular shape that particularly suits drifting, rally and trucking - though all forms of driving and racing are of course possible. The rubber grips under your hands are reasonably comfortable, but you can still feel seams in various places. Overall, the wheel is possibly the weakest part of the package, but it's perfectly usable and acceptable for the price point.
    Thrustmaster Hypercar Wheel Add-On review: true luxury

    An incredible wheel with premium materials, excellent controls and a more specialised shape.

    Thrustmaster also sent over the £339 Hypercar wheel rim for testing, an upgrade option that uses significantly better materials - leather, alcantara, aluminium and carbon - and offers a huge number of extra controls. Its oval shape feels a bit more responsive for faster vehicles that require a quick change of direction, but drifting and rally don't feel as natural. It supports the same PS4, PS5 and PC platforms as the stock option, but there are no legends printed on the buttons to help you.
    The difference in quality here is immediately apparent, with much better tactile feedback from the buttons and a huge number of additional controls for adjusting stuff like ERS deployment or brake bias. Each control feels well-placed, even if the T-shaped layout for the face buttons is slightly unnatural at first, and the paddles for shifting and the clutch are particularly well engineered. I also found holding the wheel a bit more comfortable thanks to that flattened out shape, the more premium materials and the absence of bumps or seams anywhere you're likely to hold.
    It's a huge upgrade in terms of feel and features then, as you'd hope for a wheel rim that costs nearly as much as the entire T598 kit and caboodle. As an upgrade option, I do rate it, though it perhaps makes slightly more sense for T818 owners who have already invested a bit more in the Thrustmaster ecosystem. Regardless, it was this rim that I used for the majority of my time with the T598, and the wheel base feels significantly better with the upgrade.
    Thrustmaster T598 Raceline pedals review: great feedback, but no clutch and no load cell upgrade offered at present

    Surprisingly good for two add-in pedals, in terms of feedback and flexibility.

    The pedals that come with the T598 are surprisingly good, with an accelerator and a brake pedal but no clutch pedal. Each pedal's spring assembly can be pushed into one of three positions to change the amount of pre-load - i.e. to make it a bit softer or harder to press - and the pedal plates can be shifted up and down. The narrow dimensions of the metal base plate meant that it was impossible to mount the pedals directly in the centre of the Playseat Trophy I used for testing, but the slightly off-centre installation I ended up with still worked just fine. They connect using a non-USB connection, so you can't use the pedals with other wheel bases.
    Using the middle distance setting and the firmer of the two springs for the brake, I found the T598 produced good results, on par with or perhaps even a tad better than other metal-construction Hall effect position sensor pedals I've tested, such as the Moza SR-P Lite and Fanatec CSL. Braking is the critical point here, as you want to be able to feel when the brake has mechanically reached its threshold and then modulate your inputs from there, and the T598 pedals allow for this quite easily. They're also not so hard to actuate that you end up having to hard-mount them to a sim rig for good results, and the included carpet spikes are reasonably effective at keeping the pedals in place.
    Presumably, it ought to be possible to add a load cell brake pedal down the line to upgrade to a proper three-pedal setup. For the F1-style driving that I prefer, the clutch pedal isn't used anyway, so it wasn't a massive issue for me - and we frequently see companies like Moza and Fanatec drop the clutch pedal on these aggressively priced bundles, so Thrustmaster aren't losing ground by following suit.
    Thrustmaster T598 final verdict: a competitive £450 package with potential

    For PlayStation owners, this is an incredible value pickup that ranks among the cheapest DD options - and PC owners ought to consider it too.

    For £449, the Thrustmaster T598 is an excellent value direct drive wheel and pedal bundle for PlayStation and PC with some relatively minor quirks. The wheel base is powerful, detailed and responsive in most games, with some advantages over traditional DD designs but also some disadvantages - notably the taller shape and a slight hum/vibration while stationary. Traditional DD designs from the likes of Fanatec and Moza can offer more reliable force feedback that works over a wider range of games, cars and tracks, while also benefitting from better PC software, but there's certainly potential for Thrustmaster to improve here.
    The included wheel feels a bit cheap, with a predominantly plastic design, spongy buttons and a slightly odd layout, but the full circle shape and full PS5/PS4 compatibility is most welcome. Upgrading to the HyperCar wheel provides a huge uptick in materials, tactile feedback and number of controls, though this does come at a fairly steep price of £339. If you plan to use the T598 for years and have the budget for it, this is a super upgrade to aim for.
    The included Raceline LTE pedals are the most surprising element for me. These consist of only an accelerator and a brake with only moderate adjustability and a narrow base plate, but they feel great to use, are made from durable metal with HE sensors, and only really lose out to significantly more expensive load cell options. For an add-in for a relatively cheap DD bundle, they're a solid inclusion, and I hope Thrustmaster release a load cell brake pedal for users to upgrade to a better three-pedal setup later.
    Overall, it's a competitive first outing for Thrustmaster with the T598 and direct axial drive, and I'm curious to see where the company - and the tech - goes from here. With Fanatec still rebuilding after being acquired by Corsair and Moza's offerings being hard to order online in some regions, Thrustmaster has a golden opportunity to seize a share of the mid-range and entry-level sim racing market, and the T598 is a positive start.
Regardless, it was this rim that I used for the majority of my time with the T598, and the wheel base feels significantly better with the upgrade. Thrustmaster T598 Raceline pedals review: great feedback, but no clutch and no load cell upgrade offered at present Surprisingly good for two add-in pedals, in terms of feedback and flexibility. The pedals that come with the T598 are surprisingly good, with an accelerator, a brake pedaland no clutch pedal. Each pedal's spring assembly can be pushed into one of three positions to change the amount of pre-load - ie make it a bit softer or harder to press and the pedal plates can be shifted up and down. The narrow dimensions of the metal wheel plate meant that it was impossible to mount directly in the centre of the Playseat Trophy I used for testing, but the slightly off-centre installation I ended up with still worked just fine. They connect using a non-USB connection, so you can't use the pedals with other wheel bases. Using the middle distance setting and the firmer of the two springs for the brake, I found the T598 produced good results, on par or perhaps even a tad better than other metal-construction Hall Effect position sensorpedals I've tested such as the Moza SR-P Lite and Fanatec CSL. Braking is the critical point here, as you want to be able to feel when the brake has mechanically reached its threshold and then modulate your inputs from there, and the T598 pedals do allow for this quite easily. They're also not so hard to actuate that you end up having to hard-mount them to a sim rig for good results, and the included carpet spikes are reasonably effective in keeping the pedals in place. Presumably, it ought to be possible to add on a load cell brake pedal down the line to upgrade to a properthree pedal setup. 
For the F1 style driving that I prefer, the clutch pedal isn't used anyway, so it wasn't a massive issue for me - and we frequently see companies like Moza and Fanatec drop the clutch pedal on these aggessively priced bundles so Thrustmaster aren't losing ground by following suit. Thrustmaster T598 final verdict: a competitive £450 package with potential For PlayStation owners, this is an incredible value pickup that ranks among the cheapest DD options - and PC owners ought to consider it too. For £449/the Thrustmaster T598 is an excellent value direct drive wheel and pedal bundle for PlayStation and PC with some relatively minor quirks. The wheel base is powerful, detailed and responsive in most games, with some advantages over traditional DD designs but also some disadvantages - notably the taller shape and a slight hum/vibration while stationary. Traditional DD designs from the likes of Fanatec and Moza can offer more reliable force feedback that works over a wider range of games, cars and tracks, while also benefitting from better PC software, but there's certainly potential for Thrustmaster to improve here. The included wheel feels a bit cheap, with a predominantly plastic design, spongey buttons and a slightly odd layout, but the full circle shape and full PS5/PS4 compatibility is most welcome. Upgrading to the HyperCar wheel provides a huge uptick in materials, tactile feedback and number of controls, though this does come at a fairly steep price of £339/If you plan to use the T598 for years and have the budget for it, this is a super upgrade to aim for. The included Raceline LTE pedals are the most surprising element for me. These consist of only an accelerator and a brake with only moderate adjustability and a narrow base plate, but they feel great to use, are made from durable metal with HE sensors, and only really lose out to significantly more expensive load cell options. 
For an add-in for a relatively cheap DD bundle, they're a solid inclusion, and I hope Thrustmaster release a load cell brake pedal for users to upgrade to a better three-pedal setup later. Overall, it's an competitive first outing for Thrustmaster with the T598 and direct axial drive, and I'm curious to see where the company - and the tech - goes from here. With Fanatec still on the rebuild after being acquired by Corsair and Moza's offerings being hard to order online in some regions, Thrustmaster has a golden opportunity to seize a share of the mid-range and entry-level sim racing market, and the T598 is a positive start. #thrustmaster #t598 #hypercar #wheel #review
    WWW.EUROGAMER.NET
    Thrustmaster T598 + Hypercar Wheel review: a great value PC/PS5 sim racing wheel and pedals built on novel tech
    Direct axial drive impresses, despite limited software and a firmly mid stock wheel.

    Image credit: Digital Foundry

    Review by Will Judd, Deputy Editor, Digital Foundry. Published on June 1, 2025.

    We've seen an explosion in the number of affordable direct drive (DD) racing wheels over the past couple of years, with Fanatec and Moza offering increasingly inexpensive options that still deliver the precise, quick and long-lasting force feedback that cheaper gear- or belt-driven wheels can't match. Now, Thrustmaster is intruding on that territory with the T598, a PlayStation/PC direct drive wheel, wheel base and pedals bundle that costs just £449/$499. That's on a similar level to the PC-only £459/$599 Moza R5 bundle and the €399/$569 Fanatec CSL DD bundle, so how does the newcomer compare? And what's changed from the more expensive T818 we reviewed before?

    We've been testing the T598 - and the fancy Hypercar wheel that's available as an upgrade - for weeks to find out. Our full review follows, so read on - or check out the quick links below to jump to what you're most interested in.

    Thrustmaster T598 wheel base review: direct axial drive vs traditional direct drive

    Interestingly, the T598 arguably comes with a more advanced DD motor than the more expensive T818 does. It uses a "direct axial drive" versus the standard "direct radial drive", with the magnets aligned parallel to the wheel shaft rather than perpendicular to it (see the diagram below). This ought to allow for more efficient torque generation, producing less waste heat, minimising precision-sapping magnetic interference and requiring less copper to produce. It also means the T598 can "overshoot" to deliver more than its rated 5Nm of constant torque for short periods.
However, this design also requires a physically taller yet slimmer enclosure (measuring 210x210x120mm), potentially blocking the view forward and requiring a different bolt pattern to attach the base to your desk or sim racing cockpit - both of which are slight annoyances with the T598. (You do get an angle bracket to allow for wider and potentially more compatible holes for your cockpit... but this makes the tall wheel base even taller. Table clamps are also included.) Interestingly, you can also feel a slight vibration and hear a quiet crackling noise emanating from the T598 base while idle - something I haven't heard or felt with other direct drive motors, and which is reportedly inherent to this design.

There's a lot going on inside this wheel base - including some genuine innovation. | Image credit: Thrustmaster/Digital Foundry

Thrustmaster has written a pair of white papers to explain why their take on direct drive ("axial flux") is better than what came before ("radial flux"). | Image credit: Thrustmaster

In terms of the force feedback itself, Thrustmaster have achieved something quite special here. In some titles with a good force feedback implementation - Assetto Corsa, Assetto Corsa Evo and F1 23 stood out to me here - the wheel feels great, with strong force feedback and plenty of detail. If you run up on a kerb or start to lose traction, you know about it right away and can take corrective action. I also appreciated the way that turning the wheel feels perfectly smooth, without any cogging - the slightly jerky sensation common to low-end and mid-range direct drive motors that corresponds to a slight attraction as you pass each magnet.

However, balancing this, the wheel's force feedback feels a little less consistent than others I've tested from the likes of Fanatec or Moza at a similar price point, with some games like Project Cars 3 and Forza Motorsport feeling almost bereft of force feedback by comparison.
You also have that slight vibration when the wheel is stationary, which is potentially more noticeable than the cogging sensation in traditional DD designs. The overshoot is also a mixed bag, as the sudden jump in torque can feel a little artificial in some scenarios, eg when you're warming your tyres by weaving in F1 before a safety car restart. I'd say that these positives and negatives largely cancel each other out, and you're left with force feedback that is good - way better than non-DD wheels, but not noticeably better than more common radial direct drive designs. Depending on the games you play, either DD style could be preferable. It'll be interesting to see if Thrustmaster are able to tune out some of these negative characteristics through firmware updates - or simply in later products using the same technology.

Here's how the T598 looks IRL - from the wheel base itself to the default rim, the upgraded Hypercar wheel and the included dual pedals. Click to enlarge.

Apart from the novel motor, the rest of the wheel base is fairly standard - there's a small (colour!) display on the top for adjusting your settings and seeing in-game info like a rev counter, four large circular buttons (L3, R3, Mode and Settings), the usual Thrustmaster quick release lock for securing your wheel rim and a small button on the back to turn the wheel base on and off. There are connection options for power, USB and other components like pedals or shifters on the back too.

Weirdly, there's no ability to change settings in the PC Thrustmaster Panel app - it just says this functionality is "coming soon!" - so right now you can only use it for updating firmware, testing buttons and changing between profiles. "Coming soon!" starts to become a little less believable six months after the first reviews hit.
| Image credit: Digital Foundry

Instead, you'll be using the built-in screen for making changes, which works well enough but doesn't provide any allowance for extra information - so you'll be sticking to the four basic pre-made profiles, referring to the manual or checking suggested setups online rather than reading built-in tool tips. You still get access to the full whack of settings here, and of course this works well for PS5/PS4 users who wouldn't expect a software experience anyway, but PC users may be disappointed to learn that there's no intuitive software interface here. I found the Boosted Media YouTube review of the wheel base to offer some good insight into which settings you're likely to want to change from their default values.

Thrustmaster T598 Sportcar wheel review: a workable default option

The Sportcar wheel rim looks good - but a plastic construction and relatively spartan controls make it "OK" at best.

The "Sportcar" wheel provided in the bundle is a little less impressive-looking than the base itself, with a plasticky feel throughout and fairly mushy buttons - though the paddles are snappy enough and feel good to use. The usual PS-style face buttons are split into two clumps up top with L2 and R2, which is a bit odd, with four individual directional buttons in the lower left, start/select/PS in the lower middle and four configuration buttons in the lower right. Those configuration buttons require extra explanation, so here we go: the P button at the top swaps between four different pages, each indicated with a different colour LED, allowing the remaining three physical buttons to activate up to 12 different functions. (The Fanatec GT DD Pro, by contrast, has dedicated five-way controls for each of its four functions. This costs more to produce, but allows you to use the controls without looking down to see which coloured light is active.)
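To make the paging scheme concrete, here's a tiny illustrative sketch in Python. The page colours and function names are hypothetical placeholders, not Thrustmaster's actual mapping - the point is simply how four LED-indicated pages let three physical buttons cover twelve functions:

```python
# Hypothetical sketch of the Sportcar rim's paging scheme: the P button cycles
# through four pages (each shown by an LED colour), and the three remaining
# buttons trigger a different function on each page. Names are invented.
PAGES = ["red", "green", "blue", "yellow"]
FUNCTIONS = {                               # 4 pages x 3 buttons = 12 functions
    "red":    ["TC down", "TC up", "ABS toggle"],
    "green":  ["brake bias fwd", "brake bias back", "engine map"],
    "blue":   ["pit limiter", "headlights", "wipers"],
    "yellow": ["HUD cycle", "camera", "radio"],
}

page = 0

def press_p():
    """Advance to the next page, wrapping around after the fourth."""
    global page
    page = (page + 1) % len(PAGES)

def press_button(n):
    """Return the function bound to physical button n (0-2) on the current page."""
    return FUNCTIONS[PAGES[page]][n]

press_p()                  # LED changes from the first to the second colour
print(press_button(0))     # prints "brake bias fwd"
```

The trade-off the review describes falls out of the structure: the mapping is cheap (three buttons, one selector), but you can only know what a button does by checking which page is active.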
There are no rotary encoders or other additional controls here, so PC players that prefer more complicated racing sims may feel a bit underserved by this clunky, cost-saving solution. The 815g wheel is at least sized reasonably, with a 300mm circular shape that particularly suits drifting, rally and trucking - though all forms of driving and racing are of course possible. The rubber grips under your hands are reasonably comfortable, but you can still feel seams in various places. Overall, the wheel is possibly the weakest part of the package, but perfectly usable and acceptable for the price point.

Thrustmaster Hypercar Wheel Add-On review: true luxury

An incredible wheel with premium materials, excellent controls and a more specialised shape.

Thrustmaster also sent over the £339/$350 Hypercar wheel rim for testing, an upgrade option that uses significantly better materials - leather, alcantara, aluminium and carbon - and offers a huge number of extra controls (25 buttons, including four rotary encoders and two pairs of analogue paddles). Its oval shape feels a bit more responsive for faster vehicles (like F1 cars) that require a quick change of direction, but drifting and rally don't feel natural. It supports the same PS4, PS5 and PC platforms as the stock option, but there are no legends printed on the buttons to help you.

The difference in quality here is immediately apparent, with much better tactile feedback from the buttons and a huge number of additional controls for adjusting stuff like ERS deployment or brake bias. Each control feels well-placed, even if the T-shaped layout for the face buttons is slightly unnatural at first, and the paddles for shifting and the clutch are particularly well engineered. I also found holding the wheel a bit more comfortable thanks to that flattened-out shape, the more premium materials and the absence of bumps or seams anywhere you're likely to hold.
It's a huge upgrade in terms of feel and features then, as you'd hope for a wheel rim that costs nearly as much as the entire T598 kit and caboodle. As an upgrade option, I do rate it, though it perhaps makes slightly more sense for T818 owners that have already invested a bit more in the Thrustmaster ecosystem. Regardless, it was this rim that I used for the majority of my time with the T598, and the wheel base feels significantly better with the upgrade.

Thrustmaster T598 Raceline pedals review: great feedback, but no clutch and no load cell upgrade offered at present

Surprisingly good for two add-in pedals, in terms of feedback and flexibility.

The pedals that come with the T598 are surprisingly good, with an accelerator, a brake pedal (with a choice of two different spring options) and no clutch pedal. Each pedal's spring assembly can be pushed into one of three positions to change the amount of pre-load (ie make it a bit softer or harder to press), and the pedal plates can be shifted up and down. The narrow dimensions of the metal base plate meant that it was impossible to mount the pedals directly in the centre of the Playseat Trophy I used for testing, but the slightly off-centre installation I ended up with still worked just fine. They connect using a non-USB connection, so you can't use the pedals with other wheel bases.

Using the middle distance setting and the firmer of the two springs for the brake, I found the T598 produced good results, on par with or perhaps even a tad better than other metal-construction Hall Effect position sensor (ie non-load cell) pedals I've tested, such as the Moza SR-P Lite and Fanatec CSL. Braking is the critical point here, as you want to be able to feel when the brake has mechanically reached its threshold and then modulate your inputs from there, and the T598 pedals do allow for this quite easily.
They're also not so hard to actuate that you end up having to hard-mount them to a sim rig for good results, and the included carpet spikes are reasonably effective in keeping the pedals in place. Presumably, it ought to be possible to add on a load cell brake pedal down the line to upgrade to a proper (if slightly cramped) three-pedal setup. For the F1-style driving that I prefer, the clutch pedal isn't used anyway, so it wasn't a massive issue for me - and we frequently see companies like Moza and Fanatec drop the clutch pedal on these aggressively priced bundles, so Thrustmaster aren't losing ground by following suit.

Thrustmaster T598 final verdict: a competitive £450 package with potential

For PlayStation owners, this is an incredible value pickup that ranks among the cheapest DD options - and PC owners ought to consider it too.

For £449/$499, the Thrustmaster T598 is an excellent value direct drive wheel and pedal bundle for PlayStation and PC with some relatively minor quirks. The wheel base is powerful, detailed and responsive in most games, with some advantages over traditional DD designs but also some disadvantages - notably the taller shape and a slight hum/vibration while stationary. Traditional DD designs from the likes of Fanatec and Moza can offer more reliable force feedback that works over a wider range of games, cars and tracks, while also benefitting from better PC software, but there's certainly potential for Thrustmaster to improve here.

The included wheel feels a bit cheap, with a predominantly plastic design, spongey buttons and a slightly odd layout, but the full circle shape and full PS5/PS4 compatibility are most welcome. Upgrading to the Hypercar wheel provides a huge uptick in materials, tactile feedback and number of controls, though this does come at a fairly steep price of £339/$350. If you plan to use the T598 for years and have the budget for it, this is a super upgrade to aim for.
The included Raceline LTE pedals are the most surprising element for me. These consist of only an accelerator and a brake with only moderate adjustability and a narrow base plate, but they feel great to use, are made from durable metal with HE sensors, and only really lose out to significantly more expensive load cell options. For an add-in for a relatively cheap DD bundle, they're a solid inclusion, and I hope Thrustmaster release a load cell brake pedal for users to upgrade to a better three-pedal setup later.

Overall, it's a competitive first outing for Thrustmaster with the T598 and direct axial drive, and I'm curious to see where the company - and the tech - goes from here. With Fanatec still rebuilding after being acquired by Corsair and Moza's offerings being hard to order online in some regions, Thrustmaster has a golden opportunity to seize a share of the mid-range and entry-level sim racing market, and the T598 is a positive start.
  • June skygazing: A strawberry moon, the summer solstice… and Asteroid Day!

    In the Northern Hemisphere during the spring, the bright star Regulus is easy to spot above the eastern horizon. The alpha star of the constellation Leo, Regulus is the spiky star centered in this telescopic field of view. Regulus is a hot, rapidly spinning star that is known to be part of a multiple star system.
     
    CREDIT: Markus Horn


    June 1 – Crescent Moon Visible Between Mars and Regulus
    June 11 – Full Strawberry Moon
    Mid-June – Mercury Shows Off
    June 16-18 – The Red Planet Meets the Blue Heart of Leo
    June 20 – Summer Solstice
    June 30 – International Asteroid Day

    While the relatively short nights of summer mean fewer hours of dark skies for stargazing, this month should still provide plenty to occupy those of us given to looking to the sky. June will feature several opportunities to see Mars and the moon in close proximity to Regulus, the iconic blue star that shines from the heart of Leo, along with two weeks’ worth of excellent opportunities for observing Mercury. And did you know that June 30 is International Asteroid Day?
    June 1 – Crescent Moon Visible Between Mars and Regulus
    The first evening of June will find the crescent moon sitting squarely between Mars and Regulus, the brightest member of the constellation Leo. Interesting fact: while it looks like a single object, the blue “star” we see as Regulus isn’t just one star. It’s actually four. The largest and brightest, Regulus A, is significantly hotter and way, way brighter than our sun, and is believed to be in a binary orbit with a much smaller object. This object is most likely a white dwarf, but it has never been observed directly. The other two stars–Regulus B and C–are also dwarf stars, and are locked in a binary orbit with each other.
    Anyway, keep Regulus in mind, because we’ll be returning to it later in the month.
    June 11 – Full Strawberry Moon
    This month, the moon will reach peak illumination in the early hours of June 11. If you’re on EDT, the full moon will be at 3:44 a.m. This month’s moon is called the Strawberry Moon, and of all the lovely names for the full moon, June’s might just be the prettiest. The name refers to the berries that ripen as the summer solstice approaches, not the color of the moon itself, which will remain resolutely silver. Several Native American languages use this term, including Ojibwe, Oneida, and the Mahican dialect of the Stockbridge-Munsee band of Wisconsin. Other languages have similarly poetic names: in the Catawba language it’s the “River Moon” and in Cherokee it’s “They Are Arriving/Plants in Garden are Sprouting Month”, while in Seneca and Tunica it’s simply the “Summer Moon.”
    Mid-June – Mercury Shows Off
    Our solar system’s innermost planet can be difficult to observe—it’s small, dim, and a lot of the time, it simply gets lost in the glare of the sun. However, this month marks one of the regular periods when Mercury appears far enough removed from the sun to be visible to the naked eye.
    Throughout June, Mercury will approach its maximum eastern elongation, the point at which it appears furthest east of the sun. Unfortunately, its magnitude—i.e. its apparent brightness—will decline over the course of the month, and by the time it hits maximum elongation in early July, it’ll be dim enough that you might struggle to spot it without the aid of a telescope or some binoculars.
    This means that mid-June will offer the best balance of elongation and magnitude. As per the ever indispensable Farmer’s Almanac, Mercury should be visible between 9:00 p.m. and 9:15 p.m. local time, low in the sky to the west-northwest. On June 26, it’ll peek out from slightly below and to the left of the crescent moon.
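    For the curious, the rough scale of that elongation angle falls out of some simple geometry: if both orbits were perfect circles (they aren’t—Mercury’s is notably eccentric, which is why the real maximum swings between roughly 18 and 28 degrees from one apparition to the next), the maximum elongation would be the angle whose sine is the ratio of the two orbital radii. A purely illustrative back-of-envelope sketch in Python:

```python
import math

# Idealised sketch: circular, coplanar orbits. At maximum elongation the
# Earth-Mercury sight line is tangent to Mercury's orbit, so
# sin(elongation) = r_mercury / r_earth.
r_mercury = 0.387  # Mercury's mean distance from the sun, in AU
r_earth = 1.0      # Earth's distance from the sun, in AU

max_elongation = math.degrees(math.asin(r_mercury / r_earth))
print(f"{max_elongation:.1f} degrees")  # about 22.8 degrees
```

    The answer lands comfortably inside the real 18-to-28-degree range, which is why Mercury never strays far from the sun in our sky.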
    June 16-18 – The Red Planet Meets the Blue Heart of Leo
    So, another thing about Regulus: it sits close to the plane of the solar system, which means that it is often seen in close proximity to the moon and the planets.
    This month brings one such occasion: for the nights of June 16, 17, and 18, Regulus will appear right next to Mars. The proximity of the Red Planet and the blazing blue heart of the constellation Leo should make for a pretty spectacular celestial juxtaposition.
    June 20 – The Summer Solstice
    In the Northern Hemisphere, June 20 is the day on which the sun is highest in the sky, aka the summer solstice! This is the day on which the North Pole is tilted most directly toward the sun, bringing 24-hour daylight to the Arctic Circle and the longest day of the year to the rest of the Northern Hemisphere. Summer is here, y’all!
    June 30 – International Asteroid Day
    June 30 marks the anniversary of the Tunguska Event, a frankly terrifying asteroid strike that remains the largest asteroid impact event in recorded history. On June 30, 1908, an asteroid estimated to be about 160 to 200 feet wide exploded several miles above the surface of a remote area of Siberia. The force of the detonation, estimated at between 3 and 50 megatons of TNT, registered on seismographs around the world. (For comparison, the atomic bombs dropped on Hiroshima and Nagasaki had yields of 0.015 and 0.021 megatons, respectively.) The resultant shockwave flattened an estimated 80 million trees over an area of 830 square miles and broke windows hundreds of miles away.
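    Those yield figures are easier to grasp as ratios. A two-line sanity check, using only the numbers quoted above, shows the Tunguska blast was equivalent to somewhere between a couple of hundred and a few thousand Hiroshima bombs:

```python
# Back-of-envelope comparison using the yield estimates quoted above.
tunguska_low, tunguska_high = 3.0, 50.0  # estimated yield range, megatons of TNT
hiroshima = 0.015                        # Hiroshima yield, megatons of TNT

print(f"{tunguska_low / hiroshima:.0f} to {tunguska_high / hiroshima:.0f} Hiroshimas")
# prints "200 to 3333 Hiroshimas"
```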
    In 2014, a group of scientists proposed that June 30 be designated International Asteroid Day. The UN adopted the idea two years later. The day recognizes the potentially calamitous effect of asteroid impacts—what might have happened had the Tunguska asteroid hit a city instead of a barren part of Siberia doesn’t really bear thinking about—and raises awareness of the importance of asteroid-tracking endeavors.
    Anyway, hopefully June’s stargazing endeavors won’t reveal any terrifying asteroids hurtling toward us. Whatever you’re setting your sights on, though, you’ll get the best experience if you get away from any sources of light pollution—and make sure to check out our stargazing tips before you head off into the darkness.
    Until next month!
    #june #skygazing #strawberry #moon #summer
    June skygazing: A strawberry moon, the summer solstice… and Asteroid Day!
    In the Northern Hemisphere during the spring, the bright star Regulus is easy to spot above the eastern horizon. The alpha star of the constellation Leo, Regulus is the spiky star centered in this telescopic field of view. Regulus is a hot, rapidly spinning star that is known to be part of a multiple star system.   CREDIT: Markus Horn Get the Popular Science daily newsletter💡 Breakthroughs, discoveries, and DIY tips sent every weekday. June 1Crescent Moon Visible Between Mars and RegulusJune 11Full Strawberry MoonMid JuneMercury Shows Off June 16-18The Red Planet Meets the Blue Heart of Leo June 20Summer SolsticeJune 30International Asteroid Day While the relatively short nights of summer mean less dark skies for stargazing, this month should still provide plenty to occupy those of us given to looking to the sky. June will feature several opportunities to see Mars and the moon in close proximity to Regulus, the iconic blue starthat shine from the heart of Leo, along with two weeks’ worth of excellent opportunities for observing Mercury. And did you know that June 30 is International Asteroid Day? June 1– Crescent Moon Visible Between Mars and Regulus The first evening of June will find the crescent moon sitting squarely between Mars and Regulus, the brightest member of the constellation Leo. Interesting fact: while it looks like a single object, the blue “star” we see as Regulus isn’t just one star. It’s actually four. The largest and brightest, Regulus A, is significantly hotter than our sun and way, way brighter than our sun, and is believed to be in a binary orbit with a much smaller object. This object is most likely a white dwarf, but it has never been observed directly. The other two stars–Regulus B and C–are also dwarf stars, and are also locked in a binary orbit. Anyway, keep Regulus in mind, because we’ll be returning to it later in the month. June 11– Full Strawberry Moon This month, the moon will reach peak illumination in the early hours of June 11. 
If you’re on EDT, the full moon will be at 3:44 a.m. This month’s moon is called the Strawberry Moon, and of all the lovely names for the full moon, June’s might just be the prettiest. The name refers to the berries that ripen as the summer solstice approaches, not the color of the moon itself, which will remain resolutely silver. Several Native American languages use this term, including Ojibwe, Oneida, and the Mahican dialect of the Stockbridge-Munsee band of Wisconsin. Other languages have similarly poetic names: in the Catawba language it’s the “River Moon” and in Cherokee it’s “They Are Arriving/Plants in Garden are Sprouting Month”, while in Seneca and Tunica it’s simply the “Summer Moon.” Mid-June– Mercury Shows Off Our solar system’s innermost planet can be difficult to observe—it’s small, dim, and a lot of the time, it simply gets lost in the glare of the sun. However, this month marks one of the regular periods when Mercury appears far enough removed from the sun to be visible to the naked eye. Throughout June, Mercury will approach its maximum eastern elongation,the point at which it appears furthest east of the sun. Unfortunately, its magnitude—i.e. its apparent brightness—will decline over the course of the month, and by the time it hits maximum elongation in early July, it’ll be dim enough that you might struggle to spot it without the aid of a telescope or some binoculars. This means that mid-June will offer the best balance of elongation and magnitude. As per the ever indispensable Farmer’s Almanac, Mercury should be visible between 9:00 p.m. and 9:15 p.m. local time, low in the sky to the west-northwest. On June 26, it’ll peek out from slightly below and to the left of the crescent moon.  June 16-18– The Red Planet Meets the Blue Heart of Leo So, another thing about Regulus: it sits close to the plane of the solar system, which means that it is often seen in close proximity to the moon and the planets. 
This month brings one such occasion: for the nights of June 16, 17, and 18, Regulus will appear right next to Mars. The proximity of the Red Planet and the blazing blue heart of the constellation Leo should make for a pretty spectacular celestial juxtaposition.

June 20 – The Summer Solstice

In the Northern Hemisphere, June 20 is the day on which the sun is highest in the sky, aka the summer solstice! This is the day on which the North Pole is tilted most directly toward the sun, bringing 24-hour daylight to the Arctic Circle and the longest day of the year to the rest of the Northern Hemisphere. Summer is here, y’all!

June 30 – International Asteroid Day

June 30 marks the anniversary of the Tunguska Event, a frankly terrifying asteroid strike that remains the largest asteroid impact event in recorded history. On June 30, 1908, an asteroid estimated to be about 160 to 200 feet wide exploded several miles above the surface of a remote area of Siberia. The force of the detonation is estimated to have been comparable to between 3 and 50 megatons of TNT, and registered on seismographs around the world. (For comparison, the atomic bombs dropped on Hiroshima and Nagasaki had yields of 0.015 and 0.021 megatons, respectively.) The resultant shockwave flattened an estimated 80 million trees over an area of 830 square miles and broke windows hundreds of miles away. In 2014, a group of scientists proposed that June 30 be designated International Asteroid Day. The UN adopted the idea two years later. The day recognizes the potentially calamitous effect of asteroid impacts—what might have happened had the Tunguska asteroid hit a city instead of a barren part of Siberia doesn’t really bear thinking about—and aims to raise awareness about the importance of asteroid-tracking efforts. Anyway, hopefully June’s stargazing won’t reveal any terrifying asteroids hurtling toward us.
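If you want to put those yield figures in perspective yourself, a few lines of Python will do the arithmetic. This sketch is not from the original article; it simply takes the article's own estimates (3 to 50 megatons for Tunguska, 0.015 megatons for Hiroshima) and turns them into rough ratios, and it also illustrates the magnitude scale mentioned in the Mercury section, where a difference of 5 magnitudes is defined as a factor of exactly 100 in brightness.

```python
# Back-of-the-envelope checks on two numbers from this column.
# All figures are the article's own estimates; the code just does the arithmetic.

def brightness_ratio(mag_bright: float, mag_dim: float) -> float:
    """Brightness ratio implied by two apparent magnitudes.

    By definition, a difference of 5 magnitudes is a factor of
    exactly 100 in brightness (so 1 magnitude is ~2.512x).
    """
    return 100 ** ((mag_dim - mag_bright) / 5)

# One magnitude step, as discussed in the Mercury section:
print(round(brightness_ratio(0.0, 1.0), 3))  # ~2.512

# Tunguska (3-50 megatons) vs. the Hiroshima bomb (0.015 megatons):
low, high = 3 / 0.015, 50 / 0.015
print(f"Tunguska: roughly {low:.0f}x to {high:.0f}x the Hiroshima yield")
```

Run as-is, this puts Tunguska at somewhere between about 200 and 3,300 times the Hiroshima yield, which is why the "barren part of Siberia" caveat matters so much.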
Whatever you’re setting your sights on, though, you’ll get the best experience if you get away from any sources of light pollution—and make sure to check out our stargazing tips before you head off into the darkness. Until next month!
Source: “June skygazing: A strawberry moon, the summer solstice… and Asteroid Day!” via WWW.POPSCI.COM