• EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization, presented at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose.
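The “Smart Pooling” step above can be sketched in a few lines. The snippet below is an illustrative sketch, not the paper’s code: it assumes each BEV cell holds features from several height bins plus a learned importance score per bin, and uses a softmax-weighted sum to softly select the most informative height (all function and array names here are our own).

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def pool_to_bev(features, scores):
    """Collapse per-height features into one BEV descriptor per map cell.

    features: (cells, heights, channels) - 3D point features for each BEV cell.
    scores:   (cells, heights)           - learned importance of each height bin.
    Returns:  (cells, channels) - each cell's descriptor is dominated by
              whichever height bin (road marking vs. rooftop edge) scored highest.
    """
    w = softmax(scores, axis=1)                  # soft selection over height
    return (w[..., None] * features).sum(axis=1)  # weighted sum, not a flat average
```

Because the selection is a softmax rather than a hard argmax, it stays differentiable, which is what lets such a choice be learned from the pose loss alone.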

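The final pose-estimation step uses Procrustes alignment, which has a closed-form SVD solution. Below is a minimal, self-contained sketch of weighted 2D rigid alignment (rotation plus translation, i.e. the 3-DoF pose) between matched ground and aerial points; it illustrates the classic algorithm, not the authors’ implementation, and the names are ours.

```python
import numpy as np

def procrustes_2d(ground_pts, aerial_pts, weights=None):
    """Weighted 2D rigid alignment (Procrustes/Kabsch) via SVD.

    Solves argmin over R, t of sum_i w_i * ||R @ g_i + t - a_i||^2,
    where R is a 2x2 rotation: exactly the 3-DoF (x, y, yaw) pose.
    """
    g = np.asarray(ground_pts, dtype=float)
    a = np.asarray(aerial_pts, dtype=float)
    w = np.ones(len(g)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()

    # Weighted centroids of both point sets.
    g_mean = w @ g
    a_mean = w @ a

    # Weighted cross-covariance of the centered point sets.
    H = (g - g_mean).T @ np.diag(w) @ (a - a_mean)

    # SVD; the sign correction keeps R a proper rotation (det = +1).
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = a_mean - R @ g_mean
    yaw = np.arctan2(R[1, 0], R[0, 0])
    return R, t, yaw
```

In a matching pipeline, the weights would come from the match confidences, so uncertain correspondences pull less on the estimated pose.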
    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project.
  • Digital Meets Dine-In: 5 Expert QSR Engagement Strategies

    Reading Time: 3 minutes
    In a recent webinar hosted by MoEngage, QSR marketing experts from Radar and Bottle Rocket came together to unpack the findings of the 2025 State of Cross-Channel Marketing for QSRs report. 
    With more than 800 total survey responses, including 70 from QSR marketers, this report revealed where quick-service restaurants are focusing their energy, what is holding them back, and the emerging strategies reshaping the guest experience.
    Here is a recap of the key takeaways, expert insights, and actionable advice shared by our panelists.

    Where QSRs Are Focused in 2025: Loyalty, Personalization & Speed
    The webinar kicked off with a deep dive into shifting priorities. Customer engagement and loyalty emerged as the top focus for QSR marketers in 2025, with 80% of respondents increasing investment in customer experience technology. Mobile-first experiences and real-time personalization are no longer optional; they are essential for effective QSR marketing.
    Nick Patrick, CEO of Radar, put it simply: “Mobile has become the primary interface between QSRs and their customers. Real-time context is what makes that interface intelligent.”

    Challenges: Disconnected Data, Tech Silos, and Execution Speed
    While QSRs have the vision, many struggle with execution. 
    The report found that 60% of QSR leaders still struggle with personalization, and more than a quarter cited siloed data as a top challenge. The panelists echoed these findings, pointing to fragmented systems and misaligned teams as major hurdles.
    Brendan shared: “The POS system was designed for speed and accuracy, not personalization. But when you can use even a basic signal, like a loyalty status, to prompt a more human, high-touch experience, it makes a real difference.”

    Strategies That Work: Start Small, Focus Deep, Earn Trust
    Panelists emphasized the value of starting small with high-impact initiatives like curbside pickup or loyalty nudges. Cross-functional alignment and choosing scalable tech partners were key themes.
    “Don’t boil the ocean,” Brendan advised. “Start with one moment in the journey that can be improved and work cross-functionally to get it right.”

    Earning Location Opt-ins the Right Way
    Location data is a powerful lever for marketing and operations, but only if opt-ins are earned with care. Nick shared a practical framework: “It comes down to three things: transparency, value, and timing. You can’t just ask up front with no context and expect users to say yes.”
    He pointed to Outback Steakhouse as a standout example: “They clearly explain the value, guide users through branded screens, and then request OS-level permissions. It’s thoughtful, and it works.”

    AI in Action: Real Business Impact
    Artificial intelligence was another hot topic. Brendan shared two areas where AI is delivering real value: smarter cross-sell/upsell and feedback intelligence.
    “Even simple segmentation by daypart or region can lift basket size. Some tools report a 10% increase just by turning it on,” he said. “It doesn’t need to be complex to be effective.”

    QSR Webinar Recap: Closing Thoughts
    Whether optimizing app experiences, trying to unify your tech stack, automating manual processes, or building stronger loyalty loops, the advice was clear: start small, stay focused, and partner with tools and teams that can scale with you.
    Watch the full webinar on demand to explore these examples further and learn how MoEngage, Radar, and Bottle Rocket can help your team accelerate QSR engagement.

    #digital #meets #dinein #expert #qsr
    Digital Meets Dine-In: 5 Expert QSR Engagement Strategies
    Reading Time: 3 minutes In a recent webinar hosted by MoEngage, QSR marketing experts from Radar and Bottle Rocket came together to unpack the findings of the 2025 State of Cross-Channel Marketing for QSRs report.  With more than 800 total survey responses, including 70 from QSR marketers, this report revealed where quick-service restaurants are focusing their energy, what is holding them back, and the emerging strategies reshaping the guest experience. Here is a recap of the key takeaways, expert insights, and actionable advice shared by our panelists.   Where QSRs Are Focused in 2025: Loyalty, Personalization & Speed The webinar kicked off with a deep dive into shifting priorities. Customer engagement and loyalty emerged as the top focus for QSR marketers in 2025, with 80% of respondents increasing investment in customer experience technology. Mobile-first experiences and real-time personalization are no longer optional; they are essential for effective QSR marketing. Nick Patrick, CEO of Radar, put it simply: “Mobile has become the primary interface between QSRs and their customers. Real-time context is what makes that interface intelligent.” Challenges: Disconnected Data, Tech Silos, and Execution Speed While QSRs have the vision, many struggle with execution.  The report found that 60% of QSR leaders still struggle with personalization, and more than a quarter cited siloed data as a top challenge. The panelists echoed these findings, pointing to fragmented systems and misaligned teams as major hurdles. Brendan shared: “The POS system was designed for speed and accuracy, not personalization. But when you can use even a basic signal, like a loyalty status, to prompt a more human, high-touch experience, it makes a real difference.” Strategies That Work: Start Small, Focus Deep, Earn Trust Panelists emphasized the value of starting small with high-impact initiatives like curbside pickup or loyalty nudges. 
Cross-functional alignment and choosing scalable tech partners were key themes. “Don’t boil the ocean,” Brendan advised. “Start with one moment in the journey that can be improved and work cross-functionally to get it right.” Earning Location Opt-ins the Right Way Location data is a powerful lever for marketing and operations, but only if opt-ins are earned with care. Nick shared a practical framework: “It comes down to three things: transparency, value, and timing. You can’t just ask up front with no context and expect users to say yes.” He pointed to Outback Steakhouse as a standout example: “They clearly explain the value, guide users through branded screens, and then request OS-level permissions. It’s thoughtful, and it works.” AI in Action: Real Business Impact Artificial intelligence was another hot topic. Brendan shared two areas where AI is delivering real value: smarter cross-sell/upsell and feedback intelligence. “Even simple segmentation by daypart or region can lift basket size. Some tools report a 10% increase just by turning it on,” he said. “It doesn’t need to be complex to be effective.” QSR Webinar Recap: Closing Thoughts Whether optimizing app experiences, trying to unify your tech stack, automating manual processes, or building stronger loyalty loops, the advice was clear: start small, stay focused, and partner with tools and teams that can scale with you. Watch the full webinar on demand to explore these examples further and learn how MoEngage, Radar, and Bottle Rocket can help your team accelerate QSR engagement.   The post Digital Meets Dine-In: 5 Expert QSR Engagement Strategies appeared first on MoEngage. #digital #meets #dinein #expert #qsr
    WWW.MOENGAGE.COM
    Digital Meets Dine-In: 5 Expert QSR Engagement Strategies
    Reading Time: 3 minutes In a recent webinar hosted by MoEngage, QSR marketing experts from Radar and Bottle Rocket came together to unpack the findings of the 2025 State of Cross-Channel Marketing for QSRs report.  With more than 800 total survey responses, including 70 from QSR marketers, this report revealed where quick-service restaurants are focusing their energy, what is holding them back, and the emerging strategies reshaping the guest experience. Here is a recap of the key takeaways, expert insights, and actionable advice shared by our panelists.   Where QSRs Are Focused in 2025: Loyalty, Personalization & Speed The webinar kicked off with a deep dive into shifting priorities. Customer engagement and loyalty emerged as the top focus for QSR marketers in 2025, with 80% of respondents increasing investment in customer experience technology. Mobile-first experiences and real-time personalization are no longer optional; they are essential for effective QSR marketing. Nick Patrick, CEO of Radar, put it simply: “Mobile has become the primary interface between QSRs and their customers. Real-time context is what makes that interface intelligent.” Challenges: Disconnected Data, Tech Silos, and Execution Speed While QSRs have the vision, many struggle with execution.  The report found that 60% of QSR leaders still struggle with personalization, and more than a quarter cited siloed data as a top challenge. The panelists echoed these findings, pointing to fragmented systems and misaligned teams as major hurdles. Brendan shared: “The POS system was designed for speed and accuracy, not personalization. But when you can use even a basic signal, like a loyalty status, to prompt a more human, high-touch experience, it makes a real difference.” Strategies That Work: Start Small, Focus Deep, Earn Trust Panelists emphasized the value of starting small with high-impact initiatives like curbside pickup or loyalty nudges. 
Cross-functional alignment and choosing scalable tech partners were key themes. “Don’t boil the ocean,” Brendan advised. “Start with one moment in the journey that can be improved and work cross-functionally to get it right.” Earning Location Opt-ins the Right Way Location data is a powerful lever for marketing and operations, but only if opt-ins are earned with care. Nick shared a practical framework: “It comes down to three things: transparency, value, and timing. You can’t just ask up front with no context and expect users to say yes.” He pointed to Outback Steakhouse as a standout example: “They clearly explain the value, guide users through branded screens, and then request OS-level permissions. It’s thoughtful, and it works.” AI in Action: Real Business Impact Artificial intelligence was another hot topic. Brendan shared two areas where AI is delivering real value: smarter cross-sell/upsell and feedback intelligence. “Even simple segmentation by daypart or region can lift basket size. Some tools report a 10% increase just by turning it on,” he said. “It doesn’t need to be complex to be effective.” QSR Webinar Recap: Closing Thoughts Whether optimizing app experiences, trying to unify your tech stack, automating manual processes, or building stronger loyalty loops, the advice was clear: start small, stay focused, and partner with tools and teams that can scale with you. Watch the full webinar on demand to explore these examples further and learn how MoEngage, Radar, and Bottle Rocket can help your team accelerate QSR engagement.   The post Digital Meets Dine-In: 5 Expert QSR Engagement Strategies appeared first on MoEngage.
  • Judge Questions Potential Curbs for Google in AI Arms Race

    Lawyers will make closing arguments Friday in the landmark antitrust case that is set to play an outsize role in the future of AI.
    WWW.WSJ.COM
  • Delivery robot autonomously lifts, transports heavy cargo

    Versatile robot autonomously walks, rolls and lifts for deliveries across any terrain
    Published May 26, 2025 6:00am EDT
    Tech expert Kurt Knutsson discusses LEVA, the autonomous robot that walks, rolls and lifts 187 pounds of cargo for all-terrain deliveries.

    Autonomous delivery robots are already starting to change the way goods move around cities and warehouses, but most still need humans to load and unload their cargo. That's where LEVA comes in. Developed by engineers and designers from ETH Zurich and other Swiss universities, LEVA is a robot that can not only navigate tricky environments but also lift and carry heavy boxes all on its own, making deliveries smoother and more efficient.

    What makes LEVA different?
    Most delivery robots either roll on wheels or walk on legs, but LEVA combines both. It has four legs, each ending in a motorized, steerable wheel. On smooth surfaces like sidewalks, LEVA can roll quickly and efficiently, almost like a little car. When it encounters stairs, curbs or rough ground, it locks its wheels and walks or climbs like a four-legged animal. This design lets LEVA handle both flat urban streets and uneven terrain with ease.

    How LEVA sees and moves around
    LEVA uses a mix of GPS, lidar sensors and five cameras placed around its body to understand its surroundings. These tools help it navigate city streets or indoor hallways while avoiding obstacles. One camera even looks downward to help LEVA line itself up precisely when it's time to pick up or drop off cargo.

    The big deal: loading and unloading itself
    What really sets LEVA apart is its ability to load and unload cargo boxes without any human help. It spots a standard cargo box, moves over it, lowers itself by bending its legs and locks onto the box using powered hooks underneath its body. After securing the box, LEVA lifts itself back up and carries the load to its destination. It can handle boxes weighing up to 187 pounds, which is impressive for a robot of its size.

    LEVA's specs
    LEVA is about 4 feet long and 2.5 feet wide, with an adjustable height between 2 and 3 feet. It weighs around 187 pounds and can carry the same amount of cargo. Thanks to its wheels and legs, it can move smoothly on flat surfaces, climb stairs and handle rough terrain. Its sensors and cameras give it a sharp sense of where it is and what's around it.

    Where could you see LEVA in action?
    LEVA's flexibility makes it useful in many places. It could deliver packages right to your doorstep, even if you live in a building with stairs. Farmers might use it to move supplies across fields. On construction sites, it could carry tools and materials over uneven ground. It might even assist in emergencies by bringing supplies through rubble or rough terrain.

    What does this mean for you?
    For consumers, LEVA could mean faster, more reliable deliveries, especially in tricky urban areas where stairs and curbs often slow things down. For businesses, it means less reliance on manual labor to load and unload heavy items, which can reduce injuries and lower costs. It also means deliveries and material handling could happen around the clock without breaks, boosting efficiency. In industries like farming, construction and emergency response, LEVA's ability to get through tough terrain while carrying heavy loads could make a big difference in how quickly and safely supplies get where they need to go.

    What's next for LEVA?
    The first LEVA prototype has shown it can do a lot, but there's still work to be done. The team is improving its energy use, making it better at climbing stairs, and enhancing its ability to operate fully on its own. The goal is for LEVA to become a reliable part of automated delivery systems that work smoothly in real-world settings.

    Kurt's key takeaways
    LEVA blends the best of wheels and legs with the unique ability to load and unload itself. That makes it a promising tool for industries that need robots to be flexible, strong and smart. As LEVA continues to develop, it could change the way deliveries and material transport happen, making them faster, safer and more efficient for everyone. How much would you trust a robot to handle your valuable or fragile shipments without human supervision? Let us know by writing us at Cyberguy.com/Contact.

    Copyright 2025 CyberGuy.com. All rights reserved. Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better, with contributions for Fox News & FOX Business beginning mornings on "FOX & Friends."
    WWW.FOXNEWS.COM
  • Chinese tech giants reveal how they're dealing with U.S. chip curbs to stay in the AI race

    Tencent and Baidu said stockpiling chips, optimizing AI models and using home-grown semiconductors have helped them progress with the tech.
    WWW.CNBC.COM
  • The Download: the desert data center boom, and how to measure Earth’s elevations

    This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

    The data center boom in the desert
    In the high desert east of Reno, Nevada, construction crews are flattening the golden foothills of the Virginia Range, laying the foundations of a data center city. Google, Tract, Switch, EdgeCore, Novva, Vantage, and PowerHouse are all operating, building, or expanding huge facilities nearby. Meanwhile, Microsoft has acquired more than 225 acres of undeveloped property, and Apple is expanding its existing data center just across the Truckee River from the industrial park. The corporate race to amass computing resources to train and run artificial intelligence models and store information in the cloud has sparked a data center boom in the desert—and it's just far enough away from Nevada's communities to elude wide notice and, some fear, adequate scrutiny. Read the full story.
    —James Temple
    This story is part of Power Hungry: AI and our energy future, our new series shining a light on the energy demands and carbon costs of the artificial intelligence revolution. Check out the rest of the package here.

    A new atomic clock in space could help us measure elevations on Earth
    In 2003, engineers from Germany and Switzerland began building a bridge across the Rhine River simultaneously from both sides. Months into construction, they found that the two sides did not meet: the German side hovered 54 centimeters above the Swiss one. The misalignment happened because the two teams measured elevation from sea level differently. To prevent such costly construction errors, in 2015 scientists in the International Association of Geodesy voted to adopt the International Height Reference Frame, or IHRF, a worldwide standard for elevation. Now, a decade after its adoption, scientists are looking to update the standard by using the most precise clock ever to fly in space. Read the full story.
    —Sophia Chen

    Three takeaways about AI's energy use and climate impacts
    —Casey Crownhart
    This week, we published Power Hungry, a package all about AI and energy. At the center of this package is the most comprehensive look yet at AI's growing power demand, if I do say so myself. This data-heavy story is the result of over six months of reporting by me and my colleague James O'Donnell. Over that time, with the help of leading researchers, we quantified the energy and emissions impacts of individual queries to AI models and tallied what it all adds up to, both right now and for the years ahead. There's a lot of data to dig through, and I hope you'll take the time to explore the whole story. But in the meantime, here are three of my biggest takeaways from working on this project. Read the full story.
    This article is from The Spark, MIT Technology Review's weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

    MIT Technology Review Narrated: Congress used to evaluate emerging technologies. Let's do it again.
    Artificial intelligence comes with a shimmer and a sheen of magical thinking. And if we're not careful, politicians, employers, and other decision-makers may accept at face value the idea that machines can and should replace human judgment and discretion. One way to combat that might be resurrecting the Office of Technology Assessment, a congressional think tank that detected lies and tested tech until it was shuttered in 1995. This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we're publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it's released.

    The must-reads
    I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.
    1 OpenAI is buying Jony Ive's AI startup. The former Apple design guru will work with Sam Altman to design an entirely new range of devices.
    + The deal is worth a whopping billion.
    + Altman gave OpenAI staff a preview of its AI 'companion' devices.
    + AI products to date have failed to set the world alight.
    2 Microsoft has blocked employee emails containing 'Gaza' or 'Palestine'. Although the term 'Israel' does not trigger such a block.
    + Protest group No Azure for Apartheid has accused the company of censorship.
    3 DOGE needs to do its work in secret. That's what the Trump administration is claiming to the Supreme Court, at least.
    + It's trying to avoid being forced to hand over internal documents.
    + DOGE's tech takeover threatens the safety and stability of our critical data.
    4 US banks are racing to embrace cryptocurrency. Ahead of new stablecoin legislation.
    + Attendees at Trump's crypto dinner paid over million for the privilege.
    + Bitcoin has surged to an all-time peak yet again.
    5 China is making huge technological leaps. Thanks to the billions it's poured into narrowing the gap between it and the US.
    + Nvidia's CEO has branded America's chip curbs on China 'a failure.'
    + There can be no winners in a US-China AI arms race.
    6 Disordered eating content is rife on TikTok. But a pocket of creators are dedicated to debunking the worst of it.
    7 The US military is interested in the world's largest aircraft. The gigantic WindRunner plane will have an 80-metre wingspan.
    + Phase two of military AI has arrived.
    8 How AI is shaking up animation. New tools are slashing the costs of creating episodes by up to 90%.
    + Generative AI is reshaping South Korea's webcomics industry.
    9 Tesla's Cybertruck is a flop. Sorry, Elon.
    + The vehicles' resale value is plummeting.
    10 Google's new AI video generator loves this terrible joke. Which appears to originate from a Reddit post.
    + What happened when 20 comedians got AI to write their routines.

    Quote of the day
    "It feels like we are marching off a cliff." —An unnamed software engineering vice president jokes that future developer conferences will be attended by the AI agents companies like Microsoft are racing to deploy, Semafor reports.

    One more thing: What does GPT-3 "know" about me?
    One of the biggest stories in tech is the rise of large language models that produce text that reads like a human might have written it. These models' power comes from being trained on troves of publicly available human-created text hoovered up from the internet. If you've posted anything even remotely personal in English on the internet, chances are your data might be part of some of the world's most popular LLMs. Melissa Heikkilä, MIT Technology Review's former AI reporter, wondered what data these models might have on her, and how it could be misused. So she put OpenAI's GPT-3 to the test. Read about what she found.

    We can still have nice things
    A place for comfort, fun and distraction to brighten up your day.
    + Don't shoot the messenger, but it seems like there's a new pizza king in town.
    + Ranked: every Final Destination film, from worst to best.
    + Who knew that jelly could help to preserve coral reefs? Not I.
    + A new generation of space archaeologists are beavering away to document our journeys to the stars.
    WWW.TECHNOLOGYREVIEW.COM
    The Download: the desert data center boom, and how to measure Earth’s elevations
This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

The data center boom in the desert

In the high desert east of Reno, Nevada, construction crews are flattening the golden foothills of the Virginia Range, laying the foundations of a data center city. Google, Tract, Switch, EdgeCore, Novva, Vantage, and PowerHouse are all operating, building, or expanding huge facilities nearby. Meanwhile, Microsoft has acquired more than 225 acres of undeveloped property, and Apple is expanding its existing data center just across the Truckee River from the industrial park.

The corporate race to amass computing resources to train and run artificial intelligence models and store information in the cloud has sparked a data center boom in the desert—and it’s just far enough away from Nevada’s communities to elude wide notice and, some fear, adequate scrutiny. Read the full story.

—James Temple

This story is part of Power Hungry: AI and our energy future—our new series shining a light on the energy demands and carbon costs of the artificial intelligence revolution. Check out the rest of the package here.

A new atomic clock in space could help us measure elevations on Earth

In 2003, engineers from Germany and Switzerland began building a bridge across the Rhine River simultaneously from both sides. Months into construction, they found that the two sides did not meet: the German side hovered 54 centimeters above the Swiss one. The misalignment happened because the two countries measured elevation from sea level differently.

To prevent such costly construction errors, in 2015 scientists in the International Association of Geodesy voted to adopt the International Height Reference Frame, or IHRF, a worldwide standard for elevation. Now, a decade after its adoption, scientists are looking to update the standard—by using the most precise clock ever to fly in space. Read the full story.

—Sophia Chen

Three takeaways about AI’s energy use and climate impacts

—Casey Crownhart

This week, we published Power Hungry, a package all about AI and energy. At the center of this package is the most comprehensive look yet at AI’s growing power demand, if I do say so myself. This data-heavy story is the result of over six months of reporting by me and my colleague James O’Donnell (and the work of many others on our team).

Over that time, with the help of leading researchers, we quantified the energy and emissions impacts of individual queries to AI models and tallied what it all adds up to, both right now and for the years ahead. There’s a lot of data to dig through, and I hope you’ll take the time to explore the whole story. But in the meantime, here are three of my biggest takeaways from working on this project. Read the full story.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

MIT Technology Review Narrated: Congress used to evaluate emerging technologies. Let’s do it again.

Artificial intelligence comes with a shimmer and a sheen of magical thinking. And if we’re not careful, politicians, employers, and other decision-makers may accept at face value the idea that machines can and should replace human judgment and discretion. One way to combat that might be resurrecting the Office of Technology Assessment, a Congressional think tank that detected lies and tested tech until it was shuttered in 1995.

This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.
1 OpenAI is buying Jony Ive’s AI startup
The former Apple design guru will work with Sam Altman to design an entirely new range of devices. (NYT $)
+ The deal is worth a whopping $6.5 billion. (Bloomberg $)
+ Altman gave OpenAI staff a preview of its AI ‘companion’ devices. (WSJ $)
+ AI products to date have failed to set the world alight. (The Atlantic $)

2 Microsoft has blocked employee emails containing ‘Gaza’ or ‘Palestine’
Although the term ‘Israel’ does not trigger such a block. (The Verge)
+ Protest group No Azure for Apartheid has accused the company of censorship. (Fortune $)

3 DOGE needs to do its work in secret
That’s what the Trump administration is claiming to the Supreme Court, at least. (Ars Technica)
+ It’s trying to avoid being forced to hand over internal documents. (NYT $)
+ DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

4 US banks are racing to embrace cryptocurrency
Ahead of new stablecoin legislation. (The Information $)
+ Attendees at Trump’s crypto dinner paid over $1 million for the privilege. (NBC News)
+ Bitcoin has surged to an all-time peak yet again. (Reuters)

5 China is making huge technological leaps
Thanks to the billions it’s poured into narrowing the gap between it and the US. (WSJ $)
+ Nvidia’s CEO has branded America’s chip curbs on China ‘a failure.’ (FT $)
+ There can be no winners in a US-China AI arms race. (MIT Technology Review)

6 Disordered eating content is rife on TikTok
But a pocket of creators is dedicated to debunking the worst of it. (Wired $)

7 The US military is interested in the world’s largest aircraft
The gigantic WindRunner plane will have an 80-metre wingspan. (New Scientist $)
+ Phase two of military AI has arrived. (MIT Technology Review)

8 How AI is shaking up animation
New tools are slashing the costs of creating episodes by up to 90%. (NYT $)
+ Generative AI is reshaping South Korea’s webcomics industry. (MIT Technology Review)

9 Tesla’s Cybertruck is a flop
Sorry, Elon. (Fast Company $)
+ The vehicles’ resale value is plummeting. (The Daily Beast)

10 Google’s new AI video generator loves this terrible joke
Which appears to originate from a Reddit post. (404 Media)
+ What happened when 20 comedians got AI to write their routines. (MIT Technology Review)

Quote of the day

“It feels like we are marching off a cliff.”

—An unnamed software engineering vice president jokes that future developer conferences will be attended by the AI agents that companies like Microsoft are racing to deploy, Semafor reports.

One more thing

What does GPT-3 “know” about me?

One of the biggest stories in tech is the rise of large language models that produce text that reads like a human might have written it. These models’ power comes from being trained on troves of publicly available human-created text hoovered up from the internet. If you’ve posted anything even remotely personal in English on the internet, chances are your data might be part of some of the world’s most popular LLMs.

Melissa Heikkilä, MIT Technology Review’s former AI reporter, wondered what data these models might have on her—and how it could be misused. So she put OpenAI’s GPT-3 to the test. Read about what she found.

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)

+ Don’t shoot the messenger, but it seems like there’s a new pizza king in town 🍕 ($)
+ Ranked: every Final Destination film, from worst to best.
+ Who knew that jelly could help to preserve coral reefs? Not I.
+ A new generation of space archaeologists is beavering away to document our journeys to the stars.
  • Nvidia’s Jensen Huang thinks U.S. chip curbs failed — and he’s not alone

    Nvidia CEO Jensen Huang has called U.S. semiconductor export controls on China “a failure,” and many chip analysts and pundits think he has a point.
    WWW.CNBC.COM
  • Nvidia to Set Up Research Center in Shanghai, Maintaining Foothold in China

    The plan follows the Trump administration’s recent move to tighten curbs on China’s access to Nvidia’s high-end AI chips.
    WWW.WSJ.COM
CGShares https://cgshares.com