• Dell, Nvidia, and Department of Energy join forces on "Doudna" supercomputer for science and AI

    What just happened? The Department of Energy has announced plans for a new supercomputer designed to significantly accelerate research across a wide range of scientific fields. The initiative highlights the growing convergence between commercial AI development and the computational demands of cutting-edge scientific discovery.
    The advanced system, to be housed at Lawrence Berkeley National Laboratory and scheduled to become operational in 2026, will be named "Doudna" in honor of Nobel laureate Jennifer Doudna, whose groundbreaking work on CRISPR gene editing has revolutionized molecular biology.
    Dell Technologies has been selected to deliver the Doudna supercomputer, marking a significant shift in the landscape of government-funded high-performance computing.
    While companies like Hewlett Packard Enterprise have traditionally dominated this space, Dell's successful bid signals a new chapter. "A big win for Dell," said Addison Snell, CEO of Intersect360 Research, in an interview with The New York Times, noting the company's historically limited presence in this domain.
    Dell executives explained that the Doudna project enabled them to move beyond the longstanding practice of building custom systems for individual laboratories. Instead, they focused on developing a flexible platform capable of serving a broad array of users. "This market had shifted into some form of autopilot. What we did was disengage the autopilot," said Paul Perez, senior vice president and technology fellow at Dell.

    The Perlmutter supercomputer at the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory.
    A defining feature of Doudna will be its use of Nvidia's Vera Rubin platform, engineered to combine the strengths of traditional scientific simulations with the power of modern AI. Unlike previous Department of Energy supercomputers, which relied on processors from Intel or AMD, Doudna will incorporate a general-purpose Arm-based CPU from Nvidia, paired with the company's Rubin AI chips designed specifically for artificial intelligence and simulation workloads.

    The architecture aims to meet the needs of the laboratory's 11,000 users, who increasingly depend on both high-precision modeling and rapid AI-driven data analysis.
    Jensen Huang, founder and CEO of Nvidia, described the new system with enthusiasm. "Doudna is a time machine for science – compressing years of discovery into days," he said, adding that it will let "scientists delve deeper and think bigger to seek the fundamental truths of the universe."
    In terms of performance, Doudna is expected to be over 10 times faster than the lab's current flagship system, making it the Department of Energy's most powerful resource for training AI models and conducting advanced simulations. Jonathan Carter, associate lab director for computing sciences at Berkeley Lab, said the system's architecture was shaped by the evolving needs of researchers – many of whom are now using AI to augment simulations in areas like geothermal energy and quantum computing.
    Doudna's design reflects a broader shift in supercomputing. Traditional systems have prioritized 64-bit calculations for maximum numerical accuracy, but modern AI workloads often benefit from lower-precision operations (such as 16-bit or 8-bit) that enable faster processing speeds. Dion Harris, Nvidia's head of data center product marketing, noted that the flexibility to combine different levels of precision opens new frontiers for scientific research.
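    To make that precision trade-off concrete, here is a minimal Python sketch (standard library only, and unrelated to Doudna's actual software stack) that round-trips a 64-bit value through the IEEE-754 half-precision (16-bit) format mentioned above:

```python
import struct

def to_half(x: float) -> float:
    """Round-trip a Python double (64-bit) through IEEE-754 half precision (16-bit)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

pi64 = 3.141592653589793   # pi at 64-bit precision
pi16 = to_half(pi64)       # only ~3 significant decimal digits survive
print(pi16)                # 3.140625
print(abs(pi64 - pi16))    # rounding error on the order of 1e-3
```

    Each 16-bit value occupies a quarter of the memory of a 64-bit one, so hardware can move and multiply far more of them per cycle; the cost, as the example shows, is that fine numerical detail is rounded away.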
    The supercomputer will also be tightly integrated with the Energy Sciences Network, allowing researchers nationwide to stream data directly into Doudna for real-time analysis. Sudip Dosanjh, director of the National Energy Research Scientific Computing Center, described the new system as "designed to accelerate a broad set of scientific workflows."
    WWW.TECHSPOT.COM
  • Self-Driving Tesla Suddenly Swerves Off the Road and Crashes

    A video that went viral on Reddit shows a Tesla Model 3 with its so-called "Full Self-Driving" driver assistance feature turned on veering off a country road, crashing into some fencing, and flipping onto its roof.
    An image shared by Wally, a Tesla owner in Alabama, shows the aftermath: deployed airbags, smashed windows, and a ripped-up metal wire fence.
    It's unclear what actually caused the crash, as nothing in particular stands out as far as road conditions go. The vehicle drives over several shadows cast on the road by nearby trees, and a truck can be seen driving in the opposite direction just before the driver assistance feature goes haywire.
    It's yet another baffling incident involving Tesla's controversial driver assistance software, which has already drawn plenty of scrutiny from regulators after being linked to countless crashes and dozens of deaths.
    It's particularly harrowing considering that Tesla is planning to roll out a robotaxi service in Austin, Texas, in less than a month's time, highlighting that the Elon Musk-led company may still be woefully unprepared and putting the public at risk. The company's misleadingly named Full Self-Driving feature still requires drivers to be able to take over control at any time. However, that requirement still appears to fly over the heads of many of Tesla's customers.
    "I used FSD every chance I could get. I actually watched YouTube videos to tailor my FSD settings and experience," Wally told Electrek. "I was happy it could drive me to Waffle House and I could just sit back and relax while it would drive me on my morning commute to work."
    "I was driving to work, had Full Self-Driving on. The oncoming car passed, and the wheel started turning rapidly, driving into the ditch, and side-swiping the tree, and the car flipped over," he added. "I did not have any time to react."
    Fortunately, he only incurred a cut to his chin that required seven stitches. His Model 3 featured Tesla's latest Hardware 4 onboard computer, running the latest version of FSD.
    Despite Musk's promises of kicking off a driverless ride-hailing service in a matter of weeks, we're still likely many years from his vision of hundreds of thousands of truly self-driving Teslas on the road.
    In a recent podcast interview, the company's head of Autopilot and AI software, Ashok Elluswamy, admitted that its driving tech is still a "couple of years" behind the likes of Waymo. Considering how easily cars can still crash with FSD turned on in the absence of any apparent dangers, Elluswamy may have a point.
    More on self-driving: Terrifying Footage Shows Cybertruck on Self-Driving Mode Swerve Into Oncoming Traffic
    FUTURISM.COM
  • Tesla Executive Admits That Self-Driving Is Going Nowhere Fast

    Nearly ten years ago, in 2015, rising tech entrepreneur Elon Musk made a bold announcement: Tesla vehicles would be fully driving themselves by 2017.
    The billionaire was talking about vehicles with Level 5 autonomy — a designation by the Society of Automotive Engineers (SAE) commonly used as the benchmark for a full self-driving car that can drive wherever its passengers please with no intervention.
    He repeated those claims in January 2016, saying "summon should work anywhere connected by land and not blocked by borders" within two years. For example, if you're in Los Angeles and your Tesla is in New York, you'd be able to summon it to you from across the country — at least according to his vision, which many took as gospel.
    By June of that year, Musk called Level 5 autonomy a "solved problem." He did so again in 2017. And again in 2018. Then the next year. And the year after that. You probably see where this is going.
    Now in 2025, Tesla isn't looking meaningfully closer to Level 5 autonomy than in 2015. Though Tesla rolled out its Autopilot features en masse that same year, it has only achieved SAE Level 2 — enough for a driver to "take their hands off the wheel and let their vehicle take control when driving in certain conditions." (That hasn't stopped numerous motorists from overestimating the system's capabilities and dying as a result.)
    That isn't likely to change anytime soon, according to an intriguing insider: Tesla's head of Autopilot and AI software, Ashok Elluswamy. The Tesla official was speaking on the Gobinath Podcast, an Indian-English interview show, where he admitted the EV company is still way behind its competitors — despite over a decade of self-driving development.
    "Technically, Waymo is already performing," Elluswamy admitted, referencing Google's autonomous vehicle program. "We are lagging by maybe a couple of years."
    This is despite Tesla's — also long-promised — fully self-driving Robocab service supposedly going live in Austin, Texas, this coming June. It's not clear how this will work, as Tesla would need to demonstrate a vehicle capable of driving itself at SAE Level 4 to transport riders without the need for human intervention. So far, Musk has been uncharacteristically silent on the prospects of a Level 4 vehicle.
    Late in 2024, Greg McGuire, managing director of the autonomous vehicle research facility at the University of Michigan, told SAE Media that Tesla "is not — from what I've seen — ready for general Level 4 operation." "Will they be there by 2027? At [UofM], we still think there's a couple of key scientific barriers," McGuire said.
    That makes Musk's ever-stretching timeline for full self-driving — let alone a Robocab network — tenuous at best, and an absurd fantasy at worst. Still, a billionaire can dream.
    FUTURISM.COM
  • Going off autopilot in ad monetization: 4 innovative strategies to start implementing

    It's easy to stick with strategies that work - but incremental growth comes from balancing exactly that with constant testing and experimentation. That's why at Appfest 2022, Samantha Benjamin, Director of Growth at Supersonic, explored four ways you can break old patterns and be more experimental with your monetization strategy - or, as she puts it, “get off of autopilot."
    Get inspired by successful creatives
    First, Samantha recommended getting inspired by features in what she calls “booster creatives” - creatives that give you 3-5x more installs for the same cost as other creatives. For example, her team saw that creatives with realistic obstacles performed significantly better than creatives with cartoonish ones - so they decided to take those realistic obstacles and actually add them to their game Going Balls. As a result, LTV grew on both iOS and Android, and D7 ARPU jumped 5-7%. As your creative team finds these “boosters”, make sure they pass them directly to your monetization team. This way, with every new idea, you can consider and optimize any potential monetization opportunities.
    Building a sophisticated interstitial player experience
    Though interstitials are a major source of revenue, the potential impact on retention sometimes deters developers from monetizing with them. So to ensure players have the best possible interstitial experience, it’s critical to adjust it to your players’ engagement behavior.
    No-touch interstitials
    For example, when a player hasn’t touched their screen for at least 20 seconds, Supersonic displays what they call “no-touch interstitials”. This player is likely taking a break - but they’re going to return to their phone eventually, so the ad will be the first thing they see. CPMs are high with this placement, and it’s a win-win - LTV is high, advertisers get installs, and players enjoy a more sophisticated interstitial experience.
    Before or after the end-level screen
    Additionally, Supersonic tested adapting interstitials for users who reject rewarded video offers. Usually, developers just show interstitials when this happens, but clearly these users don’t want to engage with ads. Supersonic tried a different approach: showing interstitials to this segment during natural pauses in the game, like commercials. They tested this by putting interstitial ads for these users right before or after the end-level screen - and engagement boosted as a result.
    Looking at other genres
    Next, try broadening your sources of inspiration - beyond just games competing in your genre. Other kinds of games might seem drastically different, but if they have similar motivations, they can be an ideal learning opportunity.
    Highlighting progress with a leaderboard
    Inspired by PvP games, Supersonic decided to add an automated leaderboard that pops up at the end of their hyper-casual games. By creating a competitive atmosphere and displaying players’ progress, they boosted LTV, and ARPU lifted 12% on average.
    Celebrating wins with confetti
    The Supersonic team noticed other genres creating excitement within their games, so they decided to add a burst of confetti whenever players achieved something. Simply by emphasizing their players’ success and making them feel like winners, Supersonic saw their ARPU jump by 15%.
    The power of music
    As Samantha puts it: “never underestimate the power of music,” especially in ad-oriented games. When Supersonic tested incorporating more music into their games, they saw a 10% ARPU uplift - simply by tweaking the music and testing different volumes and sound effects.
    Staying open to ideas
    Finally, Samantha explained the value of having dedicated time to think of new ideas. In fact, when one growth operations manager at Supersonic pitched an idea, it inspired a real change in their games: timed treasure chests. To increase session length, as the user approached the average session length, they would see a pop-up chest with a timer - encouraging them to keep playing and wait for their prize to become available. This proved so successful at increasing session length that Supersonic implemented it in three of their biggest games.
    Ultimately, new monetization ideas can come from anywhere and everywhere - so it’s crucial to stay on the lookout, trust your data - and, when the opportunity strikes, don’t be afraid to try going off autopilot.
    Watch the session here:
    #going #off #autopilot #monetization #innovative
    Going off autopilot in ad monetization: 4 innovative strategies to start implementing
    It's easy to stick with strategies that work - but incremental growth comes from balancing exactly that with constant testing and experimentation. That's why at Appfest 2022, Samantha Benjamin, Director of Growth at Supersonic, explored four ways you can break old patterns and be more experimental with your monetization strategy - or, as she puts it, “get off of autopilot."Get inspired by successful creativesFirst, Samantha recommended getting inspired by features in what she calls “booster creatives,” or creatives that give you 3-5x more installs for the same cost as other creatives.For example, her team saw that creatives with realistic obstacles performed significantly better than creatives with cartoonish ones - so they decided to take those realistic obstacles and actually add them to their game Going Balls. As a result, LTV grew on both iOS and Android, and D7 ARPU jumped 5-7%.As your creative team finds these “boosters”, make sure they pass them directly to your monetization team. This way, with every new idea, you can consider and optimize any potential monetization opportunities.Building a sophisticated interstitial player experienceThough interstitials are a major source of revenue, the potential impact on retention sometimes deters developers from monetizing with them . So to ensure players have the best possible interstitial experience, it’s critical to adjust it to your players’ engagement behavior.No-touch interstitialsFor example, when a player hasn’t touched their screen for at least 20 seconds, Supersonic displays what they call “no touch interstitials”. This player is likely taking a break - but they’re going to return to their phone eventually, so the ad will be the first thing they see. 
CPMs are high with this placement, and it’s a win-win - LTV is high, advertisers get installs, and players can enjoy a more sophisticated interstitial experience.Before or after end-level screenAdditionally, Supersonic tested adapting interstitials for users who reject rewarded video offers. Usually, developers just show interstitials when this happens, but clearly these users don’t want to engage with ads. Supersonic tried a different approach: showing interstitials to this segment during natural pauses in the game, like commercials. They tested this by putting interstitial ads for these users right before or after the end-level screen - and engagement boosted as a result.Looking at other genresNext, try broadening your sources of inspiration - beyond just games competing in your genre. Other kinds of games might seem drastically different, but if they have similar motivations, they can be an ideal learning opportunity.Highlighting progress with a leaderboardInspired by PvP games, Supersonic decided to add an automated leaderboard that pops up at the end of their hyper-casual games. By creating a competitive atmosphere and displaying players’ progress, they boosted LTV and ARPU lifted 12% on average.Celebrating wins with confettiThe Supersonic team noticed other genres creating excitement within their games, so they decided to add a burst of confetti in their games whenever players achieved something. Simply by emphasizing their players’ success and making them feel like winners, Supersonic saw their ARPU jump by 15%.The power of musicAs Samantha puts it: “never underestimate the power of music,” especially in ad-oriented games. When Supersonic tested incorporating more music into their games, they saw a 10% ARPU uplift - simply by tweaking the music and testing different volumes and sound effects.Staying open to ideasFinally, Samantha explains the value of having dedicated time to think of new ideas. 
In fact, when one growth operations manager at Supersonic pitched an idea, it inspired a real change in their games: timed treasure chests. To increase session length, as the user was approaching the average session length, they would see a pop-up chest with a timer - encouraging the user to keep playing and wait for their prize to be available. This proved so successful at increasing session length that Supersonic implemented this into three of their biggest games.Ultimately, new monetization ideas can come from anywhere and everywhere - so it’s crucial to stay on the lookout, trust your data - and, when the opportunity strikes, don’t be afraid to try going off autopilot.Watch the session here: #going #off #autopilot #monetization #innovative
    UNITY.COM
    Going off autopilot in ad monetization: 4 innovative strategies to start implementing
    It's easy to stick with strategies that work - but incremental growth comes from balancing exactly that with constant testing and experimentation. That's why at Appfest 2022, Samantha Benjamin, Director of Growth at Supersonic, explored four ways you can break old patterns and be more experimental with your monetization strategy - or, as she puts it, “get off of autopilot."Get inspired by successful creativesFirst, Samantha recommended getting inspired by features in what she calls “booster creatives,” or creatives that give you 3-5x more installs for the same cost as other creatives (and can even change the marketability power of your game).For example, her team saw that creatives with realistic obstacles performed significantly better than creatives with cartoonish ones - so they decided to take those realistic obstacles and actually add them to their game Going Balls. As a result, LTV grew on both iOS and Android, and D7 ARPU jumped 5-7%.As your creative team finds these “boosters”, make sure they pass them directly to your monetization team. This way, with every new idea, you can consider and optimize any potential monetization opportunities.Building a sophisticated interstitial player experienceThough interstitials are a major source of revenue, the potential impact on retention sometimes deters developers from monetizing with them . So to ensure players have the best possible interstitial experience, it’s critical to adjust it to your players’ engagement behavior.No-touch interstitialsFor example, when a player hasn’t touched their screen for at least 20 seconds, Supersonic displays what they call “no touch interstitials”. This player is likely taking a break - but they’re going to return to their phone eventually, so the ad will be the first thing they see. 
CPMs are high with this placement, and it’s a win-win - LTV is high, advertisers get installs, and players can enjoy a more sophisticated interstitial experience.Before or after end-level screenAdditionally, Supersonic tested adapting interstitials for users who reject rewarded video offers. Usually, developers just show interstitials when this happens, but clearly these users don’t want to engage with ads. Supersonic tried a different approach: showing interstitials to this segment during natural pauses in the game, like commercials. They tested this by putting interstitial ads for these users right before or after the end-level screen - and engagement boosted as a result.Looking at other genresNext, try broadening your sources of inspiration - beyond just games competing in your genre. Other kinds of games might seem drastically different, but if they have similar motivations, they can be an ideal learning opportunity.Highlighting progress with a leaderboardInspired by PvP games, Supersonic decided to add an automated leaderboard that pops up at the end of their hyper-casual games. By creating a competitive atmosphere and displaying players’ progress, they boosted LTV and ARPU lifted 12% on average.Celebrating wins with confettiThe Supersonic team noticed other genres creating excitement within their games, so they decided to add a burst of confetti in their games whenever players achieved something (like shooting a basketball through the hop). Simply by emphasizing their players’ success and making them feel like winners, Supersonic saw their ARPU jump by 15%.The power of musicAs Samantha puts it: “never underestimate the power of music,” especially in ad-oriented games. When Supersonic tested incorporating more music into their games, they saw a 10% ARPU uplift - simply by tweaking the music and testing different volumes and sound effects.Staying open to ideasFinally, Samantha explains the value of having dedicated time to think of new ideas. 
In fact, when one growth operations manager at Supersonic pitched an idea, it inspired a real change in their games: timed treasure chests. To increase session length, as the user was approaching the average session length, they would see a pop-up chest with a timer - encouraging the user to keep playing and wait for their prize to be available. This proved so successful at increasing session length that Supersonic implemented this into three of their biggest games.Ultimately, new monetization ideas can come from anywhere and everywhere - so it’s crucial to stay on the lookout, trust your data - and, when the opportunity strikes, don’t be afraid to try going off autopilot.Watch the session here: https://www.youtube.com/watch?v=RMqiWKAFENY
    0 Comentários 0 Compartilhamentos 0 Anterior
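The no-touch interstitial mechanic described above is easy to prototype: an idle timer that resets on every touch and queues an ad once the 20-second threshold passes. A minimal Python sketch of that logic follows; the class and method names are hypothetical, and a real game would wire this into its engine's input loop and ad SDK rather than stand-alone code.

```python
import time
from typing import Optional

IDLE_THRESHOLD_S = 20.0  # Supersonic's reported idle window before a no-touch ad


class NoTouchInterstitialTrigger:
    """Idle-timer logic behind a "no-touch interstitial": if the player
    hasn't touched the screen for IDLE_THRESHOLD_S seconds, queue an ad so
    it's the first thing they see on returning. Hypothetical sketch only."""

    def __init__(self, idle_threshold_s: float = IDLE_THRESHOLD_S) -> None:
        self.idle_threshold_s = idle_threshold_s
        self.last_touch = time.monotonic()
        self.ad_pending = False

    def on_touch(self) -> None:
        # Every touch resets the idle clock and cancels any queued ad.
        self.last_touch = time.monotonic()
        self.ad_pending = False

    def tick(self, now: Optional[float] = None) -> bool:
        # Call once per frame; returns True exactly once per idle period,
        # at the moment an interstitial should be queued.
        now = time.monotonic() if now is None else now
        if not self.ad_pending and now - self.last_touch >= self.idle_threshold_s:
            self.ad_pending = True
            return True
        return False
```

The `now` parameter exists so the timer can be driven by the game loop's clock (and tested deterministically) instead of always calling `time.monotonic()` internally.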
  • Don’t Automate the Wrong Thing: Lessons from Building Agentic Hiring Systems

    Pankaj Khurana, VP Technology & Consulting, Rocket | May 19, 2025 | 3 Min Read | Image: ElenaBs via Alamy Stock

    For the past four years, I've been building AI-powered tools that help recruiters do their job better. Before that, I was a recruiter myself -- reading resumes, making calls, living the grind. And here's one thing I've learned from straddling both worlds: In hiring, automating the wrong thing can quietly erode everything that makes your process work.

    As engineering leaders, we're constantly told to streamline and optimize. Move fast. But if you automate the wrong step -- like how candidates are filtered, scored, or messaged -- you might be replacing good human judgment with rigid shortcuts. And often, you won't notice the damage until weeks later, when engagement plummets or teams stop trusting your system.

    The Allure of Automation

    Hiring is messy. Resumes come in all shapes. Job descriptions are vague. Recruiters are overworked. AI seems like a godsend. We start by automating outreach. Then scoring. Then matching. Eventually, someone asks: can this whole thing run without a person?

    But here's the rub: many hiring decisions are deeply contextual. Should a product manager with a non-traditional background be fast-tracked for a high-growth SaaS role? That's not a "yes/no" the system can decide for you. Early on at Rocket, we made that mistake. Our scoring engine prioritized resumes based solely on skills overlap. It was fast -- but completely off for roles that required nuance. We had to pause, rethink, and admit: "This isn't working like we hoped."

    What Agentic Systems Do Well

    I'm not anti-automation. Far from it. But it has to be paired with human review. We found that agentic systems -- AI tools with autonomy to assist but not decide -- were far more effective. Think copilots, not autopilots. For example, our system can:

    - Suggest better phrasing for job descriptions
    - Flag resumes that match roles 80% or more
    - Recommend outreach templates based on role and tone

    But it never auto-rejects or sends messages without review. The AI suggests; the recruiter decides. That balance makes all the difference.

    Lessons Learned: Where Automation Fails

    One of our biggest missteps? Automating outreach too heavily. We thought sending personalized AI-written emails at scale would boost response rates. It didn't. Candidates sensed something off. The emails looked polished but felt cold. Engagement dropped. We eventually went back to having humans rewrite the AI drafts. That one shift nearly doubled our positive response rate. Why? Because candidates want to feel seen -- not sorted.

    A CIO's Checklist: What Not to Automate

    If you're leading an AI initiative in hiring, here's a checklist we now swear by:

    - Don't automate decisions that impact trust. Rejections, scores, hiring calls? Keep a human in the loop.
    - Avoid automating tasks with high context needs. A great candidate might not use trendy buzzwords. That doesn't make them a bad fit.
    - Be careful with candidate-facing automation. Generic outreach harms brand perception.
    - Do automate the repetitive stuff. Parsing, meeting scheduling, drafts -- automate those and give time back to your team.

    Human-AI Collaboration Wins

    We saw the best outcomes when recruiters felt like they had an assistant -- not a competitor. Here's one quick story: A recruiter used our AI to shortlist 10 profiles for a hard-to-fill GTM analyst role. She reviewed five, adjusted the messaging tone slightly, and got two responses in under a day. Same tools -- different mindset. Feedback loops mattered too. We built in ways for users to rate suggestions. The model kept improving -- and more importantly, people trusted it more.

    Final Thought: Think Like a System Designer

    If you're building AI into your hiring stack, go beyond automation. Think augmentation. Don't just ask, "Can this task be automated?" Instead, ask, "If I automate this, what do we lose in context, empathy, or nuance?" Agentic hiring systems can deliver speed and scale -- but only if we let people stay in control of what matters most.

    About the Author: Pankaj Khurana is VP of Technology & Consulting at Rocket, an AI-driven recruiting firm. He has over 20 years of experience in hiring and tech and has led the development of agentic hiring tools used by top US startups.
    WWW.INFORMATIONWEEK.COM
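The "flag, don't decide" pattern the article describes can be sketched in a few lines: the system routes high-scoring resumes (the 80%+ matches) into a recruiter's review queue and leaves everyone else in the pool, with deliberately no auto-reject path. This is a hypothetical Python illustration; the names, the `Candidate` shape, and the upstream scoring model are all assumptions, not Rocket's actual API.

```python
from dataclasses import dataclass
from typing import List, Tuple

MATCH_THRESHOLD = 0.80  # "flag resumes that match roles 80% or more"


@dataclass
class Candidate:
    name: str
    match_score: float  # 0.0-1.0, produced by some upstream scoring model


def triage(candidates: List[Candidate]) -> Tuple[List[Candidate], List[Candidate]]:
    """Split candidates into a human review queue and the remaining pool.
    Note there is no 'rejected' bucket: the agent flags, the recruiter
    decides. Low scorers stay in the pool for human judgment."""
    review_queue = [c for c in candidates if c.match_score >= MATCH_THRESHOLD]
    remaining = [c for c in candidates if c.match_score < MATCH_THRESHOLD]
    return review_queue, remaining
```

The design choice worth noting is structural, not algorithmic: because `triage` can only ever route, not reject, the trust-sensitive decision physically cannot be automated away.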
  • Tesla's Robotaxi Rollout Looks Like A Disaster Waiting To Happen

    Ready or not–and despite a spotty safety record–the EV maker is racing to launch a pilot ride service in Austin to show off its self-driving chops.

    Elon Musk is rolling out a handful of Tesla robotaxis in Austin next month, where up to 20 self-driving electric Model Ys will be unleashed to ferry passengers around the Texas city’s streets. He’s betting the future of Tesla on their success, as the automaker’s electric vehicle revenue tanks thanks to faster-growing Chinese rivals and a political backlash against Musk’s right-wing politics and role as job-slasher-in-chief for the Trump Administration.

    But there’s a big hitch: Tesla hasn’t proven its self-driving taxis are safe enough to start delivering rides. Given the deadly track record of its misleadingly named Autopilot and Full Self-Driving software, Musk’s failure to provide detailed safety and technical data about Tesla’s technology, and his determination to rely on cheap cameras instead of more robust sensors to navigate complicated urban environments, the Austin rollout could be a debacle.


    “It's going to fail for sure,” Dan O’Dowd, a long-time critic of Musk’s autonomous driving claims who’s spent his own money on Super Bowl commercials calling out Autopilot and FSD safety flaws, told Forbes. His anti-Tesla initiative, The Dawn Project, tests every update of FSD - a more advanced version of which is powering Musk’s robotaxis in Austin - as soon as it's available. That update is to roll out to all Tesla drivers who pay a monthly subscription fee.

    A pre-production Tesla Cybercab at the Petersen Automotive Museum in Los Angeles. Copyright 2024 The Associated Press. All rights reserved.
    O’Dowd has been putting the current version of FSD through its paces. “We drove it around Santa Barbara for 80 minutes and there were seven failures,” said O’Dowd, whose company Green Hills Software supplies security tech to defense and aerospace industry customers. “If there had not been a driver sitting in the driver's seat, it would’ve hit something.”
    While the company hasn’t booked a dollar of robotaxi revenue, that hasn’t stopped the world’s wealthiest person from declaring victory. “I don’t see anyone being able to compete with Tesla at present,” Musk said on the company’s April 22 results call. His assessment may be premature.

    The sole public demonstration of Tesla’s robotaxi chops was a set of staged drives of its new “Cybercab” at Warner Brothers Studio in Los Angeles last October. The event included hauling invited Tesla fans around a fake cityscape - free of pedestrians, but with lots of Tesla technicians keeping close tabs on the vehicles. It struck safety researcher Noah Goodall, who published a technical analysis of Tesla’s safety data independently of his role with the Virginia Department of Transportation, as more amusement park attraction than real-world test.
    “It was just operating vehicles on a closed track on a movie lot. It was not impressive at all,” he said. “Navigating a real urban environment with uncertainty, other parties moving around, situations where just braking is not enough, that’s difficult. I was looking for a signal this was ready. I didn't get that.”
    Autonomy Promises
    In the decade since Tesla began selling customers its Autopilot and FSD features - for which it charges a subscription fee - the software has been linked to several fatal accidents in which human drivers trusted the tech to drive their car, only for it to crash. The National Highway Traffic Safety Administration has opened multiple probes of Tesla’s Autopilot feature since 2016, including one last year to determine if it needed additional safety features after linking Autopilot to 13 deaths. Last October, NHTSA also began investigating problems with FSD linked to two fatalities.
    Despite the names, this software has always been designed to have a human behind the wheel. For the past decade, Musk has repeatedly claimed “full autonomy”–where a car can drive without human assistance–was only months or a year away, repeatedly missing his targets. Now, with Tesla’s EV sales down 13% in the first quarter, the company needs some buzz to reassure investors CEO Musk can turn things around. Robotaxis, as well as AI and humanoid robots, are exactly that, according to Musk.
    So it’s running extensive tests in downtown Austin. “There’s just always a convoy of Teslas going all over to Austin in circles,” Musk said on the call. But a recent Business Insider story, citing interviews with former Tesla test drivers, doesn’t inspire confidence. The program “feels very forced,” one former worker said. "It's this breakthrough moment for Tesla, but there is also this feeling of so many last-minute details being up in the air.”
    The downtown Austin skyline. Getty Images
    Tesla’s program will operate in a very limited area of Austin and rely heavily on remote operators to minimize accidents, according to an executive with another autonomous tech company, based on conversations with Texas officials, who asked not to be identified as the matter isn’t public.
    To back up the AI driving the vehicles, Tesla has also hired human staff to monitor and assist if they get into jams, taking full control if necessary. “As we iterate on the AI that powers them, we need the ability to access and control them remotely,” the company said in a posting for one such job. Alphabet Inc.’s Waymo, the leader in robotaxi tech, also uses remote operators to assist the vehicles by providing suggested solutions to tricky situations, but those people don’t actually drive them. Lag and latency in cellular networks make remote operations unsafe.
    Limited Data
    Along with limited tests, there’s a dearth of trustworthy data about how well Tesla’s self-driving software operates. The company does file occasional safety performance reports about the software, but those reports aren’t peer-reviewed by outside technical experts and frame the data as positively as possible, according to Goodall, a technical witness in a lawsuit against Tesla over the death of Walter Huang, killed in 2018 when his Model X slammed into a highway divider while running on Autopilot.
    “With Full Self-Driving, when they first started publishing numbers on that, they neglected to share that they’d only rolled the software out to drivers who had a very high safety score of 90 or above,” he said. “So of course the data showed it was safer, as your safest drivers were the only ones that had it.”
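Goodall's point is a textbook selection effect: if a feature ships only to drivers who already crash less, the treated cohort looks safer even when the feature itself does nothing. A toy simulation (entirely hypothetical numbers, not Tesla's actual data) makes the bias visible:

```python
import random

random.seed(0)  # reproducible toy example

# Hypothetical population: each driver has a "safety score" in [50, 100],
# and crash probability falls linearly as the score rises. The "feature"
# under test is a no-op -- it changes nothing about crash risk.
drivers = []
for _ in range(100_000):
    score = random.uniform(50, 100)
    crashed = random.random() < (100 - score) / 500  # safer drivers crash less
    drivers.append((score, crashed))

overall_rate = sum(c for _, c in drivers) / len(drivers)

# Roll the feature out only to drivers scoring 90+, as in the FSD rollout.
cohort = [(s, c) for s, c in drivers if s >= 90]
cohort_rate = sum(c for _, c in cohort) / len(cohort)

# The 90+ cohort shows a much lower crash rate purely because of who was
# selected -- no feature effect exists in this simulation at all.
print(f"overall crash rate: {overall_rate:.3f}, 90+ cohort: {cohort_rate:.3f}")
```

Any before/after comparison built on the 90+ cohort would credit the no-op feature with the entire gap, which is exactly the flaw Goodall describes in the published numbers.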
    By contrast, Waymo frequently posts detailed reports on how its robotaxis are performing, claiming the data is peer-reviewed by experts.
    Tesla also hasn’t yet shared details with the public about where in Austin it will offer its robotaxi service or exactly how it will operate. The city’s police and fire departments told Forbes the company contacted Austin’s Autonomous Vehicle Task Force, which includes their staff, and the city provided Tesla with “maps of schools and school zones; information about traffic control for special events; and information about our fire and police vehicles and procedures.”
    But a request to see communications between Tesla and the city was denied. “The City of Austin is withholding responsive documents without a ruling from the Attorney General’s office, as permitted by law,” it said in an email. “All responsive information has been withheld due to 3rd party.”
    The city didn’t respond to a question to confirm Tesla is that third party. The company, Musk and Ashok Elluswamy, head of Tesla’s autonomous vehicle program, didn’t respond to emails about the Austin rollout.
    NHTSA this week requested details about Tesla’s Austin plans to understand how the vehicles perform in bad weather. It’s been investigating Tesla collisions involving Autopilot and FSD in poor visibility situations since last October. It’s not clear if the company has responded yet.
    Tesla has had a permit to test autonomous vehicles in California for a decade, which requires companies to share safety data. Numerous competitors, including Waymo, Amazon’s Zoox, which hopes to operate robotaxis this year, Nuro and even Apple, which abandoned its program, have all submitted data on test miles logged including “disengagements”–when a human driver has to take over–as well as accident reports. Tesla hasn’t.
    Not Just Driving
    It’s hard to talk about Musk’s robotaxi dreams without comparing his approach to Waymo’s. The Alphabet unit has spent 16 years and billions of dollars trying to master every aspect of what a robotaxi has to do. Long before it gave its first paid rides to customers in Phoenix in 2018, the company tested intensely on public roads, privately at the “Castle,” its test facility at a decommissioned Air Force base in Central California, and with endless miles in virtual simulation to train its AI.
    A Waymo robotaxi in San Francisco. (dpa/picture alliance via Getty Images)
    Recognizing that robotaxis aren’t just a technical challenge, it also recruited people from the airline industry and businesses specializing in customer service. For robotaxis to work, the cars have to be good at doing small things that can be tough to master but are critical, like picking up and dropping off passengers.
    “We've been working on that for a long time,” said Chris Ludwick, director of Waymo’s product management team. “The first challenge with PUDO (the company’s shorthand for pickup, drop-off) is that when you get there, the on-road scene is going to be somewhat different each time. You may encounter construction or a stopped delivery truck or something like that. This leads to a whole suite of challenges of what do you do when you can't do the exact thing that you said to the rider when they requested the ride.”
    That includes developing a sophisticated app to guide passengers to the safest, most convenient spots for them and other road users. “You can't just block traffic. That's unacceptable. If you do that the community gets upset,” Ludwick said. “There's just a lot of small details you have to get right.”
    As for safety, Waymo has avoided major accidents, injuries and fatalities so far, but its AI-enabled driver isn’t flawless. The company just recalled software in its fleet to fix a flaw that could cause vehicles to hit chains, gates and other barriers, following an NHTSA investigation.
    Cheaper Robotaxis
    In all the years Musk has promised autonomous Teslas and a robotaxi service, he hasn’t talked about what Tesla is doing to master ride-service essentials. But he does talk a big game about Tesla’s cost advantage.
    “The issue with Waymo’s car is it costs way more money,” the billionaire said on Tesla’s results call last month. “Their car is very expensive, made in low volume. Teslas probably cost 25% or 20% of what a Waymo costs and are made in very high volume.”
    A base Model Y with FSD software costs consumers about $55,000 before taxes. While Waymo doesn’t disclose the cost of its modified, electric Jaguar I-PACE robotaxis, the lidar, radar, computers and other sensors mean it’s likely double that of Tesla’s vehicles. Those costs should drop substantially over the next year or so as Waymo shifts to lower-cost sensors and cheaper vehicles, including Hyundai’s Ioniq 5 and a small electric van from China’s Zeekr.
    Boasts about cheaper Tesla robotaxis will be meaningless if the cars can’t safely pick up and drop off riders, yield to pedestrians and avoid collisions, all without causing traffic jams.
    That’s made harder by the fact that Tesla uses eight 5-megapixel cameras as the main sensors for its system–far lower resolution than the 48-megapixel system on Apple’s iPhone 16. They’re inexpensive, but struggle with sunlight glare and low light conditions. Musk denied that was the case on Tesla’s April 22 call, but tests by O’Dowd’s Dawn Project after that found FSD disengages when directly facing the sun.
    “We went out and took the car and drove it directly into the setting sun and guess what: it gave up,” O’Dowd said. “It starts flashing and it starts panicking, red lights going, it starts making noises, says put your hands back on the wheel.”
    By contrast, Waymo uses multiple sensors, including the much more expensive lidar, to ensure its vehicles see all potential road hazards, in daylight or at night, in 3D.
    Elon Musk attends a Cabinet meeting with President Donald Trump on April 30, 2025. (The Washington Post via Getty Images)
    “Musk has repeatedly said lidar is expensive and not needed,” said Missy Cummings, an artificial intelligence expert who advised NHTSA on autonomous vehicles. “He thinks having it does not add enough benefit to outweigh the cost. This is a pretty typical engineering argument in general but incorrect in this particular case.”
    After the Austin rollout, Musk said last month the goal is to expand to other U.S. markets, China and Europe, “limited only by regulatory approvals.” And one day soon, he envisions every person who owns a Tesla flipping a switch and deploying their car while not in use to a Tesla robotaxi network, helping them make additional cash on the side (as long as they pay Tesla $99 per month).
    The world’s wealthiest person has achieved remarkable things with Tesla’s EVs, SpaceX rockets and Starlink satellites. But for years he’s also repeatedly failed to deliver big ideas he touted as potential game-changers or massive moneymakers, including battery swapping stations, solar tile roofs, the Hyperloop and high-speed underground transportation networks created by his Boring Co. Whether self-driving vehicles join that list remains to be seen.
    Given that Musk has spent the past decade promising and failing to deliver vehicles that safely drive themselves, let alone pick up riders, his track record isn’t looking good.
    Critics have a harsher interpretation. “It's all lies, everything he says,” said O’Dowd.
    Tesla's Robotaxi Rollout Looks Like A Disaster Waiting To Happen
    Ready or not–and despite a spotty safety record–the EV maker is racing to launch a pilot ride service in Austin to show off its self-driving chops.
    Elon Musk is rolling out a handful of Tesla robotaxis in Austin next month, where up to 20 self-driving electric Model Ys will be unleashed to ferry passengers around the Texas city’s streets. He’s betting the future of Tesla on their success, as the automaker’s electric vehicle revenue tanks thanks to faster-growing Chinese rivals and a political backlash against Musk’s right-wing politics and role as job-slasher-in-chief for the Trump Administration.
    But there’s a big hitch: Tesla hasn’t proven its self-driving taxis are safe enough to start delivering rides. Given its misleadingly named Autopilot and Full Self-Driving (FSD) software’s deadly track record, Musk’s failure to provide detailed safety and technical data about Tesla’s technology and his determination to rely on cheap cameras instead of more robust sensors to navigate complicated urban environments, the Austin rollout could be a debacle.
    “It's going to fail for sure,” Dan O’Dowd, a long-time critic of Musk’s autonomous driving claims who’s spent his own money on Super Bowl commercials to call out Autopilot and FSD safety flaws, told Forbes. His anti-Tesla initiative, The Dawn Project, tests every update of FSD, a more advanced version of which is powering Musk’s robotaxis in Austin, as soon as they’re available. That update is to roll out to all Tesla drivers who pay a $99 monthly subscription fee.
    A pre-production Tesla Cybercab at the Petersen Automotive Museum in Los Angeles. (Copyright 2024 The Associated Press. All rights reserved.)
    O’Dowd has been putting the current version of FSD through its paces.
    “We drove it around Santa Barbara for 80 minutes and there were seven failures,” said O’Dowd, whose company Green Hills Software supplies security tech to defense and aerospace industry customers. “If there had not been a driver sitting in the driver's seat, it would’ve hit something.”
    While the company hasn’t booked a dollar of robotaxi revenue, that hasn’t stopped the world’s wealthiest person from declaring victory. “I don’t see anyone being able to compete with Tesla at present,” Musk said on the company’s April 22 results call. His assessment may be premature.
    The sole public demonstration of Tesla’s robotaxi chops was staged drives of its new “Cybercab” at Warner Brothers Studio in Los Angeles last October. The event included hauling invited Tesla fans around a fake cityscape–free of pedestrians but with lots of Tesla technicians keeping close tabs on vehicles. It struck safety researcher Noah Goodall, who published a technical analysis of Tesla’s safety data, independently from his role with the Virginia Department of Transportation, as more amusement park attraction than real-world test.
    “It was just operating vehicles on a closed track on a movie lot. It was not impressive at all,” he said. “Navigating a real urban environment with uncertainty, other parties moving around, situations where just braking is not enough, that’s difficult. I was looking for a signal this was ready. I didn't get that.”
    Autonomy Promises
    In the decade since Tesla began selling customers its Autopilot and FSD features–for which it currently charges $8,000–the software has been linked to several fatal accidents where human drivers trusted the tech to drive their car, only for it to crash. The National Highway Traffic Safety Administration has opened multiple probes of Tesla’s Autopilot feature since 2016, including one last year to determine if it needed additional safety features after linking Autopilot to 13 deaths.
    Last October, NHTSA also began investigating problems with FSD linked to two fatalities. Despite the names, this software has always been designed to have a human behind the wheel. For the past decade, Musk has repeatedly claimed “full autonomy”–where a car can drive without human assistance–was only months or a year away, repeatedly missing his targets.
    Now, with Tesla’s EV sales down 13% in the first quarter, the company needs some buzz to reassure investors CEO Musk can turn things around. Robotaxis, as well as AI and humanoid robots, are exactly that, according to Musk. So it’s running extensive tests in downtown Austin. “There’s just always a convoy of Teslas going all over to Austin in circles,” Musk said on the call.
    But a recent Business Insider story, citing interviews with former Tesla test drivers, doesn’t inspire confidence. The program “feels very forced,” one former worker said. “It's this breakthrough moment for Tesla, but there is also this feeling of so many last-minute details being up in the air.”
    The downtown Austin skyline. (Getty Images)
    Tesla’s program will operate in a very limited area of Austin and rely heavily on remote operators to minimize accidents, according to an executive with another autonomous tech company, based on conversations with Texas officials, who asked not to be identified as the matter isn’t public. To back up the AI driving the vehicles, Tesla has also hired human staff to monitor and assist if they get into jams, taking full control if necessary. “As we iterate on the AI that powers them, we need the ability to access and control them remotely,” the company said in a posting for one such job.
    Alphabet Inc.’s Waymo, the leader in robotaxi tech, also uses remote operators to assist the vehicles by providing suggested solutions to tricky situations, but those people don’t actually drive them; lag and latency in cellular networks make remote operations unsafe.
Limited Data Along with limited tests, there’s a dearth of trustworthy data about how well Tesla’s self-driving software operates. The company does file occasional safety performance reports about the software, but it’s not peer-reviewed by outside technical experts and it frames the data as positively as possible, according to Goodall, a technical witness in a lawsuit against Tesla over the death of Walther Huang, killed in 2018 when his Model X slammed into a highway divider while running on Autopilot. “With Full Self Driving, when they first started publishing numbers on that, they neglected to share that they’d only rolled the software out to drivers who had a very high safety score of 90 or above,” he said. “So of course the data showed it was safer, as your safest drivers were the only ones that had it.” By contrast, Waymo frequently posts detailed reports on how its robotaxis are performing, claiming the data is peer-reviewed by experts. Tesla also hasn’t yet shared details with the public about where in Austin it will offer its robotaxi service or exactly how it will operate. The city’s police and fire departments told Forbes the company contacted Austin’s Autonomous Vehicle Task Force, which includes their staff, and the city provided Tesla with “maps of schools and school zones; information about traffic control for special events; and information about our fire and police vehicles and procedures.” But a request to see communications between Tesla and the city was denied. “The City of Austin is withholding responsive documents without a ruling from the Attorney General’s office, as permitted by law,” it said in an email. “All responsive information has been withheld due to 3rd party.” The city didn’t respond to a question to confirm Tesla is that third party. The company, Musk and Ashok Elluswamy, head of Tesla’s autonomous vehicle program, didn’t respond to emails about the Austin rollout. 
NHTSA this week requested details about Tesla’s Austin plans to understand how the vehicles perform in bad weather. It’s been investigating Tesla collisions involving Autopilot and FSD in poor visibility situations since last October. It’s not clear if the company has responded yet. Tesla has had a permit to test autonomous vehicles in California for a decade, which requires companies to share safety data. Numerous competitors, including Waymo, Amazon’s Zoox, which hopes to operate robotaxis this year, Nuro and even Apple, which abandoned its program, have all submitted data on test miles logged including “disengagements”–when a human driver has to take over–as well as accident reports. Tesla hasn’t. Not Just Driving It’s hard to talk about Musk’s robotaxi dreams without comparing his approach to Waymo’s. The Alphabet unit has spent 16 years and billions of dollars trying to master every aspect of what a robotaxi has to do. Long before it gave its first paid rides to customers in Phoenix in 2018, the company tested intensely on public roads, privately at the “Castle,” its test facility at a decommissioned Air Force base in Central California, and with endless miles in virtual simulation to train its AI.A Waymo robotaxi in San Francisco.dpa/picture alliance via Getty Images Recognizing that robotaxis aren’t just a technical challenge, it also recruited people from the airline industry and businesses specializing in customer service. For robotaxis to work, the cars have to be good at doing small things that can be tough to master but are critical, like picking up and dropping off passengers. “We've been working on that for a long time,” said Chris Ludwick, director of Waymo’s product management team. “The first challenge with PUDOis that when you get there, the on-road scene is going to be somewhat different each time. You may encounter construction or a stopped delivery truck or something like that. 
This leads to a whole suite of challenges of what do you do when you can't do the exact thing that you said to the rider when they requested the ride.” That includes developing a sophisticated app to guide passengers to the safest, most convenient spots for them and other road users. “You can't just block traffic. That's unacceptable. If you do that the community gets upset,” Ludwick said. “There's just a lot of small details you have to get right.” As far as safety, Waymo has avoided major accidents, injuries and fatalities so far, but its AI-enabled driver isn’t flawless. The company just recalled software in its fleet to fix a flaw that could cause vehicles to hit chains, gates and other barriers, following a NHTSA investigation. Cheaper Robotaxis In all the years Musk has promised autonomous Teslas and a robotaxi service, he hasn’t talked about what it’s doing to master ride-service essentials. But he does talk a big game about Tesla’s cost advantage. “The issue with Waymo’s car is it costs way more money,” the billionaire said on Tesla’s results call. “Their car is very expensive, made in low volume. Teslas probably cost 25% or 20% of what a Waymo costs and are made in very high volume,” last month. A base Model Y with FSD software costs consumers about before taxes. While Waymo doesn’t disclose the cost of its modified, electric Jaguar I-PACE robotaxis, the lidar, radar, computers and other sensors mean it’s likely double that of Tesla’s vehicles. Those costs should drop substantially over the next year or so as Waymo shifts to lower-cost sensors and cheaper vehicles, including Hyundai’s Ioniq 5 and a small electric van from China’s Zeekr. Boasts about cheaper Tesla robotaxis will be meaningless if they can’t safely pick up and drop off riders without causing traffic jams, yielding to pedestrians or avoiding collisions. 
That’s made harder by the fact that Tesla uses eight 5-megapixel cameras as the main sensors for its system–far lower resolution than the 48-megapixel system on Apple’s iPhone 16. They’re inexpensive, but struggle with sunlight glare and low light conditions. Musk denied that was the case on Tesla’s April 22 call, but tests by O’Dowd’s Dawn Project after that found FSD disengages when directly facing the sun.“He thinks havingdoes not add enough benefit to outweigh the cost. This is a pretty typical engineering argument in general but incorrect in this particular case.” “We went out and took the car and drove it directly into the setting sun and guess what: it gave up,” O’Dowd said. “It starts flashing and it starts panicking, red lights going, it starts making noises, says put your hands back on the wheel.” By contrast, Waymo uses multiple sensors, including the much more expensive lidar, to ensure its vehicles see all potential road hazards, in daylight or at night, in 3D.Elon Musk attends a Cabinet meeting with President Donald Trump on April 30, 2025.The Washington Post via Getty Images “Musk has repeatedly said lidar is expensive and not needed,” said Missy Cummings, an artificial intelligence expert who advised NHTSA on autonomous vehicles. “He thinks having it does not add enough benefit to outweigh the cost. This is a pretty typical engineering argument in general but incorrect in this particular case.” After the Austin rollout, Musk said last month the goal is to expand to other U.S. markets, China and Europe, “limited only by regulatory approvals.” And one day soon, he envisions every person who owns a Tesla flipping a switch and deploying their car while not in use to a Tesla robotaxi network, helping them make additional cash on the side.“It’s all lies.” The world’s wealthiest person has achieved remarkable things with Tesla’s EVs, SpaceX rockets and Starlink satellites. 
But for years he’s also repeatedly failed to deliver big ideas he touted as potential game-changers or massive moneymakers, including battery swapping stations, solar tile roofs, the Hyperloop and high-speed underground transportation networks created by his Boring Co. Whether self-driving vehicles join that list remains to be seen. After repeatedly promising and failing to deliver vehicles that safely drive themselves for the past decade, let alone pick up riders, his track record isn’t looking good. Critics have a harsher interpretation. “It's all lies, everything he says,” said O’Dowd. More from Forbes #tesla039s #robotaxi #rollout #looks #like
    WWW.FORBES.COM
    Tesla's Robotaxi Rollout Looks Like A Disaster Waiting To Happen
    Ready or not–and despite a spotty safety record–the EV maker is racing to launch a pilot ride service in Austin to show off its self-driving chops. Elon Musk is rolling out a handful of Tesla robotaxis in Austin next month, where up to 20 self-driving electric Model Ys will be unleashed to ferry passengers around the Texas city’s streets. He’s betting the future of Tesla on their success, as the automaker’s electric vehicle revenue tanks thanks to faster-growing Chinese rivals and a political backlash against Musk’s right-wing politics and role as job-slasher-in-chief for the Trump Administration. But there’s a big hitch: Tesla hasn’t proven its self-driving taxis are safe enough to start delivering rides. Given its misleadingly named Autopilot and Full Self-Driving (FSD) software’s deadly track record, Musk’s failure to provide detailed safety and technical data about Tesla’s technology and his determination to rely on cheap cameras instead of more robust sensors to navigate complicated urban environments, the Austin rollout could be a debacle. For the latest in cleantech and sustainability news, sign up here for our Current Climate newsletter. “It's going to fail for sure,” Dan O’Dowd, a long-time critic of Musk’s autonomous driving claims who’s spent his own money on Super Bowl commercials to call out Autopilot and FSD safety flaws, told Forbes. His anti-Tesla initiative, The Dawn Project, tests every update of FSD, a more advanced version of which is powering Musk’s robotaxis in Austin, as soon as they’re available. That update is to roll out to all Tesla drivers who pay a $99 monthly subscription fee. A pre-production Tesla Cybercab at the Petersen Automotive Museum in Los Angeles.Copyright 2024 The Associated Press. All rights reserved. O’Dowd has been putting the current version of FSD through its paces. 
“We drove it around Santa Barbara for 80 minutes and there were seven failures,” said O’Dowd, whose company Green Hills Software supplies security tech to defense and aerospace industry customers. “If there had not been a driver sitting in the driver's seat, it would’ve hit something.” While the company hasn’t booked a dollar of robotaxi revenue, that hasn’t stopped the world’s wealthiest person from declaring victory. “I don’t see anyone being able to compete with Tesla at present,” Musk said on the company’s April 22 results call. His assessment may be premature.“I was looking for a signal this was ready. I didn't get that.”   The sole public demonstration of Tesla’s robotaxi chops was staged drives of its new “Cybercab” at Warner Brothers Studio in Los Angeles last October. The event included hauling invited Tesla fans around a fake cityscape–free of pedestrians but with lots of Tesla technicians keeping close tabs on vehicles. It struck safety researcher Noah Goodall, who published a technical analysis of Tesla’s safety data, independently from his role with the Virginia Department of Transportation, as more amusement park attraction than real-world test. “It was just operating vehicles on a closed track on a movie lot. It was not impressive at all,” he said. “Navigating a real urban environment with uncertainty, other parties moving around, situations where just braking is not enough, that’s difficult. I was looking for a signal this was ready. I didn't get that.” Autonomy Promises In the decade since Tesla began selling customers its Autopilot and FSD features–for which it currently charges $8,000–the software has been linked to several fatal accidents where human drivers trusted the tech to drive their car, only for it to crash. 
The National Highway Traffic Safety Administration has opened multiple probes of Tesla’s Autopilot feature since 2016, including one last year to determine if it needed additional safety features after linking Autopilot to those 13 deaths. Last October, NHTSA also began investigating problems with FSD linked to two fatalities. Despite the names, this software has always been designed to have a human behind the wheel. For the past decade, Musk has repeatedly claimed “full autonomy”–where a car can drive without human assistance–was only months or a year away, repeatedly missing his targets. Now, with Tesla’s EV sales down 13% in the first quarter, the company needs some buzz to reassure investors CEO Musk can turn things around. Robotaxis, as well as AI and humanoid robots, are exactly that, according to Musk. So it’s running extensive tests in downtown Austin. “There’s just always a convoy of Teslas going all over to Austin in circles,” Musk said on the call. But a recent Business Insider story, citing interviews with former Tesla test drivers, doesn’t inspire confidence. The program “feels very forced,” one former worker said. "It's this breakthrough moment for Tesla, but there is also this feeling of so many last-minute details being up in the air.”The downtown Austin skyline.Getty Images Tesla’s program will operate in a very limited area of Austin and rely heavily on remote operators to minimize accidents, according to an executive with another autonomous tech company, based on conversations with Texas officials, who asked not to be identified as the matter isn’t public. To back up the AI driving the vehicles, Tesla has also hired human staff to monitor and assist if they get into jams, taking full control if necessary. “As we iterate on the AI that powers them, we need the ability to access and control them remotely,” the company said in a posting for one such job. 
Alphabet Inc.’s Waymo, the leader in robotaxi tech, also uses remote operators to assist the vehicles by providing suggested solutions to tricky situations, but those people don’t actually drive them. Lag and latency in cellular networks make remote operations unsafe. Limited Data Along with limited tests, there’s a dearth of trustworthy data about how well Tesla’s self-driving software operates. The company does file occasional safety performance reports about the software, but it’s not peer-reviewed by outside technical experts and it frames the data as positively as possible, according to Goodall, a technical witness in a lawsuit against Tesla over the death of Walther Huang, killed in 2018 when his Model X slammed into a highway divider while running on Autopilot. “With Full Self Driving, when they first started publishing numbers on that, they neglected to share that they’d only rolled the software out to drivers who had a very high safety score of 90 or above,” he said. “So of course the data showed it was safer, as your safest drivers were the only ones that had it.” By contrast, Waymo frequently posts detailed reports on how its robotaxis are performing, claiming the data is peer-reviewed by experts. Tesla also hasn’t yet shared details with the public about where in Austin it will offer its robotaxi service or exactly how it will operate. The city’s police and fire departments told Forbes the company contacted Austin’s Autonomous Vehicle Task Force, which includes their staff, and the city provided Tesla with “maps of schools and school zones; information about traffic control for special events; and information about our fire and police vehicles and procedures.” But a request to see communications between Tesla and the city was denied. “The City of Austin is withholding responsive documents without a ruling from the Attorney General’s office, as permitted by law,” it said in an email. 
“All responsive information has been withheld due to 3rd party.” The city didn’t respond to a question to confirm Tesla is that third party. The company, Musk and Ashok Elluswamy, head of Tesla’s autonomous vehicle program, didn’t respond to emails about the Austin rollout. NHTSA this week requested details about Tesla’s Austin plans to understand how the vehicles perform in bad weather. It’s been investigating Tesla collisions involving Autopilot and FSD in poor visibility situations since last October. It’s not clear if the company has responded yet. Tesla has had a permit to test autonomous vehicles in California for a decade, which requires companies to share safety data. Numerous competitors, including Waymo, Amazon’s Zoox, which hopes to operate robotaxis this year, Nuro and even Apple, which abandoned its program, have all submitted data on test miles logged including “disengagements”–when a human driver has to take over–as well as accident reports. Tesla hasn’t. Not Just Driving It’s hard to talk about Musk’s robotaxi dreams without comparing his approach to Waymo’s. The Alphabet unit has spent 16 years and billions of dollars trying to master every aspect of what a robotaxi has to do. Long before it gave its first paid rides to customers in Phoenix in 2018, the company tested intensely on public roads, privately at the “Castle,” its test facility at a decommissioned Air Force base in Central California, and with endless miles in virtual simulation to train its AI.A Waymo robotaxi in San Francisco.dpa/picture alliance via Getty Images Recognizing that robotaxis aren’t just a technical challenge, it also recruited people from the airline industry and businesses specializing in customer service. For robotaxis to work, the cars have to be good at doing small things that can be tough to master but are critical, like picking up and dropping off passengers. 
“We've been working on that for a long time,” said Chris Ludwick, director of Waymo’s product management team. “The first challenge with PUDO (the company’s shorthand for pickup, drop-off) is that when you get there, the on-road scene is going to be somewhat different each time. You may encounter construction or a stopped delivery truck or something like that. This leads to a whole suite of challenges of what do you do when you can't do the exact thing that you said to the rider when they requested the ride.” That includes developing a sophisticated app to guide passengers to the safest, most convenient spots for them and other road users. “You can't just block traffic. That's unacceptable. If you do that the community gets upset,” Ludwick said. “There's just a lot of small details you have to get right.” As far as safety, Waymo has avoided major accidents, injuries and fatalities so far, but its AI-enabled driver isn’t flawless. The company just recalled software in its fleet to fix a flaw that could cause vehicles to hit chains, gates and other barriers, following a NHTSA investigation. Cheaper Robotaxis In all the years Musk has promised autonomous Teslas and a robotaxi service, he hasn’t talked about what it’s doing to master ride-service essentials. But he does talk a big game about Tesla’s cost advantage. “The issue with Waymo’s car is it costs way more money,” the billionaire said on Tesla’s results call. “Their car is very expensive, made in low volume. Teslas probably cost 25% or 20% of what a Waymo costs and are made in very high volume,” last month. A base Model Y with FSD software costs consumers about $55,000 before taxes. While Waymo doesn’t disclose the cost of its modified, electric Jaguar I-PACE robotaxis, the lidar, radar, computers and other sensors mean it’s likely double that of Tesla’s vehicles. 
Those costs should drop substantially over the next year or so as Waymo shifts to lower-cost sensors and cheaper vehicles, including Hyundai’s Ioniq 5 and a small electric van from China’s Zeekr. Boasts about cheaper Tesla robotaxis will be meaningless if they can’t safely pick up and drop off riders without causing traffic jams, yielding to pedestrians or avoiding collisions. That’s made harder by the fact that Tesla uses eight 5-megapixel cameras as the main sensors for its system–far lower resolution than the 48-megapixel system on Apple’s iPhone 16. They’re inexpensive, but struggle with sunlight glare and low light conditions. Musk denied that was the case on Tesla’s April 22 call, but tests by O’Dowd’s Dawn Project after that found FSD disengages when directly facing the sun.“He thinks having [lidar] does not add enough benefit to outweigh the cost. This is a pretty typical engineering argument in general but incorrect in this particular case.” “We went out and took the car and drove it directly into the setting sun and guess what: it gave up,” O’Dowd said. “It starts flashing and it starts panicking, red lights going, it starts making noises, says put your hands back on the wheel.” By contrast, Waymo uses multiple sensors, including the much more expensive lidar, to ensure its vehicles see all potential road hazards, in daylight or at night, in 3D.Elon Musk attends a Cabinet meeting with President Donald Trump on April 30, 2025.The Washington Post via Getty Images “Musk has repeatedly said lidar is expensive and not needed,” said Missy Cummings, an artificial intelligence expert who advised NHTSA on autonomous vehicles. “He thinks having it does not add enough benefit to outweigh the cost. This is a pretty typical engineering argument in general but incorrect in this particular case.” After the Austin rollout, Musk said last month the goal is to expand to other U.S. 
markets, China and Europe, “limited only by regulatory approvals.” And one day soon, he envisions every person who owns a Tesla flipping a switch and deploying their car, while not in use, to a Tesla robotaxi network, helping them make additional cash on the side (as long as they pay Tesla $99 per month). The world’s wealthiest person has achieved remarkable things with Tesla’s EVs, SpaceX rockets and Starlink satellites. But for years he’s also repeatedly failed to deliver big ideas he touted as potential game-changers or massive moneymakers, including battery-swapping stations, solar tile roofs, the Hyperloop and the high-speed underground transportation networks of his Boring Co. Whether self-driving vehicles join that list remains to be seen. After repeatedly promising and failing to deliver vehicles that safely drive themselves for the past decade, let alone pick up riders, his track record isn’t looking good. Critics have a harsher interpretation. “It's all lies, everything he says,” said O’Dowd.
  • Eight Sleep Pod 4 Review: Better Than Your Therapist?

    I've been following Eight Sleep products for a few years and was excited to learn that the company is entering the UAE and Saudi markets. As the name implies, the company makes products that help you sleep better at night.
    The Eight Sleep Pod 4 is a smart mattress cover that adjusts temperature, tracks sleep metrics, and detects snoring. With summer approaching and the AC going full blast, the Pod 4 could be a great solution for those whose partners have different temperature tolerances.
    Pricing and Availability
    The Eight Sleep Pod 4 (since replaced by the Eight Sleep Pod 5) is available in regular and Ultra varieties. The regular version comes with the Pod and bed cover and costs AED 9,999 for a queen size (160 x 200 cm), AED 10,799 for a king size (180 x 200 cm), and AED 11,799 for an Emperor size (200 x 200 cm).
    The Ultra version adds an elevating base that lifts the mattress into positions ideal for sleeping, reading, or relaxing. This adds about AED 8,000 to the base prices. Eight Sleep sent me the non-ultra version for this review.
    In addition to the price of the Sleep Pod, you have to pay AED 65 per month for AutoPilot features that adjust temperatures automatically, let you set alarms, and provide sleep and health reports. I think Eight Sleep should bundle at least one year's worth of subscription with any Pod purchase.

    Key Features
    The Pod 4 offers advanced sleep technology through a mattress cover compatible with existing beds. Its main features include:

    Temperature Regulation: Dual-zone climate control adjusts each side of the bed to as low as 13°C. The system uses water flow to regulate the temperature of your mattress.
    Sleep Tracking: Health-grade sensors monitor heart rate, heart rate variability, respiratory rate, and sleep stages.
    Snore Detection: The Pod 4 detects snoring and alerts users through the app. The more expensive Pod 4 Ultra model automatically elevates the bed to reduce snoring.
    Autopilot AI: The algorithm personalises temperature and elevation based on biofeedback, user preferences, and data from other users.
    GentleRise Alarm: Vibration and thermal changes wake users gently, replacing traditional alarms.
    App Integration: The Eight Sleep app provides sleep insights, temperature scheduling, and control over settings.

    Design and Build Quality
    The Eight Sleep Pod 4 includes a mattress cover and a special fitted sheet covering your mattress. It is made of breathable fabric, which is comfortable to lie on, though you'll likely add a bedsheet above it.
    You can get this in multiple sizes. I was sent a 180 x 200 cm cover for my king-sized bed. This cover has a mesh layer below it that circulates water to make your bed cooler or warmer. This layer also has sensors that track how you sleep.
    Finally, there's the Bedside Hub, a desktop PC-sized box that controls the flow of water that cools or heats the Active Grid. It also connects to your Wi-Fi network. The mattress cover is connected to the Pod using a rather thick cable that initially worried me, but it tucked away easily and hasn't caused any problems in the two months I've been using it.

    Installation requires water for the Hub's tank, which needs refilling every few months. The setup took me about an hour, and it is easier with two people to lift your mattress. I had to fill the water container in the Bedside Hub three times before the water fully dispersed into the mesh.
    The app guides you through all of this, starting with connecting your Pod to your Wi-Fi, which didn't go as planned. The Wi-Fi performance on the Hub is not great, and I had to move an access point into my bedroom for it to maintain a good connection.

    Features and Usage
    Once you set up the Pod, everything else is controlled and managed through the app, which is available for iPhone and Android phones. The app underwent a major overhaul during my testing and now looks more modern and streamlined.
    Using the app, you can schedule temperature changes, view sleep reports, and adjust all the settings for the Pod. You can set specific temperatures for bedtime, later at night, and dawn. I set the schedule for my bedtime, and the Hub went into action about half an hour before that, cooling the bed to my desired temperature, which is 2 degrees below the room temperature. You can also set the temperatures to absolute values, such as 18 degrees.
    When I first started using the Eight Sleep Pod, I set the temperature at 19°C, but that proved too cold for my liking. After a few days of fiddling, I settled on cooling the nighttime temperature to 2 degrees below the bedtime temperature and setting the dawn temperature to 2 degrees above the room temperature.
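    The relative schedule described above boils down to simple offsets from the room temperature. Here is a minimal sketch of that logic, assuming a plain offset model; the function name and dictionary shape are illustrative, not part of Eight Sleep's app or API.

```python
# Minimal sketch of the relative temperature schedule described above.
# Offsets mirror the settings in the review; names are illustrative only.

def schedule_from_room_temp(room_temp_c: float) -> dict:
    """Return target bed temperatures (°C) for each phase of the night."""
    bedtime = room_temp_c - 2          # pre-cooling starts ~30 min before bed
    return {
        "bedtime": bedtime,
        "night": bedtime - 2,          # nighttime: 2 degrees below bedtime
        "dawn": room_temp_c + 2,       # dawn: 2 degrees above room temperature
    }

print(schedule_from_room_temp(24))     # a 24°C room yields 22 / 20 / 26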

    With AutoPilot, the bed automatically changes the temperature by a couple of degrees to help you get the best sleep when it detects you in REM or deep sleep states, which theoretically improves your readiness for the next day. You can also manually adjust the temperature by double or triple tapping your side of the bed to cool or warm it up.
    Another function of the Eight Sleep is to provide sleep tracking, which sounds great as you won't need to wear a device like the Apple Watch or Oura Ring to bed. I use an Oura Ring that I usually wear to bed, and I compared the stats it offered to the Eight Sleep.

    Benchmark              Oura Ring    Eight Sleep Pod 4
    Time Slept             5h 43m       4h 19m
    Deep Sleep             40m          46m
    REM                    50m          1h 15m
    Resting Heart Rate     73 bpm       74 bpm

    While this is the data for just one night, during my two weeks of testing, I found that the Oura Ring's results were more consistent than those of the Eight Sleep, and there's a good reason why.
    The Eight Sleep mattress cover is split into two parts to track you and your partner. If I moved towards my partner's side or if she moved towards my side, the data would not be analysed properly. Similarly, the data would be completely thrown off if our kid decided to jump in the bed in the middle of the night, which happened quite a few times while I was testing.
    So, while sleep tracking is a good secondary feature of the Eight Sleep Pod, it should not be the main reason to get one unless you sleep alone. Also worth noting that the app does not sync your sleep data with Apple Health, though it does work with Alexa if that's your preferred platform.
    One more feature of the Eight Sleep is a wake-up alarm that vibrates on your side of the bed. The vibrations are meant to go from light to strong to gently wake you up, along with raising the temperature of the Pod. While this feature woke me up every single time, it also woke my partner up because the vibrations carry over to the other half of the bed even at the lightest setting.
    The Verdict
    The Eight Sleep Pod 4 delivers excellent temperature regulation, and that should be your primary objective when purchasing this Pod. While you also get sleep tracking and a wake-up alarm, these features work much better if you're the only one sleeping on the bed. The unit is also quiet in operation, and Autopilot AI works well to enhance your sleep.
    However, it is an extremely expensive piece of equipment, and additional subscription charges make it less appealing to users who are careful about their finances. And while the app is pretty good, it doesn't sync with Apple Health.
    The high costs and limited integrations pose drawbacks, but the 30-day return policy reduces risk. The Eight Sleep Pod 4 is ideal for tech and fitness enthusiasts seeking premium sleep solutions and couples with different temperature preferences.
  • PwC Releases Executive Guide on Agentic AI: A Strategic Blueprint for Deploying Autonomous Multi-Agent Systems in the Enterprise

    In its latest executive guide, “Agentic AI – The New Frontier in GenAI,” PwC presents a strategic approach for what it defines as the next pivotal evolution in enterprise automation: Agentic Artificial Intelligence.
    These systems, capable of autonomous decision-making and context-aware interactions, are poised to reconfigure how organizations operate—shifting from traditional software models to orchestrated AI-driven services.
    From Automation to Autonomous Intelligence
    Agentic AI is not just another AI trend—it marks a foundational shift.
    Unlike conventional systems that require human input for each decision point, agentic AI systems operate independently to achieve predefined goals.
    Drawing on multimodal data (text, audio, images), they reason, plan, adapt, and learn continuously in dynamic environments.
    PwC identifies six defining capabilities of agentic AI:
    Autonomy in decision-making
    Goal-driven behavior aligned with organizational outcomes
    Environmental interaction to adapt in real time
    Learning capabilities through reinforcement and historical data
    Workflow orchestration across complex business functions
    Multi-agent communication to coordinate actions within distributed systems
    This architecture enables enterprise-grade systems that go beyond single-task automation to orchestrate entire processes with human-like intelligence and accountability.
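    To make the six capabilities above concrete, here is a toy, illustrative-only sketch of a goal-driven agent loop: the agent decides autonomously, interacts with an environment, and records history a learning step could train on. Class and method names are my assumptions, not anything from the PwC guide.

```python
# Toy goal-driven agent loop illustrating autonomy, goal alignment,
# environmental interaction, and a memory for learning. Illustrative only.

class CounterEnv:
    """Toy environment whose state is an integer the agent can nudge up or down."""
    def __init__(self, start=0):
        self.state = start

    def apply(self, action):
        self.state += action
        return self.state

class Agent:
    def __init__(self, goal, policy):
        self.goal = goal        # goal-driven behaviour
        self.policy = policy    # autonomous decision logic
        self.memory = []        # history a learning step could train on

    def step(self, observation):
        action = self.policy(observation, self.goal)
        self.memory.append((observation, action))   # record for later learning
        return action

def run(agent, env, max_steps=100):
    obs = env.state
    for _ in range(max_steps):
        if obs == agent.goal:
            break
        obs = env.apply(agent.step(obs))            # environmental interaction
    return obs

agent = Agent(goal=5, policy=lambda obs, goal: 1 if obs < goal else -1)
print(run(agent, CounterEnv()))   # the loop drives the state to the goal, 5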
    Closing the Gaps of Traditional AI Approaches
    The report contrasts agentic AI with earlier generations of chatbots and RAG-based systems.
    Traditional rule-based bots suffer from rigidity, while retrieval-augmented systems often lack contextual understanding across long interactions.
    Agentic AI surpasses both by maintaining dialogue memory, reasoning across systems (e.g., CRM, ERP, IVR), and dynamically solving customer issues.
    PwC envisions micro-agents—each optimized for tasks like inquiry resolution, sentiment analysis, or escalation—coordinated by a central orchestrator to deliver coherent, responsive service experiences.
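    The micro-agent pattern above can be sketched in a few lines: small agents, each optimized for one task, routed by a central orchestrator. This is a hedged illustration under my own assumptions; the routing rule and agent names are made up for the example, not PwC's design.

```python
# Hedged sketch of the micro-agent pattern: task-specific agents
# coordinated by a central orchestrator. Names and rules are illustrative.

def sentiment_agent(message: str) -> str:
    """Crude keyword check standing in for a real sentiment model."""
    negative_markers = ("angry", "refund", "broken")
    return "negative" if any(w in message.lower() for w in negative_markers) else "neutral"

def inquiry_agent(message: str) -> str:
    return f"Resolved inquiry: {message}"

def escalation_agent(message: str) -> str:
    return f"Escalated to a human agent: {message}"

def orchestrator(message: str) -> str:
    """Central coordinator: routes each message to the right micro-agent."""
    if sentiment_agent(message) == "negative":
        return escalation_agent(message)
    return inquiry_agent(message)

print(orchestrator("My device arrived broken"))    # routed to escalation
print(orchestrator("How do I reset my password"))  # routed to inquiry resolution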
    Demonstrated Impact Across Sectors
    PwC’s guide is grounded in practical use cases spanning industries:
    JPMorgan Chase has automated legal document analysis via its COiN platform, saving over 360,000 manual review hours annually.
    Siemens leverages agentic AI for predictive maintenance, improving uptime and cutting maintenance costs by 20%.
    Amazon uses multimodal agentic models to deliver personalized recommendations, contributing to a 35% increase in sales and improved retention.
    These examples demonstrate how agentic systems can optimize decision-making, streamline operations, and enhance customer engagement across functions—from finance and healthcare to logistics and retail.
    A Paradigm Shift: Service-as-a-Software
    One of the report’s most thought-provoking insights is the rise of service-as-a-software—a departure from traditional licensing models.
    In this paradigm, organizations pay not for access to software but for task-specific outcomes delivered by AI agents.
    For instance, instead of maintaining a support center, a business might deploy autonomous agents like Sierra and only pay per successful customer resolution.
    This model reduces operational costs, expands scalability, and allows organizations to move incrementally from “copilot” to fully autonomous “autopilot” systems.
    To implement these systems, enterprises can choose from both commercial and open-source frameworks:
    LangGraph and CrewAI offer enterprise-grade orchestration with integration support.
    AutoGen and AutoGPT, on the open-source side, support rapid experimentation with multi-agent architectures.
    The optimal choice depends on integration needs, IT maturity, and long-term scalability goals.
    Crafting a Strategic Adoption Roadmap
    PwC emphasizes that success in deploying agentic AI hinges on aligning AI initiatives with business objectives, securing executive sponsorship, and starting with high-impact pilot programs.
    Equally crucial is preparing the organization with ethical safeguards, data infrastructure, and cross-functional talent.
    Agentic AI offers more than automation—it promises intelligent, adaptable systems that learn and optimize autonomously.
    As enterprises recalibrate their AI strategies, those that move early will not only unlock new efficiencies but also shape the next chapter of digital transformation.
    Download the Guide here. All credit for this research goes to the researchers of this project.
    Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields like biomaterials and biomedical science. With a strong background in Material Science, he is exploring new advancements and creating opportunities to contribute.

    Source: https://www.marktechpost.com/2025/05/13/pwc-releases-executive-guide-on-agentic-ai-a-strategic-blueprint-for-deploying-autonomous-multi-agent-systems-in-the-enterprise/
    #pwc #releases #executive #guide #agentic #strategic #blueprint #for #deploying #autonomous #multiagent #systems #the #enterprise
    PwC Releases Executive Guide on Agentic AI: A Strategic Blueprint for Deploying Autonomous Multi-Agent Systems in the Enterprise
    In its latest executive guide, “Agentic AI – The New Frontier in GenAI,” PwC presents a strategic approach for what it defines as the next pivotal evolution in enterprise automation: Agentic Artificial Intelligence. These systems, capable of autonomous decision-making and context-aware interactions, are poised to reconfigure how organizations operate—shifting from traditional software models to orchestrated AI-driven services. From Automation to Autonomous Intelligence Agentic AI is not just another AI trend—it marks a foundational shift. Unlike conventional systems that require human input for each decision point, agentic AI systems operate independently to achieve predefined goals. Drawing on multimodal data (text, audio, images), they reason, plan, adapt, and learn continuously in dynamic environments. PwC identifies six defining capabilities of agentic AI: Autonomy in decision-making Goal-driven behavior aligned with organizational outcomes Environmental interaction to adapt in real time Learning capabilities through reinforcement and historical data Workflow orchestration across complex business functions Multi-agent communication to coordinate actions within distributed systems This architecture enables enterprise-grade systems that go beyond single-task automation to orchestrate entire processes with human-like intelligence and accountability. Closing the Gaps of Traditional AI Approaches The report contrasts agentic AI with earlier generations of chatbots and RAG-based systems. Traditional rule-based bots suffer from rigidity, while retrieval-augmented systems often lack contextual understanding across long interactions. Agentic AI surpasses both by maintaining dialogue memory, reasoning across systems (e.g., CRM, ERP, IVR), and dynamically solving customer issues. 
    PwC envisions micro-agents, each optimized for tasks such as inquiry resolution, sentiment analysis, or escalation, coordinated by a central orchestrator to deliver coherent, responsive service experiences.

    Demonstrated Impact Across Sectors

    PwC’s guide is grounded in practical use cases spanning industries:
    • JPMorgan Chase has automated legal document analysis via its COiN platform, saving over 360,000 manual review hours annually.
    • Siemens leverages agentic AI for predictive maintenance, improving uptime and cutting maintenance costs by 20%.
    • Amazon uses multimodal agentic models to deliver personalized recommendations, contributing to a 35% increase in sales and improved retention.

    These examples demonstrate how agentic systems can optimize decision-making, streamline operations, and enhance customer engagement across functions, from finance and healthcare to logistics and retail.

    A Paradigm Shift: Service-as-a-Software

    One of the report’s most thought-provoking insights is the rise of service-as-a-software, a departure from traditional licensing models. In this paradigm, organizations pay not for access to software but for task-specific outcomes delivered by AI agents. For instance, instead of maintaining a support center, a business might deploy autonomous agents such as Sierra and pay only per successful customer resolution. This model reduces operational costs, expands scalability, and allows organizations to move incrementally from “copilot” to fully autonomous “autopilot” systems.

    To implement these systems, enterprises can choose from both commercial and open-source frameworks:
    • LangGraph and CrewAI offer enterprise-grade orchestration with integration support.
    • AutoGen and AutoGPT, on the open-source side, support rapid experimentation with multi-agent architectures.

    The optimal choice depends on integration needs, IT maturity, and long-term scalability goals.
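    The micro-agent pattern described above can be sketched in plain Python, independent of any particular framework. This is a minimal illustration, not PwC's or any vendor's actual API: the agent functions, the `Task` class, and the routing policy (negative sentiment triggers escalation) are all assumptions made for the example.

```python
# Hypothetical sketch of a central orchestrator routing customer-service
# tasks to specialized micro-agents that share dialogue memory.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Task:
    kind: str                # e.g. "inquiry", "sentiment", "escalation"
    payload: str
    history: list[str] = field(default_factory=list)  # shared dialogue memory

def inquiry_agent(task: Task) -> str:
    return f"resolved inquiry: {task.payload}"

def sentiment_agent(task: Task) -> str:
    # Toy classifier standing in for a real model.
    negative = any(w in task.payload.lower() for w in ("angry", "refund", "broken"))
    return "negative" if negative else "neutral"

def escalation_agent(task: Task) -> str:
    return f"escalated to human: {task.payload}"

class Orchestrator:
    """Dispatches each task to the micro-agent registered for its kind,
    recording every step in the shared history so later agents keep context."""
    def __init__(self) -> None:
        self.agents: dict[str, Callable[[Task], str]] = {}

    def register(self, kind: str, agent: Callable[[Task], str]) -> None:
        self.agents[kind] = agent

    def handle(self, task: Task) -> str:
        result = self.agents[task.kind](task)
        task.history.append(f"{task.kind} -> {result}")
        # Goal-driven policy: negative sentiment is routed on to escalation.
        if task.kind == "sentiment" and result == "negative":
            return self.handle(Task("escalation", task.payload, task.history))
        return result

orc = Orchestrator()
orc.register("inquiry", inquiry_agent)
orc.register("sentiment", sentiment_agent)
orc.register("escalation", escalation_agent)
print(orc.handle(Task("sentiment", "my order arrived broken")))
```

    The point of the sketch is the separation of concerns: each micro-agent stays small and single-purpose, while the orchestrator owns the routing policy and the shared memory, which is roughly the division of labor the frameworks above formalize.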
    Crafting a Strategic Adoption Roadmap

    PwC emphasizes that success in deploying agentic AI hinges on aligning AI initiatives with business objectives, securing executive sponsorship, and starting with high-impact pilot programs. Equally crucial is preparing the organization with ethical safeguards, data infrastructure, and cross-functional talent.

    Agentic AI offers more than automation: it promises intelligent, adaptable systems that learn and optimize autonomously. As enterprises recalibrate their AI strategies, those that move early will not only unlock new efficiencies but also shape the next chapter of digital transformation.

    All credit for this research goes to the researchers of this project.

    Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur, and is an AI/ML enthusiast researching applications in fields such as biomaterials and biomedical science.

    Source: https://www.marktechpost.com/2025/05/13/pwc-releases-executive-guide-on-agentic-ai-a-strategic-blueprint-for-deploying-autonomous-multi-agent-systems-in-the-enterprise/