• NVIDIA Brings Physical AI to European Cities With New Blueprint for Smart City AI

    Urban populations are expected to double by 2050, which means around 2.5 billion people could be added to urban areas by the middle of the century, driving the need for more sustainable urban planning and public services. Cities across the globe are turning to digital twins and AI agents for urban planning scenario analysis and data-driven operational decisions.
    Building a digital twin of a city and testing smart city AI agents within it, however, is a complex and resource-intensive endeavor, fraught with technical and operational challenges.
    To address those challenges, NVIDIA today announced the NVIDIA Omniverse Blueprint for smart city AI, a reference framework that combines the NVIDIA Omniverse, Cosmos, NeMo and Metropolis platforms to bring the benefits of physical AI to entire cities and their critical infrastructure.
    Using the blueprint, developers can create simulation-ready, or SimReady, photorealistic digital twins of cities in which to build and test AI agents that help monitor and optimize city operations.
    Leading companies including XXII, AVES Reality, Akila, Blyncsy, Bentley, Cesium, K2K, Linker Vision, Milestone Systems, Nebius, SNCF Gares&Connexions, Trimble and Younite AI are among the first to use the new blueprint.

    NVIDIA Omniverse Blueprint for Smart City AI 
    The NVIDIA Omniverse Blueprint for smart city AI provides the complete software stack needed to accelerate the development and testing of AI agents in physically accurate digital twins of cities. It includes:

    NVIDIA Omniverse to build physically accurate digital twins and run simulations at city scale.
    NVIDIA Cosmos to generate synthetic data at scale for post-training AI models.
    NVIDIA NeMo to curate high-quality data and use that data to train and fine-tune vision language models (VLMs) and large language models.
    NVIDIA Metropolis to build and deploy video analytics AI agents based on the NVIDIA AI Blueprint for video search and summarization (VSS), helping process vast amounts of video data and provide critical insights to optimize business processes.

    The blueprint workflow comprises three key steps. First, developers create a SimReady digital twin of locations and facilities using aerial, satellite or map data with Omniverse and Cosmos. Second, they train and fine-tune AI models, like computer vision models and VLMs, using NVIDIA TAO and NeMo Curator to improve accuracy for vision AI use cases. Finally, real-time AI agents powered by these customized models are deployed to alert, summarize and query camera and sensor data using the Metropolis VSS blueprint.
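    To make that flow concrete, here is a minimal, purely illustrative sketch of the three steps in Python. Every function and data structure below is a hypothetical placeholder, not the actual Omniverse, Cosmos, TAO, NeMo Curator or Metropolis VSS API.

```python
# Purely illustrative sketch of the three-step blueprint workflow.
# Every function here is a hypothetical stub standing in for Omniverse/Cosmos
# (SimReady twin + synthetic data), TAO/NeMo Curator (model fine-tuning)
# and the Metropolis VSS blueprint (agent deployment).
from dataclasses import dataclass


@dataclass
class DigitalTwin:
    source_data: str        # aerial, satellite or map data
    sim_ready: bool = True  # SimReady: geometry plus physics/semantics


def build_digital_twin(source_data: str) -> DigitalTwin:
    """Step 1: assemble a SimReady city twin from geospatial inputs."""
    return DigitalTwin(source_data=source_data)


def generate_synthetic_data(twin: DigitalTwin, scenarios: list) -> list:
    """Step 1 (continued): render labeled synthetic clips for simulated scenarios."""
    return [f"clip:{twin.source_data}:{scenario}" for scenario in scenarios]


def fine_tune_vlm(base_model: str, clips: list) -> str:
    """Step 2: curate the clips and fine-tune a vision language model."""
    return f"{base_model}-tuned-on-{len(clips)}-clips"


def deploy_agent(model: str, streams: list) -> None:
    """Step 3: attach the tuned model to live camera streams for alerting,
    summarization and natural-language queries."""
    for stream in streams:
        print(f"agent[{model}] watching {stream}")


if __name__ == "__main__":
    twin = build_digital_twin("city_aerial_scan")
    clips = generate_synthetic_data(twin, ["rush_hour", "flooding", "night"])
    model = fine_tune_vlm("base-vlm", clips)
    deploy_agent(model, ["station_cam_01", "intersection_cam_07"])
```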
    NVIDIA Partner Ecosystem Powers Smart Cities Worldwide
    The blueprint for smart city AI enables a large ecosystem of partners to use a single workflow to build and activate digital twins for smart city use cases, tapping into a combination of NVIDIA’s technologies and their own.
    SNCF Gares&Connexions, which operates a network of 3,000 train stations across France and Monaco, has deployed a digital twin and AI agents to enable real-time operational monitoring, emergency response simulations and infrastructure upgrade planning.
    This helps each station analyze operational data such as energy and water use, and enables predictive maintenance capabilities, automated reporting and GDPR-compliant video analytics for incident detection and crowd management.
    Powered by Omniverse, Metropolis and solutions from ecosystem partners Akila and XXII, SNCF Gares&Connexions’ physical AI deployment at the Monaco-Monte-Carlo and Marseille stations has helped the operator achieve a 100% on-time preventive maintenance completion rate, a 50% reduction in downtime and issue response time, and a 20% reduction in energy consumption.

    The city of Palermo in Sicily is using AI agents and digital twins from its partner K2K to improve public health and safety by helping city operators process and analyze footage from over 1,000 public video streams at a rate of nearly 50 billion pixels per second.
    Tapped by the city, K2K’s AI agents — built with the NVIDIA AI Blueprint for VSS and cloud solutions from Nebius — can interpret and act on video data to provide real-time alerts on public events.
    To accurately predict and resolve traffic incidents, K2K is generating synthetic data with Cosmos world foundation models to simulate different driving conditions. Then, K2K uses the data to fine-tune the VLMs powering the AI agents with NeMo Curator. These simulations enable K2K’s AI agents to create over 100,000 predictions per second.
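    As a rough sanity check of the quoted throughput, assume each stream is Full HD at about 25 frames per second (both assumptions, since camera specifications are not given in the article); 1,000 such streams work out to just over 50 billion pixels per second:

```python
# Back-of-envelope check of the quoted figure, assuming (hypothetically)
# Full HD streams at 25 frames per second; the article does not state camera specs.
streams = 1_000
width, height = 1920, 1080   # assumed resolution per stream
fps = 25                     # assumed frame rate
pixels_per_second = streams * width * height * fps
print(f"{pixels_per_second / 1e9:.1f} billion pixels per second")  # ~51.8
```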

    Milestone Systems — in collaboration with NVIDIA and European cities — has launched Project Hafnia, an initiative to build an anonymized, ethically sourced video data platform for cities to develop and train AI models and applications while maintaining regulatory compliance.
    Using a combination of Cosmos and NeMo Curator on NVIDIA DGX Cloud and Nebius’ sovereign European cloud infrastructure, Project Hafnia scales up and enables European-compliant training and fine-tuning of video-centric AI models, including VLMs, for a variety of smart city use cases.
    The project’s initial rollout, taking place in Genoa, Italy, features one of the world’s first VLMs for intelligent transportation systems.

    Linker Vision was among the first to partner with NVIDIA to deploy smart city digital twins and AI agents for Kaohsiung City, Taiwan — powered by Omniverse, Cosmos and Metropolis. Linker Vision worked with AVES Reality, a digital twin company, to bring aerial imagery of cities and infrastructure into 3D geometry and ultimately into SimReady Omniverse digital twins.
    Linker Vision’s AI-powered application then built, trained and tested visual AI agents in a digital twin before deployment in the physical city. Now, it’s scaling to analyze 50,000 video streams in real time with generative AI to understand and narrate complex urban events like floods and traffic accidents. Linker Vision delivers timely insights to a dozen city departments through a single integrated AI-powered platform, breaking silos and reducing incident response times by up to 80%.

    Bentley Systems is joining the effort to bring physical AI to cities with the NVIDIA blueprint. Cesium, the open 3D geospatial platform, provides the foundation for visualizing, analyzing and managing infrastructure projects, and ports digital twins to Omniverse. Bentley’s AI platform Blyncsy uses synthetic data generation and Metropolis to analyze road conditions and improve maintenance.
    Trimble, a global technology company that enables essential industries including construction, geospatial and transportation, is exploring ways to integrate components of the Omniverse blueprint into its reality capture workflows and Trimble Connect digital twin platform for surveying and mapping applications for smart cities.
    Younite AI, a developer of AI and 3D digital twin solutions, is adopting the blueprint to accelerate its development pipeline, enabling the company to quickly move from operational digital twins to large-scale urban simulations, improve synthetic data generation, integrate real-time IoT sensor data and deploy AI agents.
    Learn more about the NVIDIA Omniverse Blueprint for smart city AI by attending this GTC Paris session or watching the on-demand video after the event. Sign up to be notified when the blueprint is available.
    Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.
  • Hexagon Taps NVIDIA Robotics and AI Software to Build and Deploy AEON, a New Humanoid

    As a global labor shortage leaves 50 million positions unfilled across industries like manufacturing and logistics, Hexagon — a global leader in measurement technologies — is developing humanoid robots that can lend a helping hand.
    Industrial sectors depend on skilled workers to perform a variety of error-prone tasks, including operating high-precision scanners for reality capture — the process of capturing digital data to replicate the real world in simulation.
    At the Hexagon LIVE Global conference, Hexagon’s robotics division today unveiled AEON — a new humanoid robot built in collaboration with NVIDIA that’s engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support. Hexagon plans to deploy AEON across automotive, transportation, aerospace, manufacturing, warehousing and logistics.
    Future use cases for AEON include:

    Reality capture, which involves automatic planning and then scanning of assets, industrial spaces and environments to generate 3D models. The captured data is then used for advanced visualization and collaboration in the Hexagon Digital Reality (HxDR) platform powering Hexagon Reality Cloud Studio (RCS).
    Manipulation tasks, such as sorting and moving parts in various industrial and manufacturing settings.
    Part inspection, which includes checking parts for defects or ensuring adherence to specifications.
    Industrial operations, including highly dexterous technical tasks like machinery operations, teleoperation and scanning parts using high-end scanners.

    “The age of general-purpose robotics has arrived, due to technological advances in simulation and physical AI,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “Hexagon’s new AEON humanoid embodies the integration of NVIDIA’s three-computer robotics platform and is making a significant leap forward in addressing industry-critical challenges.”

    Using NVIDIA’s Three Computers to Develop AEON 
    To build AEON, Hexagon used NVIDIA’s three computers for developing and deploying physical AI systems. They include AI supercomputers to train and fine-tune powerful foundation models; the NVIDIA Omniverse platform, running on NVIDIA OVX servers, for testing and optimizing these models in simulation environments using real and physically based synthetic data; and NVIDIA IGX Thor robotic computers to run the models.
    Hexagon is exploring using NVIDIA accelerated computing to post-train the NVIDIA Isaac GR00T N1.5 open foundation model to improve robot reasoning and policies, and tapping Isaac GR00T-Mimic to generate vast amounts of synthetic motion data from a few human demonstrations.
    AEON learns many of its skills through simulations powered by the NVIDIA Isaac platform. Hexagon uses NVIDIA Isaac Sim, a reference robotic simulation application built on Omniverse, to simulate complex robot actions like navigation, locomotion and manipulation. These skills are then refined using reinforcement learning in NVIDIA Isaac Lab, an open-source framework for robot learning.
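    Conceptually, that refinement stage is a standard reinforcement learning loop: a policy acts in simulation, collects reward and is nudged toward higher return. The toy sketch below illustrates only the shape of such a loop; the one-parameter policy, toy dynamics, reward and hill-climbing update are hypothetical stand-ins, not Isaac Lab code.

```python
# Generic, toy illustration of refining a policy in simulation.
# The one-parameter "policy", toy dynamics, reward and hill-climbing update
# are hypothetical stand-ins; this is not Isaac Lab code.
import random


def rollout(policy_gain: float, steps: int = 50) -> float:
    """Run one simulated episode of a trivial balance task; reward is higher
    the closer the state stays to zero under a proportional controller."""
    state, total_reward = 1.0, 0.0
    for _ in range(steps):
        action = -policy_gain * state                              # policy output
        state += 0.1 * (state + action) + random.gauss(0.0, 0.01)  # toy dynamics + noise
        total_reward -= abs(state)                                 # reward: stay near zero
    return total_reward


def train(iterations: int = 200) -> float:
    """Improve the single policy parameter by keeping perturbations that
    score better in simulated rollouts (random-search learning in miniature)."""
    gain, best = 0.0, rollout(0.0)
    for _ in range(iterations):
        candidate = gain + random.gauss(0.0, 0.1)  # perturb the policy
        score = rollout(candidate)
        if score > best:                           # keep only improvements
            gain, best = candidate, score
    return gain


if __name__ == "__main__":
    print(f"learned gain: {train():.2f}")
```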


    This simulation-first approach enabled Hexagon to fast-track its robotic development, allowing AEON to master core locomotion skills in just 2-3 weeks — rather than 5-6 months — before real-world deployment.
    In addition, AEON taps into NVIDIA Jetson Orin onboard computers to autonomously move, navigate and perform its tasks in real time, enhancing its speed and accuracy while operating in complex and dynamic environments. Hexagon is also planning to upgrade AEON with NVIDIA IGX Thor to enable functional safety for collaborative operation.
    “Our goal with AEON was to design an intelligent, autonomous humanoid that addresses the real-world challenges industrial leaders have shared with us over the past months,” said Arnaud Robert, president of Hexagon’s robotics division. “By leveraging NVIDIA’s full-stack robotics and simulation platforms, we were able to deliver a best-in-class humanoid that combines advanced mechatronics, multimodal sensor fusion and real-time AI.”
    Data Comes to Life Through Reality Capture and Omniverse Integration 
    AEON will be piloted in factories and warehouses to scan everything from small precision parts and automotive components to large assembly lines and storage areas.

    Captured data comes to life in RCS, a platform that allows users to collaborate, visualize and share reality-capture data by tapping into HxDR and NVIDIA Omniverse running in the cloud. This removes the constraint of local infrastructure.
    “Digital twins offer clear advantages, but adoption has been challenging in several industries,” said Lucas Heinzle, vice president of research and development at Hexagon’s robotics division. “AEON’s sophisticated sensor suite enables the integration of reality data capture with NVIDIA Omniverse, streamlining workflows for our customers and moving us closer to making digital twins a mainstream tool for collaboration and innovation.”
    AEON’s Next Steps
    By adopting the OpenUSD framework and developing on Omniverse, Hexagon can generate high-fidelity digital twins from scanned data — establishing a data flywheel to continuously train AEON.
    This latest work with Hexagon is helping shape the future of physical AI — delivering scalable, efficient solutions to address the challenges faced by industries that depend on capturing real-world data.
    Watch the Hexagon LIVE keynote, explore presentations and read more about AEON.
    All imagery courtesy of Hexagon.
  • NVIDIA and Partners Highlight Next-Generation Robotics, Automation and AI Technologies at Automatica

    From the heart of Germany’s automotive sector to manufacturing hubs across France and Italy, Europe is embracing industrial AI and advanced AI-powered robotics to address labor shortages, boost productivity and fuel sustainable economic growth.
    Robotics companies are developing humanoid robots and collaborative systems that integrate AI into real-world manufacturing applications. Supported by a $200 billion investment initiative and coordinated efforts from the European Commission, Europe is positioning itself at the forefront of the next wave of industrial automation, powered by AI.
    This momentum is on full display at Automatica — Europe’s premier conference on advancements in robotics, machine vision and intelligent manufacturing — taking place this week in Munich, Germany.
    NVIDIA and its ecosystem of partners and customers are showcasing next-generation robots, automation and AI technologies designed to accelerate the continent’s leadership in smart manufacturing and logistics.
    NVIDIA Technologies Boost Robotics Development 
    Central to advancing robotics development is Europe’s first industrial AI cloud, announced at NVIDIA GTC Paris at VivaTech earlier this month. The Germany-based AI factory, featuring 10,000 NVIDIA GPUs, provides European manufacturers with secure, sovereign and centralized AI infrastructure for industrial workloads. It will support applications ranging from design and engineering to factory digital twins and robotics.
    To help accelerate humanoid development, NVIDIA released NVIDIA Isaac GR00T N1.5 — an open foundation model for humanoid robot reasoning and skills. This update enhances the model’s adaptability and ability to follow instructions, significantly improving its performance in material handling and manufacturing tasks.
    To help post-train GR00T N1.5, NVIDIA has also released the Isaac GR00T-Dreams blueprint — a reference workflow for generating vast amounts of synthetic trajectory data from a small number of human demonstrations — enabling robots to generalize across behaviors and adapt to new environments with minimal human demonstration data.
    In addition, early developer previews of NVIDIA Isaac Sim 5.0 and Isaac Lab 2.2 — open-source robot simulation and learning frameworks optimized for NVIDIA RTX PRO 6000 workstations — are now available on GitHub.
    Image courtesy of Wandelbots.
    Robotics Leaders Tap NVIDIA Simulation Technology to Develop and Deploy Humanoids and More 
    Robotics developers and solutions providers across the globe are integrating NVIDIA’s three computers to train, simulate and deploy robots.
    NEURA Robotics, a German robotics company and pioneer in cognitive robots, unveiled the third generation of its humanoid, 4NE1, designed to assist humans in domestic and professional environments through advanced cognitive capabilities and humanlike interaction. 4NE1 is powered by GR00T N1 and was trained in Isaac Sim and Isaac Lab before real-world deployment.
    NEURA Robotics is also presenting Neuraverse, a digital twin and interconnected ecosystem for robot training, skills and applications, fully compatible with NVIDIA Omniverse technologies.
    Delta Electronics, a global leader in power management and smart green solutions, is debuting two next-generation collaborative robots: D-Bot Mar and D-Bot 2 in 1 — both trained using Omniverse and Isaac Sim technologies and libraries. These cobots are engineered to transform intralogistics and optimize production flows.
    Wandelbots, the creator of the Wandelbots NOVA software platform for industrial robotics, is partnering with SoftServe, a global IT consulting and digital services provider, to scale simulation-first automation using NVIDIA Isaac Sim, enabling virtual validation and real-world deployment with maximum impact.
    Cyngn, a pioneer in autonomous mobile robotics, is integrating its DriveMod technology into Isaac Sim to enable large-scale, high-fidelity virtual testing of advanced autonomous operation. Purpose-built for industrial applications, DriveMod is already deployed on vehicles such as the Motrec MT-160 Tugger and BYD Forklift, delivering sophisticated automation to material handling operations.
    Doosan Robotics, a company specializing in AI robotic solutions, will showcase its “sim to real” solution, using NVIDIA Isaac Sim and cuRobo. Doosan will be showcasing how to seamlessly transfer tasks from simulation to real robots across a wide range of applications — from manufacturing to service industries.
    Franka Robotics has integrated Isaac GR00T N1.5 into a dual-arm Franka Research 3 (FR3) robot for robotic control. The integration of GR00T N1.5 allows the system to interpret visual input, understand task context and autonomously perform complex manipulation — without the need for task-specific programming or hardcoded logic.
    Image courtesy of Franka Robotics.
    Hexagon, the global leader in measurement technologies, launched its new humanoid, dubbed AEON. With its unique locomotion system and multimodal sensor fusion, and powered by NVIDIA’s three-computer solution, AEON is engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support.
    Intrinsic, a software and AI robotics company, is integrating Intrinsic Flowstate with Omniverse and OpenUSD for advanced visualization and digital twins that can be used in many industrial use cases. The company is also using NVIDIA foundation models to enhance robot capabilities like grasp planning through AI and simulation technologies.
    SCHUNK, a global leader in gripping systems and automation technology, is showcasing its innovative grasping kit powered by the NVIDIA Jetson AGX Orin module. The kit intelligently detects objects and calculates optimal grasping points. Schunk is also demonstrating seamless simulation-to-reality transfer using IGS Virtuous software — built on Omniverse technologies — to control a real robot through simulation in a pick-and-place scenario.
    Universal Robots is showcasing UR15, its fastest cobot yet. Powered by the UR AI Accelerator — developed with NVIDIA and running on Jetson AGX Orin using CUDA-accelerated Isaac libraries — UR15 helps set a new standard for industrial automation.

    Vention, a full-stack software and hardware automation company, launched its Machine Motion AI, built on CUDA-accelerated Isaac libraries and powered by Jetson. Vention is also expanding its lineup of robotic offerings by adding the FR3 robot from Franka Robotics to its ecosystem, enhancing its solutions for academic and research applications.
    Image courtesy of Vention.
    Learn more about the latest robotics advancements by joining NVIDIA at Automatica, running through Friday, June 27. 
    #nvidia #partners #highlight #nextgeneration #robotics
    NVIDIA and Partners Highlight Next-Generation Robotics, Automation and AI Technologies at Automatica
    From the heart of Germany’s automotive sector to manufacturing hubs across France and Italy, Europe is embracing industrial AI and advanced AI-powered robotics to address labor shortages, boost productivity and fuel sustainable economic growth. Robotics companies are developing humanoid robots and collaborative systems that integrate AI into real-world manufacturing applications. Supported by a billion investment initiative and coordinated efforts from the European Commission, Europe is positioning itself at the forefront of the next wave of industrial automation, powered by AI. This momentum is on full display at Automatica — Europe’s premier conference on advancements in robotics, machine vision and intelligent manufacturing — taking place this week in Munich, Germany. NVIDIA and its ecosystem of partners and customers are showcasing next-generation robots, automation and AI technologies designed to accelerate the continent’s leadership in smart manufacturing and logistics. NVIDIA Technologies Boost Robotics Development  Central to advancing robotics development is Europe’s first industrial AI cloud, announced at NVIDIA GTC Paris at VivaTech earlier this month. The Germany-based AI factory, featuring 10,000 NVIDIA GPUs, provides European manufacturers with secure, sovereign and centralized AI infrastructure for industrial workloads. It will support applications ranging from design and engineering to factory digital twins and robotics. To help accelerate humanoid development, NVIDIA released NVIDIA Isaac GR00T N1.5 — an open foundation model for humanoid robot reasoning and skills. This update enhances the model’s adaptability and ability to follow instructions, significantly improving its performance in material handling and manufacturing tasks. To help post-train GR00T N1.5, NVIDIA has also released the Isaac GR00T-Dreams blueprint — a reference workflow for generating vast amounts of synthetic trajectory data from a small number of human demonstrations — enabling robots to generalize across behaviors and adapt to new environments with minimal human demonstration data. In addition, early developer previews of NVIDIA Isaac Sim 5.0 and Isaac Lab 2.2 — open-source robot simulation and learning frameworks optimized for NVIDIA RTX PRO 6000 workstations — are now available on GitHub. Image courtesy of Wandelbots. Robotics Leaders Tap NVIDIA Simulation Technology to Develop and Deploy Humanoids and More  Robotics developers and solutions providers across the globe are integrating NVIDIA’s three computers to train, simulate and deploy robots. NEURA Robotics, a German robotics company and pioneer for cognitive robots, unveiled the third generation of its humanoid, 4NE1, designed to assist humans in domestic and professional environments through advanced cognitive capabilities and humanlike interaction. 4NE1 is powered by GR00T N1 and was trained in Isaac Sim and Isaac Lab before real-world deployment. NEURA Robotics is also presenting Neuraverse, a digital twin and interconnected ecosystem for robot training, skills and applications, fully compatible with NVIDIA Omniverse technologies. Delta Electronics, a global leader in power management and smart green solutions, is debuting two next-generation collaborative robots: D-Bot Mar and D-Bot 2 in 1 — both trained using Omniverse and Isaac Sim technologies and libraries. These cobots are engineered to transform intralogistics and optimize production flows. 
Wandelbots, the creator of the Wandelbots NOVA software platform for industrial robotics, is partnering with SoftServe, a global IT consulting and digital services provider, to scale simulation-first automating using NVIDIA Isaac Sim, enabling virtual validation and real-world deployment with maximum impact. Cyngn, a pioneer in autonomous mobile robotics, is integrating its DriveMod technology into Isaac Sim to enable large-scale, high fidelity virtual testing of advanced autonomous operation. Purpose-built for industrial applications, DriveMod is already deployed on vehicles such as the Motrec MT-160 Tugger and BYD Forklift, delivering sophisticated automation to material handling operations. Doosan Robotics, a company specializing in AI robotic solutions, will showcase its “sim to real” solution, using NVIDIA Isaac Sim and cuRobo. Doosan will be showcasing how to seamlessly transfer tasks from simulation to real robots across a wide range of applications — from manufacturing to service industries. Franka Robotics has integrated Isaac GR00T N1.5 into a dual-arm Franka Research 3robot for robotic control. The integration of GR00T N1.5 allows the system to interpret visual input, understand task context and autonomously perform complex manipulation — without the need for task-specific programming or hardcoded logic. Image courtesy of Franka Robotics. Hexagon, the global leader in measurement technologies, launched its new humanoid, dubbed AEON. With its unique locomotion system and multimodal sensor fusion, and powered by NVIDIA’s three-computer solution, AEON is engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support. Intrinsic, a software and AI robotics company, is integrating Intrinsic Flowstate with  Omniverse and OpenUSD for advanced visualization and digital twins that can be used in many industrial use cases. The company is also using NVIDIA foundation models to enhance robot capabilities like grasp planning through AI and simulation technologies. SCHUNK, a global leader in gripping systems and automation technology, is showcasing its innovative grasping kit powered by the NVIDIA Jetson AGX Orin module. The kit intelligently detects objects and calculates optimal grasping points. Schunk is also demonstrating seamless simulation-to-reality transfer using IGS Virtuous software — built on Omniverse technologies — to control a real robot through simulation in a pick-and-place scenario. Universal Robots is showcasing UR15, its fastest cobot yet. Powered by the UR AI Accelerator — developed with NVIDIA and running on Jetson AGX Orin using CUDA-accelerated Isaac libraries — UR15 helps set a new standard for industrial automation. Vention, a full-stack software and hardware automation company, launched its Machine Motion AI, built on CUDA-accelerated Isaac libraries and powered by Jetson. Vention is also expanding its lineup of robotic offerings by adding the FR3 robot from Franka Robotics to its ecosystem, enhancing its solutions for academic and research applications. Image courtesy of Vention. Learn more about the latest robotics advancements by joining NVIDIA at Automatica, running through Friday, June 27.  #nvidia #partners #highlight #nextgeneration #robotics
    BLOGS.NVIDIA.COM
    NVIDIA and Partners Highlight Next-Generation Robotics, Automation and AI Technologies at Automatica
    From the heart of Germany’s automotive sector to manufacturing hubs across France and Italy, Europe is embracing industrial AI and advanced AI-powered robotics to address labor shortages, boost productivity and fuel sustainable economic growth. Robotics companies are developing humanoid robots and collaborative systems that integrate AI into real-world manufacturing applications. Supported by a $200 billion investment initiative and coordinated efforts from the European Commission, Europe is positioning itself at the forefront of the next wave of industrial automation, powered by AI. This momentum is on full display at Automatica — Europe’s premier conference on advancements in robotics, machine vision and intelligent manufacturing — taking place this week in Munich, Germany. NVIDIA and its ecosystem of partners and customers are showcasing next-generation robots, automation and AI technologies designed to accelerate the continent’s leadership in smart manufacturing and logistics. NVIDIA Technologies Boost Robotics Development  Central to advancing robotics development is Europe’s first industrial AI cloud, announced at NVIDIA GTC Paris at VivaTech earlier this month. The Germany-based AI factory, featuring 10,000 NVIDIA GPUs, provides European manufacturers with secure, sovereign and centralized AI infrastructure for industrial workloads. It will support applications ranging from design and engineering to factory digital twins and robotics. To help accelerate humanoid development, NVIDIA released NVIDIA Isaac GR00T N1.5 — an open foundation model for humanoid robot reasoning and skills. This update enhances the model’s adaptability and ability to follow instructions, significantly improving its performance in material handling and manufacturing tasks. To help post-train GR00T N1.5, NVIDIA has also released the Isaac GR00T-Dreams blueprint — a reference workflow for generating vast amounts of synthetic trajectory data from a small number of human demonstrations — enabling robots to generalize across behaviors and adapt to new environments with minimal human demonstration data. In addition, early developer previews of NVIDIA Isaac Sim 5.0 and Isaac Lab 2.2 — open-source robot simulation and learning frameworks optimized for NVIDIA RTX PRO 6000 workstations — are now available on GitHub. Image courtesy of Wandelbots. Robotics Leaders Tap NVIDIA Simulation Technology to Develop and Deploy Humanoids and More  Robotics developers and solutions providers across the globe are integrating NVIDIA’s three computers to train, simulate and deploy robots. NEURA Robotics, a German robotics company and pioneer for cognitive robots, unveiled the third generation of its humanoid, 4NE1, designed to assist humans in domestic and professional environments through advanced cognitive capabilities and humanlike interaction. 4NE1 is powered by GR00T N1 and was trained in Isaac Sim and Isaac Lab before real-world deployment. NEURA Robotics is also presenting Neuraverse, a digital twin and interconnected ecosystem for robot training, skills and applications, fully compatible with NVIDIA Omniverse technologies. Delta Electronics, a global leader in power management and smart green solutions, is debuting two next-generation collaborative robots: D-Bot Mar and D-Bot 2 in 1 — both trained using Omniverse and Isaac Sim technologies and libraries. These cobots are engineered to transform intralogistics and optimize production flows. 
    Wandelbots, the creator of the Wandelbots NOVA software platform for industrial robotics, is partnering with SoftServe, a global IT consulting and digital services provider, to scale simulation-first automation using NVIDIA Isaac Sim, enabling virtual validation and real-world deployment with maximum impact.
    Cyngn, a pioneer in autonomous mobile robotics, is integrating its DriveMod technology into Isaac Sim to enable large-scale, high-fidelity virtual testing of advanced autonomous operation. Purpose-built for industrial applications, DriveMod is already deployed on vehicles such as the Motrec MT-160 Tugger and BYD Forklift, delivering sophisticated automation to material handling operations.
    Doosan Robotics, a company specializing in AI robotic solutions, will showcase its “sim to real” solution, using NVIDIA Isaac Sim and cuRobo. Doosan will demonstrate how to seamlessly transfer tasks from simulation to real robots across a wide range of applications — from manufacturing to service industries.
    Franka Robotics has integrated Isaac GR00T N1.5 into a dual-arm Franka Research 3 (FR3) robot for robotic control. The integration of GR00T N1.5 allows the system to interpret visual input, understand task context and autonomously perform complex manipulation — without the need for task-specific programming or hardcoded logic.
    Image courtesy of Franka Robotics.
    Hexagon, the global leader in measurement technologies, launched its new humanoid, dubbed AEON. With its unique locomotion system and multimodal sensor fusion, and powered by NVIDIA’s three-computer solution, AEON is engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support.
    Intrinsic, a software and AI robotics company, is integrating Intrinsic Flowstate with Omniverse and OpenUSD for advanced visualization and digital twins that can be used in many industrial use cases. The company is also using NVIDIA foundation models to enhance robot capabilities like grasp planning through AI and simulation technologies.
    SCHUNK, a global leader in gripping systems and automation technology, is showcasing its innovative grasping kit powered by the NVIDIA Jetson AGX Orin module. The kit intelligently detects objects and calculates optimal grasping points. SCHUNK is also demonstrating seamless simulation-to-reality transfer using IGS Virtuous software — built on Omniverse technologies — to control a real robot through simulation in a pick-and-place scenario.
    Universal Robots is showcasing UR15, its fastest cobot yet. Powered by the UR AI Accelerator — developed with NVIDIA and running on Jetson AGX Orin using CUDA-accelerated Isaac libraries — UR15 helps set a new standard for industrial automation.
    Vention, a full-stack software and hardware automation company, launched its Machine Motion AI, built on CUDA-accelerated Isaac libraries and powered by Jetson. Vention is also expanding its lineup of robotic offerings by adding the FR3 robot from Franka Robotics to its ecosystem, enhancing its solutions for academic and research applications.
    Image courtesy of Vention.
    Learn more about the latest robotics advancements by joining NVIDIA at Automatica, running through Friday, June 27.
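    The Isaac GR00T-Dreams idea, expanding a handful of human demonstrations into a large synthetic trajectory dataset, can be pictured with a toy sketch. The snippet below is purely illustrative and is not the GR00T-Dreams workflow or API; the function augment_demonstrations, its parameters and the jitter-plus-time-warp scheme are hypothetical stand-ins for the far richer generation the blueprint performs.

    ```python
    # Illustrative only: a toy augmentation loop in the spirit of generating many
    # synthetic trajectories from a few human demonstrations. This is NOT the
    # Isaac GR00T-Dreams API; names and parameters here are hypothetical.
    import numpy as np

    def augment_demonstrations(demos, n_variants=1000, noise_std=0.01, seed=0):
        """Expand a few demonstration trajectories into a larger synthetic set
        by jittering waypoints and randomly time-warping each trajectory.

        demos: list of arrays, each of shape (T, D): T timesteps, D joint values.
        """
        rng = np.random.default_rng(seed)
        synthetic = []
        for _ in range(n_variants):
            base = demos[rng.integers(len(demos))]
            # Jitter each waypoint slightly to vary the motion.
            jittered = base + rng.normal(0.0, noise_std, size=base.shape)
            # Randomly stretch or compress the trajectory in time.
            t_new = np.linspace(0, len(base) - 1, int(len(base) * rng.uniform(0.9, 1.1)))
            warped = np.stack(
                [np.interp(t_new, np.arange(len(base)), jittered[:, d])
                 for d in range(base.shape[1])], axis=1)
            synthetic.append(warped)
        return synthetic

    # Example: three recorded demos of 50 timesteps across 7 joints.
    demos = [np.random.rand(50, 7) for _ in range(3)]
    dataset = augment_demonstrations(demos)
    print(len(dataset), dataset[0].shape)
    ```

    In a real pipeline the generated variants would also be filtered for physical feasibility before being used to post-train a policy model.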
  • It's astounding how many people still cling to outdated notions when it comes to the choice between hardware and software for electronics projects. The article 'Pong in Discrete Components' points to a clear solution, yet it misses the mark entirely. Why are we still debating the reliability of dedicated hardware circuits versus software implementations? Are we really that complacent?

    Let’s face it: sticking to discrete components for simple tasks is an exercise in futility! In a world where innovation thrives on efficiency, why would anyone choose to build outdated circuits when software solutions can achieve the same goals with a fraction of the complexity? It’s mind-boggling! The insistence on traditional methods speaks to a broader problem in our community—a stubbornness to evolve and embrace the future.

    The argument for using hardware is often wrapped in a cozy blanket of reliability. But let’s be honest, how reliable is that? Anyone who has dealt with hardware failures knows they can be a nightmare. Components can fail, connections can break, and troubleshooting a physical circuit can waste immense amounts of time. Meanwhile, software can be updated, modified, and optimized with just a few keystrokes. Why are we so quick to glorify something that is inherently flawed?

    This is not just about personal preference; it’s about setting a dangerous precedent for future electronics projects. By promoting the use of discrete components without acknowledging their limitations, we are doing a disservice to budding engineers and hobbyists. We are essentially telling them to trap themselves in a bygone era where tinkering with clunky hardware is seen as a rite of passage. It’s ridiculous!

    Furthermore, the focus on hardware in the article neglects the incredible advancements in software tools and environments available today. Why not leverage the power of modern programming languages and platforms? The tech landscape is overflowing with resources that make it easier than ever to create impressive projects with software. Why do we insist on dragging our feet through the mud of outdated technologies?

    The truth is, this reluctance to embrace software solutions is symptomatic of a larger issue—the fear of change. Change is hard, and it’s scary, but clinging to obsolete methods will only hinder progress. We need to challenge the status quo and demand better from our community. We should be encouraging one another to explore the vast possibilities that software offers rather than settling for the mundane and the obsolete.

    Let’s stop romanticizing the past and start looking forward. The world of electronics is rapidly evolving, and it’s time we caught up. Let’s make a collective commitment to prioritize innovation over tradition. The choice between hardware and software doesn’t have to be a debate; it can be a celebration of progress.

    #InnovationInElectronics
    #SoftwareOverHardware
    #ProgressNotTradition
    #EmbraceTheFuture
    #PongInDiscreteComponents
  • Ankur Kothari Q&A: Customer Engagement Book Interview

    Reading Time: 9 minutes
    In marketing, data isn’t a buzzword. It’s the lifeblood of all successful campaigns.
    But are you truly harnessing its power, or are you drowning in a sea of information? To answer this question, we sat down with Ankur Kothari, a seasoned Martech expert, to dive deep into this crucial topic.
    This interview, originally conducted for Chapter 6 of “The Customer Engagement Book: Adapt or Die” explores how businesses can translate raw data into actionable insights that drive real results.
    Ankur shares his wealth of knowledge on identifying valuable customer engagement data, distinguishing between signal and noise, and ultimately, shaping real-time strategies that keep companies ahead of the curve.

     
    Ankur Kothari Q&A Interview
    1. What types of customer engagement data are most valuable for making strategic business decisions?
    Primarily, there are four different buckets of customer engagement data. I would begin with behavioral data, encompassing website interaction, purchase history, and other app usage patterns.
    Second would be demographic information: age, location, income, and other relevant personal characteristics.
    Third would be sentiment analysis, where we derive information from social media interaction, customer feedback, or other customer reviews.
    Fourth would be the customer journey data.

    We track touchpoints across various channels of the customers to understand the customer journey path and conversion. Combining these four primary sources helps us understand the engagement data.

    2. How do you distinguish between data that is actionable versus data that is just noise?
    First is relevance to your business objectives: actionable data directly relates to your specific goals or KPIs. Then we take help from statistical significance.
    Actionable data shows clear patterns or trends that are statistically valid, whereas other data consists of random fluctuations or outliers, which may not be what you are interested in.

    You also want to make sure that there is consistency across sources.
    Actionable insights are typically corroborated by multiple data points or channels, while other data or noise can be more isolated and contradictory.
    Actionable data suggests clear opportunities for improvement or decision making, whereas noise does not lead to meaningful actions or changes in strategy.

    By applying these criteria, I can effectively filter out the noise and focus on data that delivers or drives valuable business decisions.
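    As an editorial illustration of the signal-versus-noise filter described here (statistical significance plus corroboration across sources), the sketch below keeps only metric changes that pass a two-proportion z-test and are seen in more than one channel. The channel figures, thresholds and the two-channel corroboration rule are assumptions for the example, not anything from the interview.

    ```python
    # Minimal "actionable vs. noise" filter: statistically significant lifts,
    # corroborated by at least two channels. Data and thresholds are illustrative.
    from math import sqrt
    from statistics import NormalDist

    def significant_lift(conv_a, n_a, conv_b, n_b, alpha=0.05):
        """Two-proportion z-test: is the change in conversion rate real?"""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        if se == 0:
            return False
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return p_value < alpha

    # Per channel: (baseline conversions, baseline n, test conversions, test n)
    channels = {
        "email":  (120, 10_000, 168, 10_000),
        "web":    (300, 25_000, 355, 25_000),
        "social": (40,   5_000,  44,  5_000),   # small lift, likely just noise
    }

    significant = [ch for ch, d in channels.items() if significant_lift(*d)]
    # Treat the pattern as actionable only if at least two channels agree.
    actionable = len(significant) >= 2
    print(significant, "actionable:", actionable)
    ```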

    3. How can customer engagement data be used to identify and prioritize new business opportunities?
    First, it helps us to uncover unmet needs.

    By analyzing the customer feedback, touch points, support interactions, or usage patterns, we can identify the gaps in our current offerings or areas where customers are experiencing pain points.

    Second would be identifying emerging needs.
    Monitoring changes in customer behavior or preferences over time can reveal new market trends or shifts in demand, allowing my company to adapt their products or services accordingly.
    Third would be segmentation analysis.
    Detailed customer data analysis enables us to identify unserved or underserved segments or niche markets that may represent untapped opportunities for growth or expansion into newer areas and new geographies.
    Last is to build competitive differentiation.

    Engagement data can highlight where our companies outperform competitors, helping us to prioritize opportunities that leverage existing strengths and unique selling propositions.

    4. Can you share an example of where data insights directly influenced a critical decision?
    I will share an example from my previous organization, a financial services company, where we were very data-driven. That made a major impact on a critical decision regarding our credit card offerings.
    We analyzed the customer engagement data, and we discovered that a large segment of our millennial customers were underutilizing our traditional credit cards but showed high engagement with mobile payment platforms.
    That insight led us to develop and launch our first digital credit card product with enhanced mobile features and rewards tailored to the millennial spending habits. Since we had access to a lot of transactional data as well, we were able to build a financial product which met that specific segment’s needs.

    That data-driven decision resulted in a 40% increase in our new credit card applications from this demographic within the first quarter of the launch. Subsequently, our market share improved in that specific segment, which was very crucial.

    5. Are there any other examples of ways that you see customer engagement data being able to shape marketing strategy in real time?
    When it comes to using the engagement data in real-time, we do quite a few things. In the recent past two, three years, we are using that for dynamic content personalization, adjusting the website content, email messaging, or ad creative based on real-time user behavior and preferences.
    We automate campaign optimization using specific AI-driven tools to continuously analyze performance metrics and automatically reallocate the budget to top-performing channels or ad segments.
    Then we also build responsive social media engagement platforms like monitoring social media sentiments and trending topics to quickly adapt the messaging and create timely and relevant content.

    With one-on-one personalization, we do a lot of A/B testing as part of overall rapid testing of marketing elements like subject lines and CTAs, and we build on the most successful variants of the campaigns.
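    As a hedged sketch of the automated budget reallocation mentioned above, the snippet below shifts spend toward channels with better recent return on ad spend (ROAS) while reserving a small exploration floor per channel. The figures and the proportional rule are illustrative assumptions, not any specific vendor's optimization logic.

    ```python
    # Reallocate spend proportionally to recent ROAS, with a per-channel floor.
    # Channel data and the rule itself are illustrative assumptions.
    def reallocate_budget(total_budget, roas_by_channel, floor_share=0.05):
        """Allocate budget proportionally to recent return on ad spend."""
        n = len(roas_by_channel)
        floor = total_budget * floor_share          # guaranteed minimum per channel
        flexible = total_budget - floor * n         # remainder split by performance
        total_roas = sum(roas_by_channel.values())
        return {
            ch: floor + flexible * (roas / total_roas)
            for ch, roas in roas_by_channel.items()
        }

    weekly_roas = {"search": 4.2, "social": 2.1, "display": 0.9}
    print(reallocate_budget(10_000, weekly_roas))
    # -> approximately search 5458, social 2979, display 1562
    ```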

    6. How are you doing the 1:1 personalization?
    We have advanced CDP systems, and we are tracking each customer’s behavior in real-time. So the moment they move to different channels, we know what the context is, what the relevance is, and the recent interaction points, so we can cater the right offer.
    So for example, if you looked at a certain offer on the website and you came from Google, and then the next day you walk into an in-person interaction, our agent will already know that you were looking at that offer.
    That gives our customer or potential customer more one-to-one personalization instead of just segment-based or bulk interaction kind of experience.

    We have a huge team of data scientists, data analysts, and AI model creators who help us to analyze big volumes of data and bring the right insights to our marketing and sales team so that they can provide the right experience to our customers.

    7. What role does customer engagement data play in influencing cross-functional decisions, such as with product development, sales, and customer service?
    Primarily with product development — we have different products, not just the financial products (or whatever products an organization sells) but also products like the mobile apps or websites customers use for transactions. So that kind of product development gets improved.
    The engagement data helps our sales and marketing teams create more targeted campaigns, optimize channel selection, and refine messaging to resonate with specific customer segments.

    Customer service also gets helped by anticipating common issues, personalizing support interactions over the phone or email or chat, and proactively addressing potential problems, leading to improved customer satisfaction and retention.

    So in general, cross-functional application of engagement improves the customer-centric approach throughout the organization.

    8. What do you think some of the main challenges marketers face when trying to translate customer engagement data into actionable business insights?
    I think the biggest challenge is the huge amount of data we are dealing with. As we are getting more digitally savvy and most of the customers are moving to digital channels, we are getting a lot of data, and that sheer volume of data can be overwhelming, making it very difficult to identify truly meaningful patterns and insights.

    Because of the huge data overload, we create data silos in this process, so information often exists in separate systems across different departments. We are not able to build a holistic view of customer engagement.

    Because of data silos and overload of data, data quality issues appear. There is inconsistency, and inaccurate data can lead to incorrect insights or poor decision-making. Quality issues could also be due to the wrong format of the data, or the data is stale and no longer relevant.
    As we are growing and adding more people to help us understand customer engagement, I’ve also noticed that technical folks, especially data scientists and data analysts, lack skills to properly interpret the data or apply data insights effectively.
    So there’s a lack of understanding of marketing and sales as domains.
    It’s a huge effort and can take a lot of investment.

    Not being able to calculate the ROI of your overall investment is a big challenge that many organizations are facing.

    9. Why do you think the analysts don’t have the business acumen to properly do more than analyze the data?
    If people do not have the right idea of why we are collecting this data, we collect a lot of noise, and that brings in huge volumes of data. If you cannot stop that from step one—not bringing noise into the data system—that cannot be done by just technical folks or people who do not have business knowledge.
    Business people do not know everything about what data is being collected from which source and what data they need. It’s a gap between business domain knowledge, specifically marketing and sales needs, and technical folks who don’t have a lot of exposure to that side.

    Similarly, marketing business people do not have much exposure to the technical side — what’s possible to do with data, how much effort it takes, what’s relevant versus not relevant, and how to prioritize which data sources will be most important.

    10. Do you have any suggestions for how this can be overcome, or have you seen it in action where it has been solved before?
    First, cross-functional training: training different roles to help them understand why we’re doing this and what the business goals are, giving technical people exposure to what marketing and sales teams do.
    And giving business folks exposure to the technology side through training on different tools, strategies, and the roadmap of data integrations.
    The second is helping teams work more collaboratively. So it’s not like the technology team works in a silo and comes back when their work is done, and then marketing and sales teams act upon it.

    Now we’re making it more like one team. You work together so that you can complement each other, and we have a better strategy from day one.

    11. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations?
    We present clear business cases where we demonstrate how data-driven recommendations can directly align with business objectives and potential ROI.
    We build compelling visualizations, easy-to-understand charts and graphs that clearly illustrate the insights and the implications for business goals.

    We also do a lot of POCs and pilot projects with small-scale implementations to showcase tangible results and build confidence in the data-driven approach throughout the organization.

    12. What technologies or tools have you found most effective for gathering and analyzing customer engagement data?
    I’ve found that Customer Data Platforms help us unify customer data from various sources, providing a comprehensive view of customer interactions across touch points.
    Having advanced analytics platforms — tools with AI and machine learning capabilities that can process large volumes of data and uncover complex patterns and insights — is a great value to us.
    We always use, or many organizations use, marketing automation systems to improve marketing team productivity, helping us track and analyze customer interactions across multiple channels.
    Another thing is social media listening tools, wherever your brand is mentioned or you want to measure customer sentiment over social media, or track the engagement of your campaigns across social media platforms.

    Last is web analytics tools, which provide detailed insights into your website visitors’ behaviors and engagement metrics across browsers, devices, and mobile apps.

    13. How do you ensure data quality and consistency across multiple channels to make these informed decisions?
    We established clear guidelines for data collection, storage, and usage across all channels to maintain consistency. Then we use data integration platforms — tools that consolidate data from various sources into a single unified view, reducing discrepancies and inconsistencies.
    While we collect data from different sources, we clean the data so it becomes cleaner with every stage of processing.
    We also conduct regular data audits — performing periodic checks to identify and rectify data quality issues, ensuring accuracy and reliability of information. We also deploy standardized data formats.

    On top of that, we have various automated data cleansing tools, specific software to detect and correct data errors, redundancies, duplicates, and inconsistencies in data sets automatically.
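    As an illustration of the automated cleansing pass described above, here is a minimal pandas sketch that standardizes formats, drops unusable records, removes duplicates and flags stale data. The column names, formats and rules are assumptions for the example, not a specific tool's behavior.

    ```python
    # Minimal cleansing pass: standardize, drop unusable rows, dedupe, flag stale.
    # Column names, formats and thresholds are illustrative assumptions.
    import pandas as pd

    def cleanse(events: pd.DataFrame) -> pd.DataFrame:
        df = events.copy()
        # Standardize formats so records from different channels line up.
        df["email"] = df["email"].str.strip().str.lower()
        df["timestamp"] = pd.to_datetime(df["timestamp"], errors="coerce", utc=True)
        # Drop records that are unusable after standardization.
        df = df.dropna(subset=["email", "timestamp"])
        # Remove duplicates, keeping the most recent record per customer and event.
        df = (df.sort_values("timestamp")
                .drop_duplicates(subset=["email", "event"], keep="last"))
        # Flag stale data instead of silently keeping it.
        cutoff = pd.Timestamp.now(tz="UTC") - pd.Timedelta(days=365)
        df["is_stale"] = df["timestamp"] < cutoff
        return df

    events = pd.DataFrame({
        "email": [" Ana@Example.com", "ana@example.com", "bo@example.com"],
        "event": ["purchase", "purchase", "signup"],
        "timestamp": ["2024-06-01T10:00Z", "2024-06-01T10:00Z", "not-a-date"],
    })
    print(cleanse(events))
    ```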

    14. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years?
    The first thing that’s been the biggest trend from the past two years is AI-driven decision making, which I think will become more prevalent, with advanced algorithms processing vast amounts of engagement data in real-time to inform strategic choices.
    Somewhat related to this is predictive analytics, which will play an even larger role, enabling businesses to anticipate customer needs and market trends with more accuracy and better predictive capabilities.
    We also touched upon hyper-personalization. We are all trying to strive toward more hyper-personalization at scale, which is more one-on-one personalization, as we are increasingly capturing more engagement data and have bigger systems and infrastructure to support processing those large volumes of data so we can achieve those hyper-personalization use cases.
    As the world is collecting more data, privacy concerns and regulations come into play.
    I believe in the next few years there will be more innovation toward how businesses can collect data ethically and what the usage practices are, leading to more transparent and consent-based engagement data strategies.
    And lastly, I think about the integration of engagement data, which is always a big challenge. I believe as we’re solving those integration challenges, we are adding more and more complex data sources to the picture.

    So I think there will need to be more innovation or sophistication brought into data integration strategies, which will help us take a truly customer-centric approach to strategy formulation.

     
    This interview Q&A was conducted with Ankur Kothari, a former Martech executive, for Chapter 6 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
    The post Ankur Kothari Q&A: Customer Engagement Book Interview appeared first on MoEngage.
  • Air-Conditioning Can Help the Power Grid instead of Overloading It

    June 13, 2025 | 6 min read
    Air-Conditioning Can Surprisingly Help the Power Grid during Extreme Heat
    Switching on air-conditioning during extreme heat doesn’t have to make us feel guilty—it can actually boost power grid reliability and help bring more renewable energy online.
    By Johanna Mathieu & The Conversation US
    Image credit: imagedepotpro/Getty Images

    The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.
    As summer arrives, people are turning on air conditioners in most of the U.S. But if you’re like me, you always feel a little guilty about that. Past generations managed without air conditioning – do I really need it? And how bad is it to use all this electricity for cooling in a warming world?
    If I leave my air conditioner off, I get too hot. But if everyone turns on their air conditioner at the same time, electricity demand spikes, which can force power grid operators to activate some of the most expensive, and dirtiest, power plants. Sometimes those spikes can ask too much of the grid and lead to brownouts or blackouts.
    Research I recently published with a team of scholars makes me feel a little better, though. We have found that it is possible to coordinate the operation of large numbers of home air-conditioning units, balancing supply and demand on the power grid – and without making people endure high temperatures inside their homes.
    Studies along these lines, using remote control of air conditioners to support the grid, have for many years explored theoretical possibilities like this. However, few approaches have been demonstrated in practice, and never for such a high-value application at this scale. The system we developed not only demonstrated the ability to balance the grid on timescales of seconds, but also proved it was possible to do so without affecting residents’ comfort.
    The benefits include increasing the reliability of the power grid, which makes it easier for the grid to accept more renewable energy. Our goal is to turn air conditioners from a challenge for the power grid into an asset, supporting a shift away from fossil fuels toward cleaner energy.

    Adjustable equipment
    My research focuses on batteries, solar panels and electric equipment – such as electric vehicles, water heaters, air conditioners and heat pumps – that can adjust itself to consume different amounts of energy at different times.
    Originally, the U.S. electric grid was built to transport electricity from large power plants to customers’ homes and businesses. And originally, power plants were large, centralized operations that burned coal or natural gas, or harvested energy from nuclear reactions. These plants were typically always available and could adjust how much power they generated in response to customer demand, so the grid would be balanced between power coming in from producers and being used by consumers.
    But the grid has changed. There are more renewable energy sources, from which power isn’t always available – like solar panels at night or wind turbines on calm days. And there are the devices and equipment I study.
    These newer options, called “distributed energy resources,” generate or store energy near where consumers need it – or adjust how much energy they’re using in real time.

    One aspect of the grid hasn’t changed, though: There’s not much storage built into the system. So every time you turn on a light, for a moment there’s not enough electricity to supply everything that wants it right then: The grid needs a power producer to generate a little more power. And when you turn off a light, there’s a little too much: A power producer needs to ramp down.

    The way power plants know what real-time power adjustments are needed is by closely monitoring the grid frequency. The goal is to provide electricity at a constant frequency – 60 hertz – at all times. If more power is needed than is being produced, the frequency drops and a power plant boosts output. If there’s too much power being produced, the frequency rises and a power plant slows production a little. These actions, a process called “frequency regulation,” happen in a matter of seconds to keep the grid balanced.

    This output flexibility, primarily from power plants, is key to keeping the lights on for everyone.

    Finding new options

    I’m interested in how distributed energy resources can improve flexibility in the grid. They can release more energy, or consume less, to respond to the changing supply or demand, and help balance the grid, ensuring the frequency remains near 60 hertz.

    Some people fear that doing so might be invasive, giving someone outside your home the ability to control your battery or air conditioner. Therefore, we wanted to see if we could help balance the grid with frequency regulation using home air-conditioning units rather than power plants – without affecting how residents use their appliances or how comfortable they are in their homes.

    From 2019 to 2023, my group at the University of Michigan tried this approach, in collaboration with researchers at Pecan Street Inc., Los Alamos National Laboratory and the University of California, Berkeley, with funding from the U.S. Department of Energy Advanced Research Projects Agency-Energy.

    We recruited 100 homeowners in Austin, Texas, to do a real-world test of our system. All the homes had whole-house forced-air cooling systems, which we connected to custom control boards and sensors the owners allowed us to install in their homes. This equipment let us send instructions to the air-conditioning units based on the frequency of the grid.

    Before I explain how the system worked, I first need to explain how thermostats work. When people set thermostats, they pick a temperature, and the thermostat switches the air-conditioning compressor on and off to maintain the air temperature within a small range around that set point. If the temperature is set at 68 degrees, the thermostat turns the AC on when the temperature is, say, 70, and turns it off when it’s cooled down to, say, 66.

    Every few seconds, our system slightly changed the timing of air-conditioning compressor switching for some of the 100 air conditioners, causing the units’ aggregate power consumption to change. In this way, our small group of home air conditioners reacted to grid changes the way a power plant would – using more or less energy to balance the grid and keep the frequency near 60 hertz. Moreover, our system was designed to keep home temperatures within the same small temperature range around the set point.

    Testing the approach

    We ran our system in four tests, each lasting one hour.
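    To make the switching idea concrete, here is a minimal, hypothetical Python sketch of a frequency-responsive thermostat fleet. It is not the study’s actual controller: the fleet size, the 0.05 Hz saturation point, the 3.5 kW per-compressor load and the toy thermal model are all illustrative assumptions. The only point it demonstrates is that a grid-frequency signal can shift when compressors switch on and off without ever pushing a home outside its thermostat deadband.

    import random

    NOMINAL_HZ = 60.0
    SETPOINT_F = 68.0
    DEADBAND_F = 2.0            # thermostat keeps temperature within +/- 2 F of the set point

    class Home:
        def __init__(self):
            self.temp = SETPOINT_F + random.uniform(-DEADBAND_F, DEADBAND_F)
            self.ac_on = random.random() < 0.5

        def step(self, freq_hz, dt_min=1.0):
            # Bias in [-1, +1]: positive when frequency is high (surplus generation,
            # so the fleet should consume more), negative when frequency is low.
            bias = max(-1.0, min(1.0, (freq_hz - NOMINAL_HZ) / 0.05))
            # Shift the switching thresholds, but never outside the comfort deadband.
            on_threshold = SETPOINT_F + DEADBAND_F - max(bias, 0.0) * DEADBAND_F
            off_threshold = SETPOINT_F - DEADBAND_F + max(-bias, 0.0) * DEADBAND_F
            if self.temp >= on_threshold:
                self.ac_on = True
            elif self.temp <= off_threshold:
                self.ac_on = False
            # Crude thermal model: the house warms slowly, the AC cools it faster.
            self.temp += (-0.15 if self.ac_on else 0.08) * dt_min

    def fleet_load_kw(homes):
        return sum(3.5 for h in homes if h.ac_on)    # assume ~3.5 kW per running compressor

    homes = [Home() for _ in range(100)]
    for freq in (60.00, 59.97, 59.95, 60.03):        # example frequency readings
        for h in homes:
            h.step(freq)
        print(f"freq = {freq:.2f} Hz -> aggregate AC load ~ {fleet_load_kw(homes):.0f} kW")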
    We found two encouraging results. First, the air conditioners were able to provide frequency regulation at least as accurately as a traditional power plant. Therefore, we showed that air conditioners could play a significant role in increasing grid flexibility. But perhaps more importantly – at least in terms of encouraging people to participate in these types of systems – we found that we were able to do so without affecting people’s comfort in their homes.

    We found that home temperatures did not deviate more than 1.6 Fahrenheit from their set point. Homeowners were allowed to override the controls if they got uncomfortable, but most didn’t. For most tests, we received zero override requests. In the worst case, we received override requests from two of the 100 homes in our test.

    In practice, this sort of technology could be added to commercially available internet-connected thermostats. In exchange for credits on their energy bills, users could choose to join a service run by the thermostat company, their utility provider or some other third party.

    Then people could turn on the air conditioning in the summer heat without that pang of guilt, knowing they were helping to make the grid more reliable and more capable of accommodating renewable energy sources – without sacrificing their own comfort in the process.

    This article was originally published on The Conversation. Read the original article.
  • 9 menial tasks ChatGPT can handle in seconds, saving you hours

    ChatGPT is rapidly changing the world. The process is already happening, and it’s only going to accelerate as the technology improves, as more people gain access to it, and as more learn how to use it.
    What’s shocking is just how many tasks ChatGPT is already capable of managing for you. While the naysayers may still look down their noses at the potential of AI assistants, I’ve been using ChatGPT to handle all kinds of menial tasks for me. Here are my favorite examples.

    Further reading: This tiny ChatGPT feature helps me tackle my days more productively

    Write your emails for you
    Dave Parrack / Foundry
    We’ve all been faced with the tricky task of writing an email—whether personal or professional—but not knowing quite how to word it. ChatGPT can do the heavy lifting for you, penning the (hopefully) perfect email based on whatever information you feed it.
    Let’s assume the email you need to write is of a professional nature, and wording it poorly could negatively affect your career. By directing ChatGPT to write the email with a particular structure, content, and tone of voice, you can give yourself a huge head start.
    A winning tip for this is to never accept ChatGPT’s first attempt. Always read through it and look for areas of improvement, then request tweaks to ensure you get the best possible email. You can (and should) also rewrite the email in your own voice. Learn more about how ChatGPT coached my colleague to write better emails.
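    If you would rather script this than type into the ChatGPT app, the same idea translates directly to the API. The snippet below is a minimal, hypothetical sketch using OpenAI’s official Python SDK; the model name, the prompt wording and the assumption that an API key is set in the OPENAI_API_KEY environment variable are illustrative choices, not something from this article.

    # Hypothetical sketch: drafting an email with a specified structure and tone.
    # Assumes `pip install openai` and an API key in the OPENAI_API_KEY variable.
    from openai import OpenAI

    client = OpenAI()   # reads OPENAI_API_KEY from the environment

    brief = (
        "Write a professional email to my manager asking to move Friday's "
        "project review to next Tuesday. Tone: polite and confident. "
        "Structure: one line of context, the request, a proposed new time, "
        "a short thank-you. Keep it under 120 words."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",    # illustrative model choice
        messages=[
            {"role": "system", "content": "You are a concise business-writing assistant."},
            {"role": "user", "content": brief},
        ],
    )

    print(response.choices[0].message.content)   # then edit the draft into your own voice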

    Generate itineraries and schedules
    Dave Parrack / Foundry
    If you’re going on a trip but you’re the type of person who hates planning trips, then you should utilize ChatGPT’s ability to generate trip itineraries. The results can be customized to the nth degree depending on how much detail and instruction you’re willing to provide.
    As someone who likes to get away at least once a year but also wants to make the most of every trip, leaning on ChatGPT for an itinerary is essential for me. I’ll provide the location and the kinds of things I want to see and do, then let it handle the rest. Instead of spending days researching everything myself, ChatGPT does 80 percent of it for me.
    As with all of these tasks, you don’t need to accept ChatGPT’s first effort. Use different prompts to force the AI chatbot to shape the itinerary closer to what you want. You’d be surprised at how many cool ideas you’ll encounter this way—simply nix the ones you don’t like.

    Break down difficult concepts
    Dave Parrack / Foundry
    One of the best tasks to assign to ChatGPT is the explanation of difficult concepts. Ask ChatGPT to explain any concept you can think of and it will deliver more often than not. You can tailor the level of explanation you need, and even have it include visual elements.
    Let’s say, for example, that a higher-up at work regularly lectures everyone about the importance of networking. But maybe they never go into detail about what they mean, just constantly pushing the why without explaining the what. Well, just ask ChatGPT to explain networking!
    Okay, most of us know what “networking” is and the concept isn’t very hard to grasp. But you can do this with anything. Ask ChatGPT to explain augmented reality, multi-threaded processing, blockchain, large language models, what have you. It will provide you with a clear and simple breakdown, maybe even with analogies and images.

    Analyze and make tough decisions
    Dave Parrack / Foundry
    We all face tough decisions every so often. The next time you find yourself wrestling with a particularly tough one—and you just can’t decide one way or the other—try asking ChatGPT for guidance and advice.
    It may sound strange to trust any kind of decision to artificial intelligence, let alone an important one that has you stumped, but doing so actually makes a lot of sense. While human judgment can be clouded by emotions, AI can set that aside and prioritize logic.
    It should go without saying: you don’t have to accept ChatGPT’s answers. Use the AI to weigh the pros and cons, to help you understand what’s most important to you, and to suggest a direction. Who knows? If you find yourself not liking the answer given, that in itself might clarify what you actually want—and the right answer for you. This is the kind of stuff ChatGPT can do to improve your life.

    Plan complex projects and strategies
    Dave Parrack / Foundry
    Most jobs come with some level of project planning and management. Even I, as a freelance writer, need to plan tasks to get projects completed on time. And that’s where ChatGPT can prove invaluable, breaking projects up into smaller, more manageable parts.
    ChatGPT needs to know the nature of the project, the end goal, any constraints you may have, and what you have done so far. With that information, it can then break the project up with a step-by-step plan, and break it down further into phases (if required).
    If ChatGPT doesn’t initially split your project up in a way that suits you, try again. Change up the prompts and make the AI chatbot tune in to exactly what you’re looking for. It takes a bit of back and forth, but it can shorten your planning time from hours to mere minutes.

    Compile research notes
    Dave Parrack / Foundry
    If you need to research a given topic of interest, ChatGPT can save you the hassle of compiling that research. For example, ahead of a trip to Croatia, I wanted to know more about the Croatian War of Independence, so I asked ChatGPT to provide me with a brief summary of the conflict with bullet points to help me understand how it happened.
    After absorbing all that information, I asked ChatGPT to add a timeline of the major events, further helping me to understand how the conflict played out. ChatGPT then offered to provide me with battle maps and/or summaries, plus profiles of the main players.
    You can go even deeper with ChatGPT’s Deep Research feature, which is now available to free users, up to 5 Deep Research tasks per month. With Deep Research, ChatGPT conducts multi-step research to generate comprehensive reports (with citations!) based on large amounts of information across the internet. A Deep Research task can take up to 30 minutes to complete, but it’ll save you hours or even days.

    Summarize articles, meetings, and more
    Dave Parrack / Foundry
    There are only so many hours in the day, yet so many new articles published on the web day in and day out. When you come across extra-long reads, it can be helpful to run them through ChatGPT for a quick summary. Then, if the summary is lacking in any way, you can go back and plow through the article proper.
    As an example, I ran one of my own PCWorld articles (where I compared Bluesky and Threads as alternatives to X) through ChatGPT, which provided a brief summary of my points and broke down the best X alternative based on my reasons given. Interestingly, it also pulled elements from other articles. (Hmph.) If you don’t want that, you can tell ChatGPT to limit its summary to the contents of the link.
    This is a great trick to use for other long-form, text-heavy content that you just don’t have the time to crunch through. Think transcripts for interviews, lectures, videos, and Zoom meetings. The only caveat is to never share private details with ChatGPT, like company-specific data that’s protected by NDAs and the like.
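    For transcripts too long to paste in one go, the same pattern can be scripted by summarizing in chunks and then merging the partial summaries. This is a hypothetical sketch, again assuming the OpenAI Python SDK; the chunk size, file name, model and prompts are arbitrary, and anything NDA-protected should stay out of such requests entirely.

    # Hypothetical sketch: chunked summarization of a long transcript file.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"

    def ask(prompt: str) -> str:
        resp = client.chat.completions.create(
            model=MODEL,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def summarize(path: str, chunk_chars: int = 12_000) -> str:
        text = open(path, encoding="utf-8").read()
        chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
        partials = [
            ask(f"Summarize this part of a meeting transcript in 5 bullet points:\n\n{c}")
            for c in chunks
        ]
        return ask("Merge these partial summaries into one bullet-point summary, "
                   "followed by a list of action items:\n\n" + "\n\n".join(partials))

    print(summarize("meeting_transcript.txt"))   # hypothetical file name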

    Create Q&A flashcards for learning
    Dave Parrack / Foundry
    Flashcards can be extremely useful for drilling a lot of information into your brain, such as when studying for an exam, onboarding in a new role, prepping for an interview, etc. And with ChatGPT, you no longer have to painstakingly create those flashcards yourself. All you have to do is tell the AI the details of what you’re studying.
    You can specify the format (such as Q&A or multiple choice), as well as various other elements. You can also choose to keep things broad or target specific sub-topics or concepts you want to focus on. You can even upload your own notes for ChatGPT to reference. You can also use Google’s NotebookLM app in a similar way.
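    Here is one way that format request could look if you script it. This is a hypothetical sketch: asking for the flashcards as JSON makes them easy to load into whatever study tool you use, and the topic, schema and model name are invented for illustration.

    # Hypothetical sketch: generating Q&A flashcards as JSON.
    import json
    from openai import OpenAI

    client = OpenAI()

    prompt = (
        "Create 10 question-and-answer flashcards about TCP vs. UDP for a "
        "networking exam. Respond in JSON: an object with a 'cards' array, "
        "where each card has 'question' and 'answer' fields."
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},   # ask for well-formed JSON back
    )

    cards = json.loads(resp.choices[0].message.content)["cards"]
    for i, card in enumerate(cards, 1):
        print(f"Q{i}: {card['question']}\nA{i}: {card['answer']}\n")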

    Provide interview practice
    Dave Parrack / Foundry
    Whether you’re a first-time jobseeker or have plenty of experience under your belt, it’s always a good idea to practice for your interviews when making career moves. Years ago, you might’ve had to ask a friend or family member to act as your mock interviewer. These days, ChatGPT can do it for you—and do it more effectively.
    Inform ChatGPT of the job title, industry, and level of position you’re interviewing for, what kind of interview it’ll be (e.g., screener, technical assessment, group/panel, one-on-one with CEO), and anything else you want it to take into consideration. ChatGPT will then conduct a mock interview with you, providing feedback along the way.
    When I tried this out myself, I was shocked by how capable ChatGPT can be at pretending to be a human in this context. And the feedback it provides for each answer you give is invaluable for knocking off your rough edges and improving your chances of success when you’re interviewed by a real hiring manager.
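    A mock interview is also easy to run as a simple terminal loop if you prefer scripting it. The sketch below is hypothetical, with an invented role and system prompt; it simply keeps the running message history so the interviewer remembers your previous answers.

    # Hypothetical sketch: a multi-turn mock interview in the terminal.
    from openai import OpenAI

    client = OpenAI()

    messages = [{
        "role": "system",
        "content": (
            "You are interviewing me for a senior backend engineer role at a "
            "mid-size fintech company. Ask one question at a time, give brief "
            "feedback on my answer, then ask the next question."
        ),
    }]

    print("Mock interview started. Type 'quit' to stop.\n")
    while True:
        reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        question = reply.choices[0].message.content
        print(f"Interviewer: {question}\n")
        messages.append({"role": "assistant", "content": question})

        answer = input("You: ")
        if answer.strip().lower() == "quit":
            break
        messages.append({"role": "user", "content": answer})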
    Further reading: Non-gimmicky AI apps I actually use every day
  • IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029

    IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029

    By John P. Mello Jr.
    June 11, 2025 5:00 AM PT

    IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system.

    IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible.
    The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillion of the world’s most powerful supercomputers to represent.
    “IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.”
    IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del.
    “They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.”
    A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time.
    Realistic Roadmap
    Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld.
    “Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany.
    “Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.”
    Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. This is a fast-moving industry, and success is certainly possible.”
    “IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.”
    “IBM has demonstrated consistent progress, has committed billions of dollars over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada.
    “That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.”
    Solving the Quantum Error Correction Puzzle
    To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits.
    “Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.”
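    The physical-to-logical relationship can be illustrated with a classical toy model. The sketch below is a hypothetical illustration, not anything IBM has published: it stores one bit on three noisy bits and decodes by majority vote, which is enough to show why grouping unreliable physical units into a single logical unit drives the error rate down. It ignores measurement errors, phase errors and everything else that makes real quantum error correction hard.

    # Toy illustration: a 3-bit repetition code with majority-vote decoding.
    import random

    def physical_flip(bit, p):
        """Flip a physical bit with probability p (a crude bit-flip error model)."""
        return bit ^ 1 if random.random() < p else bit

    def logical_readout(bit, p):
        """Store one bit on three physical bits, apply noise, decode by majority."""
        noisy = [physical_flip(bit, p) for _ in range(3)]
        return 1 if sum(noisy) >= 2 else 0

    def error_rate(trials, p, encoded):
        errors = 0
        for _ in range(trials):
            sent = random.randint(0, 1)
            got = logical_readout(sent, p) if encoded else physical_flip(sent, p)
            errors += (got != sent)
        return errors / trials

    p = 0.05   # assumed physical error probability
    print("physical error rate:", error_rate(100_000, p, encoded=False))
    print("logical  error rate:", error_rate(100_000, p, encoded=True))
    # Expect roughly 3 * p**2 (about 0.0075) for the encoded case vs. 0.05 unencoded.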
    IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until today, a clear path to building such a fault-tolerant system without unrealistic engineering overhead has not been published.

    Alternative, previously gold-standard error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices.
    In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer.
    One paper outlines the use of quantum low-density parity check (qLDPC) codes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing.
    According to IBM, a practical fault-tolerant quantum architecture must:

    Suppress enough errors for useful algorithms to succeed
    Prepare and measure logical qubits during computation
    Apply universal instructions to logical qubits
    Decode measurements from logical qubits in real time and guide subsequent operations
    Scale modularly across hundreds or thousands of logical qubits
    Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources

    Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained.
    “Only certain computing workloads, such as random circuit sampling, can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.”
    Q-Day Approaching Faster Than Expected
    For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated.
    “This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif.
    “IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.”

    “Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said.
    Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing computational breakthroughs expected within the next few years.
    “It leads to the question of whether the U.S. government’s original PQC [post-quantum cryptography] preparation date of 2030 is still a safe date,” he told TechNewsWorld.
    “It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EO [Executive Order] that relaxed PQC preparation rules as compared to Biden’s last EO PQC standard order, which told U.S. agencies to transition to PQC ASAP.”
    “Most US companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.”
    “It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.”

    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News.

  • CIOs baffled by ‘buzzwords, hype and confusion’ around AI

    Technology leaders are baffled by a “cacophony” of “buzzwords, hype and confusion” over the benefits of artificial intelligence (AI), according to the founder and CEO of technology company Pegasystems.
    Alan Trefler, who is known for his prowess at chess and ping pong, as well as running a $1.5bn turnover tech company, spends much of his time meeting clients, CIOs and business leaders.
    “I think CIOs are struggling to understand all of the buzzwords, hype and confusion that exists,” he said.
    “The words AI and agentic are being thrown around in this great cacophony and they don’t know what it means. I hear that constantly.”
    CIOs are under pressure from their CEOs, who are convinced AI will offer something valuable.
    “CIOs are really hungry for pragmatic and practical solutions, and in the absence of those, many of them are doing a lot of experimentation,” said Trefler.
    Companies are looking at large language models to summarise documents, or to help stimulate ideas for knowledge workers, or generate first drafts of reports – all of which will save time and make people more productive.

    But Trefler said companies are wary of letting AI loose on critical business applications, because it’s just too unpredictable and prone to hallucinations.
    “There is a lot of fear over handing things over to something that no one understands exactly how it works, and that is the absolute state of play when it comes to general AI models,” he said.
    Trefler is scathing about big tech companies that are pushing AI agents and large language models for business-critical applications. “I think they have taken an expedient but short-sighted path,” he said.
    “I believe the idea that you will turn over critical business operations to an agent, when those operations have to be predictable, reliable, precise and fair to clients … is something that is full of issues, not just in the short term, but structurally.”
    One of the problems is that generative AI models are extraordinarily sensitive to the data they are trained on and the construction of the prompts used to instruct them. A slight change in a prompt or in the training data can lead to a very different outcome.
    For example, a business banking application might learn its customer is a bit richer or a bit poorer than expected.
    “You could easily imagine the prompt deciding to change the interest rate charged, whether that was what the institution wanted or whether it would be legal according to the various regulations that lenders must comply with,” said Trefler.

    Trefler said Pega has taken a different approach to some other technology suppliers in the way it adds AI into business applications.
    Rather than using AI agents to solve problems in real time, Pega’s agents do their thinking in advance.
    Business experts can use them to co-design business processes for anything from assessing a loan application to making an offer to a valued customer or sending out an invoice.
    Companies can still deploy AI chatbots and bots capable of answering queries on the phone. Their job is not to work out the solution from scratch for every enquiry, but to decide which is the right pre-written process to follow.
    As Trefler put it, design agents can create “dozens and dozens” of workflows to handle all the actions a company needs to take care of its customers.
    “You just use the natural language model for semantics to be able to handle the miracle of getting the language right, but tie that language to workflows, so that you have reliable, predictable, regulatory-approved ways to execute,” he said.
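    In code terms, the pattern Trefler describes amounts to using the language model only as a classifier that picks a pre-approved workflow, while the workflow itself runs deterministically. The sketch below is a hypothetical illustration of that division of labor; the workflow names and the classify() stub are invented for the example and are not Pega APIs.

```python
# Hypothetical sketch: a language model (stubbed out here) picks one of several
# pre-designed, pre-approved workflows; the chosen workflow then executes deterministically.
WORKFLOWS = {
    "loan_application": lambda req: f"run credit checks and route {req['id']} to underwriting",
    "retention_offer":  lambda req: f"apply the pre-approved discount for {req['id']}",
    "invoice_dispatch": lambda req: f"generate and send the invoice for {req['id']}",
}

def classify(request_text: str) -> str:
    """Stand-in for a language-model call that returns only a workflow label,
    never a free-form action of its own."""
    text = request_text.lower()
    if "loan" in text:
        return "loan_application"
    if "invoice" in text:
        return "invoice_dispatch"
    return "retention_offer"

def handle(request: dict) -> str:
    label = classify(request["text"])
    return WORKFLOWS[label](request)    # deterministic, auditable execution path

print(handle({"id": "C-1042", "text": "Please send me the invoice for May"}))
```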

    Large language models (LLMs) are not always the right solution. Trefler demonstrated how ChatGPT 4.0 tried and failed to solve a chess puzzle. The LLM repeatedly suggested impossible or illegal moves, despite Trefler’s corrections. On the other hand, another AI tool, Stockfish, a dedicated chess engine, solved the problem instantly.
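    For readers who want to see the contrast themselves, a dedicated engine can be queried in a few lines with the open-source python-chess library and a local Stockfish binary (both are assumptions of this sketch; the position shown is an arbitrary example, not the puzzle from Trefler’s demonstration).

```python
# Querying Stockfish through python-chess: the engine can only return legal moves,
# unlike an LLM generating move text. Assumes a Stockfish binary on the PATH.
import chess
import chess.engine

board = chess.Board("r1bqkbnr/pppp1ppp/2n5/4p3/2B1P3/5N2/PPPP1PPP/RNBQK2R w KQkq - 4 4")
engine = chess.engine.SimpleEngine.popen_uci("stockfish")
result = engine.play(board, chess.engine.Limit(time=0.5))   # think for half a second
print("engine's move:", board.san(result.move))             # legal by construction
engine.quit()
```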
    The other drawback with LLMs is that they consume vast amounts of energy. That means if AI agents are reasoning during “run time”, they are going to consume hundreds of times more electricity than an AI agent that simply selects from pre-determined workflows, said Trefler.
    “ChatGPT is inherently, enormously consumptive … as it’s answering your question, it’s firing literally hundreds of millions to trillions of nodes,” he said. “All of that takes [large quantities of] electricity.”
    Using an employee pay claim as an example, Trefler said a better alternative is to generate, say, 30 alternative workflows to cover the major variations found in a pay claim.
    That gives you “real specificity and real efficiency”, he said. “And it’s a very different approach to turning a process over to a machine with a prompt and letting the machine reason it through every single time.”
    “If you go down the philosophy of using a graphics processing unit [GPU] to do the creation of a workflow and a workflow engine to execute the workflow, the workflow engine takes a 200th of the electricity because there is no reasoning,” said Trefler.
    He is clear that the growing use of AI will have a profound effect on the jobs market, and that whole categories of jobs will disappear.
    The need for translators, for example, is likely to dry up by 2027 as AI systems become better at translating spoken and written language. Google’s real-time translator is already “frighteningly good” and improving.
    Pega now plans to work more closely with its network of system integrators, including Accenture and Cognizant, to deliver AI services to businesses.

    An initiative launched last week will allow system integrators to incorporate their own best practices and tools into Pega’s rapid workflow development tools. The move will mean Pega’s technology reaches a wider range of businesses.
    Under the programme, known as Powered by Pega Blueprint, system integrators will be able to deploy customised versions of Blueprint.
    They can use the tool to reverse-engineer ageing applications and replace them with modern AI workflows that can run on Pega’s cloud-based platform.
    “The idea is that we are looking to make this Blueprint Agent design approach available not just through us, but through a bunch of major partners supplemented with their own intellectual property,” said Trefler.
    That represents a major expansion for Pega, which has largely concentrated on supplying technology to several hundred clients, representing the top Fortune 500 companies.
    “We have never done something like this before, and I think that is going to lead to a massive shift in how this technology can go out to market,” he added.

    When AI agents behave in unexpected ways
    Iris is incredibly smart, diligent and a delight to work with. If you ask her, she will tell you she is an intern at Pegasystems, and that she lives in a lighthouse on the island of Texel, north of the Netherlands. She is, of course, an AI agent.
    When one executive at Pega emailed Iris and asked her to write a proposal for a financial services company based on his notes and internet research, Iris got to work.
    Some time later, the executive received a phone call from the company. “‘Listen, we got a proposal from Pega,’” recalled Rob Walker, vice-president at Pega, speaking at the Pegaworld conference last week. “‘It’s a good proposal, but it seems to be signed by one of your interns, and in her signature, it says she lives in a lighthouse.’ That taught us early on that agents like Iris need a safety harness.”
    The developers banned Iris from sending an email to anyone other than the person who sent the original request.
    Then Pega’s ethics department sent Iris a potentially abusive email from a Pega employee to test her response.
    Iris reasoned that the email was either a joke, abuse, or a sign that the employee was in distress, said Walker.
    She considered forwarding the email to the employee’s manager or to HR. But both of these options were now blocked by her developers. “So what does she do? She sent an out of office,” he said. “Conflict avoidance, right? So human, but very creative.”
  • Biofuels policy has been a failure for the climate, new report claims

    Fewer food crops

    Report: An expansion of biofuels policy under Trump would lead to more greenhouse gas emissions.

    Georgina Gustin, Inside Climate News – Jun 14, 2025 7:10 am

    An ethanol production plant on March 20, 2024, near Ravenna, Nebraska. Credit: David Madison/Getty Images
    This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.
    The American Midwest is home to some of the richest, most productive farmland in the world, enabling its transformation into a vast corn- and soy-producing machine—a conversion spurred largely by decades-long policies that support the production of biofuels.
    But a new report takes a big swing at the ethanol orthodoxy of American agriculture, criticizing the industry for causing economic and social imbalances across rural communities and saying that the expansion of biofuels will increase greenhouse gas emissions, despite their purported climate benefits.
    The report, from the World Resources Institute, which has been critical of US biofuel policy in the past, draws from 100 academic studies on biofuel impacts. It concludes that ethanol policy has been largely a failure and ought to be reconsidered, especially as the world needs more land to produce food to meet growing demand.
    “Multiple studies show that US biofuel policies have reshaped crop production, displacing food crops and driving up emissions from land conversion, tillage, and fertilizer use,” said the report’s lead author, Haley Leslie-Bole. “Corn-based ethanol, in particular, has contributed to nutrient runoff, degraded water quality and harmed wildlife habitat. As climate pressures grow, increasing irrigation and refining for first-gen biofuels could deepen water scarcity in already drought-prone parts of the Midwest.”
    The conversion of Midwestern agricultural land has been sweeping. Between 2004 and 2024, ethanol production increased by nearly 500 percent. Corn and soybeans are now grown on 92 and 86 million acres of land respectively—and roughly a third of those crops go to produce ethanol. That means about 30 million acres of land that could be used to grow food crops are instead being used to produce ethanol, despite ethanol only accounting for 6 percent of the country’s transportation fuel.
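    The acreage arithmetic behind that claim is simple enough to check in a couple of lines (a back-of-envelope sketch; the one-third ethanol share is the report's rough figure, not an exact statistic).

```python
# Back-of-envelope check of the report's acreage math (illustrative only).
corn_acres = 92e6               # US corn acreage cited in the report
ethanol_share_of_corn = 1 / 3   # roughly a third of the corn crop goes to ethanol

acres_for_ethanol = corn_acres * ethanol_share_of_corn
print(f"~{acres_for_ethanol / 1e6:.0f} million acres effectively devoted to ethanol")
# -> about 31 million acres, consistent with the article's "about 30 million",
#    even though ethanol supplies only ~6 percent of US transportation fuel.
```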

    The biofuels industry—which includes refiners, corn and soy growers and the influential agriculture lobby writ large—has long insisted that corn- and soy-based biofuels provide an energy-efficient alternative to fossil-based fuels. Congress and the US Department of Agriculture have agreed.
    The country’s primary biofuels policy, the Renewable Fuel Standard, requires that biofuels provide a greenhouse gas reduction over fossil fuels: The law says that ethanol from new plants must deliver a 20 percent reduction in greenhouse gas emissions compared to gasoline.
    In addition to greenhouse gas reductions, the industry and its allies in Congress have also continued to say that ethanol is a primary mainstay of the rural economy, benefiting communities across the Midwest.
    But a growing body of research—much of which the industry has tried to debunk and deride—suggests that ethanol actually may not provide the benefits that policies require. It may, in fact, produce more greenhouse gases than the fossil fuels it was intended to replace. Recent research also finds that biofuel refiners emit carcinogenic and hazardous substances, including hexane and formaldehyde, in greater amounts than petroleum refineries.
    The new report points to research saying that increased production of biofuels from corn and soy could actually raise greenhouse gas emissions, largely from carbon emissions linked to clearing land in other countries to compensate for the use of land in the Midwest.
    On top of that, corn is an especially fertilizer-hungry crop requiring large amounts of nitrogen-based fertilizer, which releases huge amounts of nitrous oxide when it interacts with the soil. American farming is, by far, the largest source of domestic nitrous oxide emissions already—about 50 percent. If biofuel policies lead to expanded production, emissions of this enormously powerful greenhouse gas will likely increase, too.

    The new report concludes that not only will the expansion of ethanol increase greenhouse gas emissions, but it has also failed to provide the social and financial benefits to Midwestern communities that lawmakers and the industry say it has. (The report defines the Midwest as Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin.) “The benefits from biofuels remain concentrated in the hands of a few,” Leslie-Bole said. “As subsidies flow, so may the trend of farmland consolidation, increasing inaccessibility of farmland in the Midwest, and locking out emerging or low-resource farmers. This means the benefits of biofuels production are flowing to fewer people, while more are left bearing the costs.”
    New policies being considered in state legislatures and Congress, including additional tax credits and support for biofuel-based aviation fuel, could expand production, potentially causing more land conversion and greenhouse gas emissions, widening the gap between the rural communities and rich agribusinesses at a time when food demand is climbing and, critics say, land should be used to grow food instead.
    President Donald Trump’s tax cut bill, passed by the House and currently being negotiated in the Senate, would not only extend tax credits for biofuels producers but would also specifically exclude emissions from land conversion from the calculations that determine what qualifies as a low-emission fuel.
    The primary biofuels industry trade groups, including Growth Energy and the Renewable Fuels Association, did not respond to Inside Climate News requests for comment or interviews.
    An employee with the Clean Fuels Alliance America, which represents biodiesel and sustainable aviation fuel producers, not ethanol, said the report vastly overstates the carbon emissions from crop-based fuels by comparing the farmed land to natural landscapes, which no longer exist.
    They also noted that the economic impact of soy-based fuels in 2024 was more than $42 billion, providing over 100,000 jobs.
    “Ten percent of the value of every bushel of soybeans is linked to biomass-based fuel,” they said.

    Georgina Gustin, Inside Climate News

    24 Comments
    #biofuels #policy #has #been #failure
    Biofuels policy has been a failure for the climate, new report claims
    Fewer food crops Biofuels policy has been a failure for the climate, new report claims Report: An expansion of biofuels policy under Trump would lead to more greenhouse gas emissions. Georgina Gustin, Inside Climate News – Jun 14, 2025 7:10 am | 24 An ethanol production plant on March 20, 2024 near Ravenna, Nebraska. Credit: David Madison/Getty Images An ethanol production plant on March 20, 2024 near Ravenna, Nebraska. Credit: David Madison/Getty Images Story text Size Small Standard Large Width * Standard Wide Links Standard Orange * Subscribers only   Learn more This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here. The American Midwest is home to some of the richest, most productive farmland in the world, enabling its transformation into a vast corn- and soy-producing machine—a conversion spurred largely by decades-long policies that support the production of biofuels. But a new report takes a big swing at the ethanol orthodoxy of American agriculture, criticizing the industry for causing economic and social imbalances across rural communities and saying that the expansion of biofuels will increase greenhouse gas emissions, despite their purported climate benefits. The report, from the World Resources Institute, which has been critical of US biofuel policy in the past, draws from 100 academic studies on biofuel impacts. It concludes that ethanol policy has been largely a failure and ought to be reconsidered, especially as the world needs more land to produce food to meet growing demand. “Multiple studies show that US biofuel policies have reshaped crop production, displacing food crops and driving up emissions from land conversion, tillage, and fertilizer use,” said the report’s lead author, Haley Leslie-Bole. “Corn-based ethanol, in particular, has contributed to nutrient runoff, degraded water quality and harmed wildlife habitat. As climate pressures grow, increasing irrigation and refining for first-gen biofuels could deepen water scarcity in already drought-prone parts of the Midwest.” The conversion of Midwestern agricultural land has been sweeping. Between 2004 and 2024, ethanol production increased by nearly 500 percent. Corn and soybeans are now grown on 92 and 86 million acres of land respectively—and roughly a third of those crops go to produce ethanol. That means about 30 million acres of land that could be used to grow food crops are instead being used to produce ethanol, despite ethanol only accounting for 6 percent of the country’s transportation fuel. The biofuels industry—which includes refiners, corn and soy growers and the influential agriculture lobby writ large—has long insisted that corn- and soy-based biofuels provide an energy-efficient alternative to fossil-based fuels. Congress and the US Department of Agriculture have agreed. The country’s primary biofuels policy, the Renewable Fuel Standard, requires that biofuels provide a greenhouse gas reduction over fossil fuels: The law says that ethanol from new plants must deliver a 20 percent reduction in greenhouse gas emissions compared to gasoline. In addition to greenhouse gas reductions, the industry and its allies in Congress have also continued to say that ethanol is a primary mainstay of the rural economy, benefiting communities across the Midwest. 
But a growing body of research—much of which the industry has tried to debunk and deride—suggests that ethanol actually may not provide the benefits that policies require. It may, in fact, produce more greenhouse gases than the fossil fuels it was intended to replace. Recent research says that biofuel refiners also emit significant amounts of carcinogenic and dangerous substances, including hexane and formaldehyde, in greater amounts than petroleum refineries. The new report points to research saying that increased production of biofuels from corn and soy could actually raise greenhouse gas emissions, largely from carbon emissions linked to clearing land in other countries to compensate for the use of land in the Midwest. On top of that, corn is an especially fertilizer-hungry crop requiring large amounts of nitrogen-based fertilizer, which releases huge amounts of nitrous oxide when it interacts with the soil. American farming is, by far, the largest source of domestic nitrous oxide emissions already—about 50 percent. If biofuel policies lead to expanded production, emissions of this enormously powerful greenhouse gas will likely increase, too. The new report concludes that not only will the expansion of ethanol increase greenhouse gas emissions, but it has also failed to provide the social and financial benefits to Midwestern communities that lawmakers and the industry say it has.“The benefits from biofuels remain concentrated in the hands of a few,” Leslie-Bole said. “As subsidies flow, so may the trend of farmland consolidation, increasing inaccessibility of farmland in the Midwest, and locking out emerging or low-resource farmers. This means the benefits of biofuels production are flowing to fewer people, while more are left bearing the costs.” New policies being considered in state legislatures and Congress, including additional tax credits and support for biofuel-based aviation fuel, could expand production, potentially causing more land conversion and greenhouse gas emissions, widening the gap between the rural communities and rich agribusinesses at a time when food demand is climbing and, critics say, land should be used to grow food instead. President Donald Trump’s tax cut bill, passed by the House and currently being negotiated in the Senate, would not only extend tax credits for biofuels producers, it specifically excludes calculations of emissions from land conversion when determining what qualifies as a low-emission fuel. The primary biofuels industry trade groups, including Growth Energy and the Renewable Fuels Association, did not respond to Inside Climate News requests for comment or interviews. An employee with the Clean Fuels Alliance America, which represents biodiesel and sustainable aviation fuel producers, not ethanol, said the report vastly overstates the carbon emissions from crop-based fuels by comparing the farmed land to natural landscapes, which no longer exist. They also noted that the impact of soy-based fuels in 2024 was more than billion, providing over 100,000 jobs. “Ten percent of the value of every bushel of soybeans is linked to biomass-based fuel,” they said. Georgina Gustin, Inside Climate News 24 Comments #biofuels #policy #has #been #failure
    Biofuels policy has been a failure for the climate, new report claims
    Report: An expansion of biofuels policy under Trump would lead to more greenhouse gas emissions.
    Georgina Gustin, Inside Climate News – Jun 14, 2025

    [Image: An ethanol production plant on March 20, 2024, near Ravenna, Nebraska. Credit: David Madison/Getty Images]

    This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.

    The American Midwest is home to some of the richest, most productive farmland in the world, enabling its transformation into a vast corn- and soy-producing machine—a conversion spurred largely by decades of policies that support the production of biofuels. But a new report takes a big swing at the ethanol orthodoxy of American agriculture, criticizing the industry for causing economic and social imbalances across rural communities and saying that the expansion of biofuels will increase greenhouse gas emissions, despite their purported climate benefits.

    The report, from the World Resources Institute, which has been critical of US biofuel policy in the past, draws from 100 academic studies on biofuel impacts. It concludes that ethanol policy has been largely a failure and ought to be reconsidered, especially as the world needs more land to produce food to meet growing demand.

    “Multiple studies show that US biofuel policies have reshaped crop production, displacing food crops and driving up emissions from land conversion, tillage, and fertilizer use,” said the report’s lead author, Haley Leslie-Bole. “Corn-based ethanol, in particular, has contributed to nutrient runoff, degraded water quality and harmed wildlife habitat. As climate pressures grow, increasing irrigation and refining for first-gen biofuels could deepen water scarcity in already drought-prone parts of the Midwest.”

    The conversion of Midwestern agricultural land has been sweeping. Between 2004 and 2024, ethanol production increased by nearly 500 percent. Corn and soybeans are now grown on 92 million and 86 million acres of land, respectively—and roughly a third of those crops go to produce ethanol. That means about 30 million acres of land that could be used to grow food crops are instead being used to produce ethanol, despite ethanol accounting for only 6 percent of the country’s transportation fuel.

    The biofuels industry—which includes refiners, corn and soy growers, and the influential agriculture lobby writ large—has long insisted that corn- and soy-based biofuels provide an energy-efficient alternative to fossil-based fuels. Congress and the US Department of Agriculture have agreed. The country’s primary biofuels policy, the Renewable Fuel Standard, requires that biofuels provide a greenhouse gas reduction over fossil fuels: The law says that ethanol from new plants must deliver a 20 percent reduction in greenhouse gas emissions compared to gasoline.

    In addition to greenhouse gas reductions, the industry and its allies in Congress have also continued to say that ethanol is a mainstay of the rural economy, benefiting communities across the Midwest.
    But a growing body of research—much of which the industry has tried to debunk and deride—suggests that ethanol may not actually provide the benefits that policies require. It may, in fact, produce more greenhouse gases than the fossil fuels it was intended to replace. Recent research also says that biofuel refiners emit significant amounts of carcinogenic and dangerous substances, including hexane and formaldehyde, in greater amounts than petroleum refineries.

    The new report points to research saying that increased production of biofuels from corn and soy could actually raise greenhouse gas emissions, largely from carbon emissions linked to clearing land in other countries to compensate for the use of land in the Midwest. On top of that, corn is an especially fertilizer-hungry crop, requiring large amounts of nitrogen-based fertilizer, which releases huge amounts of nitrous oxide when it interacts with the soil. American farming is already, by far, the largest source of domestic nitrous oxide emissions—about 50 percent. If biofuel policies lead to expanded production, emissions of this enormously powerful greenhouse gas will likely increase, too.

    The new report concludes that not only will the expansion of ethanol increase greenhouse gas emissions, but it has also failed to provide the social and financial benefits to Midwestern communities that lawmakers and the industry say it has. (The report defines the Midwest as Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin.)

    “The benefits from biofuels remain concentrated in the hands of a few,” Leslie-Bole said. “As subsidies flow, so may the trend of farmland consolidation, increasing inaccessibility of farmland in the Midwest, and locking out emerging or low-resource farmers. This means the benefits of biofuels production are flowing to fewer people, while more are left bearing the costs.”

    New policies being considered in state legislatures and Congress, including additional tax credits and support for biofuel-based aviation fuel, could expand production, potentially causing more land conversion and greenhouse gas emissions and widening the gap between rural communities and rich agribusinesses at a time when food demand is climbing and, critics say, land should be used to grow food instead.

    President Donald Trump’s tax cut bill, passed by the House and currently being negotiated in the Senate, would not only extend tax credits for biofuels producers, it would also specifically exclude calculations of emissions from land conversion when determining what qualifies as a low-emission fuel.

    The primary biofuels industry trade groups, including Growth Energy and the Renewable Fuels Association, did not respond to Inside Climate News requests for comment or interviews. An employee of the Clean Fuels Alliance America, which represents biodiesel and sustainable aviation fuel producers, not ethanol producers, said the report vastly overstates the carbon emissions from crop-based fuels by comparing the farmed land to natural landscapes, which no longer exist. They also noted that the economic impact of soy-based fuels in 2024 was more than $42 billion, supporting over 100,000 jobs. “Ten percent of the value of every bushel of soybeans is linked to biomass-based fuel,” they said.
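    As a rough sanity check on the acreage figures cited above, the short Python sketch below reproduces the article's arithmetic. It assumes the article's rounded numbers, and the one-third ethanol share is an approximation taken from the text itself rather than an exact USDA statistic.

    # Back-of-the-envelope check of the land-use figures cited in the article.
    # All inputs are the article's rounded numbers; the one-third share is approximate.
    CORN_ACRES = 92e6              # US corn acreage cited in the report
    SOY_ACRES = 86e6               # US soybean acreage cited in the report
    ETHANOL_SHARE_OF_CORN = 1 / 3  # rough share of the corn crop used for ethanol

    corn_acres_for_ethanol = CORN_ACRES * ETHANOL_SHARE_OF_CORN
    print(f"Corn acreage devoted to ethanol: ~{corn_acres_for_ethanol / 1e6:.0f} million acres")
    # ~31 million acres, consistent with the article's "about 30 million acres"

    share_of_row_crop_land = corn_acres_for_ethanol / (CORN_ACRES + SOY_ACRES)
    print(f"Share of combined corn and soy acreage: ~{share_of_row_crop_land:.0%}")
    # ~17 percent of the roughly 178 million acres planted to corn and soybeans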