• Calling on LLMs: New NVIDIA AI Blueprint Helps Automate Telco Network Configuration

    Telecom companies last year spent nearly $295 billion in capital expenditures and over $1 trillion in operating expenditures.
    These large expenses are due in part to laborious manual processes that telcos face when operating networks that require continuous optimizations.
    For example, telcos must constantly tune network parameters for tasks — such as transferring calls from one network to another or distributing network traffic across multiple servers — based on the time of day, user behavior, mobility and traffic type.
    These factors directly affect network performance, user experience and energy consumption.
    To automate these optimization processes and save costs for telcos across the globe, NVIDIA today unveiled at GTC Paris its first AI Blueprint for telco network configuration.
    At the blueprint’s core are customized large language models trained specifically on telco network data — as well as the full technical and operational architecture for turning the LLMs into an autonomous, goal-driven AI agent for telcos.
    Automate Network Configuration With the AI Blueprint
    NVIDIA AI Blueprints — available on build.nvidia.com — are customizable AI workflow examples. They include reference code, documentation and deployment tools that show enterprise developers how to deliver business value with NVIDIA NIM microservices.
    The AI Blueprint for telco network configuration — built with BubbleRAN 5G solutions and datasets — enables developers, network engineers and telecom providers to automatically optimize the configuration of network parameters using agentic AI.
    This can streamline operations, reduce costs and significantly improve service quality by embedding continuous learning and adaptability directly into network infrastructures.
    Traditionally, network configurations required manual intervention or followed rigid rules to adapt to dynamic network conditions. These approaches limited adaptability and increased operational complexities, costs and inefficiencies.
    The new blueprint helps shift telco operations from relying on static, rules-based systems to operations based on dynamic, AI-driven automation. It enables developers to build advanced, telco-specific AI agents that make real-time, intelligent decisions and autonomously balance trade-offs — such as network speed versus interference, or energy savings versus utilization — without human input.
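    As a schematic illustration of that goal-driven loop — observe KPIs, let the telco-tuned LLM propose a parameter update, apply it, repeat — consider the sketch below. All names (NetworkKPIs, propose_config, read_kpis, apply_config) are hypothetical stand-ins rather than the blueprint's actual API, and a simple rule stubs in the reasoning the LLM would perform.

    import time
    from dataclasses import dataclass

    @dataclass
    class NetworkKPIs:
        bitrate_mbps: float   # measured downlink throughput
        sinr_db: float        # signal-to-interference-plus-noise ratio
        energy_watts: float   # cell-site power draw

    def propose_config(kpis: NetworkKPIs, goal_bitrate: float) -> dict:
        """Stand-in for the telco-tuned LLM reasoning over KPIs and a goal."""
        if kpis.bitrate_mbps < goal_bitrate:
            # Favor throughput: raise transmit power, hand over more aggressively.
            return {"tx_power_dbm": 43, "handover_margin_db": 2}
        # Goal met: favor energy savings and interference reduction.
        return {"tx_power_dbm": 40, "handover_margin_db": 4}

    def control_loop(read_kpis, apply_config, goal_bitrate=100.0, period_s=60):
        """Re-tune parameters continuously as traffic and mobility change."""
        while True:
            kpis = read_kpis()                        # observe the live network
            apply_config(propose_config(kpis, goal_bitrate))
            time.sleep(period_s)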
    Powered and Deployed by Industry Leaders
    Trained on 5G data generated by BubbleRAN and deployed on the BubbleRAN 5G O-RAN platform, the blueprint provides telcos with insight into how to set various parameters to reach performance goals, like achieving a certain bitrate while choosing an acceptable signal-to-noise ratio — a measure that affects voice quality and thus user experience.
    With the new AI Blueprint, network engineers can confidently set initial parameter values and update them as demanded by continuous network changes.
    Norway-based Telenor Group, which serves over 200 million customers globally, is the first telco to integrate the AI Blueprint for telco network configuration as part of its initiative to deploy intelligent, autonomous networks that meet the performance and agility demands of 5G and beyond.
    “The blueprint is helping us address configuration challenges and enhance quality of service during network installation,” said Knut Fjellheim, chief technology innovation officer at Telenor Maritime. “Implementing it is part of our push toward network automation and follows the successful deployment of agentic AI for real-time network slicing in a private 5G maritime use case.”
    Industry Partners Deploy Other NVIDIA-Powered Autonomous Network Technologies
    The AI Blueprint for telco network configuration is just one of many announcements at NVIDIA GTC Paris showcasing how the telecom industry is using agentic AI to make autonomous networks a reality.
    Beyond the blueprint, leading telecom companies and solutions providers are tapping into NVIDIA accelerated computing, software and microservices to provide breakthrough innovations poised to vastly improve networks and communications services — accelerating the progress to autonomous networks and improving customer experiences.
    NTT DATA is powering its agentic platform for telcos with NVIDIA accelerated compute and the NVIDIA AI Enterprise software platform. Its first agentic use case is focused on network alarm management, where NVIDIA NIM microservices help automate and power observability, troubleshooting, anomaly detection and resolution with closed-loop ticketing.
    Tata Consultancy Services is delivering agentic AI solutions for telcos built on NVIDIA DGX Cloud, using NVIDIA AI Enterprise to develop, fine-tune and integrate large telco models into AI agent workflows. These range from billing and revenue assurance and autonomous network management to hybrid edge-cloud distributed inference.
    For example, the company’s anomaly management agentic AI model includes real-time detection and resolution of network anomalies and service performance optimization. This increases business agility and improves operational efficiency by up to 40% by eliminating human-intensive toil, overhead and cross-departmental silos.
    Prodapt has introduced an autonomous operations workflow for networks, powered by NVIDIA AI Enterprise, that offers agentic AI capabilities to support autonomous telecom networks. AI agents can autonomously monitor networks, detect anomalies in real time, initiate diagnostics, analyze root causes of issues using historical data and correlation techniques, automatically execute corrective actions, and generate, enrich and assign incident tickets through integrated ticketing systems.
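    As a rough sketch of that closed loop — monitor, detect, diagnose, remediate, ticket — the snippet below pairs a simple z-score detector with hypothetical callbacks (diagnose, remediate, open_ticket). It illustrates the control flow only, not Prodapt's actual implementation.

    import statistics

    def is_anomalous(history: list[float], latest: float, z_thresh: float = 3.0) -> bool:
        """Flag a KPI sample that deviates strongly from its recent history."""
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid division by zero
        return abs(latest - mean) / stdev > z_thresh

    def handle_sample(history, latest, diagnose, remediate, open_ticket):
        """One pass of the closed loop for a single KPI stream."""
        if not is_anomalous(history, latest):
            return
        root_cause = diagnose(history, latest)  # correlate with historical incidents
        action = remediate(root_cause)          # execute the corrective action
        open_ticket(root_cause, action)         # generate and enrich the incident ticket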
    Accenture announced its new portfolio of agentic AI solutions for telecommunications through its AI Refinery platform, built on NVIDIA AI Enterprise software and accelerated computing.
    The first available solution, the NOC Agentic App, boosts network operations center tasks by using a generative AI-driven, nonlinear agentic framework to automate processes such as incident and fault management, root cause analysis and configuration planning. Using the Llama 3.1 70B NVIDIA NIM microservice and the AI Refinery Distiller Framework, the NOC Agentic App orchestrates networks of intelligent agents for faster, more efficient decision-making.
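    NIM microservices expose an OpenAI-compatible HTTP API, so an agent framework can call a model like the Llama 3.1 70B NIM with standard client tooling. A minimal sketch, assuming a self-hosted NIM listening on localhost:8000 — the endpoint, API key and model name vary by deployment:

    from openai import OpenAI

    # A locally deployed NIM serves an OpenAI-compatible endpoint on port 8000.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used-locally")

    resp = client.chat.completions.create(
        model="meta/llama-3.1-70b-instruct",
        messages=[
            {"role": "system", "content": "You triage alarms for a network operations center."},
            {"role": "user", "content": "Cell 42 reports a 10x spike in RRC setup failures. Suggest likely root causes."},
        ],
        temperature=0.2,
    )
    print(resp.choices[0].message.content)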
    Infosys is announcing its agentic autonomous operations platform, called Infosys Smart Network Assurance (ISNA), designed to accelerate telecom operators’ journeys toward fully autonomous network operations.
    ISNA helps address long-standing operational challenges for telcos — such as limited automation and high average time to repair — with an integrated, AI-driven platform that reduces operational costs by up to 40% and shortens fault resolution times by up to 30%. NVIDIA NIM and NeMo microservices enhance the platform’s reasoning and hallucination-detection capabilities, reduce latency and increase accuracy.
    Get started with the new blueprint today.
    Learn more about the latest AI advancements for telecom and other industries at NVIDIA GTC Paris, running through Thursday, June 12, at VivaTech, including a keynote from NVIDIA founder and CEO Jensen Huang and a special address from Ronnie Vasishta, senior vice president of telecom at NVIDIA. Plus, hear from industry leaders in a panel session with Orange, Swisscom, Telenor and NVIDIA.
  • NVIDIA CEO Drops the Blueprint for Europe’s AI Boom

    At GTC Paris — held alongside VivaTech, Europe’s largest tech event — NVIDIA founder and CEO Jensen Huang delivered a clear message: Europe isn’t just adopting AI — it’s building it.
    “We now have a new industry, an AI industry, and it’s now part of the new infrastructure, called intelligence infrastructure, that will be used by every country, every society,” Huang said, addressing an audience gathered online and at the iconic Dôme de Paris.
    From exponential inference growth to quantum breakthroughs, and from infrastructure to industry, agentic AI to robotics, Huang outlined how the region is laying the groundwork for an AI-powered future.

    A New Industrial Revolution
    At the heart of this transformation, Huang explained, are systems like GB200 NVL72 — “one giant GPU” and NVIDIA’s most powerful AI platform yet — now in full production and powering everything from sovereign models to quantum computing.
    “This machine was designed to be a thinking machine, a thinking machine, in the sense that it reasons, it plans, it spends a lot of time talking to itself,” Huang said, walking the audience through the size and scale of these machines and their performance.
    At GTC Paris, Huang showed audience members the innards of some of NVIDIA’s latest hardware.
    There’s more coming, with Huang saying NVIDIA’s partners are now producing 1,000 GB200 systems a week, “and this is just the beginning.” He walked the audience through available systems, from the tiny NVIDIA DGX Spark to rack-mounted RTX PRO Servers.
    Huang explained that NVIDIA is working to help countries use technologies like these to build both AI infrastructure — services built for third parties to use and innovate on — and AI factories, which companies build for their own use, to generate revenue.
    NVIDIA is partnering with European governments, telcos and cloud providers to deploy NVIDIA technologies across the region. NVIDIA is also expanding its network of technology centers across Europe — including new hubs in Finland, Germany, Spain, Italy and the U.K. — to accelerate skills development and quantum growth.
    Quantum Meets Classical
    Europe’s quantum ambitions just got a boost.
    The NVIDIA CUDA-Q platform is live on Denmark’s Gefion supercomputer, opening new possibilities for hybrid AI and quantum engineering. In addition, Huang announced that CUDA-Q is now available on NVIDIA Grace Blackwell systems.
    Across the continent, NVIDIA is partnering with supercomputing centers and quantum hardware builders to advance hybrid quantum-AI research and accelerate quantum error correction.
    “Quantum computing is reaching an inflection point,” Huang said. “We are within reach of being able to apply quantum computing, quantum classical computing, in areas that can solve some interesting problems in the coming years.”
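    For developers, CUDA-Q programs can target GPU-accelerated simulators or attached quantum hardware from the same Python source. A minimal illustrative example — preparing a two-qubit Bell state — shown only to give a flavor of the programming model:

    import cudaq

    @cudaq.kernel
    def bell():
        qubits = cudaq.qvector(2)     # allocate two qubits
        h(qubits[0])                  # put qubit 0 in superposition
        x.ctrl(qubits[0], qubits[1])  # entangle via controlled-X
        mz(qubits)                    # measure both qubits

    # Sampling runs on a GPU-accelerated simulator by default;
    # cudaq.set_target(...) can retarget real quantum hardware.
    counts = cudaq.sample(bell)
    print(counts)  # expect roughly 50/50 "00" and "11"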
    Sovereign Models, Smarter Agents
    European developers want more control over their models. Enter NVIDIA Nemotron, designed to help build large language models tuned to local needs.
    “And so now you know that you have access to an enhanced open model that is still open, that is top of the leader chart,” Huang said.
    These models will be coming to Perplexity, a reasoning search engine, enabling secure, multilingual AI deployment across Europe.
    “You can now ask and get questions answered in the language, in the culture, in the sensibility of your country,” Huang said.
    Huang explained how NVIDIA is helping countries across Europe build AI infrastructure.
    Every company will build its own agents, Huang said. To help create those agents, Huang introduced a suite of agentic AI blueprints, including an Agentic AI Safety blueprint for enterprises and governments.
    The new NVIDIA NeMo Agent toolkit and NVIDIA AI Blueprint for building data flywheels further accelerate the development of safe, high-performing AI agents.
    To help deploy these agents, NVIDIA is partnering with European governments, telcos and cloud providers to deploy the DGX Cloud Lepton platform across the region, providing instant access to accelerated computing capacity.
    “One model architecture, one deployment, and you can run it anywhere,” Huang said, adding that Lepton is now integrated with Hugging Face, giving developers direct access to global compute.
    The Industrial Cloud Goes Live
    AI isn’t just virtual. It’s powering physical systems, too, sparking a new industrial revolution.
    “We’re working on industrial AI with one company after another,” Huang said, describing work to build digital twins based on the NVIDIA Omniverse platform with companies across the continent.
    Huang explained that everything he showed during his keynote was “computer simulation, not animation” and that it looks beautiful because “it turns out the world is beautiful, and it turns out math is beautiful.”
    To further this work, Huang announced NVIDIA is launching the world’s first industrial AI cloud — to be built in Germany — to help Europe’s manufacturers simulate, automate and optimize at scale.
    “Soon, everything that moves will be robotic,” Huang said. “And the car is the next one.”
    NVIDIA DRIVE, NVIDIA’s full-stack AV platform, is now in production to accelerate the large-scale deployment of safe, intelligent transportation.
    And to show what’s coming next, Huang was joined on stage by Grek, a pint-sized robot, as he talked about how NVIDIA partnered with DeepMind and Disney to build Newton, the world’s most advanced physics training engine for robotics.
    The Next Wave
    The next wave of AI has begun — and it’s exponential, Huang explained.
    “We have physical robots, and we have information robots. We call them agents,” Huang said. “The technology necessary to teach a robot to manipulate, to simulate — and of course, the manifestation of an incredible robot — is now right in front of us.”
    This new era of AI is being driven by a surge in inference workloads. “The number of people using inference has gone from 8 million to 800 million — 100x in just a couple of years,” Huang said.
    To meet this demand, Huang emphasized the need for a new kind of computer: “We need a special computer designed for thinking, designed for reasoning. And that’s what Blackwell is — a thinking machine.”
    Huang and Grek, as he explained how AI is driving advancements in robotics.
    These Blackwell-powered systems will live in a new class of data centers — AI factories — built to generate tokens, the raw material of modern intelligence.
    “These AI factories are going to generate tokens,” Huang said, turning to Grek with a smile. “And these tokens are going to become your food, little Grek.”
    With that, the keynote closed on a bold vision: a future powered by sovereign infrastructure, agentic AI, robotics — and exponential inference — all built in partnership with Europe.
    Watch the NVIDIA GTC Paris keynote from Huang at VivaTech and explore GTC Paris sessions.
  • Hexagon Taps NVIDIA Robotics and AI Software to Build and Deploy AEON, a New Humanoid

    As a global labor shortage leaves 50 million positions unfilled across industries like manufacturing and logistics, Hexagon — a global leader in measurement technologies — is developing humanoid robots that can lend a helping hand.
    Industrial sectors depend on skilled workers to perform a variety of error-prone tasks, including operating high-precision scanners for reality capture — the process of capturing digital data to replicate the real world in simulation.
    At the Hexagon LIVE Global conference, Hexagon’s robotics division today unveiled AEON — a new humanoid robot built in collaboration with NVIDIA that’s engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support. Hexagon plans to deploy AEON across automotive, transportation, aerospace, manufacturing, warehousing and logistics.
    Future use cases for AEON include:

    Reality capture, which involves automatic planning and then scanning of assets, industrial spaces and environments to generate 3D models. The captured data is then used for advanced visualization and collaboration in the Hexagon Digital Reality (HxDR) platform powering Hexagon Reality Cloud Studio (RCS).
    Manipulation tasks, such as sorting and moving parts in various industrial and manufacturing settings.
    Part inspection, which includes checking parts for defects or ensuring adherence to specifications.
    Industrial operations, including highly dexterous technical tasks like machinery operations, teleoperation and scanning parts using high-end scanners.

    “The age of general-purpose robotics has arrived, due to technological advances in simulation and physical AI,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “Hexagon’s new AEON humanoid embodies the integration of NVIDIA’s three-computer robotics platform and is making a significant leap forward in addressing industry-critical challenges.”

    Using NVIDIA’s Three Computers to Develop AEON 
    To build AEON, Hexagon used NVIDIA’s three computers for developing and deploying physical AI systems. They include AI supercomputers to train and fine-tune powerful foundation models; the NVIDIA Omniverse platform, running on NVIDIA OVX servers, for testing and optimizing these models in simulation environments using real and physically based synthetic data; and NVIDIA IGX Thor robotic computers to run the models.
    Hexagon is exploring using NVIDIA accelerated computing to post-train the NVIDIA Isaac GR00T N1.5 open foundation model to improve robot reasoning and policies, and tapping Isaac GR00T-Mimic to generate vast amounts of synthetic motion data from a few human demonstrations.
    AEON learns many of its skills through simulations powered by the NVIDIA Isaac platform. Hexagon uses NVIDIA Isaac Sim, a reference robotic simulation application built on Omniverse, to simulate complex robot actions like navigation, locomotion and manipulation. These skills are then refined using reinforcement learning in NVIDIA Isaac Lab, an open-source framework for robot learning.
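    The refinement step follows the standard simulate-act-reward pattern of reinforcement learning. The sketch below uses a generic Gymnasium environment as a stand-in to show that pattern; it is not Isaac Lab's actual training API, which runs thousands of GPU-parallel environments with a learned policy in place of the pieces stubbed here.

    import gymnasium as gym

    env = gym.make("Humanoid-v5")  # placeholder locomotion task, not an Isaac Lab env
    obs, info = env.reset(seed=0)
    for _ in range(1_000):
        action = env.action_space.sample()  # a trained policy would act here
        obs, reward, terminated, truncated, info = env.step(action)
        # Locomotion rewards typically combine forward progress, stability
        # and energy penalties; the policy is updated to maximize the return.
        if terminated or truncated:
            obs, info = env.reset()
    env.close()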


    This simulation-first approach enabled Hexagon to fast-track its robotic development, allowing AEON to master core locomotion skills in just 2-3 weeks — rather than 5-6 months — before real-world deployment.
    In addition, AEON taps into NVIDIA Jetson Orin onboard computers to autonomously move, navigate and perform its tasks in real time, enhancing its speed and accuracy while operating in complex and dynamic environments. Hexagon is also planning to upgrade AEON with NVIDIA IGX Thor to enable functional safety for collaborative operation.
    “Our goal with AEON was to design an intelligent, autonomous humanoid that addresses the real-world challenges industrial leaders have shared with us over the past months,” said Arnaud Robert, president of Hexagon’s robotics division. “By leveraging NVIDIA’s full-stack robotics and simulation platforms, we were able to deliver a best-in-class humanoid that combines advanced mechatronics, multimodal sensor fusion and real-time AI.”
    Data Comes to Life Through Reality Capture and Omniverse Integration 
    AEON will be piloted in factories and warehouses to scan everything from small precision parts and automotive components to large assembly lines and storage areas.

    Captured data comes to life in RCS, a platform that allows users to collaborate, visualize and share reality-capture data by tapping into HxDR and NVIDIA Omniverse running in the cloud. This removes the constraint of local infrastructure.
    “Digital twins offer clear advantages, but adoption has been challenging in several industries,” said Lucas Heinzle, vice president of research and development at Hexagon’s robotics division. “AEON’s sophisticated sensor suite enables the integration of reality data capture with NVIDIA Omniverse, streamlining workflows for our customers and moving us closer to making digital twins a mainstream tool for collaboration and innovation.”
    AEON’s Next Steps
    By adopting the OpenUSD framework and developing on Omniverse, Hexagon can generate high-fidelity digital twins from scanned data — establishing a data flywheel to continuously train AEON.
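    For a flavor of what OpenUSD authoring looks like, here is a minimal sketch using the pxr Python bindings (the usd-core package); the stage and prim names are illustrative, not Hexagon's actual pipeline:

    from pxr import Usd, UsdGeom

    # Create a new USD stage representing a (toy) factory digital twin.
    stage = Usd.Stage.CreateNew("factory_twin.usda")
    factory = UsdGeom.Xform.Define(stage, "/Factory")
    # A scanned asset would typically land here as a mesh, or as a reference
    # to a payload file produced by the reality-capture pipeline.
    scanned = UsdGeom.Mesh.Define(stage, "/Factory/ScannedAsset")
    stage.SetDefaultPrim(factory.GetPrim())
    stage.Save()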
    This latest work with Hexagon is helping shape the future of physical AI — delivering scalable, efficient solutions to address the challenges faced by industries that depend on capturing real-world data.
    Watch the Hexagon LIVE keynote, explore presentations and read more about AEON.
    All imagery courtesy of Hexagon.
    #hexagon #taps #nvidia #robotics #software
    Hexagon Taps NVIDIA Robotics and AI Software to Build and Deploy AEON, a New Humanoid
    As a global labor shortage leaves 50 million positions unfilled across industries like manufacturing and logistics, Hexagon — a global leader in measurement technologies — is developing humanoid robots that can lend a helping hand. Industrial sectors depend on skilled workers to perform a variety of error-prone tasks, including operating high-precision scanners for reality capture — the process of capturing digital data to replicate the real world in simulation. At the Hexagon LIVE Global conference, Hexagon’s robotics division today unveiled AEON — a new humanoid robot built in collaboration with NVIDIA that’s engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support. Hexagon plans to deploy AEON across automotive, transportation, aerospace, manufacturing, warehousing and logistics. Future use cases for AEON include: Reality capture, which involves automatic planning and then scanning of assets, industrial spaces and environments to generate 3D models. The captured data is then used for advanced visualization and collaboration in the Hexagon Digital Realityplatform powering Hexagon Reality Cloud Studio. Manipulation tasks, such as sorting and moving parts in various industrial and manufacturing settings. Part inspection, which includes checking parts for defects or ensuring adherence to specifications. Industrial operations, including highly dexterous technical tasks like machinery operations, teleoperation and scanning parts using high-end scanners. “The age of general-purpose robotics has arrived, due to technological advances in simulation and physical AI,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “Hexagon’s new AEON humanoid embodies the integration of NVIDIA’s three-computer robotics platform and is making a significant leap forward in addressing industry-critical challenges.” Using NVIDIA’s Three Computers to Develop AEON  To build AEON, Hexagon used NVIDIA’s three computers for developing and deploying physical AI systems. They include AI supercomputers to train and fine-tune powerful foundation models; the NVIDIA Omniverse platform, running on NVIDIA OVX servers, for testing and optimizing these models in simulation environments using real and physically based synthetic data; and NVIDIA IGX Thor robotic computers to run the models. Hexagon is exploring using NVIDIA accelerated computing to post-train the NVIDIA Isaac GR00T N1.5 open foundation model to improve robot reasoning and policies, and tapping Isaac GR00T-Mimic to generate vast amounts of synthetic motion data from a few human demonstrations. AEON learns many of its skills through simulations powered by the NVIDIA Isaac platform. Hexagon uses NVIDIA Isaac Sim, a reference robotic simulation application built on Omniverse, to simulate complex robot actions like navigation, locomotion and manipulation. These skills are then refined using reinforcement learning in NVIDIA Isaac Lab, an open-source framework for robot learning. This simulation-first approach enabled Hexagon to fast-track its robotic development, allowing AEON to master core locomotion skills in just 2-3 weeks — rather than 5-6 months — before real-world deployment. In addition, AEON taps into NVIDIA Jetson Orin onboard computers to autonomously move, navigate and perform its tasks in real time, enhancing its speed and accuracy while operating in complex and dynamic environments. 
Hexagon is also planning to upgrade AEON with NVIDIA IGX Thor to enable functional safety for collaborative operation. “Our goal with AEON was to design an intelligent, autonomous humanoid that addresses the real-world challenges industrial leaders have shared with us over the past months,” said Arnaud Robert, president of Hexagon’s robotics division. “By leveraging NVIDIA’s full-stack robotics and simulation platforms, we were able to deliver a best-in-class humanoid that combines advanced mechatronics, multimodal sensor fusion and real-time AI.” Data Comes to Life Through Reality Capture and Omniverse Integration  AEON will be piloted in factories and warehouses to scan everything from small precision parts and automotive components to large assembly lines and storage areas. Captured data comes to life in RCS, a platform that allows users to collaborate, visualize and share reality-capture data by tapping into HxDR and NVIDIA Omniverse running in the cloud. This removes the constraint of local infrastructure. “Digital twins offer clear advantages, but adoption has been challenging in several industries,” said Lucas Heinzle, vice president of research and development at Hexagon’s robotics division. “AEON’s sophisticated sensor suite enables the integration of reality data capture with NVIDIA Omniverse, streamlining workflows for our customers and moving us closer to making digital twins a mainstream tool for collaboration and innovation.” AEON’s Next Steps By adopting the OpenUSD framework and developing on Omniverse, Hexagon can generate high-fidelity digital twins from scanned data — establishing a data flywheel to continuously train AEON. This latest work with Hexagon is helping shape the future of physical AI — delivering scalable, efficient solutions to address the challenges faced by industries that depend on capturing real-world data. Watch the Hexagon LIVE keynote, explore presentations and read more about AEON. All imagery courtesy of Hexagon. #hexagon #taps #nvidia #robotics #software
    BLOGS.NVIDIA.COM
    Hexagon Taps NVIDIA Robotics and AI Software to Build and Deploy AEON, a New Humanoid
    As a global labor shortage leaves 50 million positions unfilled across industries like manufacturing and logistics, Hexagon — a global leader in measurement technologies — is developing humanoid robots that can lend a helping hand.
    Industrial sectors depend on skilled workers to perform a variety of error-prone tasks, including operating high-precision scanners for reality capture — the process of capturing digital data to replicate the real world in simulation.
    At the Hexagon LIVE Global conference, Hexagon’s robotics division today unveiled AEON — a new humanoid robot built in collaboration with NVIDIA that’s engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support. Hexagon plans to deploy AEON across automotive, transportation, aerospace, manufacturing, warehousing and logistics.
    Future use cases for AEON include:
    Reality capture, which involves automatic planning and then scanning of assets, industrial spaces and environments to generate 3D models. The captured data is then used for advanced visualization and collaboration in the Hexagon Digital Reality (HxDR) platform powering Hexagon Reality Cloud Studio (RCS).
    Manipulation tasks, such as sorting and moving parts in various industrial and manufacturing settings.
    Part inspection, which includes checking parts for defects or ensuring adherence to specifications.
    Industrial operations, including highly dexterous technical tasks like machinery operations, teleoperation and scanning parts using high-end scanners.
    “The age of general-purpose robotics has arrived, due to technological advances in simulation and physical AI,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “Hexagon’s new AEON humanoid embodies the integration of NVIDIA’s three-computer robotics platform and is making a significant leap forward in addressing industry-critical challenges.”
    Using NVIDIA’s Three Computers to Develop AEON
    To build AEON, Hexagon used NVIDIA’s three computers for developing and deploying physical AI systems. They include AI supercomputers to train and fine-tune powerful foundation models; the NVIDIA Omniverse platform, running on NVIDIA OVX servers, for testing and optimizing these models in simulation environments using real and physically based synthetic data; and NVIDIA IGX Thor robotic computers to run the models.
    Hexagon is exploring using NVIDIA accelerated computing to post-train the NVIDIA Isaac GR00T N1.5 open foundation model to improve robot reasoning and policies, and tapping Isaac GR00T-Mimic to generate vast amounts of synthetic motion data from a few human demonstrations.
    AEON learns many of its skills through simulations powered by the NVIDIA Isaac platform. Hexagon uses NVIDIA Isaac Sim, a reference robotic simulation application built on Omniverse, to simulate complex robot actions like navigation, locomotion and manipulation. These skills are then refined using reinforcement learning in NVIDIA Isaac Lab, an open-source framework for robot learning.
    https://blogs.nvidia.com/wp-content/uploads/2025/06/Copy-of-robotics-hxgn-live-blog-1920x1080-1.mp4
    This simulation-first approach enabled Hexagon to fast-track its robotic development, allowing AEON to master core locomotion skills in just 2-3 weeks — rather than 5-6 months — before real-world deployment.
    In addition, AEON taps into NVIDIA Jetson Orin onboard computers to autonomously move, navigate and perform its tasks in real time, enhancing its speed and accuracy while operating in complex and dynamic environments. Hexagon is also planning to upgrade AEON with NVIDIA IGX Thor to enable functional safety for collaborative operation.
    “Our goal with AEON was to design an intelligent, autonomous humanoid that addresses the real-world challenges industrial leaders have shared with us over the past months,” said Arnaud Robert, president of Hexagon’s robotics division. “By leveraging NVIDIA’s full-stack robotics and simulation platforms, we were able to deliver a best-in-class humanoid that combines advanced mechatronics, multimodal sensor fusion and real-time AI.”
    Data Comes to Life Through Reality Capture and Omniverse Integration
    AEON will be piloted in factories and warehouses to scan everything from small precision parts and automotive components to large assembly lines and storage areas. Captured data comes to life in RCS, a platform that allows users to collaborate, visualize and share reality-capture data by tapping into HxDR and NVIDIA Omniverse running in the cloud. This removes the constraint of local infrastructure.
    “Digital twins offer clear advantages, but adoption has been challenging in several industries,” said Lucas Heinzle, vice president of research and development at Hexagon’s robotics division. “AEON’s sophisticated sensor suite enables the integration of reality data capture with NVIDIA Omniverse, streamlining workflows for our customers and moving us closer to making digital twins a mainstream tool for collaboration and innovation.”
    AEON’s Next Steps
    By adopting the OpenUSD framework and developing on Omniverse, Hexagon can generate high-fidelity digital twins from scanned data — establishing a data flywheel to continuously train AEON.
    This latest work with Hexagon is helping shape the future of physical AI — delivering scalable, efficient solutions to address the challenges faced by industries that depend on capturing real-world data.
    Watch the Hexagon LIVE keynote, explore presentations and read more about AEON. All imagery courtesy of Hexagon.
  • NVIDIA and Partners Highlight Next-Generation Robotics, Automation and AI Technologies at Automatica

    From the heart of Germany’s automotive sector to manufacturing hubs across France and Italy, Europe is embracing industrial AI and advanced AI-powered robotics to address labor shortages, boost productivity and fuel sustainable economic growth.
    Robotics companies are developing humanoid robots and collaborative systems that integrate AI into real-world manufacturing applications. Supported by a $200 billion investment initiative and coordinated efforts from the European Commission, Europe is positioning itself at the forefront of the next wave of industrial automation, powered by AI.
    This momentum is on full display at Automatica — Europe’s premier conference on advancements in robotics, machine vision and intelligent manufacturing — taking place this week in Munich, Germany.
    NVIDIA and its ecosystem of partners and customers are showcasing next-generation robots, automation and AI technologies designed to accelerate the continent’s leadership in smart manufacturing and logistics.
    NVIDIA Technologies Boost Robotics Development 
    Central to advancing robotics development is Europe’s first industrial AI cloud, announced at NVIDIA GTC Paris at VivaTech earlier this month. The Germany-based AI factory, featuring 10,000 NVIDIA GPUs, provides European manufacturers with secure, sovereign and centralized AI infrastructure for industrial workloads. It will support applications ranging from design and engineering to factory digital twins and robotics.
    To help accelerate humanoid development, NVIDIA released NVIDIA Isaac GR00T N1.5 — an open foundation model for humanoid robot reasoning and skills. This update enhances the model’s adaptability and ability to follow instructions, significantly improving its performance in material handling and manufacturing tasks.
    To help post-train GR00T N1.5, NVIDIA has also released the Isaac GR00T-Dreams blueprint — a reference workflow for generating vast amounts of synthetic trajectory data from a small number of human demonstrations — enabling robots to generalize across behaviors and adapt to new environments with minimal human demonstration data.
    In addition, early developer previews of NVIDIA Isaac Sim 5.0 and Isaac Lab 2.2 — open-source robot simulation and learning frameworks optimized for NVIDIA RTX PRO 6000 workstations — are now available on GitHub.
    Image courtesy of Wandelbots.
    Robotics Leaders Tap NVIDIA Simulation Technology to Develop and Deploy Humanoids and More 
    Robotics developers and solutions providers across the globe are integrating NVIDIA’s three computers to train, simulate and deploy robots.
    NEURA Robotics, a German robotics company and pioneer for cognitive robots, unveiled the third generation of its humanoid, 4NE1, designed to assist humans in domestic and professional environments through advanced cognitive capabilities and humanlike interaction. 4NE1 is powered by GR00T N1 and was trained in Isaac Sim and Isaac Lab before real-world deployment.
    NEURA Robotics is also presenting Neuraverse, a digital twin and interconnected ecosystem for robot training, skills and applications, fully compatible with NVIDIA Omniverse technologies.
    Delta Electronics, a global leader in power management and smart green solutions, is debuting two next-generation collaborative robots: D-Bot Mar and D-Bot 2 in 1 — both trained using Omniverse and Isaac Sim technologies and libraries. These cobots are engineered to transform intralogistics and optimize production flows.
    Wandelbots, the creator of the Wandelbots NOVA software platform for industrial robotics, is partnering with SoftServe, a global IT consulting and digital services provider, to scale simulation-first automation using NVIDIA Isaac Sim, enabling virtual validation and real-world deployment with maximum impact.
    Cyngn, a pioneer in autonomous mobile robotics, is integrating its DriveMod technology into Isaac Sim to enable large-scale, high-fidelity virtual testing of advanced autonomous operation. Purpose-built for industrial applications, DriveMod is already deployed on vehicles such as the Motrec MT-160 Tugger and BYD Forklift, delivering sophisticated automation to material handling operations.
    Doosan Robotics, a company specializing in AI robotic solutions, will showcase its “sim to real” solution, using NVIDIA Isaac Sim and cuRobo. Doosan will be showcasing how to seamlessly transfer tasks from simulation to real robots across a wide range of applications — from manufacturing to service industries.
    Franka Robotics has integrated Isaac GR00T N1.5 into a dual-arm Franka Research 3 (FR3) robot for robotic control. The integration of GR00T N1.5 allows the system to interpret visual input, understand task context and autonomously perform complex manipulation — without the need for task-specific programming or hardcoded logic.
    Image courtesy of Franka Robotics.
    Hexagon, the global leader in measurement technologies, launched its new humanoid, dubbed AEON. With its unique locomotion system and multimodal sensor fusion, and powered by NVIDIA’s three-computer solution, AEON is engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support.
    Intrinsic, a software and AI robotics company, is integrating Intrinsic Flowstate with Omniverse and OpenUSD for advanced visualization and digital twins that can be used in many industrial use cases. The company is also using NVIDIA foundation models to enhance robot capabilities like grasp planning through AI and simulation technologies.
    SCHUNK, a global leader in gripping systems and automation technology, is showcasing its innovative grasping kit powered by the NVIDIA Jetson AGX Orin module. The kit intelligently detects objects and calculates optimal grasping points. Schunk is also demonstrating seamless simulation-to-reality transfer using IGS Virtuous software — built on Omniverse technologies — to control a real robot through simulation in a pick-and-place scenario.
    Universal Robots is showcasing UR15, its fastest cobot yet. Powered by the UR AI Accelerator — developed with NVIDIA and running on Jetson AGX Orin using CUDA-accelerated Isaac libraries — UR15 helps set a new standard for industrial automation.

    Vention, a full-stack software and hardware automation company, launched its Machine Motion AI, built on CUDA-accelerated Isaac libraries and powered by Jetson. Vention is also expanding its lineup of robotic offerings by adding the FR3 robot from Franka Robotics to its ecosystem, enhancing its solutions for academic and research applications.
    Image courtesy of Vention.
    Learn more about the latest robotics advancements by joining NVIDIA at Automatica, running through Friday, June 27. 
  • What a world we live in when scientists finally unlock the secrets to the axolotls' ability to regenerate limbs, only to reveal that the key lies not in some miraculous regrowth molecule, but in its controlled destruction! Seriously, what kind of twisted logic is this? Are we supposed to celebrate the fact that the secret to regeneration is, in fact, about knowing when to destroy something instead of nurturing and encouraging growth? This revelation is not just baffling; it's downright infuriating!

    In an age where regenerative medicine holds the promise of healing wounds and restoring functionality, we are faced with the shocking realization that the science is not about building up, but rather about tearing down. Why would we ever want to focus on the destruction of growth molecules instead of creating an environment where regeneration can bloom unimpeded? Where is the inspiration in that? It feels like a slap in the face to anyone who believes in the potential of science to improve lives!

    Moreover, can we talk about the implications of this discovery? If the key to regeneration involves a meticulous dance of destruction, what does that say about our approach to medical advancements? Are we really expected to just stand by and accept that we must embrace an idea that says, "let's get rid of the good stuff to allow for growth"? This is not just a minor flaw in reasoning; it's a fundamental misunderstanding of what regeneration should mean for us!

    To make matters worse, this revelation could lead to misguided practices in regenerative medicine. Instead of developing therapies that promote healing and growth, we could end up with treatments that focus on the elimination of beneficial molecules. This is absolutely unacceptable! How dare the scientific community suggest that the way forward is through destruction rather than cultivation? We should be demanding more from our researchers, not less!

    Let’s not forget the ethical implications. If the path to regeneration is paved with the controlled destruction of vital components, how can we trust the outcomes? We’re putting lives in the hands of a process that promotes destruction. Just imagine the future of medicine being dictated by a philosophy that sounds more like a dystopian nightmare than a beacon of hope.

    It is high time we hold scientists accountable for the direction they are taking in regenerative research. We need a shift in focus that prioritizes constructive growth, not destructive measures. If we are serious about advancing regenerative medicine, we must reject this flawed notion and demand a commitment to genuine regeneration—the kind that nurtures life, rather than sabotages it.

    Let’s raise our voices against this madness. We deserve better than a science that advocates for destruction as the means to an end. The axolotls may thrive on this paradox, but we, as humans, should expect far more from our scientific endeavors.

    Scientists Discover the Key to Axolotls’ Ability to Regenerate Limbs
    A new study reveals the key lies not in the production of a regrowth molecule, but in that molecule's controlled destruction. The discovery could inspire future regenerative medicine.
  • Do reasoning AI models really ‘think’ or not? Apple research sparks lively debate, response

    Ultimately, the big takeaway for ML researchers is that before proclaiming an AI milestone—or obituary—make sure the test itself isn’t flawed.
  • Malicious PyPI Package Masquerades as Chimera Module to Steal AWS, CI/CD, and macOS Data

    Jun 16, 2025Ravie LakshmananMalware / DevOps

    Cybersecurity researchers have discovered a malicious package on the Python Package Index (PyPI) repository that's capable of harvesting sensitive developer-related information, such as credentials, configuration data, and environment variables, among others.
    The package, named chimera-sandbox-extensions, attracted 143 downloads and likely targets users of a service called Chimera Sandbox, which was released by Singaporean tech company Grab last August to facilitate "experimentation and development of [machine learning] solutions."
    The package masquerades as a helper module for Chimera Sandbox, but "aims to steal credentials and other sensitive information such as Jamf configuration, CI/CD environment variables, AWS tokens, and more," JFrog security researcher Guy Korolevski said in a report published last week.
    Once installed, it attempts to connect to an external domain whose domain name is generated using a domain generation algorithm (DGA) in order to download and execute a next-stage payload.
    Specifically, the malware acquires from the domain an authentication token, which is then used to send a request to the same domain and retrieve the Python-based information stealer.
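    The article doesn't spell out the algorithm itself, but the general mechanics of a DGA are simple: the malware and its operator derive the same pseudo-random domain list from a shared seed and the current date, so the operator only needs to register one of the upcoming names. A minimal illustrative sketch in Python follows; the seed, hash scheme and placeholder TLD are assumptions for illustration, not details from the JFrog report.

```python
# Illustrative sketch of a generic domain generation algorithm (DGA).
# The actual algorithm used by chimera-sandbox-extensions is not public;
# the seed, hash scheme, and ".example" TLD below are hypothetical.
import hashlib
from datetime import date, timedelta

def candidate_domains(seed: str, day: date, count: int = 5) -> list[str]:
    """Derive a deterministic, date-dependent list of rendezvous domains."""
    domains = []
    for i in range(count):
        material = f"{seed}:{day.isoformat()}:{i}".encode()
        digest = hashlib.sha256(material).hexdigest()
        domains.append(digest[:16] + ".example")  # placeholder TLD, not a real C2
    return domains

if __name__ == "__main__":
    today = date.today()
    for d in (today, today + timedelta(days=1)):
        print(d, candidate_domains("hypothetical-seed", d))
```

    The determinism cuts both ways: defenders who recover the seed can precompute the full candidate list ahead of time and block or sinkhole it.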

    The stealer malware is equipped to siphon a wide range of data from infected machines. This includes -

    JAMF receipts, which are records of software packages installed by Jamf Pro on managed computers
    Pod sandbox environment authentication tokens and git information
    CI/CD information from environment variables
    Zscaler host configuration
    Amazon Web Services account information and tokens
    Public IP address
    General platform, user, and host information

    The kind of data gathered by the malware shows that it's mainly geared towards corporate and cloud infrastructure. In addition, the extraction of JAMF receipts indicates that it's also capable of targeting Apple macOS systems.
    The collected information is sent via a POST request back to the same domain, after which the server assesses if the machine is a worthy target for further exploitation. However, JFrog said it was unable to obtain the payload at the time of analysis.
    "The targeted approach employed by this malware, along with the complexity of its multi-stage targeted payload, distinguishes it from the more generic open-source malware threats we have encountered thus far, highlighting the advancements that malicious packages have made recently," Jonathan Sar Shalom, director of threat research at JFrog Security Research team, said.

    "This new sophistication of malware underscores why development teams remain vigilant with updates—alongside proactive security research – to defend against emerging threats and maintain software integrity."
    The disclosure comes as SafeDep and Veracode detailed a number of malware-laced npm packages that are designed to execute remote code and download additional payloads. The packages in question are listed below -

    eslint-config-airbnb-compat (676 Downloads)
    ts-runtime-compat-check (1,588 Downloads)
    solders (983 Downloads)
    @mediawave/lib (386 Downloads)
    All the identified npm packages have since been taken down from npm, but not before they were downloaded hundreds of times from the package registry.
    SafeDep's analysis of eslint-config-airbnb-compat found that the JavaScript library has ts-runtime-compat-check listed as a dependency, which, in turn, contacts an external server defined in the former package ("proxy.eslint-proxy[.]site") to retrieve and execute a Base64-encoded string. The exact nature of the payload is unknown.
    "It implements a multi-stage remote code execution attack using a transitive dependency to hide the malicious code," SafeDep researcher Kunal Singh said.
    Solders, on the other hand, has been found to incorporate a post-install script in its package.json, causing the malicious code to be automatically executed as soon as the package is installed.
    "At first glance, it's hard to believe that this is actually valid JavaScript," the Veracode Threat Research team said. "It looks like a seemingly random collection of Japanese symbols. It turns out that this particular obfuscation scheme uses the Unicode characters as variable names and a sophisticated chain of dynamic code generation to work."
    Decoding the script reveals an extra layer of obfuscation, unpacking which reveals its main function: Check if the compromised machine is Windows, and if so, run a PowerShell command to retrieve a next-stage payload from a remote server ("firewall[.]tel").
    This second-stage PowerShell script, also obscured, is designed to fetch a Windows batch script from another domain ("cdn.audiowave[.]org") and configures a Windows Defender Antivirus exclusion list to avoid detection. The batch script then paves the way for the execution of a .NET DLL that reaches out to a PNG image hosted on ImgBB ("i.ibb[.]co").
    "[The DLL] is grabbing the last two pixels from this image and then looping through some data contained elsewhere in it," Veracode said. "It ultimately builds up in memory YET ANOTHER .NET DLL."

    Furthermore, the DLL is equipped to create task scheduler entries and features the ability to bypass user account control (UAC) using a combination of FodHelper.exe and programmatic identifiers (ProgIDs) to evade defenses and avoid triggering any security alerts to the user.
    The newly-downloaded DLL is Pulsar RAT, a "free, open-source Remote Administration Tool for Windows" and a variant of the Quasar RAT.
    "From a wall of Japanese characters to a RAT hidden within the pixels of a PNG file, the attacker went to extraordinary lengths to conceal their payload, nesting it a dozen layers deep to evade detection," Veracode said. "While the attacker's ultimate objective for deploying the Pulsar RAT remains unclear, the sheer complexity of this delivery mechanism is a powerful indicator of malicious intent."
    Crypto Malware in the Open-Source Supply Chain
    The findings also coincide with a report from Socket that identified credential stealers, cryptocurrency drainers, cryptojackers, and clippers as the main types of threats targeting the cryptocurrency and blockchain development ecosystem.

    Some of the examples of these packages include -

    express-dompurify and pumptoolforvolumeandcomment, which are capable of harvesting browser credentials and cryptocurrency wallet keys
    bs58js, which drains a victim's wallet and uses multi-hop transfers to obscure theft and frustrate forensic tracing.
    lsjglsjdv, asyncaiosignal, and raydium-sdk-liquidity-init, which function as clippers to monitor the system clipboard for cryptocurrency wallet strings and replace them with threat actor-controlled addresses to reroute transactions to the attackers

    "As Web3 development converges with mainstream software engineering, the attack surface for blockchain-focused projects is expanding in both scale and complexity," Socket security researcher Kirill Boychenko said.
    "Financially motivated threat actors and state-sponsored groups are rapidly evolving their tactics to exploit systemic weaknesses in the software supply chain. These campaigns are iterative, persistent, and increasingly tailored to high-value targets."
    AI and Slopsquatting
    The rise of artificial intelligence (AI)-assisted coding, also called vibe coding, has unleashed another novel threat in the form of slopsquatting, where large language models (LLMs) can hallucinate non-existent but plausible package names that bad actors can weaponize to conduct supply chain attacks.
    Trend Micro, in a report last week, said it observed an unnamed advanced agent "confidently" cooking up a phantom Python package named starlette-reverse-proxy, only for the build process to crash with the error "module not found." However, should an adversary upload a package with the same name on the repository, it can have serious security consequences.

    Furthermore, the cybersecurity company noted that advanced coding agents and workflows such as Claude Code CLI, OpenAI Codex CLI, and Cursor AI with Model Context Protocol (MCP)-backed validation can help reduce, but not completely eliminate, the risk of slopsquatting.
    "When agents hallucinate dependencies or install unverified packages, they create an opportunity for slopsquatting attacks, in which malicious actors pre-register those same hallucinated names on public registries," security researcher Sean Park said.
    "While reasoning-enhanced agents can reduce the rate of phantom suggestions by approximately half, they do not eliminate them entirely. Even the vibe-coding workflow augmented with live MCP validations achieves the lowest rates of slip-through, but still misses edge cases."

  • EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization, presented at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose; a simplified sketch of this final step appears just after this list.
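    As a concrete illustration of that last step, here is the standard weighted 2D rigid alignment (the Kabsch/Procrustes solution) for matched point sets. It is a generic sketch, not the authors' code, and the function and variable names are ours.

    ```python
    import numpy as np

    def procrustes_2d(ground_pts, aerial_pts, weights=None):
        """Weighted 2D rigid alignment: find the rotation and translation
        mapping ground BEV points onto their matched aerial points."""
        g_pts = np.asarray(ground_pts, dtype=float)
        a_pts = np.asarray(aerial_pts, dtype=float)
        w = np.ones(len(g_pts)) if weights is None else np.asarray(weights, float)
        w = w / w.sum()

        # Center both point sets on their weighted centroids.
        mu_g = (w[:, None] * g_pts).sum(axis=0)
        mu_a = (w[:, None] * a_pts).sum(axis=0)
        g = g_pts - mu_g
        a = a_pts - mu_a

        # Weighted cross-covariance; its SVD yields the optimal rotation.
        H = (w[:, None] * g).T @ a
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, d]) @ U.T

        yaw = np.arctan2(R[1, 0], R[0, 0])       # rotation angle
        t = mu_a - R @ mu_g                      # translation (x, y)
        return yaw, t

    # Example: recover a 30-degree yaw and a (2, 1) offset from clean matches.
    rng = np.random.default_rng(0)
    pts = rng.normal(size=(50, 2))
    th = np.deg2rad(30)
    R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    aerial = pts @ R_true.T + np.array([2.0, 1.0])
    yaw, t = procrustes_2d(pts, aerial)          # yaw ~ 0.5236, t ~ (2, 1)
    ```

    Because only a sparse set of confident matches feeds this closed-form step, the pose estimate is cheap to compute and easy to inspect, which is what makes the matched points themselves interpretable.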

    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

  • Alec Haase Q&A: Customer Engagement Book Interview

    Reading Time: 6 minutes
    What is marketing without data? Assumptions. Guesses. Fluff.
    For Chapter 6 of our book, “The Customer Engagement Book: Adapt or Die,” we spoke with Alec Haase, Product GTM Lead, Commerce and AI at Hightouch, to explore how engagement data can truly inform critical business decisions. 
    Alec discusses the different types of customer behaviors that matter most, how to separate meaningful information from the rest, and the role of systems that learn over time to create tailored customer experiences.
    This interview provides insights into using data for real-time actions and shaping the future of marketing. Prepare to learn about AI decision-making and how a focus on data is changing how we engage with customers.

     
    Alec Haase Q&A Interview
    1. What types of customer engagement data are most valuable for making strategic business decisions?
    It’s a culmination of everything.
    Behavioral signals — the actual conversions and micro-conversions that users take within your product or website.
    Obviously, that’s things like purchases. But there are also other behavioral signals marketers should be using and thinking about. Things like micro-conversions — maybe that’s shopping for a product, clicking to learn more about a product, or visiting a certain page on your website.
    Behind that, you also need to have all your user data to tie that to.

    So I know someone took said action; I can follow up with them in email or out on paid social. I need the user identifiers to do that.

    2. How do you distinguish between data that is actionable versus data that is just noise?
    Data that’s actionable includes the conversions and micro-conversions — very clear instances of “someone did this.” I can react to or measure those.
    What’s becoming a bit of a challenge for marketers is understanding that there’s other data that is valuable for machine learning or reinforcement learning models, things like tags on the types of products customers are interacting with.
    Maybe there’s category information about that product, or color information. That would otherwise look like noise to the average marketer. But behind the scenes, it can be used for reinforcement learning.

    There is definitely the “clear-cut” actionable data, but marketers shouldn’t be quick to classify things as noise because the rise in machine learning and reinforcement learning will make that data more valuable.

    3. How can customer engagement data be used to identify and prioritize new business opportunities?
    At Hightouch, we don’t necessarily think about retroactive analysis. We have a system where customer engagement data fires in and real-time scores react to it.
    An interesting example is when you have machine learning and reinforcement learning models running. In the pet retailer example I gave you, the system is able to figure out what to prioritize.
    The concept of reinforcement learning is not a marketer making rules to say, “I know this type of thing works well on this type of audience.”

    It’s the machine itself using the data to determine what attribute responds well to which offer, recommendation, or marketing campaign.

    4. How can marketers ensure their use of customer engagement data aligns with the broader business objectives?
    It starts with the objectives. It’s starting with the desired outcome and working your way back. That whole flip of the paradigm is starting with outcomes and letting the system optimize. What are you trying to drive, and then back into the types of experiences that can make that happen?
    There’s personalization.
    When we talk about data-driven experiences and personalization, Spotify Wrapped is the North Star. For Spotify Wrapped, you want to drive customer stickiness and create a brand. To make that happen, you want to send a personalized email. What components do you want in that email?

    Maybe it’s top five songs, top five artists, and then you can back into the actual event data you need to make that happen.

    5. What role does engagement data play in influencing cross-functional decisions such as those in product development, sales, or customer service?
    For product development, it’s product analytics — knowing what features users are using, or seeing in heat maps where users are clicking.
    Sales is similar. We’re using behavioral signals like what types of content they’re reading on the site to help inform what they would be interested in — the types of products or the types of use cases.

    For customer service, you can look at errors they’ve run into in the past or specific purchases they’ve made, so that when you’re helping them the next time they engage with you, you know exactly what their past behaviors were and what products they could be calling about.

    6. What are some challenges marketers face when trying to translate customer engagement data into actionable insights?
    Access to data is one challenge. You might not know what data you have because marketers historically may not have been used to the systems where data is stored.
    Historically, that’s been pretty siloed away from them. Rich behavioral data and other data across the business was stored somewhere else.
    Now, as more companies embrace the data warehouse at the center of their business, it gives everyone a true single place where data can be stored.

    Marketers are working more with data teams, understanding more about the data they have, and using that data to power downstream use cases, personalization, reinforcement learning, or general business insights.

    7. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations?
    As a marketer, I think proof is key. The best thing is if you’ve actually run a test. “I think we should do this. I ran a small test, and it’s showing that this is actually proving out.” Being able to clearly explain and justify your reasoning with data is super important.

    8. What technology or tools have you found most effective for gathering and analyzing customer engagement data?
    Any type of behavioral event collection, specifically ones that write to the cloud data warehouse, is the critical component. Your data team is operating off the data warehouse.
    Having an event collection product that stores data in that central spot is really important if you want to use the other data when making recommendations.
    You want to get everything into the data warehouse where it can be used both for insights and for putting into action.

    For Spotify Wrapped, you want to collect behavioral event signals like songs listened to or concerts attended, writing to the warehouse so that you can get insights back — how many songs were played this year, projections for next month — but then you can also use those behavioral events in downstream platforms to fire off personalized emails with product recommendations or Spotify Wrapped-style experiences.
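    The collection pattern Haase describes can be pictured with a small sketch (hypothetical helper and table names, not Hightouch's API): each behavioral event becomes a structured record appended to a warehouse table that both analytics and downstream activation read from.

    ```python
    import json
    import time
    import uuid

    def make_event(user_id: str, name: str, properties: dict) -> dict:
        """Shape a behavioral event the way warehouse-native collectors do:
        a stable ID, a timestamp, the user identifier, and free-form properties."""
        return {
            "event_id": str(uuid.uuid4()),
            "timestamp": int(time.time()),
            "user_id": user_id,        # ties the action back to a known user
            "event_name": name,        # e.g. "song_played", "product_viewed"
            "properties": properties,  # category/color-style tags kept for ML
        }

    def write_to_warehouse(event: dict, table: str = "analytics.events") -> None:
        """Stand-in for an INSERT into a cloud data warehouse table."""
        print(f"INSERT INTO {table} VALUES ({json.dumps(event)})")

    write_to_warehouse(make_event("user_42", "song_played",
                                  {"artist": "...", "duration_sec": 183}))
    ```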

    9. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years?

    What we’re excited about is the concept of AI Decisioning — having AI agents actually using customer data to train their own models and decision-making to create personalized experiences.
    We’re sitting on top of all this behavioral data, engagement data, and user attributes, and our system is learning from all of that to make the best decisions across downstream systems.
    Whether that’s as simple as driving a loyalty program and figuring out what emails to send or what on-site experiences to show, or exposing insights that might lead you to completely change your business strategy, we see engagement data as the fuel to the engine of reinforcement learning, machine learning, AI agents, this whole next wave of Martech that’s just now coming.
    But it all starts with having the data to train those systems.

    I think that behavioral data is the fuel of modern Martech, and that only holds more true as Martech platforms adopt these decisioning and AI capabilities, because they’re only as good as the data that’s training the models.

     

     
    This interview Q&A was hosted with Alec Haase, Product GTM Lead, Commerce and AI at Hightouch, for Chapter 6 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
  • Rewriting SymCrypt in Rust to modernize Microsoft’s cryptographic library 

    Outdated coding practices and memory-unsafe languages like C are putting software, including cryptographic libraries, at risk. Fortunately, memory-safe languages like Rust, along with formal verification tools, are now mature enough to be used at scale, helping prevent issues like crashes, data corruption, flawed implementation, and side-channel attacks.
    To address these vulnerabilities and improve memory safety, we’re rewriting SymCrypt—Microsoft’s open-source cryptographic library—in Rust. We’re also incorporating formal verification methods. SymCrypt is used in Windows, Azure Linux, Xbox, and other platforms.
    Currently, SymCrypt is primarily written in cross-platform C, with limited use of hardware-specific optimizations through intrinsics (compiler-provided low-level functions) and assembly language (direct processor instructions). It provides a wide range of algorithms, including AES-GCM, SHA, ECDSA, and the more recent post-quantum algorithms ML-KEM and ML-DSA.
    Formal verification will confirm that implementations behave as intended and don’t deviate from algorithm specifications, critical for preventing attacks. We’ll also analyze compiled code to detect side-channel leaks caused by timing or hardware-level behavior.
    Proving Rust program properties with Aeneas
    Program verification is the process of proving that a piece of code will always satisfy a given property, no matter the input. Rust’s type system profoundly improves the prospects for program verification by providing strong ownership guarantees, by construction, using a discipline known as “aliasing xor mutability”.
    For example, reasoning about C code often requires proving that two non-const pointers are live and non-overlapping, a property that can depend on external client code. In contrast, Rust’s type system guarantees this property for any two mutably borrowed references.
    As a result, new tools have emerged specifically for verifying Rust code. We chose Aeneas because it helps provide a clean separation between code and proofs.
    Developed by Microsoft Azure Research in partnership with Inria, the French National Institute for Research in Digital Science and Technology, Aeneas connects to proof assistants like Lean, allowing us to draw on a large body of mathematical proofs—especially valuable given the mathematical nature of cryptographic algorithms—and benefit from Lean’s active user community.
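    For a flavor of what a machine-checked proof in Lean looks like, here is a toy Lean 4 lemma of ours, not taken from SymCrypt's proof development; real obligations relate extracted Rust models to algorithm specifications, but they are stated and discharged in the same style.

    ```lean
    -- Toy example: XOR-ing a Boolean with itself always yields false.
    -- `cases` splits on the two values of `a`; `rfl` closes each goal
    -- by computation.
    theorem xor_self_eq_false (a : Bool) : Bool.xor a a = false := by
      cases a <;> rfl
    ```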
    Compiling Rust to C supports backward compatibility  
    We recognize that switching to Rust isn’t feasible for all use cases, so we’ll continue to support, extend, and certify C-based APIs as long as users need them. Users won’t see any changes, as Rust runs underneath the existing C APIs.
    Some users compile our C code directly and may rely on specific toolchains or compiler features that complicate the adoption of Rust code. To address this, we will use Eurydice, a Rust-to-C compiler developed by Microsoft Azure Research, to replace handwritten C code with C generated from formally verified Rust. Eurydice compiles directly from Rust’s MIR intermediate language, and the resulting C code will be checked into the SymCrypt repository alongside the original Rust source code.
    As more users adopt Rust, we’ll continue supporting this compilation path for those who build SymCrypt from source code but aren’t ready to use the Rust compiler. In the long term, we hope to transition users to either use precompiled SymCrypt binaries, or compile from source code in Rust, at which point the Rust-to-C compilation path will no longer be needed.

    Timing analysis with Revizor 
    Even software that has been verified for functional correctness can remain vulnerable to low-level security threats, such as side channels caused by timing leaks or speculative execution. These threats operate at the hardware level and can leak private information, such as memory load addresses, branch targets, or division operands, even when the source code is provably correct. 
    To address this, we’re extending Revizor, a tool developed by Microsoft Azure Research, to more effectively analyze SymCrypt binaries. Revizor models microarchitectural leakage and uses fuzzing techniques to systematically uncover instructions that may expose private information through known hardware-level effects.  
    Earlier cryptographic libraries relied on constant-time programming to keep execution time independent of secret data. However, recent research has shown that this alone is insufficient with today’s CPUs, where every new optimization may open a new side channel. 
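    The constant-time idea itself is language-agnostic and easy to illustrate (a Python sketch, not SymCrypt code): a naive comparison returns at the first mismatch, so its running time leaks how many leading bytes matched, while a constant-time version touches every byte regardless.

    ```python
    import hmac

    def naive_equal(a: bytes, b: bytes) -> bool:
        """Leaky: returns at the first mismatch, so timing reveals
        the length of the matching prefix."""
        if len(a) != len(b):
            return False
        for x, y in zip(a, b):
            if x != y:
                return False
        return True

    def constant_time_equal(a: bytes, b: bytes) -> bool:
        """Accumulates differences over every byte; running time does not
        depend on where (or whether) the inputs differ."""
        if len(a) != len(b):
            return False
        diff = 0
        for x, y in zip(a, b):
            diff |= x ^ y
        return diff == 0

    # In practice, prefer the standard library's vetted primitive:
    assert hmac.compare_digest(b"secret-tag", b"secret-tag")
    ```

    As the paragraph above notes, even code written this carefully can still leak through microarchitectural effects, which is precisely the gap binary-level analysis is meant to probe.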
    By analyzing binary code for specific compilers and platforms, our extended Revizor tool enables deeper scrutiny of vulnerabilities that aren’t visible in the source code.
    Verified Rust implementations begin with ML-KEM
    This long-term effort is in alignment with the Microsoft Secure Future Initiative and brings together experts across Microsoft, building on decades of Microsoft Research investment in program verification and security tooling.
    A preliminary version of ML-KEM in Rust is now available on the preview feature/verifiedcrypto branch of the SymCrypt repository. We encourage users to try the Rust build and share feedback. Looking ahead, we plan to support direct use of the same cryptographic library in Rust without requiring C bindings. 
    Over the coming months, we plan to rewrite, verify, and ship several algorithms in Rust as part of SymCrypt. As our investment in Rust deepens, we expect to gain new insights into how to best leverage the language for high-assurance cryptographic implementations with low-level optimizations. 
    As performance is key to scalability and sustainability, we’re holding new implementations to a high bar, using our benchmarking tools to ensure they match or exceed existing systems.
    Looking forward 
    This is a pivotal moment for high-assurance software. Microsoft’s investment in Rust and formal verification presents a rare opportunity to advance one of our key libraries. We’re excited to scale this work and ultimately deliver an industrial-grade, Rust-based, FIPS-certified cryptographic library.
    Opens in a new tab
    #rewriting #symcrypt #rust #modernize #microsofts
    Rewriting SymCrypt in Rust to modernize Microsoft’s cryptographic library 
    Outdated coding practices and memory-unsafe languages like C are putting software, including cryptographic libraries, at risk. Fortunately, memory-safe languages like Rust, along with formal verification tools, are now mature enough to be used at scale, helping prevent issues like crashes, data corruption, flawed implementation, and side-channel attacks. To address these vulnerabilities and improve memory safety, we’re rewriting SymCrypt—Microsoft’s open-source cryptographic library—in Rust. We’re also incorporating formal verification methods. SymCrypt is used in Windows, Azure Linux, Xbox, and other platforms. Currently, SymCrypt is primarily written in cross-platform C, with limited use of hardware-specific optimizations through intrinsicsand assembly language. It provides a wide range of algorithms, including AES-GCM, SHA, ECDSA, and the more recent post-quantum algorithms ML-KEM and ML-DSA.  Formal verification will confirm that implementations behave as intended and don’t deviate from algorithm specifications, critical for preventing attacks. We’ll also analyze compiled code to detect side-channel leaks caused by timing or hardware-level behavior. Proving Rust program properties with Aeneas Program verification is the process of proving that a piece of code will always satisfy a given property, no matter the input. Rust’s type system profoundly improves the prospects for program verification by providing strong ownership guarantees, by construction, using a discipline known as “aliasing xor mutability”. For example, reasoning about C code often requires proving that two non-const pointers are live and non-overlapping, a property that can depend on external client code. In contrast, Rust’s type system guarantees this property for any two mutably borrowed references. As a result, new tools have emerged specifically for verifying Rust code. We chose Aeneasbecause it helps provide a clean separation between code and proofs. Developed by Microsoft Azure Research in partnership with Inria, the French National Institute for Research in Digital Science and Technology, Aeneas connects to proof assistants like Lean, allowing us to draw on a large body of mathematical proofs—especially valuable given the mathematical nature of cryptographic algorithms—and benefit from Lean’s active user community. Compiling Rust to C supports backward compatibility   We recognize that switching to Rust isn’t feasible for all use cases, so we’ll continue to support, extend, and certify C-based APIs as long as users need them. Users won’t see any changes, as Rust runs underneath the existing C APIs. Some users compile our C code directly and may rely on specific toolchains or compiler features that complicate the adoption of Rust code. To address this, we will use Eurydice, a Rust-to-C compiler developed by Microsoft Azure Research, to replace handwritten C code with C generated from formally verified Rust. Eurydicecompiles directly from Rust’s MIR intermediate language, and the resulting C code will be checked into the SymCrypt repository alongside the original Rust source code. As more users adopt Rust, we’ll continue supporting this compilation path for those who build SymCrypt from source code but aren’t ready to use the Rust compiler. In the long term, we hope to transition users to either use precompiled SymCrypt binaries, or compile from source code in Rust, at which point the Rust-to-C compilation path will no longer be needed. 
We chose Aeneas because it helps provide a clean separation between code and proofs. Developed by Microsoft Azure Research in partnership with Inria, the French National Institute for Research in Digital Science and Technology, Aeneas connects to proof assistants like Lean, allowing us to draw on a large body of mathematical proofs (especially valuable given the mathematical nature of cryptographic algorithms) and benefit from Lean's active user community.
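To give a flavor of what a proof assistant checks, here is a small, self-contained Lean 4 example, unrelated to the actual SymCrypt proofs: a function paired with a machine-checked theorem that it satisfies its specification for every input. Proofs about code extracted by Aeneas follow the same pattern at much larger scale.

    -- Toy specification proof: `double` agrees with multiplication by two
    -- for all natural numbers. Lean accepts the file only if the proof is
    -- complete; there is no input left uncovered.
    def double (n : Nat) : Nat :=
      n + n

    theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
      unfold double
      omega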
Compiling Rust to C supports backward compatibility

We recognize that switching to Rust isn't feasible for all use cases, so we'll continue to support, extend, and certify C-based APIs as long as users need them. Users won't see any changes, as Rust runs underneath the existing C APIs. Some users compile our C code directly and may rely on specific toolchains or compiler features that complicate the adoption of Rust code. To address this, we will use Eurydice, a Rust-to-C compiler developed by Microsoft Azure Research, to replace handwritten C code with C generated from formally verified Rust. Eurydice compiles directly from Rust's MIR intermediate language, and the resulting C code will be checked into the SymCrypt repository alongside the original Rust source code. As more users adopt Rust, we'll continue supporting this compilation path for those who build SymCrypt from source code but aren't ready to use the Rust compiler. In the long term, we hope to transition users to either use precompiled SymCrypt binaries (via C or Rust APIs) or compile from source code in Rust, at which point the Rust-to-C compilation path will no longer be needed.
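To illustrate the idea, the hypothetical Rust function below (not taken from SymCrypt) has a natural C counterpart, shown in the comments; Eurydice's actual output will differ in naming and in how slices and bounds checks are represented:

    // Hypothetical Rust input:
    pub fn xor_into(dst: &mut [u8], src: &[u8]) {
        let n = dst.len().min(src.len());
        for i in 0..n {
            dst[i] ^= src[i];
        }
    }

    // Illustrative shape of the generated C, with slices lowered to
    // pointer/length pairs (not Eurydice's real output):
    //
    //   void xor_into(uint8_t *dst, size_t dst_len,
    //                 const uint8_t *src, size_t src_len) {
    //       size_t n = dst_len < src_len ? dst_len : src_len;
    //       for (size_t i = 0; i < n; i++)
    //           dst[i] ^= src[i];
    //   }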
Timing analysis with Revizor

Even software that has been verified for functional correctness can remain vulnerable to low-level security threats, such as side channels caused by timing leaks or speculative execution. These threats operate at the hardware level and can leak private information, such as memory load addresses, branch targets, or division operands, even when the source code is provably correct.

To address this, we're extending Revizor, a tool developed by Microsoft Azure Research, to more effectively analyze SymCrypt binaries. Revizor models microarchitectural leakage and uses fuzzing techniques to systematically uncover instructions that may expose private information through known hardware-level effects.

Earlier cryptographic libraries relied on constant-time programming, which avoids branches and memory accesses that depend on secret data (see the sketch below). However, recent research has shown that this discipline alone is insufficient on today's CPUs, where every new optimization may open a new side channel. By analyzing binary code for specific compilers and platforms, our extended Revizor tool enables deeper scrutiny of vulnerabilities that aren't visible in the source code.
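For readers unfamiliar with constant-time programming, here is a minimal sketch (not SymCrypt code) contrasting a naive, early-exit comparison with one whose running time is independent of where the inputs differ:

    // Naive comparison: returns as soon as a byte differs, so the running
    // time reveals the length of the matching prefix, a classic timing
    // side channel when comparing secrets such as authentication tags.
    fn eq_naive(a: &[u8], b: &[u8]) -> bool {
        if a.len() != b.len() {
            return false;
        }
        for i in 0..a.len() {
            if a[i] != b[i] {
                return false;
            }
        }
        true
    }

    // Constant-time comparison: inspects every byte unconditionally and
    // folds the differences into an accumulator, so timing does not depend
    // on the secret data. As the post notes, source-level discipline like
    // this is necessary but no longer sufficient on speculative CPUs,
    // which is why binary-level analysis with Revizor is needed.
    fn eq_constant_time(a: &[u8], b: &[u8]) -> bool {
        if a.len() != b.len() {
            return false;
        }
        let mut diff = 0u8;
        for i in 0..a.len() {
            diff |= a[i] ^ b[i];
        }
        diff == 0
    }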
Verified Rust implementations begin with ML-KEM

This long-term effort aligns with the Microsoft Secure Future Initiative and brings together experts across Microsoft, building on decades of Microsoft Research investment in program verification and security tooling. A preliminary version of ML-KEM in Rust is now available on the preview feature/verifiedcrypto branch of the SymCrypt repository. We encourage users to try the Rust build and share feedback. Looking ahead, we plan to support direct use of the same cryptographic library in Rust, without requiring C bindings.

Over the coming months, we plan to rewrite, verify, and ship several algorithms in Rust as part of SymCrypt. As our investment in Rust deepens, we expect to gain new insights into how best to use the language for high-assurance cryptographic implementations with low-level optimizations. Because performance is key to scalability and sustainability, we're holding new implementations to a high bar, using our benchmarking tools to confirm that they match or exceed existing ones.
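For orientation, ML-KEM, like any key-encapsulation mechanism, exposes three operations: key generation, encapsulation, and decapsulation. The trait below is a purely hypothetical sketch of such an interface in Rust; it is not the SymCrypt API, which readers should take from the feature/verifiedcrypto branch itself.

    // Hypothetical KEM interface, for illustration only.
    trait Kem {
        type PublicKey;
        type SecretKey;
        type Ciphertext;
        type SharedSecret;

        // Generate a fresh key pair.
        fn keygen() -> (Self::PublicKey, Self::SecretKey);

        // Derive a shared secret plus a ciphertext that transports it
        // to the holder of the secret key.
        fn encapsulate(pk: &Self::PublicKey) -> (Self::Ciphertext, Self::SharedSecret);

        // Recover the same shared secret from the ciphertext.
        fn decapsulate(sk: &Self::SecretKey, ct: &Self::Ciphertext) -> Self::SharedSecret;
    }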
Looking forward

This is a pivotal moment for high-assurance software. Microsoft's investment in Rust and formal verification presents a rare opportunity to advance one of our key libraries. We're excited to scale this work and ultimately deliver an industrial-grade, Rust-based, FIPS-certified cryptographic library.