Driving Impact: NVIDIA Expands Automotive Ecosystem to Bring Physical AI to the Streets
blogs.nvidia.com
The autonomous vehicle (AV) revolution is here, and NVIDIA is at its forefront, bringing more than two decades of automotive computing, software and safety expertise to power innovation from the cloud to the car.

At NVIDIA GTC, a global AI conference taking place this week in San Jose, California, dozens of transportation leaders are showcasing their latest advancements with NVIDIA technologies that span passenger cars, trucks, commercial vehicles and more.

Mobility leaders are increasingly turning to NVIDIA's three core accelerated compute platforms: NVIDIA DGX systems for training the AI-based stack in the data center; NVIDIA Omniverse and NVIDIA Cosmos running on NVIDIA OVX systems for simulation and synthetic data generation; and the NVIDIA DRIVE AGX in-vehicle computer to process real-time sensor data for safe, highly automated and autonomous driving capabilities.

For manufacturers and developers in the multitrillion-dollar auto industry, this unlocks new possibilities for designing, manufacturing and deploying functionally safe, intelligent mobility solutions, offering consumers safer, smarter and more enjoyable experiences.

Transforming Passenger Vehicles

The U.S.'s largest automaker, General Motors (GM), is collaborating with NVIDIA to develop and build its next-generation vehicles, factories and robots using NVIDIA's accelerated compute platforms. GM has been investing in NVIDIA GPU platforms for training AI models. The companies' collaboration now expands to include optimizing factory planning using Omniverse with Cosmos and deploying next-generation vehicles at scale, accelerated by NVIDIA DRIVE AGX.
This will help GM build physical AI systems tailored to its company vision, craft and know-how, and ultimately enable mobility that's safer, smarter and more accessible than ever.

Volvo Cars, which is using the NVIDIA DRIVE AGX in-vehicle computer in its next-generation electric vehicles, and its subsidiary Zenseact use the NVIDIA DGX platform to analyze and contextualize sensor data, unlock new insights and train future safety models that will enhance overall vehicle performance and safety.

Lenovo has teamed with robotics company Nuro to create a robust end-to-end system for level 4 autonomous vehicles that prioritizes safety, reliability and convenience. The system is built on NVIDIA DRIVE AGX in-vehicle compute.

Advancements in Trucking

NVIDIA's AI-driven technologies are also supercharging trucking, helping address pressing challenges like driver shortages, rising e-commerce demands and high operational costs. NVIDIA DRIVE AGX delivers the computational muscle needed for safe, reliable and efficient autonomous operations, improving road safety and logistics on a massive scale.

Gatik is integrating DRIVE AGX for the onboard AI processing necessary for its freight-only class 6 and 7 trucks, manufactured by Isuzu Motors, which offer driverless middle-mile delivery of a wide range of goods to Fortune 500 customers including Tyson Foods, Kroger and Loblaw.

Uber Freight is also adopting DRIVE AGX as the AI computing backbone of its current and future carrier fleets, sustainably enhancing efficiency and saving costs for shippers.

Torc is developing a scalable, physical AI compute system for autonomous trucks.
The system uses NVIDIA DRIVE AGX in-vehicle compute and the NVIDIA DriveOS operating system with Flex's Jupiter platform and manufacturing capabilities to support Torc's productization and scaled market entry in 2027.

Growing Demand for DRIVE AGX

The NVIDIA DRIVE AGX Orin platform is the AI brain behind today's intelligent fleets, and the next wave of mobility is already arriving, as production vehicles built on the NVIDIA DRIVE AGX Thor centralized car computer start to hit the roads.

Magna, a key global automotive supplier, is helping to meet the surging demand for the NVIDIA Blackwell architecture-based DRIVE Thor platform, designed for the most demanding processing workloads, including those involving generative AI, vision language models and large language models (LLMs). Magna will develop driving systems built with DRIVE AGX Thor for integration in automakers' vehicle roadmaps, delivering active safety and comfort functions along with interior cabin AI experiences.

Simulation and Data: The Backbone of AV Development

Earlier this year, NVIDIA announced the Omniverse Blueprint for AV simulation, a reference workflow for creating rich 3D worlds for autonomous vehicle training, testing and validation. The blueprint is expanding to include NVIDIA Cosmos world foundation models (WFMs) to amplify photoreal data variation.

Unveiled at the CES trade show in January, Cosmos is already being adopted in automotive, including by Plus, which is embedding Cosmos physical AI models into its SuperDrive technology, accelerating the development of level 4 self-driving trucks.

Foretellix is extending its integration of the blueprint, using the Cosmos Transfer WFM to add conditions like weather and lighting to its sensor simulation scenarios to achieve greater situation diversity.
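The diversity gain from layering conditions like weather and lighting onto existing scenarios is combinatorial: each new axis multiplies the scenario count. The toy sketch below illustrates the idea in plain Python; the scenario and condition names are illustrative stand-ins, not part of any NVIDIA or Foretellix API.

```python
from itertools import product

# Illustrative base scenarios and condition axes (hypothetical names).
base_scenarios = ["highway_merge", "unprotected_left_turn", "pedestrian_crossing"]
weather = ["clear", "rain", "fog", "snow"]
lighting = ["day", "dusk", "night"]

# Every (scenario, weather, lighting) combination becomes a distinct test case,
# so 3 base scenarios expand to 3 x 4 x 3 = 36 variants.
variants = [
    {"scenario": s, "weather": w, "lighting": l}
    for s, w, l in product(base_scenarios, weather, lighting)
]

print(len(variants))  # 36
```

In a real pipeline the expansion is done by a world foundation model re-rendering the same scene under each condition, rather than by enumerating labels, but the multiplicative effect on situation diversity is the same.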
Mcity is integrating the blueprint into the digital twin of its AV testing facility to enable physics-based modeling of camera, lidar, radar and ultrasonic sensor data.

CARLA, which offers an open-source AV simulator, has integrated the blueprint to deliver high-fidelity sensor simulation. Global systems integrator Capgemini will be the first to use CARLA's Omniverse integration for enhanced sensor simulation in its AV development platform.

NVIDIA is using Nexar's extensive, high-quality, edge-case data to train and fine-tune NVIDIA Cosmos simulation capabilities. Nexar is tapping into Cosmos, neural infrastructure models and the NVIDIA DGX Cloud platform to supercharge its AI development, refining AV training, high-definition mapping and predictive modeling.

Enhancing In-Vehicle Experiences With NVIDIA AI Enterprise

Mobility leaders are integrating the NVIDIA AI Enterprise software platform, running on DRIVE AGX, to enhance in-vehicle experiences with generative and agentic AI.

At GTC, Cerence AI is showcasing Cerence xUI, its new LLM-based AI assistant platform that will advance the next generation of agentic in-vehicle user experiences. The Cerence xUI hybrid platform runs in the cloud as well as onboard the vehicle, optimized first on NVIDIA DRIVE AGX Orin. As the foundation for Cerence xUI, the CaLLM family of language models is based on open-source foundation models and fine-tuned on Cerence AI's automotive dataset.
Tapping into NVIDIA AI Enterprise and bolstering inference performance, including through the NVIDIA TensorRT-LLM library and NVIDIA NeMo, Cerence AI has optimized CaLLM to serve as the central agentic orchestrator, facilitating enriched driver experiences at the edge and in the cloud.

SoundHound will also be demonstrating its next-generation in-vehicle voice assistant, which uses generative AI at the edge with NVIDIA DRIVE AGX, enhancing the in-car experience by bringing cloud-based LLM intelligence directly to vehicles.

The Complexity of Autonomy and NVIDIA's Safety-First Solution

Safety is the cornerstone of deploying highly automated and autonomous vehicles on roads at scale. But building AVs is one of today's most complex computing challenges, demanding immense computational power, precision and an unwavering commitment to safety.

AVs and highly automated cars promise to extend mobility to those who need it most, reducing accidents and saving lives. To help deliver on this promise, NVIDIA has developed NVIDIA Halos, a comprehensive full-stack safety system that unifies vehicle architecture, AI models, chips, software, tools and services for the safe development of AVs, from the cloud to the car.

NVIDIA will host its inaugural AV Safety Day at GTC today, featuring in-depth discussions on automotive safety frameworks and implementation. In addition, NVIDIA will host Automotive Developer Day on Thursday, March 20, offering sessions on the latest advancements in end-to-end AV development and beyond.

New Tools for AV Developers

NVIDIA also released new NVIDIA NIM microservices for automotive, designed to accelerate development and deployment of end-to-end stacks from the cloud to the car.
The new NIM microservices for in-vehicle applications, which use the nuScenes dataset by Motional, include:

- BEVFormer, a state-of-the-art transformer-based model that fuses multi-frame camera data into a unified bird's-eye-view representation for 3D perception.
- SparseDrive, an end-to-end autonomous driving model that performs motion prediction and planning simultaneously, outputting a safe planning trajectory.

For automotive enterprise applications, NVIDIA offers a variety of models, including NV-CLIP, a multimodal transformer model that generates embeddings from images and text; Cosmos Nemotron, a vision language model that queries and summarizes images and videos for multimodal understanding and AI-powered perception; and many more.

Learn more about NVIDIA's latest automotive news by watching the NVIDIA GTC keynote, and register for sessions from NVIDIA and industry leaders at the show, which runs through March 21.
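As a closing aside on the embedding models mentioned above: the CLIP-style approach behind a model like NV-CLIP maps images and text into a shared vector space, so text queries can rank images by cosine similarity. The sketch below illustrates only that retrieval mechanic; the vectors are hand-made stand-ins, not real model outputs, and the filenames and query are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend per-image embeddings from a CLIP-style encoder (toy 3-d vectors).
image_embeddings = {
    "dashcam_rain.jpg": [0.9, 0.1, 0.2],
    "parking_lot.jpg": [0.1, 0.8, 0.3],
}

# Pretend text embedding for the query "driving in heavy rain".
text_embedding = [0.85, 0.15, 0.25]

# Because image and text live in the same space, retrieval is just a ranking.
best = max(image_embeddings, key=lambda k: cosine(image_embeddings[k], text_embedding))
print(best)  # dashcam_rain.jpg
```

Real deployments use high-dimensional normalized embeddings and an approximate nearest-neighbor index rather than a linear scan, but the shared-space ranking is the core idea.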