NVIDIA
This is the Official NVIDIA Page
11 people like this
244 Posts
2 Photos
0 Videos
0 Reviews
Recent Updates
  • NoTraffic Reduces Road Delays, Carbon Emissions With NVIDIA AI and Accelerated Computing
    blogs.nvidia.com
    More than 90 million new vehicles are introduced to roads across the globe every year, leading to an annual 12% increase in traffic congestion, according to NoTraffic, a member of the NVIDIA Inception program for cutting-edge startups and the NVIDIA Metropolis vision AI ecosystem. Still, 99% of the world's traffic signals run on fixed timing plans, leading to unnecessary congestion and delays.

    To reduce such inefficiencies, mitigate car accidents and cut carbon emissions from vehicles, NoTraffic's AI Mobility platform predicts road scenarios, helps ensure continuous traffic flow, minimizes stops and optimizes safety at intersections across the U.S., Canada and elsewhere. The platform, which enables road infrastructure management at both local-intersection and city-grid scale, integrates NVIDIA-powered software and hardware at the edge, under a cloud-based operating system. It's built using the NVIDIA Jetson edge AI platform, NVIDIA accelerated computing and the NVIDIA Metropolis vision AI developer stack.

    "With NVIDIA accelerated computing, we achieved a 3x speedup in AI training and doubled AI Mobility's energy efficiency," said Uriel Katz, cofounder and chief technology officer of NoTraffic. "These optimizations in time, money and energy efficiency are all bolstered by NVIDIA Jetson, which sped our image preprocessing tasks by 40x compared with a CPU-only workflow. Plus, GPU-accelerated NVIDIA CUDA libraries increased our model throughput by 30x."

    These libraries include the NVIDIA TensorRT ecosystem of application programming interfaces for high-performance deep learning inference and the NVIDIA cuDNN library of primitives for deep neural networks.

    Taming Traffic in Tucson, Vancouver and Beyond

    In Tucson, Arizona, more than 80 intersections are tapping into the NoTraffic AI Mobility platform, which has enabled up to a 46% reduction in road delays during rush hours and a half-mile reduction in peak queue length. The work is an expansion of NoTraffic's initial deployment on Tucson's West Ajo Way, an effort that led to an average delay reduction of 23% for drivers.

    Since installation, NoTraffic technology has helped free Tucson drivers from over 1.25 million hours stuck in traffic, the company estimates, representing an economic benefit of over $24.3 million. The company has also tracked a nearly 80% reduction in red-light runners since its platform was deployed, helping improve safety at Tucson intersections. By reducing travel times, drivers have also saved over $1.6 million in gas, cutting emissions and improving air quality for an impact equivalent to planting 650,000 trees.

    In Vancouver, Canada, the University of British Columbia (UBC) is using the NoTraffic platform and Rogers Communications' 5G-connected, AI-enabled smart-traffic platform to reduce both pedestrian delays and greenhouse gas emissions. Rogers Communications' 5G networks provide robust, stable connectivity to the sensors embedded on the traffic poles. This advanced network infrastructure enhances the NoTraffic platform's efficacy and scalability, as the improved speed and reduced latency of 5G networks mean traffic data can be processed in real time. This is critical for predicting numerous potential traffic scenarios, adjusting signal timings and prioritizing road users accordingly.

    With AI Mobility deployed at seven intersections across the campus, the university experienced up to a 40% reduction in pedestrian delays and significant decreases in vehicle wait time. In addition, UBC cuts 74 tons of carbon dioxide emissions each year thanks to the NoTraffic and Rogers solution, which is powered by NVIDIA edge AI and accelerated computing.

    The platform is also in action on the roads of Phoenix, Arizona, and Baltimore, Maryland, and in 35 states through more than 200 agencies across the U.S. and Canada.

    Honk If You Love Reducing Congestion, Carbon Emissions

    The NoTraffic AI Mobility platform offers local AI-based predictions that, based on sensor inputs at multiple intersections, analyze numerous traffic scenarios up to two minutes in advance. It can adapt to real-time changes in traffic patterns and volumes, send messages between intersections and run optimization algorithms that control traffic signals, improving overall transportation efficiency and safety through cloud connectivity. Speedups in the AI Mobility platform mean quicker optimization of traffic signals, and reduced congestion on the roads means reduced carbon emissions from vehicles.

    NoTraffic estimates that for every city optimized with this platform, eight hours of traffic time could be saved per driver. Plus, with over 300,000 signalized intersections in the U.S., the company says this could result in a total of $14 billion in economic savings per year.

    Learn more about the NVIDIA Metropolis platform and how it's used in smart cities and spaces.
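    To make the idea of signal-timing optimization concrete, here is a minimal sketch, not NoTraffic's actual algorithm: given predicted arrival rates for two conflicting phases and a fixed cycle length, search for the green split that minimizes total approximate delay using Webster's uniform-delay term. The cycle length, saturation flow and arrival rates are all illustrative assumptions.

```python
# Hypothetical sketch of adaptive signal timing -- NOT NoTraffic's real system.
# Given predicted vehicle arrival rates (vehicles/sec) for two conflicting
# phases, pick the green split of a fixed cycle that minimizes total delay,
# approximated with Webster's uniform-delay term.

CYCLE = 90.0      # cycle length in seconds (assumed)
SATURATION = 0.5  # discharge rate during green, vehicles/sec (assumed)

def uniform_delay(green, arrivals):
    """Webster's uniform delay per vehicle for one phase, in seconds."""
    red_ratio = 1.0 - green / CYCLE
    x = arrivals * CYCLE / (SATURATION * green)  # degree of saturation
    x = min(x, 0.99)                             # keep the formula stable
    return 0.5 * CYCLE * red_ratio**2 / (1.0 - red_ratio * x)

def best_split(arrivals_a, arrivals_b, min_green=10.0, step=1.0):
    """Search green times for phase A; phase B gets the rest of the cycle."""
    best = None
    g = min_green
    while g <= CYCLE - min_green:
        total = (arrivals_a * uniform_delay(g, arrivals_a)
                 + arrivals_b * uniform_delay(CYCLE - g, arrivals_b))
        if best is None or total < best[1]:
            best = (g, total)
        g += step
    return best[0]

# A busier main street should earn the longer share of green time.
green_main = best_split(arrivals_a=0.30, arrivals_b=0.10)
print(green_main > CYCLE / 2)
```

    A real deployment would replace the fixed arrival rates with the platform's rolling two-minute predictions and re-solve continuously, but the shape of the problem, predicted demand in, signal timings out, is the same.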
  • Fantastic Four-ce Awakens: Season One of Marvel Rivals Joins GeForce NOW
    blogs.nvidia.com
    Time to suit up, members. The multiverse is about to get a whole lot cloudier as GeForce NOW opens a portal to the first season of hit game Marvel Rivals from NetEase Games.

    Members can now game in a new dimension with expanded support for virtual- and mixed-reality devices. This week's GeForce NOW app update 2.0.70 begins rolling out compatibility for Apple Vision Pro spatial computers, Meta Quest 3 and 3S, and Pico 4 and 4 Ultra devices.

    Plus, no GFN Thursday is complete without new games. Get ready for seven new titles joining the cloud this week, including multiplayer online battle arena game SMITE 2.

    Invisible No More

    Sink your teeth into the Fantastic Four.

    Eternal night falls for Marvel Rivals, the superhero, team-based player-vs.-player shooter that lets players assemble an ever-evolving all-star squad of Super Heroes and Super Villains battling with unique powers across a dynamic lineup of destructible maps from the Marvel Multiverse.

    The Fantastic Four will be playable in season one of the game. For Eternal Night Falls, Invisible Woman and Mister Fantastic will be released in the first half of the season, followed by Human Torch and The Thing in the second. Season one will also feature three new maps, special events and an all-new Doom Match game mode.

    Stream it all with a GeForce NOW membership across devices, from an underpowered laptop, a Mac, a Steam Deck or a supported virtual- or mixed-reality device.

    Head in the Clouds

    Headset on, latency gone.

    The latest GeForce NOW app update is expanding cloud streaming to Apple Vision Pro spatial computers, Meta Quest 3 and 3S, and Pico 4 and 4 Ultra virtual- and mixed-reality headsets starting this week. These newly supported devices give members access to an extensive library of games to stream through GeForce NOW. Members can gain access by visiting play.geforcenow.com or via the Android-native client on the PICO store. The rollout will be complete on Tuesday, Jan. 21.

    Members will be able to transform their space into a personal gaming theater by playing their favorite PC games, such as the latest season of Marvel Rivals, Dragon Age and more, on massive virtual screens. With access to NVIDIA technologies, including ray tracing and NVIDIA DLSS on supported games, these devices now provide an enhanced visual experience with the highest frame rates and lowest latency.

    Here Comes the New

    Become a god and wage war.

    SMITE 2 is now free to play and has brought a huge update to mark the start of open beta. New god Aladdin joins, along with SMITE 1 fan favorites Geb, Agni, Mulan and Ullr, bringing the total god roster to 45. Twenty of the gods now feature Aspects, an optional spin on each god's ability kit that opens up even more strategic options. The 3v3 mode Joust has also arrived, featuring a brand-new, Arthurian-themed map. Assault and Duel game modes are also available. Finally, the Conquest mode brings a wealth of updates to the map, features and balance.

    These seven titles are joining the cloud this week:
      • Hyper Light Breaker (New release on Steam, Jan. 14)
      • Aloft (New release on Steam, Jan. 15)
      • Assetto Corsa EVO (New release on Steam, Jan. 16)
      • Generation Zero (Xbox, available on PC Game Pass)
      • HOT WHEELS UNLEASHED 2 Turbocharged (Xbox, available on PC Game Pass)
      • SMITE 2 (Steam)
      • Voidwrought (Steam)

    What are you planning to play this weekend? Let us know on X or in the comments below.

    Which role is your main? Tank / Damage / Support (NVIDIA GeForce NOW @NVIDIAGFN, January 15, 2025)
  • NVIDIA Releases NIM Microservices to Safeguard Applications for Agentic AI
    blogs.nvidia.com
    AI agents are poised to transform productivity for the world's billion knowledge workers with "knowledge robots" that can accomplish a variety of tasks. To develop AI agents, enterprises need to address critical concerns like trust, safety, security and compliance.

    New NVIDIA NIM microservices for AI guardrails, part of the NVIDIA NeMo Guardrails collection of software tools, are portable, optimized inference microservices that help companies improve the safety, precision and scalability of their generative AI applications.

    Central to the orchestration of the microservices is NeMo Guardrails, part of the NVIDIA NeMo platform for curating, customizing and guardrailing AI. NeMo Guardrails helps developers integrate and manage AI guardrails in large language model (LLM) applications. Industry leaders Amdocs, Cerence AI and Lowe's are among those using NeMo Guardrails to safeguard AI applications.

    Developers can use the NIM microservices to build more secure, trustworthy AI agents that provide safe, appropriate responses within context-specific guidelines and are bolstered against jailbreak attempts. Deployed in customer service across industries like automotive, finance, healthcare, manufacturing and retail, the agents can boost customer satisfaction and trust.

    One of the new microservices, built for moderating content safety, was trained using the Aegis Content Safety Dataset, one of the highest-quality, human-annotated data sources in its category. Curated and owned by NVIDIA, the dataset is publicly available on Hugging Face and includes over 35,000 human-annotated data samples flagged for AI safety and jailbreak attempts to bypass system restrictions.

    NVIDIA NeMo Guardrails Keeps AI Agents on Track

    AI is rapidly boosting productivity for a broad range of business processes. In customer service, it's helping resolve customer issues up to 40% faster. However, scaling AI for customer service and other AI agents requires secure models that prevent harmful or inappropriate outputs and ensure the AI application behaves within defined parameters.

    NVIDIA has introduced three new NIM microservices for NeMo Guardrails that help AI agents operate at scale while maintaining controlled behavior:
      • Content safety: safeguards AI against generating biased or harmful outputs, ensuring responses align with ethical standards.
      • Topic control: keeps conversations focused on approved topics, avoiding digression or inappropriate content.
      • Jailbreak detection: adds protection against jailbreak attempts, helping maintain AI integrity in adversarial scenarios.

    By applying multiple lightweight, specialized models as guardrails, developers can cover gaps that may occur when only more general global policies and protections exist, since a one-size-fits-all approach doesn't properly secure and control complex agentic AI workflows.

    Small language models, like those in the NeMo Guardrails collection, offer lower latency and are designed to run efficiently, even in resource-constrained or distributed environments. This makes them ideal for scaling AI applications in industries such as healthcare, automotive and manufacturing, in locations like hospitals or warehouses.

    Industry Leaders and Partners Safeguard AI With NeMo Guardrails

    NeMo Guardrails, available to the open-source community, helps developers orchestrate multiple AI software policies, called rails, to enhance LLM application security and control. It works with NVIDIA NIM microservices to offer a robust framework for building AI systems that can be deployed at scale without compromising on safety or performance.

    Amdocs, a leading global provider of software and services to communications and media companies, is harnessing NeMo Guardrails to enhance AI-driven customer interactions by delivering safer, more accurate and contextually appropriate responses.

    "Technologies like NeMo Guardrails are essential for safeguarding generative AI applications, helping make sure they operate securely and ethically," said Anthony Goonetilleke, group president of technology and head of strategy at Amdocs. "By integrating NVIDIA NeMo Guardrails into our amAIz platform, we are enhancing the platform's Trusted AI capabilities to deliver agentic experiences that are safe, reliable and scalable. This empowers service providers to deploy AI solutions safely and with confidence, setting new standards for AI innovation and operational excellence."

    Cerence AI, a company specializing in AI solutions for the automotive industry, is using NVIDIA NeMo Guardrails to help ensure its in-car assistants deliver contextually appropriate, safe interactions powered by its CaLLM family of large and small language models.

    "Cerence AI relies on high-performing, secure solutions from NVIDIA to power our in-car assistant technologies," said Nils Schanz, executive vice president of product and technology at Cerence AI. "Using NeMo Guardrails helps us deliver trusted, context-aware solutions to our automaker customers and provide sensible, mindful and hallucination-free responses. In addition, NeMo Guardrails is customizable for our automaker customers and helps us filter harmful or unpleasant requests, securing our CaLLM family of language models from unintended or inappropriate content delivery to end users."

    Lowe's, a leading home improvement retailer, is leveraging generative AI to build on the deep expertise of its store associates. By providing enhanced access to comprehensive product knowledge, these tools empower associates to answer customer questions, helping them find the right products to complete their projects and setting a new standard for retail innovation and customer satisfaction.

    "We're always looking for ways to help associates go above and beyond for our customers," said Chandhu Nair, senior vice president of data, AI and innovation at Lowe's. "With our recent deployments of NVIDIA NeMo Guardrails, we ensure AI-generated responses are safe, secure and reliable, enforcing conversational boundaries to deliver only relevant and appropriate content."

    To further accelerate the adoption of AI safeguards in AI application development and deployment in retail, NVIDIA recently announced at the NRF show that its NVIDIA AI Blueprint for retail shopping assistants incorporates NeMo Guardrails microservices for creating more reliable and controlled customer interactions during digital shopping experiences.

    Consulting leaders Taskus, Tech Mahindra and Wipro are also integrating NeMo Guardrails into their solutions to provide their enterprise clients safer, more reliable and controlled generative AI applications.

    NeMo Guardrails is open and extensible, offering integration with a robust ecosystem of leading AI safety model and guardrail providers, as well as AI observability and development tools. It supports integration with ActiveFence's ActiveScore, which filters harmful or inappropriate content in conversational AI applications, and provides visibility, analytics and monitoring. Hive, which provides its AI-generated content detection models for images, video and audio as NIM microservices, can be easily integrated and orchestrated in AI applications using NeMo Guardrails. The Fiddler AI Observability platform easily integrates with NeMo Guardrails to enhance AI guardrail monitoring capabilities. And Weights & Biases, an end-to-end AI developer platform, is expanding the capabilities of W&B Weave by adding integrations with NeMo Guardrails microservices, building on its existing portfolio of NIM integrations for optimized AI inferencing in production.

    NeMo Guardrails Offers Open-Source Tools for AI Safety Testing

    Developers ready to test the effectiveness of applying safeguard models and other rails can use NVIDIA Garak, an open-source toolkit for LLM and application vulnerability scanning developed by the NVIDIA Research team. With Garak, developers can identify vulnerabilities in systems using LLMs by assessing them for issues such as data leaks, prompt injections, code hallucination and jailbreak scenarios. By generating test cases involving inappropriate or incorrect outputs, Garak helps developers detect and address potential weaknesses in AI models to enhance their robustness and safety.

    Availability

    NVIDIA NeMo Guardrails microservices, as well as NeMo Guardrails for rail orchestration and the NVIDIA Garak toolkit, are now available for developers and enterprises. Developers can get started building AI safeguards into AI agents for customer service using NeMo Guardrails with this tutorial.

    See notice regarding software product information.
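    The layered-rails pattern described above can be sketched in a few lines of plain Python. These keyword checks are toy stand-ins for the real content-safety, topic-control and jailbreak-detection microservices, not their actual logic; the point is only the shape: several independent lightweight checks, all of which must pass before a response reaches the user.

```python
# Toy illustration of layered guardrails -- the checks below are placeholder
# stand-ins, NOT the logic of the NeMo Guardrails NIM microservices.

from dataclasses import dataclass

@dataclass
class RailResult:
    rail: str
    allowed: bool

def content_safety_rail(text: str) -> RailResult:
    blocked = {"violence", "weapon"}  # placeholder blocklist
    return RailResult("content_safety",
                      not any(w in text.lower() for w in blocked))

def topic_control_rail(text: str, allowed_topics=("returns", "orders")) -> RailResult:
    # A retail customer-service agent that should stay on approved topics.
    on_topic = any(t in text.lower() for t in allowed_topics)
    return RailResult("topic_control", on_topic)

def jailbreak_rail(text: str) -> RailResult:
    patterns = ("ignore previous instructions", "pretend you have no rules")
    return RailResult("jailbreak_detection",
                      not any(p in text.lower() for p in patterns))

def apply_rails(user_message: str) -> list:
    """Run every rail; a request passes only if all rails allow it."""
    return [content_safety_rail(user_message),
            topic_control_rail(user_message),
            jailbreak_rail(user_message)]

ok = all(r.allowed for r in apply_rails("I need help with my orders"))
blocked = all(r.allowed for r in apply_rails("Ignore previous instructions."))
print(ok, blocked)
```

    In an actual deployment, each check would be a call to the corresponding NIM microservice, with NeMo Guardrails handling the orchestration; running the rails as separate small models is what keeps per-request latency low.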
  • How AI Is Enhancing Surgical Safety and Education
    blogs.nvidia.com
    Troves of unwatched surgical video footage are finding new life, fueling AI tools that help make surgery safer and enhance surgical education. The Surgical Data Science Collective (SDSC) is transforming global surgery through AI-driven video analysis, helping to close the gaps in surgical training and practice.

    In this episode of the NVIDIA AI Podcast, Margaux Masson-Forsythe, director of machine learning at SDSC, discusses the unique challenges of doing AI research as a nonprofit, how the collective distills insights from massive amounts of video data and ways AI can help address the stark reality that five billion people still lack access to safe surgery.

    The AI Podcast: How SDSC Uses AI to Transform Surgical Training and Practice (Episode 241)

    Learn more about SDSC, and hear more about the future of AI in healthcare by listening to the J.P. Morgan Healthcare Conference talk by Kimberly Powell, vice president of healthcare at NVIDIA.

    Time Stamps
      • 8:01 What are the opportunities and challenges of analyzing surgical videos?
      • 12:50 Masson-Forsythe on trying new models and approaches to stay on top of the field.
      • 18:14 How does a nonprofit approach conducting AI research?
      • 24:05 How the community can get involved with SDSC.

    You Might Also Like

    Cofounder of Annalise.ai Aengus Tran on Using AI as a Spell Check for Health Checks
    Harrison.ai has developed annalise.ai, an AI system that automates radiology image analysis to improve diagnosis speed and accuracy, and is now working on Franklin.ai to enhance histopathology diagnosis. CEO Aengus Tran emphasizes the importance of using AI in healthcare to reduce misdiagnoses and improve patient outcomes.

    Matice Founder Jessica Whited on Harnessing Regenerative Species for Medical Breakthroughs
    Scientists at Matice Biosciences, cofounded by regenerative biologist Jessica Whited, are using AI to study the tissue regeneration capabilities of animals like salamanders and planarians, with the goal of developing treatments to help humans heal from injuries without scarring.

    Cardiac Clarity: Dr. Keith Channon Talks Revolutionizing Heart Health With AI
    Caristo Diagnostics has developed an AI-powered solution called Caristo that detects coronary inflammation in cardiac CT scans by analyzing radiometric features in the surrounding fat tissue, helping physicians improve treatment plans and risk predictions.

    Subscribe to the AI Podcast

    Get the AI Podcast through Amazon Music, Apple Podcasts, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, SoundCloud, Spotify, Stitcher and TuneIn.
  • x.com
    RT NVIDIA GeForce: NVIDIA's Bryan Catanzaro and Edward Liu walk through new capabilities and improved technologies in DLSS 4:
      • New Multi Frame Generation for RTX 50 Series
      • Improved Frame Generation for RTX 40 and 50 Series
      • Enhanced Ray Reconstruction, Super Resolution and DLAA for all RTX GPUs
  • x.com
    WIN THE NEXT GENERATION OF RTX! The #GeForceRTX50 sweepstakes is here, and we're giving you multiple chances to WIN a GeForce RTX 5090! Here's your first chance to enter:
      • Like this post
      • Comment #GeForceRTX50
  • x.com
    Re Learn More: https://nvda.ws/4a77b28
  • x.com
    Our first #GeForceRTX50 spotlight: GeForce RTX 5090 Founders Edition
      • DLSS 4 with Multi Frame Gen
      • Ultimate responsiveness with Reflex 2
      • Double Flow Through design
    Want to WIN the next generation of RTX? Comment #GeForceRTX50. Good luck!
  • RT Wondershare Filmora Video Editor: Filmora will adapt New GeForce RTX 50 Series GPUs soon!To empower your great creation! Smoother video editing e...
    x.com
    RT Wondershare Filmora Video Editor: Filmora will support the new GeForce RTX 50 Series GPUs soon to empower your great creations!
      • Smoother video editing experience
      • Lossless video quality at smaller file sizes
      • Faster video export speed and more
    #Filmora #WednesdayMotivation
    NVIDIA Studio: Announced at #CES2025: New GeForce RTX 50 Series GPUs revolutionize content creation.
      • FP4 support for faster GenAI
      • NVIDIA NIM Microservices & Blueprints
      • RTX- and AI-enhanced video editing & livestreaming
      • DLSS 4 for 3D creation and more
    https://nvda.ws/3PrO1ui
  • x.com
    Re @Filmora_Editor Incredible!
  • RT NVIDIA GeForce: GeForce RTX 5090 MIC Drop in Vegas with @steveaoki at OMNIA
    x.com
    RT NVIDIA GeForce: GeForce RTX 5090 MIC Drop in Vegas with @steveaoki at OMNIA
  • That's a wrap, #CES2025! Thank you to all who tuned in to the keynote, announcements, and said hello in-person!
    x.com
    That's a wrap, #CES2025! Thank you to all who tuned in to the keynote, announcements, and said hello in-person!
  • x.com
    What inspires you more: the real world or the digital one?
  • Who's ready to hit the slopes? From snowy peaks to ski lifts, this artwork from bhansendesign (IG) captures the thrill of a perfect day on th...
    x.com
    Who's ready to hit the slopes? From snowy peaks to ski lifts, this artwork from bhansendesign (IG) captures the thrill of a perfect day on the mountain. Share your winter-themed creations with #WinterArtChallenge!
  • NVIDIA GTC 2025: Quantum Day to Illuminate the Future of Quantum Computing
    blogs.nvidia.com
    Quantum computing is one of the most exciting areas in computer science, promising progress in accelerated computing beyond what's considered possible today.

    It's expected that the technology will tackle myriad problems that were once deemed impractical, or even impossible, to solve. Quantum computing promises huge leaps forward for fields spanning drug discovery and materials development to financial forecasting. But just as exciting as quantum computing's future are the breakthroughs already being made today in quantum hardware, error correction and algorithms.

    NVIDIA is celebrating and exploring this remarkable progress in quantum computing by announcing its first Quantum Day at GTC 2025, on March 20. This new focus area brings together leading experts for a comprehensive and balanced perspective on what businesses should expect from quantum computing in the coming decades, mapping the path toward useful quantum applications.

    Discussing the state of the art in quantum computing, NVIDIA founder and CEO Jensen Huang will share the stage with executives from industry leaders, including:
      • Alice & Bob
      • Atom Computing
      • D-Wave
      • Infleqtion
      • IonQ
      • Pasqal
      • PsiQuantum
      • Quantinuum
      • Quantum Circuits
      • QuEra Computing
      • Rigetti
      • SEEQC

    Learn About Quantum Computing at NVIDIA GTC

    Quantum Day will feature:
      • Sessions exploring what's possible and available now in quantum computing, and where quantum technologies are headed, hosted by Huang and representatives from across the quantum community.
      • A developer day session outlining how partners are working with NVIDIA to advance quantum computing.
      • Educational sessions providing attendees with hands-on training on how to use the most advanced tools to explore and develop quantum hardware and applications.
      • A Quantum Day special address, unveiling the latest news and advances from NVIDIA in quantum computing, shortening the timeline to useful applications.

    Quantum Day at GTC 2025 is the destination for leaders and experts seeking to chart a course into the future of quantum computing. Register for GTC.
  • Healthcare Leaders, NVIDIA CEO Share AI Innovation Across the Industry
    blogs.nvidia.com
    AI is making inroads across the entire healthcare industry, from genomic research to drug discovery, clinical trial workflows and patient care.

    In a fireside chat Monday during the annual J.P. Morgan Healthcare Conference in San Francisco, NVIDIA founder and CEO Jensen Huang took the stage with industry leaders progressing each of these areas to advance biomedical science and meet the global demand for patient care.

    Healthcare has a more severe labor shortage than any other field: the industry is expected to be short 10 million workers by the end of the decade, according to the World Health Organization. By deploying foundation models to narrow the field of potential drug molecules and streamlining workflows with agentic AI, these innovators are helping meet the global demand by enabling clinicians and researchers to achieve more with their limited time.

    They include industry luminaries Patrick Collison, cofounder of Stripe and the Arc Institute nonprofit research organization; Christina Zorn, chief administrative officer at Mayo Clinic; Jacob Thaysen, CEO of DNA sequencing technology leader Illumina; and Ari Bousbib, chairman and CEO of clinical research and commercial services provider IQVIA. The four organizations announced partnerships with NVIDIA at J.P. Morgan Healthcare to advance drug discovery, accelerate pathology, enhance genomic research and augment healthcare with agentic AI, respectively.

    AI's Evolution, From Predicting to Reasoning

    Huang opened the event by reflecting on the tremendous progress in AI over the past year, spanning large language models, visual generative AI and physical AI for robotics, and outlining a vision for a future involving agentic AI models that are capable of reasoning and problem-solving.

    "The future of AI is likely to involve a fair amount of thinking," he said. "The ability for AI to now reason, plan and act is foundational to the way we're going to go forward."

    To support the development of these AI models, NVIDIA recently unveiled NVIDIA Cosmos, a physical AI platform that includes state-of-the-art generative world foundation models. These models apply the same technique as a language model that predicts the next word in a sentence, instead predicting the next action a robot should take.

    "The idea that you can generate the next frame for a video has become common sense," Huang said. "And if that's the case, is it possible that generating the next articulation could be common sense? And the answer is absolutely."

    AI for Every Modality

    Channeling a late-night talk show host, Huang called up the guest speakers one by one to discuss their work accelerating biomedical research with AI innovation.

    First up was Collison, who shared the Arc Institute's mission to help researchers tackle long-term scientific challenges by providing multiyear funding that enables them to focus on innovative research instead of grant writing, which he believes will spur breakthroughs that are unfeasible to pursue under today's funding models.

    "A lot of the low-hanging fruit, the stuff that is easier to discover, we did," Collison said, referring to the development of groundbreaking treatments like antibiotics, chemotherapy and more in decades past. "Today, it's immensely harder."

    Already, Arc Institute's investments have resulted in Evo, a powerful foundation model that understands the languages of DNA, RNA and proteins. The institute is now working with NVIDIA on foundation models for biology that can advance applications for drug discovery, synthetic biology across multiple scales of complexity, disease and evolution research, and more.

    Next, Mayo Clinic's Zorn shared how the research hospital is applying NVIDIA technology to one of the world's largest pathology databases to transform cancer care with AI insights.

    "We saw a paradigm shift in healthcare. You're either going to disrupt from within or you're going to be disrupted," she said. "We knew we had to embrace tech in a way that was really going to optimize everything we do."

    Zorn also shared how Mayo Clinic is approaching the future healthcare worker shortage by investing in robotics. "We're going to use, essentially, the robots to be a member of the healthcare team in the healthcare spaces," she said.

    The evening wrapped with two leaders in healthcare information reflecting on ways multimodal AI models can uncover insights and streamline processes to boost the capabilities of human experts.

    "Combining other information, other modalities, other omics is going to give us much deeper insight into biology. But while DNA was very difficult itself, when you then combine all the omics, it becomes exponentially more challenging," said Illumina's Thaysen. "It's getting so complicated that we do need huge computing power and AI to really understand and process it."

    IQVIA is working with NVIDIA to build custom foundation models and agentic AI workflows trained on the organization's vast healthcare-specific information and deep domain expertise. Use cases include boosting the efficiency of clinical trials and optimizing planning for the launch of therapies and medical devices. The company is committed to using AI responsibly, ensuring that its AI-powered capabilities are grounded in privacy, regulatory compliance and patient safety.

    "The opportunity here is to try to reduce the dependencies and sequential series of steps that require a lot of interactions, and handle them without human touch," said Bousbib. "AI agents will be able to eliminate the white space, that is, the time waiting for humans to complete those tasks. There's a great opportunity to reduce time and costs."

    NVIDIA at J.P. Morgan Healthcare

    The fireside chat followed a presentation at the conference by Kimberly Powell, NVIDIA's vice president of healthcare. In her talk, Powell discussed the industry collaborations and announced new resources for healthcare and life sciences developers. These include an NVIDIA NIM microservice for GenMol, a generative AI model for controlled, high-performance molecular generation, and an NVIDIA BioNeMo Blueprint for protein binder design, part of the NVIDIA Blueprints collection of enterprise-grade reference workflows for agentic and generative AI use cases.

    For more from NVIDIA at the J.P. Morgan Healthcare Conference, listen to the audio recording of Powell's session. Subscribe to NVIDIA healthcare news.

    Main image above features, from left to right, Illumina's Jacob Thaysen, Mayo Clinic's Christina Zorn, Arc Institute's Patrick Collison, IQVIA's Ari Bousbib and NVIDIA's Jensen Huang.
  • NVIDIA and IQVIA Build Domain-Expert Agentic AI for Healthcare and Life Sciences
    blogs.nvidia.com
    IQVIA, the world's leading provider of clinical research services, commercial insights and healthcare intelligence, is working with NVIDIA to build custom foundation models and agentic AI workflows that can accelerate research, clinical development and access to new treatments.

    AI applications trained on the organization's vast healthcare-specific information and guided by its deep domain expertise will help the industry boost the efficiency of clinical trials and optimize planning for the launch of therapies and medical devices, ultimately improving patient outcomes. Operating in over 100 countries, IQVIA has built the largest global healthcare network and is uniquely connected to the ecosystem with the most comprehensive and granular set of information, analytics and technologies in the industry.

    Announced today at the J.P. Morgan Healthcare Conference in San Francisco, IQVIA's collection of models, AI agents and reference workflows will be developed with the NVIDIA AI Foundry platform for building custom models, allowing IQVIA's thousands of pharmaceutical, biotech and medical device customers to benefit from NVIDIA's agentic AI capabilities and IQVIA's technologies, life sciences information and expertise.

    Enabling Industry Applications in Clinical Trials

    The healthcare and life sciences industry generates more information than any other industry in the world, making up 30% of the world's data volume. IQVIA plans to use its unparalleled information assets, analytics and domain expertise, known as IQVIA Connected Intelligence, with the NVIDIA AI Foundry service to build language and multimodal foundation models that will power a collection of customized IQVIA AI agents. These agents are anticipated to be available in predefined workflows, or blueprints, that would accomplish a specific task.

    This partnership aims to accelerate the innovation cycle of IQVIA Healthcare-grade AI. IQVIA has been leading in the responsible use of AI, ensuring that its AI-powered capabilities are grounded in privacy, regulatory compliance and patient safety. IQVIA Healthcare-grade AI represents the company's commitment to these principles.

    One key opportunity area is in clinical development, when clinical trials are conducted for new drugs. The overall process takes about 11 years on average, and each trial has a multitude of workflows that could be supported by AI agents. For example, just starting a clinical trial involves site selection, participant recruitment, regulatory submissions and tight communication between study sites and their sponsors.

    NVIDIA AI Foundry Streamlines Custom Model Development

    To streamline the development of these AI agents, IQVIA is using tools within NVIDIA AI Foundry and the NVIDIA AI Enterprise software platform, including NVIDIA NIM microservices, especially the Llama Nemotron and Cosmos Nemotron model families; NVIDIA AI Blueprint reference workflows; the NVIDIA NeMo platform for developing custom generative AI; and dedicated capacity on NVIDIA DGX Cloud.

    The NVIDIA AI Blueprint for multimodal PDF data extraction can help IQVIA unlock the immense amount of healthcare text, graphs, charts and tables stored in PDF files, bringing previously inaccessible information to train AI models and agents for domain-specific and even customer-specific applications. NVIDIA RAPIDS data science libraries then accelerate the construction of knowledge graphs. Additional AI agents could automate complex, time-consuming tasks, like document generation and patient recruitment, allowing healthcare professionals to focus on strategic decision-making and human interaction.

    Learn more about NVIDIA technologies and their impact on healthcare and life sciences.
  • NVIDIA Statement on the Biden Administration's Misguided AI Diffusion Rule
    blogs.nvidia.com
    For decades, leadership in computing and software ecosystems has been a cornerstone of American strength and influence worldwide. The federal government has wisely refrained from dictating the design, marketing and sale of mainstream computers and software, key drivers of innovation and economic growth.

The first Trump Administration laid the foundation for America's current strength and success in AI, fostering an environment where U.S. industry could compete and win on merit without compromising national security. As a result, mainstream AI has become an integral part of every new application, driving economic growth, promoting U.S. interests and ensuring American leadership in cutting-edge technology.

Today, companies, startups and universities around the world are tapping mainstream AI to advance healthcare, agriculture, manufacturing, education and countless other fields, driving economic growth and unlocking the potential of nations. Built on American technology, the adoption of AI around the world fuels growth and opportunity for industries at home and abroad.

That global progress is now in jeopardy. The Biden Administration now seeks to restrict access to mainstream computing applications with its unprecedented and misguided AI Diffusion rule, which threatens to derail innovation and economic growth worldwide.

In its last days in office, the Biden Administration seeks to undermine America's leadership with a 200+ page regulatory morass, drafted in secret and without proper legislative review. This sweeping overreach would impose bureaucratic control over how America's leading semiconductors, computers, systems and even software are designed and marketed globally. And by attempting to rig market outcomes and stifle competition, the lifeblood of innovation, the Biden Administration's new rule threatens to squander America's hard-won technological advantage.

While cloaked in the guise of an anti-China measure, these rules would do nothing to enhance U.S. security. The new rules would control technology worldwide, including technology that is already widely available in mainstream gaming PCs and consumer hardware. Rather than mitigate any threat, the new Biden rules would only weaken America's global competitiveness, undermining the innovation that has kept the U.S. ahead.

Although the rule is not enforceable for 120 days, it is already undercutting U.S. interests. As the first Trump Administration demonstrated, America wins through innovation, competition and by sharing our technologies with the world, not by retreating behind a wall of government overreach. We look forward to a return to policies that strengthen American leadership, bolster our economy and preserve our competitive edge in AI and beyond.

Ned Finkle is vice president of government affairs at NVIDIA.
  • AI Gets Real for Retailers: 9 Out of 10 Retailers Now Adopting or Piloting AI, Latest NVIDIA Survey Finds
    blogs.nvidia.com
    Artificial intelligence is rapidly becoming the cornerstone of innovation in the retail and consumer packaged goods (CPG) industries.

Forward-thinking companies are using AI to reimagine their entire business models, from in-store experiences to omnichannel digital platforms, including ecommerce, mobile and social channels. This technological wave is simultaneously transforming advertising and marketing, customer engagement and supply chain operations. By harnessing AI, retailers and CPG brands are not just adapting to change, they're actively shaping the future of commerce.

NVIDIA's second annual State of AI in Retail and CPG survey provides insights into the adoption, investment and impact of AI, including generative AI; the top use cases and challenges; and, in a special section this year, the use of AI in the supply chain. It's an in-depth look at the current ecosystem of AI in retail and CPG, and how it's transforming the industries.

Drawn from hundreds of responses from industry professionals, key highlights of the survey show:

- 89% of respondents said they are either actively using AI in their operations or assessing AI projects, including trials and pilots (up from 82% in 2023)
- 87% said AI had a positive impact on increasing annual revenue
- 94% said AI has helped reduce annual operational costs
- 97% said spending on AI would increase in the next fiscal year

Generative AI in Retail Takes Center Stage

Generative AI has found a strong foothold in retail and CPG, with over 80% of companies either using it or piloting projects. Companies are harnessing the technology especially for content generation in marketing and advertising, as well as customer analysis and analytics.

Consistent with last year's survey, over 50% of retailers believe that generative AI is a strategic technology that will be a differentiator in the market.

The top use cases for generative AI in retail include:

- Content generation for marketing (60%)
- Predictive analytics (44%)
- Personalized marketing and advertising (42%)
- Customer analysis and segmentation (41%)
- Digital shopping assistants or copilots (40%)

While some concerns about generative AI exist, specifically around data privacy, security and implementation costs, these concerns haven't dampened retailers' enthusiasm: 93% of respondents said they still plan to increase generative AI investment next year.

AI Across the Retail Landscape

AI use cases have proliferated across nearly every line of business in retail, with over 50% of retailers using AI in more than six different use cases throughout their operations.

In physical stores, the top three use cases are inventory management, analytics and insights, and adaptive advertising. For digital retail, they're marketing and advertising content creation, and hyperpersonalized recommendations. And in the back office, the top use cases are customer analysis and predictive analytics.

AI has made a significant impact in retail and CPG, with improved insights and decision-making (43%) and enhanced employee productivity (42%) listed as top benefits among survey respondents.

The most common AI challenge retailers faced in 2024 was a lack of easy-to-understand and explainable AI tools, underscoring the need for software and solutions, particularly around generative AI and AI agents, that make it easier for companies to use AI and understand how it works.

AI in the Supply Chain

Managing the supply chain has always been a challenge for retail and CPG companies, but it's become increasingly difficult over the last several years due to tumultuous global events and shifting consumer preferences. Companies are feeling the pressure, with 59% of respondents saying that their supply chain challenges have grown in the last year.

Increasingly, companies are turning to AI to help address these challenges, and the impact of these AI solutions is starting to show up in results:

- 58% said AI is helping to improve operational efficiency and throughput.
- 45% are using AI to reduce supply chain costs.
- 42% are employing AI to meet shifting customer expectations.

Investment in AI for supply chain management is set to grow, with 82% of companies planning to increase spending in the next fiscal year.

As the retail and CPG industries continue to embrace the power of AI, the findings from the latest survey underscore a pivotal shift in how businesses operate in a complex new landscape. Leading companies are harnessing advanced technologies such as AI agents and physical AI to enhance efficiency and drive revenue, as well as to position themselves as leaders in innovation, helping redefine the future of retail and CPG.

Download the State of AI in Retail and CPG: 2025 Trends report for in-depth results and insights.

Explore NVIDIA's AI solutions and enterprise-level platforms for retail.
  • Hyundai Motor Group Embraces NVIDIA AI and Omniverse for Next-Gen Mobility
    blogs.nvidia.com
    Driving the future of smart mobility, Hyundai Motor Group (the Group) is partnering with NVIDIA to develop the next generation of safe, secure mobility with AI and industrial digital twins.

Announced today at the CES trade show in Las Vegas, this latest work will elevate Hyundai Motor Group's smart mobility innovation with NVIDIA accelerated computing, generative AI, digital twins and physical AI technologies.

The Group is launching a broad range of AI initiatives across its key mobility products, including software-defined vehicles and robots, along with optimizing its manufacturing lines.

"Hyundai Motor Group is exploring innovative approaches with AI technologies in various fields such as robotics, autonomous driving and smart factory," said Heung-Soo Kim, executive vice president and head of the global strategy office at Hyundai Motor Group. "This partnership is set to accelerate our progress, positioning the Group as a frontrunner in driving AI-empowered mobility innovation."

Hyundai Motor Group will tap into NVIDIA's data-center-level computing and infrastructure to efficiently manage the massive data volumes essential for training its advanced AI models and building a robust autonomous vehicle (AV) software stack.

Manufacturing Intelligence With Simulation and Digital Twins

With the NVIDIA Omniverse platform running on NVIDIA OVX systems, Hyundai Motor Group will build a digital thread across its existing software tools to achieve highly accurate product design and prototyping in a digital twin environment. This will help boost engineering efficiencies, reduce costs and accelerate time to market.

The Group will also work with NVIDIA to create simulated environments for developing autonomous driving systems and validating self-driving applications.

Simulation is becoming increasingly critical to the safe deployment of AVs. It provides a safe way to test self-driving technology in any possible weather, traffic condition or location, as well as in rare or dangerous scenarios.

Hyundai Motor Group will develop applications, like digital twins built with Omniverse technologies, to optimize its existing and future manufacturing lines in simulation. These digital twins can improve production quality, streamline costs and enhance overall manufacturing efficiency.

The company can also build and train industrial robots for safe deployment in its factories using NVIDIA Isaac Sim, a robotics simulation framework built on Omniverse.

NVIDIA is helping advance robotics intelligence with AI tools and libraries for automated manufacturing. As a result, Hyundai Motor Group can conduct industrial robot training in physically accurate virtual environments, optimizing manufacturing and enhancing quality. This can also help make interactions between these robots and their real-world surroundings more intuitive and effective, while ensuring they can work safely alongside humans.

Using NVIDIA technology, Hyundai Motor Group is driving the creation of safer, more intelligent vehicles, enhancing manufacturing with greater efficiency and quality, and deploying cutting-edge robotics to build a smarter, more connected digital workplace.

The partnership was formalized during a signing ceremony that took place last night at CES.

Learn more about how NVIDIA technologies are advancing autonomous vehicles.
  • GeForce NOW at CES: Bring PC RTX Gaming Everywhere With the Power of GeForce NOW
    blogs.nvidia.com
    This GFN Thursday recaps the latest cloud announcements from the CES trade show, including GeForce RTX gaming expansion across popular devices such as the Steam Deck, Apple Vision Pro spatial computers, Meta Quest 3 and 3S, and Pico mixed-reality devices.

Gamers in India will also be able to access their PC gaming libraries at GeForce RTX 4080 quality with an Ultimate membership for the first time in the region. This follows expansions in Chile and Colombia with GeForce NOW Alliance partner Digevo.

More AAA gaming is on the way, with highly anticipated titles DOOM: The Dark Ages and Avowed joining GeForce NOW's extensive library of over 2,100 supported titles when they launch on PC later this year.

Plus, no GFN Thursday is complete without new games. Get ready for six new titles joining the cloud this week.

Head in the Clouds

CES 2025 is coming to a close, but GeForce NOW members still have lots to look forward to.

Members will be able to play over 2,100 titles from the GeForce NOW cloud library at GeForce RTX quality on Valve's popular Steam Deck device with the launch of a native GeForce NOW app, coming later this year. Steam Deck gamers can gain access to all the same benefits as GeForce RTX 4080 GPU owners with a GeForce NOW Ultimate membership, including NVIDIA DLSS 3 technology for the highest frame rates and NVIDIA Reflex for ultra-low latency.

GeForce NOW delivers a stunning streaming experience no matter how Steam Deck users choose to play: in handheld mode for high dynamic range (HDR)-quality graphics, connected to a monitor for up to 1440p 120 frames per second with HDR, or hooked up to a TV for big-screen streaming at up to 4K 60 fps.

GeForce NOW members can take advantage of RTX ON with the Steam Deck for photorealistic gameplay on supported titles, as well as HDR10 and SDR10 when connected to a compatible display for richer, more accurate color gradients.

Get immersed in a new dimension of big-screen gaming. In collaboration with Apple, Meta and ByteDance, NVIDIA is expanding GeForce NOW cloud gaming to Apple Vision Pro spatial computers, Meta Quest 3 and 3S, and Pico virtual- and mixed-reality devices, with all the bells and whistles of NVIDIA technologies, including ray tracing and NVIDIA DLSS.

In addition, NVIDIA will launch the first GeForce RTX-powered data center in India this year, making gaming more accessible around the world. This follows the recent launch of GeForce NOW in Colombia and Chile, operated by GeForce NOW Alliance partner Digevo, with Thailand coming soon, to be operated by GeForce NOW Alliance partner Brothers Picture.

Game On

AAA content from celebrated publishers is coming to the cloud. Avowed, from Obsidian Entertainment, known for iconic titles such as Fallout: New Vegas, will join GeForce NOW. The cloud gaming platform will also bring DOOM: The Dark Ages from id Software, the legendary studio behind the DOOM franchise. These titles will be available at launch on PC this year.

Avowed, a first-person fantasy role-playing game, will join the cloud when it launches on PC on Tuesday, Feb. 18. Take on the role of an Aedyr Empire envoy tasked with investigating a mysterious plague. Freely combine weapons and magic: dual-wield wands, pair a sword with a pistol or opt for a more traditional sword-and-shield approach. In-game companions, which join the player's party, have unique abilities and storylines that can be influenced by the gamer's choices.

DOOM: The Dark Ages is the single-player, action first-person-shooter prequel to the critically acclaimed DOOM (2016) and DOOM Eternal. Play as the DOOM Slayer, the legendary demon-killing warrior fighting endlessly against Hell. Experience the epic cinematic origin story of the DOOM Slayer's rage in 2025.

Shiny New Games

Look for the following games available to stream in the cloud this week:

- Road 96 (New release on Xbox, available on PC Game Pass, Jan. 7)
- Builders of Egypt (New release on Steam, Jan. 8)
- DREDGE (Epic Games Store)
- Drova - Forsaken Kin (Steam)
- Kingdom Come: Deliverance (Xbox, available on Microsoft Store)
- Marvel Rivals (Steam, coming to the cloud after the launch of Season 1)

What are you planning to play this weekend? Let us know on X or in the comments below.
  • Unveiling a New Era of Local AI With NVIDIA NIM Microservices and AI Blueprints
    blogs.nvidia.com
    Over the past year, generative AI has transformed the way people live, work and play, enhancing everything from writing and content creation to gaming, learning and productivity. PC enthusiasts and developers are leading the charge in pushing the boundaries of this groundbreaking technology.

Countless times, industry-defining technological breakthroughs have been invented in one place: a garage. This week marks the start of the RTX AI Garage series, which will offer routine content for developers and enthusiasts looking to learn more about NVIDIA NIM microservices and AI Blueprints, and how to build AI agents, creative workflows, digital humans, productivity apps and more on AI PCs. Welcome to the RTX AI Garage.

This first installment spotlights announcements made earlier this week at CES, including new AI foundation models available on NVIDIA RTX AI PCs that take digital humans, content creation, productivity and development to the next level.

These models, offered as NVIDIA NIM microservices, are powered by new GeForce RTX 50 Series GPUs. Built on the NVIDIA Blackwell architecture, RTX 50 Series GPUs deliver up to 3,352 trillion AI operations per second of performance and 32GB of VRAM, and feature FP4 compute, doubling AI inference performance and enabling generative AI to run locally with a smaller memory footprint.

NVIDIA also introduced NVIDIA AI Blueprints: ready-to-use, preconfigured workflows, built on NIM microservices, for applications like digital humans and content creation.

NIM microservices and AI Blueprints empower enthusiasts and developers to build, iterate and deliver AI-powered experiences to the PC faster than ever. The result is a new wave of compelling, practical capabilities for PC users.

Fast-Track AI With NVIDIA NIM

There are two key challenges to bringing AI advancements to PCs. First, the pace of AI research is breakneck, with new models appearing daily on platforms like Hugging Face, which now hosts over a million models. As a result, breakthroughs quickly become outdated. Second, adapting these models for PC use is a complex, resource-intensive process. Optimizing them for PC hardware, integrating them with AI software and connecting them to applications requires significant engineering effort.

NVIDIA NIM helps address these challenges by offering prepackaged, state-of-the-art AI models optimized for PCs. These NIM microservices span model domains, can be installed with a single click, feature application programming interfaces (APIs) for easy integration, and harness NVIDIA AI software and RTX GPUs for accelerated performance.

At CES, NVIDIA announced a pipeline of NIM microservices for RTX AI PCs, supporting use cases spanning large language models (LLMs), vision-language models, image generation, speech, retrieval-augmented generation (RAG), PDF extraction and computer vision.

The new Llama Nemotron family of open models provides high accuracy on a wide range of agentic tasks. The Llama Nemotron Nano model, which will be offered as a NIM microservice for RTX AI PCs and workstations, excels at agentic AI tasks like instruction following, function calling, chat, coding and math.

Soon, developers will be able to quickly download and run these microservices on Windows 11 PCs using Windows Subsystem for Linux (WSL).

To demonstrate how enthusiasts and developers can use NIM to build AI agents and assistants, NVIDIA previewed Project R2X, a vision-enabled PC avatar that can put information at a user's fingertips, assist with desktop apps and video conference calls, read and summarize documents, and more. Sign up for Project R2X updates.

By using NIM microservices, AI enthusiasts can skip the complexities of model curation, optimization and backend integration and focus on creating and innovating with cutting-edge AI models.

What's in an API?

An API is the way in which an application communicates with a software library. An API defines a set of calls that the application can make to the library, and what the application can expect in return. Traditional AI APIs require a lot of setup and configuration, making AI capabilities harder to use and hampering innovation.

NIM microservices expose easy-to-use, intuitive APIs that an application can simply send requests to and get a response from. In addition, they're designed around the input and output media for different model types. For example, LLMs take text as input and produce text as output, image generators convert text to images, speech recognizers turn speech to text, and so on.

The microservices are designed to integrate seamlessly with leading AI development and agent frameworks such as AI Toolkit for VSCode, AnythingLLM, ComfyUI, Flowise AI, LangChain, Langflow and LM Studio. Developers can easily download and deploy them from build.nvidia.com.

By bringing these APIs to RTX, NVIDIA NIM will accelerate AI innovation on PCs.

Enthusiasts are expected to be able to experience a range of NIM microservices using an upcoming release of the NVIDIA ChatRTX tech demo.

A Blueprint for Innovation

By using state-of-the-art models, prepackaged and optimized for PCs, developers and enthusiasts can quickly create AI-powered projects. Taking things a step further, they can combine multiple AI models and other functionality to build complex applications like digital humans, podcast generators and application assistants.

NVIDIA AI Blueprints, built on NIM microservices, are reference implementations for complex AI workflows. They help developers connect several components, including libraries, software development kits and AI models, together in a single application.

AI Blueprints include everything a developer needs to build, run, customize and extend the reference workflow: the reference application and source code, sample data, and documentation for customization and orchestration of the different components.

At CES, NVIDIA announced two AI Blueprints for RTX: one for PDF to podcast, which lets users generate a podcast from any PDF, and another for 3D-guided generative AI, which is based on FLUX.1 [dev], is expected to be offered as a NIM microservice and gives artists greater control over text-based image generation.

With AI Blueprints, developers can quickly go from AI experimentation to AI development for cutting-edge workflows on RTX PCs and workstations.

Built for Generative AI

The new GeForce RTX 50 Series GPUs are purpose-built to tackle complex generative AI challenges, featuring fifth-generation Tensor Cores with FP4 support, faster G7 memory and an AI-management processor for efficient multitasking between AI and creative workflows.

The GeForce RTX 50 Series adds FP4 support to help bring better performance and more models to PCs. FP4 is a lower-precision quantization format, similar in spirit to file compression, that decreases model sizes. Compared with FP16, the default precision that most models use, FP4 requires less than half the memory, and 50 Series GPUs provide over 2x performance compared with the previous generation. Advanced quantization methods offered by NVIDIA TensorRT Model Optimizer make this possible with virtually no loss in quality.

For example, Black Forest Labs' FLUX.1 [dev] model at FP16 requires over 23GB of VRAM, meaning it can only be supported by the GeForce RTX 4090 and professional GPUs. With FP4, FLUX.1 [dev] requires less than 10GB, so it can run locally on more GeForce RTX GPUs.

On a GeForce RTX 4090 with FP16, the FLUX.1 [dev] model can generate images in 15 seconds with 30 steps. On a GeForce RTX 5090 with FP4, images can be generated in just over five seconds.

Get Started With the New AI APIs for PCs

NVIDIA NIM microservices and AI Blueprints are expected to be available starting next month, with initial hardware support for GeForce RTX 50 Series, GeForce RTX 4090 and 4080, and NVIDIA RTX 6000 and 5000 professional GPUs. Additional GPUs will be supported in the future.

NIM-ready RTX AI PCs are expected to be available from Acer, ASUS, Dell, GIGABYTE, HP, Lenovo, MSI, Razer and Samsung, and from local system builders Corsair, Falcon Northwest, LDLC, Maingear, Mifcon, Origin PC, PCS and Scan.

GeForce RTX 50 Series GPUs and laptops deliver game-changing performance, power transformative AI experiences, and enable creators to complete workflows in record time. Rewatch NVIDIA CEO Jensen Huang's keynote to learn more about NVIDIA's AI news unveiled at CES.

See notice regarding software product information.
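The VRAM figures above follow from simple arithmetic on parameter counts. Here is a minimal back-of-the-envelope sketch, assuming FLUX.1 [dev] has roughly 12 billion parameters and that every weight is stored at the same precision (both simplifying assumptions; real deployments keep some layers at higher precision, which is why measured footprints land above the ideal numbers):

```python
def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Approximate memory needed just to hold the model weights,
    ignoring activations, caches and framework overhead."""
    return num_params * bits_per_param / 8 / 1e9  # bits -> bytes -> GB

PARAMS = 12e9  # assumed parameter count for FLUX.1 [dev]

fp16_gb = weight_memory_gb(PARAMS, 16)  # 16 bits per weight
fp4_gb = weight_memory_gb(PARAMS, 4)    # 4 bits per weight

print(f"FP16: ~{fp16_gb:.0f} GB, FP4: ~{fp4_gb:.0f} GB")
```

Under these assumptions, 16 bits per weight lands near the 23GB-plus figure quoted for FP16, while 4 bits per weight cuts the ideal footprint to a quarter of that, comfortably under the sub-10GB figure quoted for FP4.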
  • Why Enterprises Need AI Query Engines to Fuel Agentic AI
    blogs.nvidia.com
    Data is the fuel of AI applications, but the magnitude and scale of enterprise data often make it too expensive and time-consuming to use effectively.

According to IDC's Global DataSphere[1], enterprises will generate 317 zettabytes of data annually by 2028, including the creation of 29 zettabytes of unique data, of which 78% will be unstructured data and 44% of that will be audio and video. Because of this extremely high volume and the variety of data types, most generative AI applications use only a fraction of the total amount of data being stored and generated.

For enterprises to thrive in the AI era, they must find a way to make use of all of their data. This isn't possible using traditional computing and data processing techniques. Instead, enterprises need an AI query engine.

What Is an AI Query Engine?

Simply put, an AI query engine is a system that connects AI applications, or AI agents, to data. It's a critical component of agentic AI, as it serves as a bridge between an organization's knowledge base and AI-powered applications, enabling more accurate, context-aware responses.

AI agents form the basis of an AI query engine, gathering information and doing work to assist human employees. An AI agent will gather information from many data sources, plan, reason and take action. AI agents can communicate with users, or they can work in the background, where human feedback and interaction will always be available.

In practice, an AI query engine is a sophisticated system that efficiently processes large amounts of data, extracts and stores knowledge, and performs semantic search on that knowledge so it can be quickly retrieved and used by AI.

An AI query engine processes, stores and retrieves data, connecting AI agents to insights.

AI Query Engines Unlock Intelligence in Unstructured Data

An enterprise's AI query engine will have access to knowledge stored in many different formats, but being able to extract intelligence from unstructured data is one of the most significant advancements it enables.

To generate insights, traditional query engines rely on structured queries and data sources, such as relational databases. Users must formulate precise queries using languages like SQL, and results are limited to predefined data formats.

In contrast, AI query engines can process structured, semi-structured and unstructured data. Common unstructured data formats are PDFs, log files, images and video, stored on object stores, file servers and parallel file systems. AI agents communicate with users and with each other using natural language. This enables them to interpret user intent, even when it's ambiguous, by accessing diverse data sources, and to deliver results in a conversational format that users can readily interpret.

This capability makes it possible to derive more insights and intelligence from any type of data, not just data that fits neatly into rows and columns. For example, companies like DataStax and NetApp are building AI data platforms that give their customers an AI query engine for their next-generation applications.

Key Features of AI Query Engines

AI query engines possess several crucial capabilities:

- Diverse data handling: AI query engines can access and process various data types, including structured, semi-structured and unstructured data from multiple sources, including text, PDF, image, video and specialty data types.
- Scalability: AI query engines can efficiently handle petabyte-scale data, making all enterprise knowledge available to AI applications quickly.
- Accurate retrieval: AI query engines provide high-accuracy, high-performance embedding, vector search and reranking of knowledge from multiple sources.
- Continuous learning: AI query engines can store and incorporate feedback from AI-powered applications, creating an AI data flywheel in which feedback is used to refine models and increase the effectiveness of the applications over time.

Retrieval-augmented generation (RAG) is a component of AI query engines. RAG uses the power of generative AI models to act as a natural language interface to data, allowing models to access and incorporate relevant information from large datasets during the response generation process.

Using RAG, any business or other organization can turn its technical information, policy manuals, videos and other data into useful knowledge bases. An AI query engine can then rely on these sources to support areas such as customer relations, employee training and developer productivity.

Additional information-retrieval techniques and ways to store knowledge are in research and development, so the capabilities of AI query engines are expected to evolve rapidly.

The Impact of AI Query Engines

Using AI query engines, enterprises can fully harness the power of AI agents to connect their workforces to vast amounts of enterprise knowledge, improve the accuracy and relevance of AI-generated responses, process and utilize previously untapped data sources, and create data-driven AI flywheels that continuously improve their AI applications.

Some examples include an AI virtual assistant that provides personalized, 24/7 customer service experiences, an AI agent for searching and summarizing video, an AI agent for analyzing software vulnerabilities, and an AI research assistant.

By bridging the gap between raw data and AI-powered applications, AI query engines will grow to play a crucial role in helping organizations extract value from their data.

NVIDIA Blueprints can help enterprises get started connecting AI to their data. Learn more about NVIDIA Blueprints and try them in the NVIDIA API catalog.

[1] IDC, Global DataSphere Forecast, 2024.
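The embed-search-retrieve loop at the heart of RAG can be illustrated with a deliberately tiny sketch. The bag-of-words "embedding," the document snippets and the query below are all invented for illustration; a production AI query engine would use a neural embedding model and a vector database with reranking instead:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a term-frequency vector keyed by word.
    A real pipeline would call a neural embedding model here."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# A miniature knowledge base built from (made-up) policy manuals.
documents = [
    "Return policy: items may be returned within 30 days of purchase.",
    "Shipping guide: standard shipping takes five business days.",
    "Warranty manual: hardware is covered for one year from delivery.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Semantic-search step of RAG: rank stored documents against the
    query embedding and return the top-k matches as context."""
    ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)),
                    reverse=True)
    return ranked[:k]

top = retrieve("how long do I have to return an item", documents)
# The retrieved passage would then be handed to a generative model
# as grounding context for the final conversational answer.
```

The design point this sketch makes concrete is that retrieval and generation are decoupled: the knowledge base can grow or change without retraining the language model, which only ever sees the top-ranked passages.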
  • Why World Foundation Models Will Be Key to Advancing Physical AI
    blogs.nvidia.com
In the fast-evolving landscape of AI, it's becoming increasingly important to develop models that can accurately simulate and predict outcomes in physical, real-world environments to enable the next generation of physical AI systems.

Ming-Yu Liu, vice president of research at NVIDIA and an IEEE Fellow, joined the NVIDIA AI Podcast to discuss the significance of world foundation models (WFMs), powerful neural networks that can simulate physical environments. WFMs can generate detailed videos from text or image input data and predict how a scene evolves by combining its current state (image or video) with actions (such as prompts or control signals).

"World foundation models are important to physical AI developers," said Liu. "They can imagine many different environments and can simulate the future, so we can make good decisions based on this simulation."

This is particularly valuable for physical AI systems, such as robots and self-driving cars, which must interact safely and efficiently with the real world.

The AI Podcast: NVIDIA's Ming-Yu Liu on How World Foundation Models Will Advance Physical AI (Episode 240)

Why Are World Foundation Models Important?

Building world models often requires vast amounts of data, which can be difficult and expensive to collect. WFMs can generate synthetic data, providing a rich, varied dataset that enhances the training process.

In addition, training and testing physical AI systems in the real world can be resource-intensive.
WFMs provide virtual, 3D environments where developers can simulate and test these systems in a controlled setting, without the risks and costs associated with real-world trials.

Open Access to World Foundation Models

At the CES trade show, NVIDIA announced NVIDIA Cosmos, a platform of generative WFMs that accelerate the development of physical AI systems such as robots and self-driving cars. The platform is designed to be open and accessible, and includes pretrained WFMs based on diffusion and autoregressive architectures, along with tokenizers that can compress videos into tokens for transformer models.

Liu explained that with these open models, enterprises and developers have all the ingredients they need to build large-scale models. The open platform also gives teams the flexibility to explore various options for training and fine-tuning models, or to build their own based on specific needs.

Enhancing AI Workflows Across Industries

WFMs are expected to enhance AI workflows and development across industries. Liu sees particularly significant impacts in two areas:

"The self-driving car industry and the humanoid [robot] industry will benefit a lot from world model development," said Liu. "[WFMs] can simulate different environments that will be difficult to have in the real world, to make sure the agent behaves respectively."

For self-driving cars, these models can simulate environments that allow for comprehensive testing and optimization.
For example, a self-driving car can be tested in various simulated weather conditions and traffic scenarios to help ensure it performs safely and efficiently before deployment on roads. In robotics, WFMs can simulate and verify the behavior of robotic systems in different environments to make sure they perform tasks safely and efficiently before deployment.

NVIDIA is collaborating with companies like 1X, Huobi and XPENG to help address challenges in physical AI development and advance their systems.

"We are still in the infancy of world foundation model development. It's useful, but we need to make it more useful," Liu said. "We also need to study how to best integrate these world models into the physical AI systems in a way that can really benefit them."

Listen to the podcast with Ming-Yu Liu, or read the transcript.

Learn more about NVIDIA Cosmos and the latest announcements in generative AI and robotics by watching the CES opening keynote by NVIDIA founder and CEO Jensen Huang, as well as by joining NVIDIA sessions at the show.
  • NVIDIA Launches DRIVE AI Systems Inspection Lab, Achieves New Industry Safety Milestones
    blogs.nvidia.com
A new NVIDIA DRIVE AI Systems Inspection Lab will help automotive ecosystem partners navigate evolving industry standards for autonomous vehicle safety.

The lab, launched today, will focus on inspecting and verifying that automotive partner software and systems on the NVIDIA DRIVE AGX platform meet the automotive industry's stringent safety and cybersecurity standards, including AI functional safety.

The lab has been accredited by the ANSI National Accreditation Board (ANAB) according to the ISO/IEC 17020 assessment for standards including:

Functional safety (ISO 26262)
SOTIF (ISO 21448)
Cybersecurity (ISO 21434)
UN-R regulations, including UN-R 79, UN-R 13-H, UN-R 152, UN-R 155, UN-R 157 and UN-R 171
AI functional safety (ISO PAS 8800 and ISO/IEC TR 5469)

"The launch of this new lab will help partners in the global automotive ecosystem create safe, reliable autonomous driving technology," said Ali Kani, vice president of automotive at NVIDIA. "With accreditation by ANAB, the lab will carry out an inspection plan that combines functional safety, cybersecurity and AI, bolstering adherence to the industry's safety standards."

"ANAB is proud to be the accreditation body for the NVIDIA DRIVE AI Systems Inspection Lab," said R. Douglas Leonard Jr., executive director of ANAB. "NVIDIA's comprehensive evaluation verifies the demonstration of competence and compliance with internationally recognized standards, helping ensure that DRIVE ecosystem partners meet the highest benchmarks for functional safety, cybersecurity and AI integration."

The new lab builds on NVIDIA's ongoing safety compliance work with Mercedes-Benz and JLR.
Inaugural participants in the lab include Continental and Sony SSS-America.

"We are pleased to participate in the newly launched NVIDIA DRIVE AI Systems Inspection Lab and to further intensify the fruitful, ongoing collaboration between our two companies," said Norbert Hammerschmidt, head of components business at Continental.

"Self-driving vehicles have the capability to significantly enhance safety on roads," said Marius Evensen, head of automotive image sensors at Sony SSS-America. "We look forward to working with NVIDIA's DRIVE AI Systems Inspection Lab to help us deliver the highest levels of safety to our customers."

"Compliance with functional safety, SOTIF and cybersecurity is particularly challenging for complex systems such as AI-based autonomous vehicles," said Riccardo Mariani, head of industry safety at NVIDIA. "Through the DRIVE AI Systems Inspection Lab, the correctness of the integration of our partners' products with DRIVE safety and cybersecurity requirements can be inspected and verified."

Now open to all NVIDIA DRIVE AGX platform partners, the lab is expected to expand to include additional automotive and robotics products and to add a testing component.

Complementing International Automotive Safety Standards

The NVIDIA DRIVE AI Systems Inspection Lab complements the missions of independent third-party certification bodies, including technical service organizations such as TÜV SÜD, TÜV Rheinland and exida, as well as vehicle certification agencies such as VCA and KBA.

Today's announcement dovetails with recent significant safety certifications and assessments of NVIDIA automotive products:

TÜV SÜD granted the ISO 21434 Cybersecurity Process certification to NVIDIA for its automotive system-on-a-chip, platform and software engineering processes.
Upon certification release, the NVIDIA DriveOS 6.0 operating system will conform with ISO 26262 Automotive Safety Integrity Level (ASIL) D standards.

"Meeting cybersecurity process requirements is of fundamental importance in the autonomous vehicle era," said Martin Webhofer, CEO of TÜV SÜD Rail GmbH. "NVIDIA has successfully established processes, activities and procedures that fulfill the stringent requirements of ISO 21434. Additionally, NVIDIA DriveOS 6.0 conforms to ISO 26262 ASIL D standards, pending final certification activities."

TÜV Rheinland performed an independent United Nations Economic Commission for Europe safety assessment of NVIDIA DRIVE AV related to safety requirements for complex electronic systems.

"NVIDIA has demonstrated thorough, high-quality, safety-oriented processes and technologies in the context of the assessment of the generic, non-OEM-specific parts of the SAE level 2 NVIDIA DRIVE system," said Dominik Strixner, global lead functional safety automotive mobility at TÜV Rheinland.

To learn more about NVIDIA's work in advancing autonomous driving safety, read the NVIDIA Self-Driving Safety Report.
  • NVIDIA DRIVE Partners Showcase Latest Mobility Innovations at CES
    blogs.nvidia.com
Leading global transportation companies, spanning the makers of passenger vehicles, trucks, robotaxis and autonomous delivery systems, are turning to the NVIDIA DRIVE AGX platform and AI to build the future of mobility.

NVIDIA's automotive business provides a range of next-generation highly automated and autonomous vehicle (AV) development technologies, including cloud-based AI training, simulation and in-vehicle compute. At the CES trade show in Las Vegas this week, NVIDIA's customers and partners are showcasing their latest mobility innovations built on NVIDIA accelerated computing and AI.

Readying Future Vehicle Roadmaps With NVIDIA DRIVE Thor, Built on NVIDIA Blackwell

The NVIDIA DRIVE AGX Thor system-on-a-chip (SoC), built on the NVIDIA Blackwell architecture, is engineered to handle the transportation industry's most demanding data-intensive workloads, including those involving generative AI, vision language models and large language models. Delivering 1,000 teraflops of accelerated compute performance, DRIVE Thor is equipped to accelerate inference tasks that are critical for autonomous vehicles to understand and navigate the world around them, such as recognizing pedestrians, adjusting to inclement weather and more.

NVIDIA partners are pushing the boundaries of automotive innovation with their latest developments and demos, using NVIDIA technologies and accelerated computing to advance everything from sensors, simulation and training to generative AI and teledriving.

At CES, Aurora, Continental and NVIDIA announced a long-term strategic partnership to deploy driverless trucks at scale, powered by the next-generation NVIDIA DRIVE Thor SoC.
NVIDIA DRIVE Thor and DriveOS will be integrated into the Aurora Driver, an SAE level 4 autonomous driving system that Continental plans to mass-manufacture in 2027.

Arm, one of NVIDIA's key technology partners, is the compute platform of choice for a number of innovations at CES. The Arm Neoverse V3AE CPU, designed to meet the specific safety and performance demands of automotive applications, is integrated with DRIVE Thor. This marks the first implementation of Arm's next-generation automotive CPU, which combines Armv9-based technologies with data-center-class single-thread performance, alongside essential safety and security features.

Tried-and-True DRIVE Orin: Mainstream Adoption Continues

NVIDIA DRIVE AGX Orin, the predecessor of DRIVE Thor, continues to be a production-proven advanced driver-assistance system computer widely used in cars today, delivering 254 trillion operations per second of accelerated compute to process sensor data for safe, real-time driving decisions.

Toyota, the world's largest automaker, will build its next-generation vehicles on the high-performance, automotive-grade NVIDIA DRIVE Orin SoC, running the safety-certified NVIDIA DriveOS.
These vehicles will offer functionally safe advanced driving-assistance capabilities.

At the NVIDIA showcase on the fourth floor of the Fontainebleau, Volvo Cars' software-defined EX90 and Nuro's autonomous driving technology, the Nuro Driver platform, will be on display, both built on NVIDIA DRIVE AGX.

Other vehicles powered by NVIDIA DRIVE Orin on display during CES include:

Zeekr Mix and Zeekr 001, which feature DRIVE Orin, shown alongside the debut of Zeekr's self-developed ultra-high-performance intelligent driving domain controller, which will be built on DRIVE Thor and the NVIDIA Blackwell architecture (LVCC West Hall, booth 5640)
Lotus Eletre Carbon (LVCC West Hall, booth 4266 with P3 and 3SS, and booth 3500 with HERE)
Rivian R1S and Polestar 3, activated with Dolby, with vehicles on display and demos available by appointment (Park MGM/NoMad Hotel next to Dolby Live)
Lucid Air (LVCC West Hall, booth 4964 with SoundHound AI)

DRIVE Ecosystem Partners Transform the Show Floor and Industry at Large

NVIDIA's partners will also showcase their automotive solutions built on NVIDIA technologies, including:

Arbe: Delivering next-generation, ultra-high-definition radar technology that integrates with NVIDIA DRIVE AGX to revolutionize radar-based free-space mapping with cutting-edge AI capabilities. The integration empowers manufacturers to incorporate radar data effortlessly into their perception systems, enhancing safety applications and autonomous driving. (LVCC, West Hall 7406, Diamond Lot 323)

Cerence: Collaborating with NVIDIA to enhance its CaLLM family of language models, including the cloud-based Cerence Automotive Large Language Model, or CaLLM, powered by DRIVE Orin.

Foretellix: Integrating NVIDIA Omniverse Sensor RTX APIs into its Foretify AV test management platform, enhancing object-level simulation with physically accurate sensor simulations.

Imagry: Building AI-driven, HD-mapless autonomous driving solutions, accelerated by NVIDIA technology, designed for both self-driving passenger vehicles and urban buses.
(LVCC, West Hall, 5976)

Lenovo Vehicle Computing: Previewing (by appointment) its Lenovo AD1, a powerful automotive-grade domain controller built on the NVIDIA DRIVE Thor platform and tailored for SAE level 4 autonomous driving.

Provizio: Showcasing its 5D Perception Imaging Radar, accelerated by NVIDIA technology, which delivers unprecedented, scalable, on-the-edge radar perception capabilities, with on-vehicle demonstration rides at CES.

Quanta: Demonstrating (by appointment) in-house NVIDIA DRIVE AGX Hyperion cameras running on its electronic control unit powered by DRIVE Orin.

SoundHound AI: Showcasing its work with NVIDIA to bring voice generative AI directly to the edge, bringing the intelligence of cloud-based LLMs directly to vehicles. (LVCC, West Hall, 4964)

Vay: Offering innovative door-to-door mobility services by combining Vay's remote-driving capabilities with NVIDIA DRIVE advanced AI and computing power.

Zoox: Showcasing its latest robotaxi, which leverages NVIDIA technology, driving autonomously on the streets of Las Vegas and parked in the Zoox booth. (LVCC, West Hall 3316)

Safety Is the Way for Autonomous Innovation

At CES, NVIDIA also announced that its DRIVE AGX Hyperion platform has achieved safety certifications from TÜV SÜD and TÜV Rheinland, setting new standards for autonomous vehicle safety and innovation. To enhance safety measures, NVIDIA also launched the DRIVE AI Systems Inspection Lab, designed to help partners meet rigorous autonomous vehicle safety and cybersecurity requirements.

In addition, complementing its three computers designed to accelerate AV development (NVIDIA AGX, NVIDIA Omniverse running on OVX and NVIDIA DGX), NVIDIA has introduced the NVIDIA Cosmos platform. Cosmos world foundation models and advanced data processing pipelines can dramatically scale generated data and speed up physical AI system development.
With the platform's data flywheel capability, developers can effectively transform thousands of real-world driven miles into billions of virtual miles.

Transportation leaders using Cosmos to build physical AI for AVs include Foretellix, Uber, Waabi and Wayve.

Learn more about NVIDIA's latest automotive news by watching NVIDIA founder and CEO Jensen Huang's opening keynote at CES.

See notice regarding software product information.
  • PC Gaming in the Cloud Goes Everywhere With New Devices and AAA Games on GeForce NOW
    blogs.nvidia.com
GeForce NOW turns any device into a GeForce RTX gaming PC, and it's bringing cloud gaming and AAA titles to more devices and regions.

Announced today at the CES trade show, gamers will soon be able to play titles from their Steam library at GeForce RTX quality with the launch of a native GeForce NOW app for the Steam Deck. NVIDIA is working to bring cloud gaming to the popular PC gaming handheld later this year.

In collaboration with Apple, Meta and ByteDance, NVIDIA is expanding GeForce NOW cloud gaming to Apple Vision Pro spatial computers, Meta Quest 3 and 3S, and Pico virtual- and mixed-reality devices, with all the bells and whistles of NVIDIA technologies, including ray tracing and NVIDIA DLSS.

In addition, NVIDIA is launching the first GeForce RTX-powered data center in India, making gaming more accessible around the world.

Plus, GeForce NOW's extensive library of over 2,100 supported titles is expanding with highly anticipated AAA titles: DOOM: The Dark Ages and Avowed will join the cloud when they launch on PC this year.

RTX on Deck

The Steam Deck's portability paired with GeForce NOW opens up new possibilities for high-fidelity gaming everywhere. The native GeForce NOW app will offer up to 4K resolution and 60 frames per second with high dynamic range on Valve's innovative handheld when connected to a TV, streaming from GeForce RTX-powered gaming rigs in the cloud.

Last year, GeForce NOW rolled out a beta installation method that was eagerly welcomed by the gaming community. Later this year, members will be able to download the native GeForce NOW app and install it on Steam Deck. Steam Deck gamers can gain access to all the same benefits as GeForce RTX 4080 GPU owners with a GeForce NOW Ultimate membership, including NVIDIA DLSS 3 technology for the highest frame rates and NVIDIA Reflex for ultra-low latency.
Because GeForce NOW streams from an RTX gaming rig in the cloud, the Steam Deck uses less processing power, which extends battery life compared with playing locally.

The streaming experience with GeForce NOW looks stunning whichever way Steam Deck users want to play: in handheld mode for HDR-quality graphics, connected to a monitor for up to 1440p 120 fps HDR, or hooked up to a TV for big-screen streaming at up to 4K 60 HDR. GeForce NOW members can take advantage of RTX ON with the Steam Deck for photorealistic gameplay on supported titles, as well as HDR10 and SDR10 when connected to a compatible display for richer, more accurate color gradients.

Get ready for major upgrades to streaming on the go when the GeForce NOW app launches on the Steam Deck later this year.

Stream Beyond Reality

Get immersed in a new dimension of big-screen gaming as GeForce NOW brings AAA titles to life on Apple Vision Pro spatial computers, Meta Quest 3 and 3S, and Pico virtual- and mixed-reality headsets. These supported devices will give members access to an extensive library of games to stream through GeForce NOW by opening the browser to play.geforcenow.com when the newest app update, version 2.0.70, starts rolling out later this month.

Members can transform the space around them into a personal gaming theater with GeForce NOW. The streaming experience on these devices will support gamepad-compatible titles, so members can play their favorite PC games on a massive virtual screen. For an even more enhanced visual experience, GeForce NOW Ultimate and Performance members using these devices can tap into RTX and DLSS technologies in supported games.

Land of a Thousand Lights and Games

New year, new data center: NVIDIA is broadening cloud gaming in India and Latin America.
The first GeForce RTX 4080-powered data center will launch in India in the first half of this year. This follows the launch of GeForce NOW in Japan last year, as well as in Colombia and Chile, where it's operated by GeForce NOW Alliance partner Digevo.

GeForce RTX-powered gaming in the rapidly growing Indian market will let gamers stream AAA titles without the latest hardware. Gamers in the region can look forward to the launch of Ultimate memberships, along with all the new games and technological advancements announced at CES.

Send in the Games

AAA content from celebrated publishers is coming to the cloud. Avowed, from Obsidian Entertainment, the studio known for iconic titles such as Fallout: New Vegas, will join GeForce NOW. The cloud gaming platform will also bring DOOM: The Dark Ages from id Software, the legendary studio behind the DOOM franchise. Both will be available at launch on PC this year.

Avowed, a first-person fantasy role-playing game, will join the cloud when it launches on PC on Tuesday, Feb. 18. Welcome to the Living Lands, an island full of mysteries and secrets, danger and adventure, choices and consequences, and untamed wilderness. Take on the role of an Aedyr Empire envoy tasked with investigating a mysterious plague. Freely combine weapons and magic: harness dual-wielded wands, pair a sword with a pistol, or opt for a more traditional sword-and-shield approach. In-game companions, which join the player's party, have unique abilities and storylines that can be influenced by the gamer's choices.

DOOM: The Dark Ages is the single-player, action first-person shooter prequel to the critically acclaimed DOOM (2016) and DOOM Eternal. Play as the DOOM Slayer, the legendary demon-killing warrior fighting endlessly against Hell.
Experience the epic cinematic origin story of the DOOM Slayer's rage this year.

Get ready to play these titles and more at high performance when they join GeForce NOW at launch. Ultimate members will be able to stream at up to 4K resolution and 120 fps with support for NVIDIA DLSS and Reflex technologies, and experience the action even on low-powered devices. Keep an eye out on GFN Thursdays for the latest on their release dates in the cloud.

GeForce NOW is making popular devices cloud-gaming-ready while consistently delivering quality titles from top publishers, bringing another ultimate year of gaming to members across the globe.

See notice regarding software product information.
  • NVIDIA Makes Cosmos World Foundation Models Openly Available to Physical AI Developer Community
    blogs.nvidia.com
NVIDIA Cosmos, a platform for accelerating physical AI development, introduces a family of world foundation models: neural networks that can predict and generate physics-aware videos of the future state of a virtual environment, to help developers build next-generation robots and autonomous vehicles (AVs).

World foundation models, or WFMs, are as fundamental as large language models. They use input data, including text, image, video and movement, to generate and simulate virtual worlds in a way that accurately models the spatial relationships of objects in the scene and their physical interactions.

Announced today at CES, NVIDIA is making available the first wave of Cosmos WFMs for physics-based simulation and synthetic data generation, plus state-of-the-art tokenizers, guardrails, an accelerated data processing and curation pipeline, and a framework for model customization and optimization.

Researchers and developers, regardless of company size, can freely use the Cosmos models under NVIDIA's permissive open model license, which allows commercial usage. Enterprises building AI agents can also use the new open NVIDIA Llama Nemotron and Cosmos Nemotron models, unveiled at CES.

The openness of Cosmos' state-of-the-art models unblocks physical AI developers building robotics and AV technology and enables enterprises of all sizes to bring their physical AI applications to market more quickly.
Developers can use Cosmos models directly to generate physics-based synthetic data, or they can harness the NVIDIA NeMo framework to fine-tune the models with their own videos for specific physical AI setups.

Physical AI leaders, including robotics companies 1X, Agility Robotics and XPENG, and AV developers Uber and Waabi, are already working with Cosmos to accelerate and enhance model development.

Developers can preview the first Cosmos autoregressive and diffusion models on the NVIDIA API catalog, and download the family of models and the fine-tuning framework from the NVIDIA NGC catalog and Hugging Face.

World Foundation Models for Physical AI

Cosmos world foundation models are a suite of open diffusion and autoregressive transformer models for physics-aware video generation. The models have been trained on 9,000 trillion tokens from 20 million hours of real-world human interaction, environment, industrial, robotics and driving data. The models come in three categories: Nano, optimized for real-time, low-latency inference and edge deployment; Super, for highly performant baseline models; and Ultra, for maximum quality and fidelity, best used for distilling custom models.

When paired with NVIDIA Omniverse 3D outputs, the diffusion models generate controllable, high-quality synthetic video data to bootstrap training of robotic and AV perception models. The autoregressive models predict what should come next in a sequence of video frames based on input frames and text. This enables real-time next-token prediction, giving physical AI models the foresight to predict their next best action.

Developers can use Cosmos open models for text-to-world and video-to-world generation.
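The autoregressive rollout idea described above, predicting the next state and feeding it back in as input, can be illustrated with a toy stand-in. Nothing here is Cosmos code: `toy_model` is a hypothetical one-line predictor over scalar "frames" (a real WFM predicts video tokens with a transformer), and only the loop shape carries over.

```python
# Toy autoregressive rollout: predict the next state from observed history,
# then append the prediction and repeat, so the model "simulates the future."
def toy_model(history):
    # Stand-in predictor: linear extrapolation from the last two states.
    return 2 * history[-1] - history[-2]

def rollout(observed, horizon):
    frames = list(observed)
    for _ in range(horizon):
        frames.append(toy_model(frames))  # feed each prediction back as input
    return frames

# Two observed "frames" (e.g., an object's position), then 3 simulated steps.
print(rollout([0, 1], 3))  # → [0, 1, 2, 3, 4]
```

A planner can score many such rollouts under different candidate actions and pick the action whose simulated future looks best, which is the foresight the article attributes to these models.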
Versions of the diffusion and autoregressive models, with between 4 billion and 14 billion parameters each, are available now on the NGC catalog and Hugging Face. Also available are a 12-billion-parameter upsampling model for refining text prompts, a 7-billion-parameter video decoder optimized for augmented reality, and guardrail models to ensure responsible, safe use. To demonstrate opportunities for customization, NVIDIA is also releasing fine-tuned model samples for vertical applications, such as generating multisensor views for AVs.

Advancing Robotics and Autonomous Vehicle Applications

Cosmos world foundation models can enable synthetic data generation to augment training datasets, simulation to test and debug physical AI models before they're deployed in the real world, and reinforcement learning in virtual environments to accelerate AI agent learning. Developers can generate massive amounts of controllable, physics-based synthetic data by conditioning Cosmos with composed 3D scenes from NVIDIA Omniverse.

Waabi, a company pioneering generative AI for the physical world, starting with autonomous vehicles, is evaluating the use of Cosmos for the search and curation of video data for AV software development and simulation. This will further accelerate the company's industry-leading approach to safety, which is based on Waabi World, a generative AI simulator that can create any situation a vehicle might encounter with the same level of realism as if it happened in the real world.

In robotics, WFMs can generate synthetic virtual environments, or worlds, to provide a less expensive, more efficient and controlled space for robot learning. Embodied AI startup Hillbot is boosting its data pipeline by using Cosmos to generate terabytes of high-fidelity 3D environments.
This AI-generated data will help the company refine its robotic training and operations, enabling faster, more efficient robot skilling and improved performance on industrial and domestic tasks.

In both industries, developers can use NVIDIA Omniverse and Cosmos as a multiverse simulation engine, allowing a physical AI policy model to simulate every possible future path it could take to execute a particular task, which in turn helps the model select the best of these paths.

Data curation and the training of Cosmos models relied on thousands of NVIDIA GPUs through NVIDIA DGX Cloud, a high-performance, fully managed AI platform that provides accelerated computing clusters in every leading cloud. Developers adopting Cosmos can use DGX Cloud for an easy way to deploy Cosmos models, with further support available through the NVIDIA AI Enterprise software platform.

Customize and Deploy With NVIDIA Cosmos

In addition to foundation models, the Cosmos platform includes a data processing and curation pipeline powered by NVIDIA NeMo Curator and optimized for NVIDIA data center GPUs. Robotics and AV developers collect millions, or even billions, of hours of real-world video, resulting in petabytes of data. Cosmos enables developers to process 20 million hours of data in just 40 days on NVIDIA Hopper GPUs, or as little as 14 days on NVIDIA Blackwell GPUs. Using unoptimized pipelines running on a CPU system with equivalent power consumption, processing the same amount of data would take over three years.

The platform also features a suite of powerful video and image tokenizers that can convert videos into tokens at different compression ratios for training various transformer models. The Cosmos tokenizers deliver 8x more total compression than state-of-the-art methods and 12x faster processing, offering superior quality and reduced computational cost in both training and inference.
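As a rough sanity check on the curation throughput figures above (treating "over three years" as about 3 x 365 days, an assumption for the arithmetic), the implied speedups work out as follows:

```python
# Back-of-envelope comparison of the curation pipeline timings quoted above.
hours_of_video = 20_000_000
cpu_days = 3 * 365          # "over three years" on an unoptimized CPU pipeline
hopper_days = 40
blackwell_days = 14

print(f"Hopper speedup vs CPU:    ~{cpu_days / hopper_days:.0f}x")
print(f"Blackwell speedup vs CPU: ~{cpu_days / blackwell_days:.0f}x")
print(f"Hours of video processed per day on Blackwell: ~{hours_of_video / blackwell_days:,.0f}")
```

That is, the quoted 40-day and 14-day figures correspond to GPU pipelines roughly 27x and 78x faster than the CPU baseline, and to over a million hours of video curated per day on Blackwell.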
Developers can access these tokenizers, available under NVIDIA's open model license, via Hugging Face and GitHub. Developers using Cosmos can also harness the model training and fine-tuning capabilities of the NeMo framework, a GPU-accelerated framework that enables high-throughput AI training.

Developing Safe, Responsible AI Models

Now available to developers under the NVIDIA Open Model License Agreement, Cosmos was developed in line with NVIDIA's trustworthy AI principles, which include nondiscrimination, privacy, safety, security and transparency. The Cosmos platform includes Cosmos Guardrails, a dedicated suite of models that, among other capabilities, mitigates harmful text and image inputs during preprocessing and screens generated videos during postprocessing for safety. Developers can further enhance these guardrails for their custom applications. Cosmos models on the NVIDIA API catalog also feature an inbuilt watermarking system that enables identification of AI-generated sequences.

NVIDIA Cosmos was developed by NVIDIA Research. Read the research paper, "Cosmos World Foundation Model Platform for Physical AI," for more details on model development and benchmarks. Model cards providing additional information are available on Hugging Face.

Learn more about world foundation models in an AI Podcast episode, airing Jan. 7, that features Ming-Yu Liu, vice president of research at NVIDIA.

Get started with NVIDIA Cosmos and join NVIDIA at CES. Watch the Cosmos demo and Huang's keynote.

See notice regarding software product information.
  • NVIDIA Announces Isaac GR00T Blueprint to Accelerate Humanoid Robotics Development
    blogs.nvidia.com
Over the next two decades, the market for humanoid robots is expected to reach $38 billion. To address this significant demand, particularly in industrial and manufacturing sectors, NVIDIA is releasing a collection of robot foundation models, data pipelines and simulation frameworks to accelerate next-generation humanoid robot development.

Announced by NVIDIA founder and CEO Jensen Huang today at the CES trade show, the NVIDIA Isaac GR00T Blueprint for synthetic motion generation helps developers generate exponentially large synthetic motion datasets to train their humanoids using imitation learning.

Imitation learning, a subset of robot learning, enables humanoids to acquire new skills by observing and mimicking expert human demonstrations. Collecting the extensive, high-quality datasets this requires in the real world is tedious, time-consuming and often prohibitively expensive. Implementing the Isaac GR00T blueprint for synthetic motion generation allows developers to easily generate exponentially large synthetic datasets from just a small number of human demonstrations.

Starting with the GR00T-Teleop workflow, users can tap into the Apple Vision Pro to capture human actions in a digital twin. These human actions are mimicked by a robot in simulation and recorded for use as ground truth. The GR00T-Mimic workflow then multiplies the captured human demonstration into a larger synthetic motion dataset.
Finally, the GR00T-Gen workflow, built on the NVIDIA Omniverse and NVIDIA Cosmos platforms, exponentially expands this dataset through domain randomization and 3D upscaling. The dataset can then be used as an input to the robot policy, which teaches robots how to move and interact with their environment effectively and safely in NVIDIA Isaac Lab, an open-source, modular framework for robot learning.

World Foundation Models Narrow the Sim-to-Real Gap

NVIDIA also announced Cosmos at CES, a platform featuring a family of open, pretrained world foundation models purpose-built for generating physics-aware videos and world states for physical AI development. It includes autoregressive and diffusion models in a variety of sizes and input data formats. The models were trained on 18 quadrillion tokens, including 2 million hours of autonomous driving, robotics, drone and synthetic footage.

In addition to helping generate large datasets, Cosmos can reduce the simulation-to-real gap by upscaling images from 3D to real. Combining Omniverse, a developer platform of application programming interfaces and microservices for building 3D applications and services, with Cosmos is critical, because it helps minimize potential hallucinations commonly associated with world models by providing crucial safeguards through its highly controllable, physically accurate simulations.

An Expanding Ecosystem

Collectively, NVIDIA Isaac GR00T, Omniverse and Cosmos are helping physical AI and humanoid innovation take a giant leap forward.
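The core idea behind the GR00T-Mimic and GR00T-Gen stages described above — turning a handful of recorded demonstrations into a much larger synthetic dataset via randomized variation — can be sketched in a few lines. The trajectories, noise model and function names here are hypothetical placeholders illustrating the pattern, not the blueprint's actual implementation.

```python
# Illustrative sketch: multiply a few demonstration trajectories into many
# synthetic variants by applying small randomized perturbations (a simple
# stand-in for domain randomization; the real pipeline also varies scenes,
# lighting and textures in simulation).
import random

def augment(demo, copies, noise=0.01, seed=0):
    """Return `copies` perturbed variants of one demonstration trajectory."""
    rng = random.Random(seed)
    return [
        [[x + rng.gauss(0.0, noise) for x in pose] for pose in demo]
        for _ in range(copies)
    ]

# Two human demonstrations (each a short list of joint poses) ...
demos = [[[0.0, 0.1], [0.2, 0.3]], [[0.5, 0.5], [0.6, 0.4]]]

# ... become a 100x larger synthetic dataset for imitation learning.
synthetic = [variant for d in demos for variant in augment(d, copies=100)]
```

The multiplier is what makes imitation learning economical: the expensive step (human teleoperation) stays small while the training set grows by orders of magnitude.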
Major robotics companies, including Boston Dynamics and Figure, have started adopting Isaac GR00T and demonstrating results with it.

Humanoid software, hardware and robot manufacturers can apply for early access to NVIDIA's humanoid robot developer program.

Watch the CES opening keynote from NVIDIA founder and CEO Jensen Huang, and stay up to date by subscribing to the newsletter and following NVIDIA Robotics on LinkedIn, Instagram, X and Facebook.

See notice regarding software product information.
  • NVIDIA Media2 Transforms Content Creation, Streaming and Audience Experiences With AI
    blogs.nvidia.com
From creating the GPU, RTX real-time ray tracing and neural rendering to now reinventing computing for AI, NVIDIA has for decades been at the forefront of computer graphics, pushing the boundaries of what's possible in media and entertainment. NVIDIA Media2 is the latest AI-powered initiative transforming content creation, streaming and live media experiences.

Built on technologies like NVIDIA NIM microservices and AI Blueprints, along with breakthrough AI applications from startups and software partners, Media2 uses AI to drive the creation of smarter, more tailored and more impactful content that can adapt to individual viewer preferences. Amid this rapid creative transformation, companies embracing NVIDIA Media2 can stay on the cutting edge of the $3 trillion media and entertainment industry, reshaping how audiences consume and engage with content.

NVIDIA Technologies at the Heart of Media2

As the media and entertainment industry embraces generative AI and accelerated computing, NVIDIA technologies are transforming how content is created, delivered and experienced.

NVIDIA Holoscan for Media is a software-defined, AI-enabled platform that allows companies in broadcast, streaming and live sports to run live video pipelines on the same infrastructure as AI. The platform delivers applications from vendors across the industry on NVIDIA-accelerated infrastructure.

Delivering the power needed to drive the next wave of data-enhanced intelligent content creation and hyper-personalized media is the NVIDIA Blackwell architecture, built to handle data-center-scale generative AI workflows with up to 25x greater energy efficiency than the NVIDIA Hopper generation. Blackwell integrates six types of chips: GPUs, CPUs, DPUs, NVIDIA NVLink Switch chips, NVIDIA InfiniBand switches and Ethernet switches.

Blackwell is supported by NVIDIA AI Enterprise, an end-to-end software platform for production-grade AI.
NVIDIA AI Enterprise comprises NVIDIA NIM microservices, AI frameworks, libraries and tools that media companies can deploy on NVIDIA-accelerated clouds, data centers and workstations. The expanding list includes:

- The Llama 3.1-405B-Instruct NIM microservice, which enables synthetic data generation, distillation and inference for chatbots, coding and domain-specific tasks.
- The Mistral-NeMo-12B-Instruct NIM microservice, which enables multilingual information retrieval: the ability to search, process and retrieve knowledge across languages. This is key to enhancing an AI model's outputs with greater accuracy and global relevancy.
- The NVIDIA Omniverse Blueprint for 3D conditioning for precise visual generative AI, which can help advertisers easily build personalized, on-brand and product-accurate marketing content at scale, using real-time rendering and generative AI without affecting a hero product asset.
- The NVIDIA NeMo Retriever embedding and reranking NIM microservices, which can vectorize text documents, transcripts, news articles and other written content. Media companies can use these to expand their generative AI efforts and build accurate, multilingual systems.
- The NVIDIA Cosmos Nemotron vision language model NIM microservice, a multimodal VLM that can understand the meaning and context of text, images and video. With the microservice, media companies can query images and videos with natural language and receive informative responses.
- The NVIDIA AI Blueprint for video search and summarization (VSS), which integrates VLMs and LLMs and provides cloud-native building blocks for video analytics, search and summarization applications.
- The NVIDIA Edify multimodal generative AI architecture, which can generate visual assets like images, 3D models and HDRi environments from text or image prompts. It offers advanced editing tools and efficient training for developers.
With NVIDIA AI Foundry, service providers can customize Edify models for commercial visual services using NVIDIA NIM microservices.

Partners in the Media2 Ecosystem

Partners across the industry are adopting NVIDIA technology to reshape the next chapter of storytelling.

Getty Images and Shutterstock offer intelligent content creation services built with NVIDIA Edify. The AI models have also been optimized and packaged for maximum performance with NVIDIA NIM microservices.

Bria is a commercial-first visual generative AI platform designed for developers. It's trained on 100% licensed data and built on responsible AI principles. The platform offers tools for custom pipelines, seamless integration and flexible deployment, ensuring enterprise-grade compliance and scalable, predictable content generation. Optimized with NVIDIA NIM microservices, Bria delivers faster, safer, scalable production-ready solutions.

Runway is an AI platform that provides advanced creative tools for artists and filmmakers. The company's Gen-3 Alpha Turbo model excels in video generation and includes a new Camera Control feature that allows for precise camera movements like pan, tilt and zoom. Runway's integration of the NVIDIA CV-CUDA open-source library, combined with NVIDIA GPUs, accelerates preprocessing for high-resolution videos in its segmentation model.

Wonder Dynamics, an Autodesk company, recently launched the beta version of Wonder Animation, featuring powerful new video-to-3D scene technology that can turn any video sequence into a 3D-animated scene for animated film production.
Accelerated by NVIDIA GPU technology, Wonder Animation provides visual effects artists and animators with an easy-to-use, flexible tool that significantly reduces the time, complexity and effort traditionally associated with 3D animation and visual effects workflows, while allowing the artist to maintain full creative control.

Comcast's Sky innovation team is collaborating with NVIDIA on lab testing of NVIDIA NIM microservices and partner models for its global platforms. The integration could lead to greater interactivity and accessibility for customers around the world, such as enabling the use of voice commands to request summaries during live sports and access other contextual information.

V, a creative technology company and home to the largest network of virtual studios, is broadening access to the creation of virtual environments and immersive content with NVIDIA-accelerated generative AI technologies.

Twelve Labs, a member of the NVIDIA Inception program for startups, is developing advanced multimodal foundation models that can understand videos like humans do, enabling precise semantic search, content analysis and video-to-text generation. Twelve Labs uses NVIDIA H100 GPUs to significantly improve its models' inference performance, achieving up to a 7x improvement in requests served per second.

S4 Capital's Monks is using cutting-edge AI technologies to enhance live broadcasts with real-time content segmentation and personalized fan experiences. Powered by NVIDIA Holoscan for Media, the company's solution is integrated with tools like NVIDIA VILA to generate contextual metadata for injection within a time-addressable media store framework, enabling precise, action-based searching within video content. Additionally, Monks uses NVIDIA NeMo Curator to help process data to build tailored AI models for sports leagues and IP holders, unlocking new monetization opportunities through licensing.
By combining these technologies, broadcasters can seamlessly deliver hyper-relevant content to fans as events unfold, while adapting to the evolving demands of modern audiences.

Media companies manage vast amounts of video content, which can be challenging and time-consuming to locate, catalog and compile into finished assets. Leading media-focused consultant and system integrator Qvest has developed an AI video discovery engine, built on NIM microservices, that accelerates this process by automating the data capture of video files. This streamlines a user's ability to both discover and contextualize how videos can fit into their intended story.

Verizon is transforming global enterprise operations, as well as live media and sports content, by integrating its reliable, secure private 5G network with NVIDIA's full-stack AI platform, including NVIDIA AI Enterprise and NIM microservices, to deliver the latest AI solutions at the edge.

Using this solution, streamers, sports leagues and rights holders can enhance fan experiences with greater interactivity and immersion by deploying high-performance 5G connectivity along with generative AI, agentic AI, extended reality and streaming applications that enable personalized content delivery. These technologies also help elevate player performance and viewer engagement by offering real-time data analytics to coaches, players, referees and fans.
The solution can also enable private 5G-powered enterprise AI use cases to drive automation and productivity.

Welcome to NVIDIA Media2

The NVIDIA Media2 initiative empowers companies to redefine the future of media and entertainment through intelligent, data-driven and immersive technologies, giving them a competitive edge while equipping them to drive innovation across the industry.

NIM microservices from NVIDIA and model developers are now available to try, with additional models added regularly. Get started with NVIDIA NIM and AI Blueprints, and watch the CES opening keynote delivered by NVIDIA founder and CEO Jensen Huang to hear the latest advancements in AI.

See notice regarding software product information.
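For developers trying the NIM microservices mentioned above: hosted NIM endpoints on NVIDIA's API catalog follow the OpenAI-compatible chat-completions convention. The sketch below only assembles such a request with the standard library; the model name and endpoint URL are assumptions based on the hosted catalog, so verify both on the model card and supply your own API key before sending anything.

```python
# Minimal sketch of an OpenAI-compatible chat request for a hosted NIM endpoint.
# URL and model name are assumptions -- confirm them on build.nvidia.com.
import json

API_URL = "https://integrate.api.nvidia.com/v1/chat/completions"  # assumed hosted endpoint

def build_chat_request(model: str, prompt: str, api_key: str):
    """Return (url, headers, body) for an OpenAI-style chat completion call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    })
    return API_URL, headers, body

# To actually send, pass url/body/headers to urllib.request or any HTTP client.
url, headers, body = build_chat_request(
    "meta/llama-3.1-405b-instruct",  # assumed catalog model name
    "Summarize this broadcast segment in one sentence.",
    api_key="YOUR_API_KEY",
)
```

Because the endpoint is OpenAI-compatible, existing OpenAI client libraries can also be pointed at it by overriding the base URL, which keeps application code portable across providers.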