NVIDIA
This is the Official NVIDIA Page
13 people liked this
367 Posts
2 Photos
0 Videos
Recent updates
  • NVIDIA's Jacob Liberman on Bringing Agentic AI to Enterprises
    blogs.nvidia.com
    AI is rapidly transforming how organizations solve complex challenges. The early stages of enterprise AI adoption focused on using large language models to create chatbots. Now, enterprises are using agentic AI to create intelligent systems that reason, act and execute complex tasks with a degree of autonomy. Jacob Liberman, director of product management at NVIDIA, joined the NVIDIA AI Podcast to explain how agentic AI bridges the gap between powerful AI models and practical enterprise applications.

    Enterprises are deploying AI agents to free human workers from time-consuming and error-prone tasks. This allows people to spend more time on high-value work that requires creativity and strategic thinking. Liberman anticipates it won't be long before teams of AI agents and human workers collaborate to tackle complex tasks requiring reasoning, intuition and judgment. For example, enterprise software developers will work with AI agents to develop more efficient algorithms, and medical researchers will collaborate with AI agents to design and test new drugs.

    NVIDIA AI Blueprints help enterprises build their own AI agents, including for many of the use cases listed above. "Blueprints are reference architectures implemented in code that show you how to take NVIDIA software and apply it to some productive task in an enterprise to solve a real business problem," Liberman said. The blueprints are entirely open source. A developer or service provider can deploy a blueprint directly, or customize it by integrating their own technology.

    Liberman highlighted the versatility of the AI Blueprint for customer service, for example, which features digital humans. "The digital human can be made into a bedside digital nurse, a sportscaster or a bank teller with just some verticalization," he said. Other popular NVIDIA Blueprints include a video search and summarization agent, an enterprise multimodal PDF chatbot and a generative virtual screening pipeline for drug discovery.

    Time Stamps:
    1:14 What is an AI agent?
    17:25 How software developers are early adopters of agentic AI.
    19:50 Explanation of test-time compute and reasoning models.
    23:05 Using AI agents in cybersecurity and risk management applications.

    You Might Also Like:
    Imbue CEO Kanjun Qiu on Transforming AI Agents Into Personal Collaborators: Kanjun Qiu, CEO of Imbue, discusses the emerging era of personal AI agents, drawing a parallel to the PC revolution and explaining how modern AI systems are evolving to enhance user capabilities through collaboration.
    Telenor's Kaaren Hilsen on Launching Norway's First AI Factory: Kaaren Hilsen, chief innovation officer and head of the AI factory at Telenor, highlights Norway's first AI factory, which securely processes sensitive data within the country while promoting data sovereignty and environmental sustainability through green computing initiatives, including a renewable energy-powered data center in Oslo.
    Firsthand's Jon Heller Shares How AI Agents Enhance Consumer Journeys in Retail: Jon Heller of Firsthand explains how the company's AI Brand Agents are boosting retail and digital marketing by personalizing customer experiences and converting marketing interactions into valuable research data.
    0 Comments · 0 Shares · 8 Views
  • NVIDIA GeForce RTX 50 Series Accelerates Adobe Premiere Pro and Media Encoder's 4:2:2 Color Sampling
    blogs.nvidia.com
    Video editing workflows are getting a lot more colorful. Adobe recently announced massive updates to Adobe Premiere Pro (beta) and Adobe Media Encoder, including PC support for 4:2:2 video color editing. The 4:2:2 color format is a game changer for professional video editors, as it retains nearly as much color information as 4:4:4 while greatly reducing file size. This improves color grading and chroma keying (using color information to isolate a specific range of hues) while maximizing efficiency and quality.

    In addition, new NVIDIA GeForce RTX 5090 and 5080 laptops built on the NVIDIA Blackwell architecture are out now, accelerating 4:2:2 and advanced AI-powered features across video-editing workflows. Adobe and other industry partners are attending NAB Show, a premier gathering of over 100,000 leaders in the broadcast, media and entertainment industries, running April 5-9 in Las Vegas. Professionals in these fields will come together for education, networking and exploring the latest technologies and trends.

    Shed Some Color on 4:2:2
    Consumer cameras that are limited to 4:2:0 color compression capture a limited amount of color information. 4:2:0 is acceptable for video playback on browsers, but professional video editors often rely on cameras that capture 4:2:2 color depth with precise color accuracy to ensure higher color fidelity. Adobe Premiere Pro's beta with 4:2:2 means video data can now provide double the color information with just a 1.3x increase in raw file size over 4:2:0.
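The "double the color information, ~1.3x the raw size" figures fall straight out of how chroma subsampling counts samples. A minimal sketch of that arithmetic (illustrative only, not an Adobe or NVIDIA API):

```python
# Raw data-rate math behind Y'CbCr chroma subsampling. For each 2x2 block
# of pixels, a frame stores 4 luma samples plus a number of chroma (Cb+Cr)
# sample pairs determined by the subsampling scheme.
def samples_per_2x2(scheme: str) -> int:
    # Chroma pairs per 2x2 block: 4:4:4 keeps all 4, 4:2:2 halves
    # horizontally (2), 4:2:0 halves in both directions (1).
    chroma_pairs = {"4:4:4": 4, "4:2:2": 2, "4:2:0": 1}[scheme]
    return 4 + 2 * chroma_pairs  # 4 luma + (Cb + Cr) samples

def relative_size(scheme: str, baseline: str = "4:2:0") -> float:
    """Raw frame size relative to the baseline scheme, same bit depth."""
    return samples_per_2x2(scheme) / samples_per_2x2(baseline)

print(relative_size("4:2:2"))  # 8/6, i.e. the ~1.3x figure in the post
print(relative_size("4:4:4"))  # 12/6 = 2.0: full 4:4:4 doubles raw size
```

Note the chroma sample count itself doubles from 4:2:0 (2 per block) to 4:2:2 (4 per block), which is the "double the color information" claim; the total grows only by 8/6 because the luma samples are unchanged.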
    This unlocks several key benefits within professional video-production workflows:
    Increased Color Accuracy: 10-bit 4:2:2 retains more color information compared with 8-bit 4:2:0, leading to more accurate color representation and better color grading results.
    More Flexibility: The extra color data allows for increased flexibility during color correction and grading, enabling more nuanced adjustments and corrections.
    Improved Keying: 4:2:2 is particularly beneficial for keying, including green screening, as it enables cleaner, more accurate extraction of the subject from the background, as well as cleaner edges of small keyed objects like hair.
    Smaller File Sizes: Compared with 4:4:4, 4:2:2 reduces file sizes without significantly impacting picture quality, offering an optimal balance between quality and storage.
    Combining 4:2:2 support with NVIDIA hardware increases creative possibilities.

    Advanced Video Editing
    Prosumer-grade cameras from most major brands support HEVC and H.264 10-bit 4:2:2 formats to deliver superior image quality, manageable file sizes and the flexibility needed for professional video production. GeForce RTX 50 Series GPUs paired with Microsoft Windows 11 come with GPU-powered decode acceleration for HEVC and H.264 10-bit 4:2:2 formats. GPU-powered decode enables faster-than-real-time playback without stuttering, the ability to work with original camera media instead of proxies, smoother timeline responsiveness and reduced CPU load, freeing system resources for multi-app workflows and creative tasks. RTX 50 Series 4:2:2 hardware can decode up to six 4K 60-frames-per-second video sources on an RTX 5090-enabled studio PC, enabling smooth multi-camera video-editing workflows in Adobe Premiere Pro. Video exports are also accelerated by NVIDIA's ninth-generation encoder and sixth-generation decoder. In GeForce RTX 50 Series GPUs, the ninth-generation NVIDIA video encoder, NVENC, offers an 8% BD-BR improvement in video encoding efficiency when exporting to HEVC in Premiere Pro.

    Adobe AI Accelerated
    Adobe delivers an impressive array of advanced AI features for idea generation, enabling streamlined processes, improved productivity and opportunities to explore new artistic avenues, all accelerated by NVIDIA RTX GPUs. For example, Adobe Media Intelligence, a feature in Premiere Pro (beta) and After Effects (beta), uses AI to analyze footage and apply semantic tags to clips. This lets users more easily and quickly find specific footage by describing its content, including objects, locations, camera angles and even transcribed spoken words. Media Intelligence runs 30% faster on the GeForce RTX 5090 Laptop GPU compared with the GeForce RTX 4090 Laptop GPU. In addition, the Enhance Speech feature in Premiere Pro (beta) improves the quality of recorded speech by filtering out unwanted noise, making the audio sound clearer and more professional.
    Enhance Speech runs 7x faster on GeForce RTX 5090 Laptop GPUs compared with the MacBook Pro M4 Max. Visit Adobe's Premiere Pro page to download a free trial of the beta and explore the slew of AI-powered features across the Adobe Creative Cloud and Substance 3D apps.

    Unleash (AI)nfinite Possibilities
    GeForce RTX 5090 and 5080 Series laptops deliver the largest-ever generational leap in portable performance for creating, gaming and all things AI. They can run creative generative AI models such as Flux up to 2x faster in a smaller memory footprint, compared with the previous generation. The previously mentioned ninth-generation NVIDIA encoders elevate video editing and livestreaming workflows, and the laptops come with NVIDIA DLSS 4 technology and up to 24GB of VRAM to tackle massive 3D projects. NVIDIA Max-Q hardware technologies use AI to optimize every aspect of a laptop (the GPU, CPU, memory, thermals, software, display and more) to deliver incredible performance and battery life in thin and quiet devices. All GeForce RTX 50 Series laptops include NVIDIA Studio platform optimizations, with over 130 GPU-accelerated content creation apps and exclusive Studio tools, including NVIDIA Studio Drivers, tested extensively to enhance performance and maximize stability in popular creative apps. The game-changing NVIDIA GeForce RTX 5090 and 5080 GPU laptops are available now.

    Adobe will participate in the Creator Lab at NAB Show, offering hands-on training for editors to elevate their skills with Adobe tools. Attend a 30-minute session and try out Puget Systems laptops equipped with GeForce RTX 5080 Laptop GPUs to experience blazing-fast performance and demo new generative AI features. Use NVIDIA's product finder to explore available GeForce RTX 50 Series laptops with complete specifications.

    New creative app updates and optimizations are powered by the NVIDIA Studio platform. Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. See notice regarding software product information.
    0 Comments · 0 Shares · 9 Views
  • Speed Demon: NVIDIA Blackwell Takes Pole Position in Latest MLPerf Inference Results
    blogs.nvidia.com
    In the latest MLPerf Inference v5.0 benchmarks, which reflect some of the most challenging inference scenarios, the NVIDIA Blackwell platform set records and marked NVIDIA's first MLPerf submission using the NVIDIA GB200 NVL72 system, a rack-scale solution designed for AI reasoning.

    Delivering on the promise of cutting-edge AI takes a new kind of compute infrastructure, called AI factories. Unlike traditional data centers, AI factories do more than store and process data: they manufacture intelligence at scale by transforming raw data into real-time insights. The goal for AI factories is simple: deliver accurate answers to queries quickly, at the lowest cost and to as many users as possible.

    The complexity of pulling this off is significant and takes place behind the scenes. As AI models grow to billions and trillions of parameters to deliver smarter replies, the compute required to generate each token increases. This requirement reduces the number of tokens that an AI factory can generate and increases cost per token. Keeping inference throughput high and cost per token low requires rapid innovation across every layer of the technology stack, spanning silicon, network systems and software.

    The latest updates to MLPerf Inference, a peer-reviewed industry benchmark of inference performance, include the addition of Llama 3.1 405B, one of the largest and most challenging-to-run open-weight models. The new Llama 2 70B Interactive benchmark features much stricter latency requirements compared with the original Llama 2 70B benchmark, better reflecting the constraints of production deployments in delivering the best possible user experiences. In addition to the Blackwell platform, the NVIDIA Hopper platform demonstrated exceptional performance across the board, with performance increasing significantly over the last year on Llama 2 70B thanks to full-stack optimizations.

    NVIDIA Blackwell Sets New Records
    The GB200 NVL72 system, connecting 72 NVIDIA Blackwell GPUs to act as a single, massive GPU, delivered up to 30x higher throughput on the Llama 3.1 405B benchmark over the NVIDIA H200 NVL8 submission this round. This feat was achieved through more than triple the performance per GPU and a 9x larger NVIDIA NVLink interconnect domain. While many companies run MLPerf benchmarks on their hardware to gauge performance, only NVIDIA and its partners submitted and published results on the Llama 3.1 405B benchmark.

    Production inference deployments often have latency constraints on two key metrics. The first is time to first token (TTFT), or how long it takes for a user to begin seeing a response to a query given to a large language model. The second is time per output token (TPOT), or how quickly tokens are delivered to the user. The new Llama 2 70B Interactive benchmark has a 5x shorter TPOT and 4.4x lower TTFT, modeling a more responsive user experience.
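TTFT and TPOT combine into a simple end-to-end latency model for a streamed response: the first token lands after TTFT, and each subsequent token adds one TPOT. A minimal sketch, with made-up numbers for illustration only (not MLPerf results):

```python
# How TTFT and TPOT combine into the latency a user experiences when an
# LLM streams its answer token by token.
def response_latency_s(ttft_s: float, tpot_s: float, output_tokens: int) -> float:
    # First token arrives after TTFT; every later token adds one TPOT.
    return ttft_s + tpot_s * max(output_tokens - 1, 0)

# Hypothetical figures showing why tightening both constraints (as the
# Llama 2 70B Interactive benchmark does: 5x shorter TPOT, 4.4x lower
# TTFT) shrinks total latency for the same answer length.
baseline = response_latency_s(ttft_s=4.4, tpot_s=0.10, output_tokens=500)
interactive = response_latency_s(ttft_s=1.0, tpot_s=0.02, output_tokens=500)
print(baseline, interactive)
```

The same model shows why per-token throughput (1/TPOT) is the lever for long answers, while TTFT dominates the feel of short, chat-style replies.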
    On this test, NVIDIA's submission using an NVIDIA DGX B200 system with eight Blackwell GPUs tripled performance over eight NVIDIA H200 GPUs, setting a high bar for this more challenging version of the Llama 2 70B benchmark. Combining the Blackwell architecture and its optimized software stack delivers new levels of inference performance, paving the way for AI factories to deliver higher intelligence, increased throughput and faster token rates.

    NVIDIA Hopper AI Factory Value Continues Increasing
    The NVIDIA Hopper architecture, introduced in 2022, powers many of today's AI inference factories and continues to power model training. Through ongoing software optimization, NVIDIA increases the throughput of Hopper-based AI factories, leading to greater value. On the Llama 2 70B benchmark, first introduced a year ago in MLPerf Inference v4.0, H100 GPU throughput has increased by 1.5x. The H200 GPU, based on the same Hopper architecture with larger and faster GPU memory, extends that increase to 1.6x. Hopper also ran every benchmark, including the newly added Llama 3.1 405B, Llama 2 70B Interactive and graph neural network tests. This versatility means Hopper can run a wide range of workloads and keep pace as models and usage scenarios grow more challenging.

    It Takes an Ecosystem
    This MLPerf round, 15 partners submitted stellar results on the NVIDIA platform, including ASUS, Cisco, CoreWeave, Dell Technologies, Fujitsu, Giga Computing, Google Cloud, Hewlett Packard Enterprise, Lambda, Lenovo, Oracle Cloud Infrastructure, Quanta Cloud Technology, Supermicro, Sustainable Metal Cloud and VMware. The breadth of submissions reflects the reach of the NVIDIA platform, which is available across all cloud service providers and server makers worldwide. MLCommons' work to continuously evolve the MLPerf Inference benchmark suite, keeping pace with the latest AI developments and providing the ecosystem with rigorous, peer-reviewed performance data, is vital to helping IT decision makers select optimal AI infrastructure.

    Learn more about MLPerf. Images and video taken at an Equinix data center in Silicon Valley.
    0 Comments · 0 Shares · 9 Views
  • RT NVIDIA AI PC: AI at your fingertips. NVIDIA NIM microservices arrive on RTX AI PCs & workstations making AI tool creation easier than ever. Plus...
    x.com
    RT NVIDIA AI PC: AI at your fingertips. NVIDIA NIM microservices arrive on RTX AI PCs & workstations, making AI tool creation easier than ever. Plus, Project G-Assist System Assistant expands PC AI abilities with a custom plugin builder. #RTXAIGarage: https://nvda.ws/4iZLlRn
    0 Comments · 0 Shares · 29 Views
  • Bubbles that look real enough to pop! Master the fine art of realistic 3D bubbles in Part 4 of our Studio Sessions tutorial series hosted by Alek...
    x.com
    Bubbles that look real enough to pop! Master the fine art of realistic 3D bubbles in Part 4 of our Studio Sessions tutorial series hosted by Aleksandr Eskin. Watch now: https://nvda.ws/4iWy8c2
    0 Comments · 0 Shares · 30 Views
  • Every castle holds a story. @OOsteras "Edrugarth" is an awe-inspiring blend of medieval fantasy and digital artistry. Share your own artwork ...
    x.com
    Every castle holds a story. @OOsteras "Edrugarth" is an awe-inspiring blend of medieval fantasy and digital artistry. Share your own artwork made with an NVIDIA GPU using #StudioShare for a chance to be featured!
    0 Comments · 0 Shares · 25 Views
  • Which part of your creative process do you enjoy most: concept, creation, or final polish?
    x.com
    Which part of your creative process do you enjoy most: concept, creation, or final polish?
    0 Comments · 0 Shares · 26 Views
  • Industrial Ecosystem Adopts Mega NVIDIA Omniverse Blueprint to Train Physical AI in Digital Twins
    blogs.nvidia.com
    Advances in physical AI are enabling organizations to embrace embodied AI across their operations, bringing unprecedented intelligence, automation and productivity to the world's factories, warehouses and industrial facilities. Humanoid robots can work alongside human teams, autonomous mobile robots (AMRs) can navigate complex warehouse environments, and intelligent cameras and visual AI agents can monitor and optimize entire facilities. In these ways, physical AI is becoming integral to today's industrial operations. Helping industrial enterprises accelerate the development, testing and deployment of physical AI, the Mega NVIDIA Omniverse Blueprint for testing multi-robot fleets in digital twins is now available in preview on build.nvidia.com.

    At Hannover Messe, a trade show on industrial development running through April 4 in Germany, manufacturing, warehousing and supply chain leaders such as Accenture and Schaeffler are showcasing their adoption of the blueprint to simulate Digit, a humanoid robot from Agility Robotics, and discussing how they use industrial AI and digital twins to optimize facility layouts, material flow and collaboration between humans and robots inside complex production environments. In addition, NVIDIA ecosystem partners including Delta Electronics, Rockwell Automation and Siemens are announcing further integrations with NVIDIA Omniverse and NVIDIA AI technologies at the event.

    Digital Twins: the Training Ground for Physical AI
    Industrial facility digital twins are physically accurate virtual replicas of real-world facilities that serve as critical testing grounds for simulating and validating physical AI, and for testing how robots and autonomous fleets interact, collaborate and tackle complex tasks before deployment. Developers can use NVIDIA Omniverse platform technologies and the Universal Scene Description (OpenUSD) framework to develop digital twins of their facilities and processes. This simulation-first approach dramatically accelerates development cycles while reducing the costs and risks associated with real-world testing.

    Built for a Diversity of Robots and AI Agents
    The Mega blueprint equips industrial enterprises with a reference workflow for combining sensor simulation and synthetic data generation to simulate complex human-robot interactions and verify the performance of autonomous systems in industrial digital twins. Enterprises can use Mega to test various robot brains and policies at scale for mobility, navigation, dexterity and spatial reasoning. This enables fleets comprising different types of robots to work together as a coordinated system.

    As robot brains execute their missions in simulation, they perceive the results of their actions through sensor simulation and plan their next action. This cycle continues until the policies are refined and ready for deployment. Once validated, these policies are deployed to real robots, which continue to learn from their environment, sending sensor information back through the entire loop and creating a continuous learning and improvement cycle.

    Transforming Industrial Operations With Visual AI Agents
    In addition to AMRs and humanoid robots, advanced visual AI agents extract information from live and recorded video data, enabling new levels of intelligence and automation. These visual AI agents bring real-time contextual awareness to robots and help improve worker safety, maintain warehouse compliance, support visual inspection and maximize space utilization. To support developers building visual AI agents, which can be integrated with the Mega blueprint, NVIDIA last year announced an AI Blueprint for video search and summarization (VSS).
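The sense-plan-act cycle described above can be sketched as a tiny closed loop. This is a toy illustration of the concept only; SimWorld and policy are invented stand-ins, not the Mega blueprint's actual APIs:

```python
# Minimal simulate-perceive-plan loop: a "robot brain" (policy) acts in a
# simulated world, observes the result through simulated sensing, and
# plans its next action until the mission is complete.
from dataclasses import dataclass

@dataclass
class SimWorld:
    robot_pos: int = 0
    goal: int = 5

    def sense(self) -> int:
        # Sensor simulation: the robot observes its signed distance to the goal.
        return self.goal - self.robot_pos

    def step(self, action: int) -> None:
        # The simulated world applies the robot's action.
        self.robot_pos += action

def policy(observation: int) -> int:
    # A trivial stand-in brain: move one step toward the goal.
    return 0 if observation == 0 else (1 if observation > 0 else -1)

world = SimWorld()
for _ in range(100):      # closed loop: sense -> plan -> act
    obs = world.sense()
    if obs == 0:          # mission complete; policy validated in sim
        break
    world.step(policy(obs))

print(world.robot_pos)    # robot reached the goal in simulation
```

In the real workflow the same loop structure holds, but the world is a physically accurate OpenUSD digital twin, sensing is full sensor simulation, and the validated policy is then deployed to physical robots that keep feeding observations back into the loop.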
    At Hannover Messe, leading partners are featuring how they use the VSS blueprint to improve productivity and operational efficiency.

    Accelerating Industrial Digitalization
    The industrial world is now experiencing its software-defined moment, with visual AI agents and digital twins serving as the training ground for physical AI. Join NVIDIA and its partners at Hannover Messe to discover how AI agents and real-time simulation, powered by NVIDIA's Three Computer Solution, are reshaping industrial workflows and driving innovation, automation and efficiency in manufacturing.

    Read the technical blog to learn more about the Mega blueprint for industrial robot fleets. See the blueprint in action on the interactive demo page. Stay up to date by subscribing to NVIDIA news, joining the Omniverse community and following NVIDIA Omniverse on Instagram, LinkedIn, Medium and X. Explore the new self-paced Learn OpenUSD training curriculum, which includes free NVIDIA Deep Learning Institute courses for 3D practitioners and developers. See notice regarding software product information.
    0 Comments · 0 Shares · 31 Views
  • The Dream Life Awaits: Play inZOI on GeForce NOW Anytime, Anywhere
    blogs.nvidia.com
    A new resident is moving into the cloud KRAFTONs inZOI joins the 2,000+ games in the GeForce NOW cloud gaming library.Plus, members can get ready for an exclusive sneak peek as the Sunderfolk First Look Demo comes to the cloud. The demo is exclusively available for players on GeForce NOW until April 7, including Performance and Ultimate members as well as free users.And explore the world of Atomfall part of 12 games joining the cloud this week.Cloud of PossibilitiesLive the life of your dreams in the cloud.In inZOI a groundbreaking life simulation game by Krafton that pushes the genres boundaries take on the role of an intern at AR COMPANY, managing virtual beings called Zois in a simulated city.The game features over 400 mental elements influencing Zois behaviors. Experience the games dynamic weather system, open-world environments inspired by real locations and cinematic cut scenes for key life events and even create in-game objects. inZOI lets players craft unique stories and live out their dreams in a meticulously designed virtual world.Dive into the world of Zois without the need for high-end hardware. Members can manage their virtual homes, customize characters and explore the games dynamic environments from various devices, streaming its detailed graphics and complex simulations with ease.A Magical GatewaySunderfolks First Look Demo has arrived on GeForce NOW, offering a tantalizing look into the magical realm of the Sunderlands. Designed as a TV-first experience, this shared-turn-based tactical role-playing game (RPG) enables using a mobile phone as the gameplay controller. Up to four players can gather around the big screen and embark on a journey filled with strategic battles.This second-screen approach keeps players engaged in real time, adding new layers of immersion. 
With all six unique character classes unlocked from the start, players can experience the early hours of the game, experimenting with different team compositions and tactics to overcome the challenges that await.Let the magic begin.Accessing the demo is a breeze head to the GeForce NOW app, select Sunderfolk and jump right in. Explore the Sunderlands, engage in flexible turn-based combat and help rebuild the village of Arden to get a taste of the full games depth and camaraderie.Gather the gaming squad, grab a phone and prepare to write a completely new legend in this RPG adventure. The First Look Demo is only available on GeForce NOW, where members can enjoy high-quality graphics and seamless gameplay on their phones and tablets, along with the innovative mobile-as-controller mechanic that makes Sunderfolks couch co-op experience so engaging.Epic Adventures AwaitEnter a world where danger lurks in every shadow.Blending folk horror and intense combat, Atomfall is a survival-action game set in an alternate 1960s Britain, where the Windscale nuclear disaster has left Northern England a radioactive wasteland. Players explore eerie open zones filled with mutated creatures, cultists and Cold War mysteries while scavenging resources, crafting weapons and uncovering the truth behind the disaster. 
GeForce NOW members can stream it today across their devices of choice. Look for the following games available to stream in the cloud this week:

Sunderfolk First Look Demo (New release, March 25)
Atomfall (New release on Steam and Xbox, available on PC Game Pass, March 27)
The First Berserker: Khazan (New release on Steam, March 27)
inZOI (New release on Steam, March 27)
Beholder (Epic Games Store)
Bus Simulator 21 (Epic Games Store)
Galacticare (Xbox, available on PC Game Pass)
Half-Life 2 RTX Demo (Steam)
The Legend of Heroes: Trails through Daybreak II (Steam)
One Lonely Outpost (Xbox, available on PC Game Pass)
Psychonauts (Xbox, available on PC Game Pass)
Undying (Epic Games Store)

What are you planning to play this weekend? Let us know on X or in the comments below. "Which game do you think deserves a sequel?" NVIDIA GeForce NOW (@NVIDIAGFN) March 26, 2025
  • Buzz Solutions Uses Vision AI to Supercharge the Electric Grid
    blogs.nvidia.com
    The reliability of the electric grid is critical. From handling demand surges and evolving power needs to preventing infrastructure failures that can cause wildfires, utility companies have a lot to keep tabs on. Buzz Solutions, a member of the NVIDIA Inception program for cutting-edge startups, is helping by using AI to improve how utilities monitor and maintain their infrastructure. Kaitlyn Albertoli, CEO and cofounder of Buzz Solutions, joined the AI Podcast to explain how the company's vision AI technology helps utilities spot potential problems faster. Buzz Solutions helps utility companies analyze the massive amounts of inspection data collected by drones and helicopters. The company's proprietary machine learning algorithms identify potential issues, ranging from broken and rusted components to encroaching vegetation and unwelcome wildlife visits, before they cause outages or wildfires. To help address substation issues, Buzz Solutions built PowerGUARD, a container-based application pipeline that uses AI to analyze video streams from substation cameras in real time. It detects security, safety, fire, smoke and equipment issues, annotates the video, then sends alerts via email or to a dashboard. PowerGUARD uses the NVIDIA DeepStream software development kit for processing and inference of video streams used in real-time video analytics. DeepStream runs within the NVIDIA Metropolis framework on the NVIDIA Jetson edge AI platform or on cloud-based virtual machines to improve performance, reduce costs and save time. Albertoli believes AI is just getting started in the utility industry, as it enables workers to take action rather than spend months reviewing images manually.
"We are just at the tip of the iceberg of seeing AI enter into the energy sector and start to provide real value," she said.

Time Stamps

05:15: How Buzz Solutions saw an opportunity in the massive amounts of inspection data utility companies were collecting but not analyzing.
12:25: The importance of modernizing energy infrastructure with actionable intelligence.
16:27: How AI identifies critical risks like rusted components, vegetation encroachment and sparking issues before they cause wildfires.
20:00: Buzz Solutions' innovative use of synthetic data to train algorithms for rare events.

You Might Also Like

Telenor Builds Norway's First AI Factory, Offering Sustainable and Sovereign Data Processing
Telenor opened Norway's first AI factory in November 2024, enabling organizations to process sensitive data securely on Norwegian soil while prioritizing environmental responsibility. Telenor's Chief Innovation Officer and Head of the AI Factory Kaaren Hilsen discusses the AI factory's rapid development, going from concept to reality in under a year.

NVIDIA's Josh Parker on How AI and Accelerated Computing Drive Sustainability
AI isn't just about building smarter machines. It's about building a greener world. AI and accelerated computing are helping industries tackle some of the world's toughest environmental challenges. Joshua Parker, senior director of corporate sustainability at NVIDIA, explains how these technologies are powering a new era of energy efficiency.

Currents of Change: ITIF's Daniel Castro on Energy-Efficient AI and Climate Change
AI is everywhere. So, too, are concerns about advanced technology's environmental impact. Daniel Castro, vice president of the Information Technology and Innovation Foundation and director of its Center for Data Innovation, discusses his AI energy use report that addresses misconceptions about AI's energy consumption.
He also talks about the need for continued development of energy-efficient technology.

Subscribe to the AI Podcast

Get the AI Podcast through Amazon Music, Apple Podcasts, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, SoundCloud, Spotify, Stitcher and TuneIn.
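The PowerGUARD flow described above (analyze camera streams, flag issues, send alerts) can be sketched in plain Python. This is a simplified illustration, not Buzz Solutions' implementation: the detection labels, confidence threshold and alert format are all assumptions, and the real pipeline runs inference with NVIDIA DeepStream rather than hand-written rules.

```python
from dataclasses import dataclass

# Hypothetical detection record, standing in for per-frame inference output.
@dataclass
class Detection:
    label: str         # e.g. "smoke", "rust", "vegetation"
    confidence: float  # model confidence in [0, 1]
    camera_id: str

# Labels treated as alert-worthy in this sketch (assumed, not PowerGUARD's real taxonomy).
ALERT_LABELS = {"fire", "smoke", "intrusion", "equipment_fault"}

def triage(detections, threshold=0.8):
    """Keep only high-confidence detections of alert-worthy classes
    and format them as dashboard/email alert strings."""
    alerts = []
    for d in detections:
        if d.label in ALERT_LABELS and d.confidence >= threshold:
            alerts.append(f"[{d.camera_id}] {d.label} detected ({d.confidence:.0%})")
    return alerts

frame_detections = [
    Detection("smoke", 0.93, "substation-7"),
    Detection("vegetation", 0.95, "substation-7"),  # monitored, but not alert-worthy here
    Detection("fire", 0.41, "substation-2"),        # below threshold, suppressed
]
print(triage(frame_detections))
```

The point of the sketch is the triage step: raw detections are filtered by class and confidence before anyone is paged, which is what turns months of manual image review into actionable alerts.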
  • NVIDIA NIM Microservices Now Available to Streamline Agentic Workflows on RTX AI PCs and Workstations
    blogs.nvidia.com
    Generative AI is unlocking new capabilities for PCs and workstations, including game assistants, enhanced content-creation and productivity tools, and more. NVIDIA NIM microservices, available now, and AI Blueprints, coming in the following weeks, accelerate AI development and improve its accessibility. Announced at the CES trade show in January, NVIDIA NIM provides prepackaged, state-of-the-art AI models optimized for the NVIDIA RTX platform, including the NVIDIA GeForce RTX 50 Series and, now, the new NVIDIA Blackwell RTX PRO GPUs. The microservices are easy to download and run. They span the top modalities for PC development and are compatible with top ecosystem applications and tools. The experimental System Assistant feature of Project G-Assist was also released today. Project G-Assist showcases how AI assistants can enhance apps and games. The System Assistant allows users to run real-time diagnostics, get recommendations on performance optimizations, or control system software and peripherals, all via simple voice or text commands. Developers and enthusiasts can extend its capabilities with a simple plug-in architecture and new plug-in builder. Amid a pivotal moment in computing, where groundbreaking AI models and a global developer community are driving an explosion in AI-powered tools and workflows, NIM microservices, AI Blueprints and G-Assist are helping bring key innovations to PCs. This RTX AI Garage blog series will continue to deliver updates, insights and resources to help developers and enthusiasts build the next wave of AI on RTX AI PCs and workstations.

Ready, Set, NIM!

Though the pace of innovation with AI is incredible, it can still be difficult for the PC developer community to get started with the technology. Bringing AI models from research to the PC requires curation of model variants, adaptation to manage all of the input and output data, and quantization to optimize resource usage.
In addition, models must be converted to work with optimized inference backend software and connected to new AI application programming interfaces (APIs). This takes substantial effort, which can slow AI adoption. NVIDIA NIM microservices help solve this issue by providing prepackaged, optimized, easily downloadable AI models that connect to industry-standard APIs. They're optimized for performance on RTX AI PCs and workstations, and include the top AI models from the community, as well as models developed by NVIDIA. NIM microservices support a range of AI applications, including large language models (LLMs), vision language models, image generation, speech processing, retrieval-augmented generation (RAG)-based search, PDF extraction and computer vision. Ten NIM microservices for RTX are available, supporting a range of applications, including language and image generation, computer vision, speech AI and more. Get started with these NIM microservices today:

Language and Reasoning: Deepseek-R1-distill-llama-8B, Mistral-nemo-12B-instruct, Llama3.1-8B-instruct
Image Generation: Flux.dev
Audio: Riva Parakeet-ctc-0.6B-asr, Maxine Studio Voice
RAG: Llama-3.2-NV-EmbedQA-1B-v2
Computer Vision and Understanding: NV-CLIP, PaddleOCR, Yolo-X-v1

NIM microservices are also available through top AI ecosystem tools and frameworks. For AI enthusiasts, AnythingLLM and ChatRTX now support NIM, making it easy to chat with LLMs and AI agents through a simple, user-friendly interface. With these tools, users can create personalized AI assistants and integrate their own documents and data, helping automate tasks and enhance productivity. For developers looking to build, test and integrate AI into their applications, FlowiseAI and Langflow now support NIM and offer low- and no-code solutions with visual interfaces to design AI workflows with minimal coding expertise. Support for ComfyUI is coming soon.
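To make the "industry-standard APIs" point concrete: LLM NIM microservices expose an OpenAI-style chat-completions endpoint, so a locally running NIM can be called like any other chat API. The sketch below only builds the request without sending it; the localhost URL, port and model name are assumptions that depend on which NIM container is launched and how.

```python
import json
import urllib.request

# Assumed local endpoint; the actual host/port depends on how the NIM container is started.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_request(prompt, model="meta/llama-3.1-8b-instruct"):
    """Build (but do not send) an OpenAI-style chat-completions request
    for a locally hosted NIM endpoint. The model identifier is illustrative."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 256,
    }
    return urllib.request.Request(
        NIM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request("Summarize what a NIM microservice is in one sentence.")
print(req.full_url)
```

Because the schema matches what most LLM tooling already speaks, ecosystem apps such as those mentioned below can point at a NIM endpoint without custom integration code.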
With these tools, developers can easily create complex AI applications like chatbots, image generators and data analysis systems. In addition, Microsoft VS Code AI Toolkit, CrewAI and Langchain now support NIM and provide advanced capabilities for integrating the microservices into application code, helping ensure seamless integration and optimization. Visit the NVIDIA technical blog and build.nvidia.com to get started.

NVIDIA AI Blueprints Will Offer Pre-Built Workflows

NVIDIA AI Blueprints give AI developers a head start in building generative AI workflows with NVIDIA NIM microservices. Blueprints are ready-to-use, extensible reference samples that bundle everything needed (source code, sample data, documentation and a demo app) to create and customize advanced AI workflows that run locally. Developers can modify and extend AI Blueprints to tweak their behavior, use different models or implement completely new functionality. The PDF to podcast AI Blueprint, coming soon, will transform documents into audio content so users can learn on the go. By extracting text, images and tables from a PDF, the workflow uses AI to generate an informative podcast. For deeper dives into topics, users can then have an interactive discussion with the AI-powered podcast hosts. The AI Blueprint for 3D-guided generative AI will give artists finer control over image generation. While AI can generate amazing images from simple text prompts, controlling image composition using only words can be challenging. With this blueprint, creators can use simple 3D objects laid out in a 3D renderer like Blender to guide AI image generation. The artist can create 3D assets by hand or generate them using AI, place them in the scene and set the 3D viewport camera.
Then, a prepackaged workflow powered by the FLUX NIM microservice will use the current composition to generate high-quality images that match the 3D scene.

NVIDIA NIM on RTX With Windows Subsystem for Linux

One of the key technologies that enables NIM microservices to run on PCs is Windows Subsystem for Linux (WSL). Microsoft and NVIDIA collaborated to bring CUDA and RTX acceleration to WSL, making it possible to run optimized, containerized microservices on Windows. This allows the same NIM microservice to run anywhere, from PCs and workstations to the data center and cloud. Get started with NVIDIA NIM on RTX AI PCs at build.nvidia.com.

Project G-Assist Expands PC AI Features With Custom Plug-Ins

As part of Project G-Assist, an experimental version of the System Assistant feature for GeForce RTX desktop users is now available via the NVIDIA App, with laptop support coming soon. G-Assist helps users control a broad range of PC settings, including optimizing game and system settings, charting frame rates and other key performance statistics, and controlling select peripherals' settings such as lighting, all via basic voice or text commands. G-Assist is built on NVIDIA ACE, the same AI technology suite game developers use to breathe life into non-player characters. Unlike AI tools that use massive cloud-hosted AI models requiring online access and paid subscriptions, G-Assist runs locally on a GeForce RTX GPU. This means it's responsive, free and can run without an internet connection. Manufacturers and software providers are already using ACE to create custom AI assistants like G-Assist, including MSI's AI Robot engine, the Streamlabs Intelligent AI Assistant and upcoming capabilities in HP's Omen Gaming hub. G-Assist was built for community-driven expansion. Get started with the NVIDIA GitHub repository, which includes samples and instructions for creating plug-ins that add new functionality.
Developers can define functions in simple JSON formats and drop configuration files into a designated directory, allowing G-Assist to automatically load and interpret them. Developers can even submit plug-ins to NVIDIA for review and potential inclusion. Currently available sample plug-ins include Spotify, to enable hands-free music and volume control, and Google Gemini, allowing G-Assist to invoke a much larger cloud-based AI for more complex conversations, brainstorming sessions and web searches using a free Google AI Studio API key. In the clip below, you'll see G-Assist ask Gemini which Legend to pick in Apex Legends when solo queueing, and whether it's wise to jump into Nightmare mode at level 25 in Diablo IV. For even more customization, follow the instructions in the GitHub repository to generate G-Assist plug-ins using a ChatGPT-based Plug-in Builder. With this tool, users can write and export code, then integrate it into G-Assist, enabling quick, AI-assisted functionality that responds to text and voice commands. Watch how a developer used the Plug-in Builder to create a Twitch plug-in for G-Assist that checks whether a streamer is live. More details on how to build, share and load plug-ins are available in the NVIDIA GitHub repository. Check out the G-Assist article for system requirements and additional information.

Build, Create, Innovate

NVIDIA NIM microservices for RTX are available at build.nvidia.com, providing developers and AI enthusiasts with powerful, ready-to-use tools for building AI applications. Download Project G-Assist through the NVIDIA App's Home tab, in the Discovery section. G-Assist currently supports GeForce RTX desktop GPUs, as well as a variety of voice and text commands in the English language. Future updates will add support for GeForce RTX Laptop GPUs, new and enhanced G-Assist capabilities, as well as support for additional languages.
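As described above, G-Assist plug-ins are defined with simple JSON function definitions dropped into a designated directory. The official schema lives in the NVIDIA GitHub repository; the sketch below is only an illustration of the general shape such a manifest might take, and every field name in it is a hypothetical assumption, not the documented format.

```python
import json

# Hypothetical plug-in manifest; field names are illustrative assumptions,
# not the official G-Assist schema (see NVIDIA's GitHub repository for that).
manifest = {
    "name": "weather",
    "description": "Answers questions about current weather conditions",
    "functions": [
        {
            "name": "get_weather",
            "description": "Look up current weather for a city",
            "parameters": {
                "city": {"type": "string", "description": "City name"}
            },
        }
    ],
}

# Plug-ins are distributed as JSON config files dropped into a designated directory,
# which the assistant scans and loads automatically.
manifest_json = json.dumps(manifest, indent=2)
print(manifest_json)
```

The design idea is declarative discovery: because each function is described in data rather than code, the assistant can map a voice or text command to a plug-in function without the plug-in author touching the assistant itself.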
Press Alt+G after installation to activate G-Assist. Each week, RTX AI Garage features community-driven AI innovations and content for those looking to learn more about NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X, and stay informed by subscribing to the RTX AI PC newsletter. Follow NVIDIA Workstation on LinkedIn and X. See notice regarding software product information.
  • Lights, camera, render! In Part 3 of our Studio Sessions tutorial series, Aleksandr Eskin takes the next step in his 3D photorealistic dropper ...
    x.com
    Lights, camera, render! In Part 3 of our Studio Sessions tutorial series, Aleksandr Eskin takes the next step in his 3D photorealistic dropper scene workflow with initial rendering. Watch now: https://nvda.ws/4iEzlEQ
  • Multiple monitors or one ultrawide? What's your preference and why?
    x.com
    Multiple monitors or one ultrawide? What's your preference and why?
  • Assassin's Creed Shadows Emerges From the Mist on GeForce NOW
    blogs.nvidia.com
    Time to sharpen the blade. GeForce NOW brings a legendary addition to the cloud: Ubisoft's highly anticipated Assassin's Creed Shadows is now available for members to stream. Plus, dive into the updated version of the iconic Fable Anniversary, part of 11 games joining the cloud this week.

Silent as a Shadow

Take the Leap of Faith from the cloud. Explore 16th-century Japan, uncover conspiracies and shape the destiny of a nation, all from the cloud. Assassin's Creed Shadows unfolds in 1579, during the turbulent Azuchi-Momoyama period of feudal Japan, a time of civil war and cultural exchange. Step into the roles of Naoe, a fictional shinobi assassin and daughter of Fujibayashi Nagato, and Yasuke, a character based on the historical African samurai. Their stories intertwine as they find themselves on opposite sides of a conflict. The game's dynamic stealth system enables players to hide in shadows and use a new Observe mechanic to identify targets, tag enemies and highlight objectives. Yasuke and Naoe each have unique abilities and playstyles: Naoe excels in stealth, equipped with classic Assassin techniques and shinobi skills, while Yasuke offers a more combat-focused approach. Navigate the turbulent Sengoku period on GeForce NOW, and experience the game's breathtaking landscapes and intense combat at up to 4K resolution and 120 frames per second with an Ultimate membership. Every sword clash and sweeping vista is delivered with exceptional smoothness and clarity.

A Classic Reborn

Fable Anniversary revitalizes the original Fable: The Lost Chapters with enhanced graphics, a new save system and Xbox achievements. This action role-playing game invites players to shape their heroes' destinies in the whimsical world of Albion. Make every choice from the cloud. Fable Anniversary weaves an epic tale of destiny and choice, following the journey of a young boy whose life is forever changed when bandits raid his peaceful village of Oakvale.
Recruited to the Heroes' Guild, he embarks on a quest to uncover the truth about his family and confront the mysterious Jack of Blades. Players shape their hero's destiny through a series of moral choices. These decisions influence the story's progression and even manifest physically on the character. Stream the title with a GeForce NOW membership across PCs that may not be game-ready, Macs, mobile devices, and Samsung and LG smart TVs. GeForce NOW transforms these devices into powerful gaming rigs, with up to eight-hour gaming sessions for Ultimate members.

Unleash the Games

Crash, smash, repeat. Wreckfest 2, the highly anticipated sequel by Bugbear Entertainment to the original demolition derby racing game, promises an even more intense and chaotic experience. The game features a range of customizable cars, from muscle cars to novelty vehicles, each with a story to tell. Play around with multiple modes, including traditional racing with physics-driven handling, and explore demolition derby arenas where the goal is to cause maximum destruction. With enhanced multiplayer features, including skills-based matchmaking and split-screen mode, Wreckfest 2 is the ultimate playground for destruction-racing enthusiasts. Look for the following games available to stream in the cloud this week:

Assassin's Creed Shadows (New release on Steam and Ubisoft Connect, March 20)
Wreckfest 2 (New release on Steam, March 20)
Aliens: Dark Descent (Xbox, available on PC Game Pass)
Crime Boss: Rockay City (Epic Games Store)
Eternal Strands (Xbox, available on PC Game Pass)
Fable Anniversary (Steam)
Motor Town: Behind the Wheel (Steam)
Nine Sols (Xbox, available on PC Game Pass)
Quake Live (Steam)
Skydrift Infinity (Epic Games Store)
To the Rescue! (Epic Games Store)

What are you planning to play this weekend? Let us know on X or in the comments below. "If you could go on a vacation to any video game realm, where would you go?" NVIDIA GeForce NOW (@NVIDIAGFN) March 19, 2025
  • EPRI, NVIDIA and Collaborators Launch Open Power AI Consortium to Transform the Future of Energy
    blogs.nvidia.com
    The power and utilities sector keeps the lights on for the world's populations and industries. As the global energy landscape evolves, so must the tools it relies on. To advance the next generation of electricity generation and distribution, many of the industry's members are joining forces through the creation of the Open Power AI Consortium. The consortium includes energy companies, technology companies and researchers developing AI applications to tackle domain-specific challenges, such as adapting to an increased deployment of distributed energy resources and significant load growth on electric grids. Led by independent, nonprofit energy R&D organization EPRI, the consortium aims to spur AI adoption in the power sector through a collaborative effort to build open models using curated, industry-specific data. The initiative was launched today at NVIDIA GTC, a global AI conference taking place through Friday, March 21, in San Jose, California. "Over the next decade, AI has the great potential to revolutionize the power sector by delivering the capability to enhance grid reliability, optimize asset performance, and enable more efficient energy management," said Arshad Mansoor, EPRI's president and CEO.
"With the Open Power AI Consortium, EPRI and its collaborators will lead this transformation, driving innovation toward a more resilient and affordable energy future." As part of the consortium, EPRI, NVIDIA and Articul8, a member of the NVIDIA Inception program for cutting-edge startups, are developing a set of domain-specific, multimodal large language models trained on massive libraries of proprietary energy and electrical engineering data from EPRI that can help utilities streamline operations, boost energy efficiency and improve grid resiliency. The first version of an industry-first open AI model for electric and power systems was developed using hundreds of NVIDIA H100 GPUs and is expected to soon be available in early access as an NVIDIA NIM microservice. "Working with EPRI, we aim to leverage advanced AI tools to address today's unique industry challenges, positioning us at the forefront of innovation and operational excellence," said Vincent Sorgi, CEO of PPL Corporation and EPRI board chair. PPL is a leading U.S. energy company that provides electricity and natural gas to more than 3.6 million customers in Pennsylvania, Kentucky, Rhode Island and Virginia. The Open Power AI Consortium's Executive Advisory Committee includes executives from over 20 energy companies, such as Duke Energy, Pacific Gas & Electric Company and Portland General Electric, as well as leading tech companies such as AWS, Oracle and Microsoft. The consortium plans to further expand its global member base.

Powering Up AI to Energize Operations, Drive Innovation

Global energy consumption is projected to grow by nearly 4% annually through 2027, according to the International Energy Agency.
To support this surge in demand, electricity providers are looking to enhance the resiliency of power infrastructure, balance diverse energy sources and expand the grid's capacity. AI agents trained on thousands of documents specific to this sector, including academic research, industry regulations and standards, and technical documents, can enable utility and energy companies to more quickly assess energy needs and prepare the studies and permits required to improve infrastructure. "We can bring AI to the global power sector in a much more accelerated way by working together to develop foundation models for the industry, and collaborating with the power sector to apply solutions tailored to its unique needs," Mansoor said. Utilities could tap the consortium's model to help accelerate interconnection studies, which analyze the feasibility and potential impact of connecting new generators to the existing electric grid. The process varies by region but can take up to four years to complete. By introducing AI agents that can support the analysis, the consortium aims to cut this timeline down by at least 5x. The AI model could also be used to support the preparation of licenses, permits, environmental studies and utility rate cases, where energy companies seek regulatory approval and public comment on proposed changes to electricity rates. Beyond releasing datasets and models, the consortium also aims to develop a standardized framework of benchmarks to help utilities, researchers and other energy sector stakeholders evaluate the performance and reliability of AI technologies. Learn more about the Open Power AI Consortium online and in EPRI's sessions at GTC:

Accelerate Energy Transformation With Industry Domain AI Models, with Arshad Mansoor, president and CEO of EPRI
Energy Transition: Impact of Generative AI in the Power Ecosystem of Generation, Transmission and Distribution, with Swati Daji, executive vice president and chief financial, risk and operations officer at EPRI

To learn more about advancements in AI across industries, watch the GTC keynote by NVIDIA founder and CEO Jensen Huang. See notice regarding software product information.
  • Innovation to Impact: How NVIDIA Research Fuels Transformative Work in AI, Graphics and Beyond
    blogs.nvidia.com
    The roots of many of NVIDIA's landmark innovations, the foundational technology that powers AI, accelerated computing, real-time ray tracing and seamlessly connected data centers, can be found in the company's research organization: a global team of around 400 experts in fields including computer architecture, generative AI, graphics and robotics. Established in 2006 and led since 2009 by Bill Dally, former chair of Stanford University's computer science department, NVIDIA Research is unique among corporate research organizations, set up with a mission to pursue complex technological challenges while having a profound impact on the company and the world. "We make a deliberate effort to do great research while being relevant to the company," said Dally, chief scientist and senior vice president of NVIDIA Research. "It's easy to do one or the other. It's hard to do both." Dally is among NVIDIA Research leaders sharing the group's innovations at NVIDIA GTC, the premier developer conference at the heart of AI, taking place this week in San Jose, California. While many research organizations may describe their mission as pursuing projects with a longer time horizon than those of a product team, NVIDIA researchers seek out projects with a larger risk horizon and a huge potential payoff if they succeed. "Our mission is to do the right thing for the company. It's not about building a trophy case of best paper awards or a museum of famous researchers," said David Luebke, vice president of graphics research and NVIDIA's first researcher. "We are a small group of people who are privileged to be able to work on ideas that could fail."
"And so it is incumbent upon us to not waste that opportunity and to do our best on projects that, if they succeed, will make a big difference."

Innovating as One Team

One of NVIDIA's core values is "one team," a deep commitment to collaboration that helps researchers work closely with product teams and industry stakeholders to transform their ideas into real-world impact. "Everybody at NVIDIA is incentivized to figure out how to work together because the accelerated computing work that NVIDIA does requires full-stack optimization," said Bryan Catanzaro, vice president of applied deep learning research at NVIDIA. "You can't do that if each piece of technology exists in isolation and everybody's staying in silos. You have to work together as one team to achieve acceleration." When evaluating potential projects, NVIDIA researchers consider whether the challenge is a better fit for a research or product team, whether the work merits publication at a top conference, and whether there's a clear potential benefit to NVIDIA. If they decide to pursue the project, they do so while engaging with key stakeholders. "We work with people to make something real, and often, in the process, we discover that the great ideas we had in the lab don't actually work in the real world," Catanzaro said. "It's a tight collaboration where the research team needs to be humble enough to learn from the rest of the company what they need to do to make their ideas work." The team shares much of its work through papers, technical conferences and open-source platforms like GitHub and Hugging Face.
But its focus remains on industry impact. "We think of publishing as a really important side effect of what we do, but it's not the point of what we do," Luebke said. NVIDIA Research's first effort was focused on ray tracing, which after a decade of sustained work led directly to the launch of NVIDIA RTX and redefined real-time computer graphics. The organization now includes teams specializing in chip design, networking, programming systems, large language models, physics-based simulation, climate science, humanoid robotics and self-driving cars, and continues expanding to tackle additional areas of study and tap expertise across the globe.

Transforming NVIDIA and the Industry

NVIDIA Research didn't just lay the groundwork for some of the company's most well-known products; its innovations have propelled and enabled today's era of AI and accelerated computing. It began with CUDA, a parallel computing software platform and programming model that enables researchers to tap GPU acceleration for myriad applications. Launched in 2006, CUDA made it easy for developers to harness the parallel processing power of GPUs to speed up scientific simulations, gaming applications and the creation of AI models. "Developing CUDA was the single most transformative thing for NVIDIA," Luebke said. "It happened before we had a formal research group, but it happened because we hired top researchers and had them work with top architects."

Making Ray Tracing a Reality

Once NVIDIA Research was founded, its members began working on GPU-accelerated ray tracing, spending years developing the algorithms and the hardware to make it possible.
In 2009, the project, led by the late Steven Parker, a real-time ray tracing pioneer who was vice president of professional graphics at NVIDIA, reached the product stage with the NVIDIA OptiX application framework, detailed in a 2010 SIGGRAPH paper. The researchers' work expanded and, in collaboration with NVIDIA's architecture group, eventually led to the development of NVIDIA RTX ray-tracing technology, including RT Cores that enabled real-time ray tracing for gamers and professional creators. Unveiled in 2018, NVIDIA RTX also marked the launch of another NVIDIA Research innovation: NVIDIA DLSS, or Deep Learning Super Sampling. With DLSS, the graphics pipeline no longer needs to draw all the pixels in a video. Instead, it draws a fraction of the pixels and gives an AI pipeline the information needed to create the image in crisp, high resolution.

Accelerating AI for Virtually Any Application

NVIDIA's research contributions in AI software kicked off with the NVIDIA cuDNN library for GPU-accelerated neural networks, which was developed as a research project when the deep learning field was still in its initial stages, then released as a product in 2014. As deep learning soared in popularity and evolved into generative AI, NVIDIA Research was at the forefront, exemplified by NVIDIA StyleGAN, a groundbreaking visual generative AI model that demonstrated how neural networks could rapidly generate photorealistic imagery. "While generative adversarial networks, or GANs, were first introduced in 2014, StyleGAN was the first model to generate visuals that could completely pass muster as a photograph," Luebke said. "It was a watershed moment." NVIDIA researchers introduced a slew of popular GAN models, such as the AI painting tool GauGAN, which later developed into the NVIDIA Canvas application.
And with the rise of diffusion models, neural radiance fields and Gaussian splatting, they're still advancing visual generative AI, including in 3D with recent models like Edify 3D and 3DGUT.

In the field of large language models, Megatron-LM was an applied research initiative that enabled the efficient training and inference of massive LLMs for language-based tasks such as content generation, translation and conversational AI. It's integrated into the NVIDIA NeMo platform for developing custom generative AI, which also features speech recognition and speech synthesis models that originated in NVIDIA Research.

Achieving Breakthroughs in Chip Design, Networking, Quantum and More

AI and graphics are only some of the fields NVIDIA Research tackles; several teams are achieving breakthroughs in chip architecture, electronic design automation, programming systems, quantum computing and more.

In 2012, Dally submitted a research proposal to the U.S. Department of Energy for a project that would become NVIDIA NVLink and NVSwitch, the high-speed interconnect that enables rapid communication between GPU and CPU processors in accelerated computing systems.

In 2013, the circuit research team published work on chip-to-chip links that introduced a signaling system co-designed with the interconnect to enable a high-speed, low-area and low-power link between dies. The project eventually became the link between the NVIDIA Grace CPU and NVIDIA Hopper GPU.

In 2021, the ASIC and VLSI Research group developed a software-hardware codesign technique for AI accelerators called VS-Quant that enabled many machine learning models to run with 4-bit weights and 4-bit activations at high accuracy.
Their work influenced the development of FP4 precision support in the NVIDIA Blackwell architecture.

And unveiled this year at the CES trade show was NVIDIA Cosmos, a platform created by NVIDIA Research to accelerate the development of physical AI for next-generation robots and autonomous vehicles. Read the research paper and check out the AI Podcast episode on Cosmos for details.

Learn more about NVIDIA Research at GTC, and watch the keynote by NVIDIA founder and CEO Jensen Huang. See notice regarding software product information.
  • NVIDIA Blackwell Powers Real-Time AI for Entertainment Workflows
    blogs.nvidia.com
AI has been shaping the media and entertainment industry for decades, from early recommendation engines to AI-driven editing and visual effects automation. Real-time AI, which lets companies actively drive content creation, personalize viewing experiences and rapidly deliver data insights, marks the next wave of that transformation.

With the NVIDIA RTX PRO Blackwell GPU series, announced yesterday at the NVIDIA GTC global AI conference, media companies can now harness real-time AI for media workflows with unprecedented speed, efficiency and creative potential.

NVIDIA Blackwell serves as the foundation of NVIDIA Media2, an initiative that enables real-time AI by bringing together NVIDIA technologies, including NVIDIA NIM microservices, NVIDIA AI Blueprints, accelerated computing platforms and generative AI software, to transform all aspects of production workflows and experiences, starting with content creation, streaming and live media.

Powering Intelligent Content Creation

Accelerated computing enables AI-driven workflows to process massive datasets in real time, unlocking faster rendering, simulation and content generation.

NVIDIA RTX PRO Blackwell series GPUs include new features that enable unprecedented graphics and AI performance. The NVIDIA Streaming Multiprocessor offers up to 1.5x faster throughput over the NVIDIA Ada generation, along with new neural shaders that integrate AI inside programmable shaders for advanced content creation.

Fourth-generation RT Cores deliver up to 2x the performance of the previous generation, enabling the creation of massive photoreal and physically accurate animated scenes. Fifth-generation Tensor Cores deliver up to 4,000 trillion AI operations per second and add support for FP4 precision.
And up to 96GB of GDDR7 memory boosts GPU bandwidth and capacity, allowing applications to run faster and work with larger, more complex datasets for massive 3D and AI projects, large-scale virtual-reality environments and more.

"One of the most exciting aspects of new technology is how it empowers our artists with tools to enhance their creative workflows," said Steve May, chief technology officer of Pixar Animation Studios. "With Pixar's next-generation renderer, RenderMan XPU, optimized for the NVIDIA Blackwell platform, 99% of Pixar shots can now fit within the 96GB of memory on the NVIDIA RTX PRO 6000 Blackwell GPUs. This breakthrough will fundamentally improve the way we make movies."

"Our artists were frequently maxing out our 48GB cards with ILM StageCraft environments and having to battle performance issues on set for 6K and 8K real-time renders," said Stephen Hill, principal rendering engineer at Lucasfilm. "The new NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition GPU lifts these limitations. We're seeing upwards of a 2.5x performance increase over our current production GPUs, and with 96GB of VRAM we now have twice as much memory to play with."

In addition, neural rendering with NVIDIA RTX Kit brings cinematic-quality ray tracing and AI-enhanced graphics to real-time engines, elevating visual fidelity in film, TV and interactive media. Including neural texture compression, neural shaders, RTX Global Illumination and Mega Geometry, RTX Kit is a suite of neural rendering technologies that enhance graphics for games, animation, virtual production scenes and immersive experiences.

Fueling the Future of Streaming and Data Analytics

Data analytics is transforming raw audience insights into actionable intelligence faster than ever.
NVIDIA accelerated computing and AI-powered frameworks enable studios to analyze viewer behavior, predict engagement patterns and optimize content in real time, driving hyper-personalized experiences and smarter creative decisions. With the new GPUs, users can achieve real-time ingestion and data transformation with GPU-accelerated data loading and cleansing at scale.

The NVIDIA technologies accelerating streaming and data analytics include the NVIDIA CUDA-X suite of data processing libraries, which enable immediate insights from continuous data streams and reduce latency. They include:

NVIDIA cuML: Enables GPU-accelerated training and inference for recommendation models using scikit-learn algorithms, providing real-time personalization capabilities and up-to-date, relevant content recommendations that boost viewer engagement while reducing churn.

NVIDIA cuDF: Offers pandas DataFrame operations on GPUs, enabling faster and more efficient NVIDIA-accelerated extract, transform and load operations and analytics. cuDF helps optimize content delivery by analyzing user data to predict demand and adjust content distribution in real time, improving overall user experiences.

Along with cuML and cuDF, accelerated data science libraries provide seamless integration with the open-source Dask library for multi-GPU or multi-node clusters.
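Because cuDF mirrors the pandas DataFrame API, the kind of engagement aggregation it accelerates can be sketched with the standard library alone. The event fields below are invented for illustration; in a cuDF workflow the same logic would be a one-line GPU `groupby` on a DataFrame.

```python
from collections import defaultdict

# Toy viewing-event log. In a cuDF pipeline these rows would live in a
# GPU DataFrame, and the aggregation below would be
# df.groupby("title")["minutes"].sum() running on the GPU.
events = [
    {"title": "race", "region": "NA", "minutes": 42},
    {"title": "race", "region": "EU", "minutes": 35},
    {"title": "docu", "region": "NA", "minutes": 12},
    {"title": "race", "region": "NA", "minutes": 50},
]

# Aggregate total watch time per title.
totals = defaultdict(int)
for e in events:
    totals[e["title"]] += e["minutes"]

# Rank titles by demand to guide content distribution decisions.
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)  # [('race', 127), ('docu', 12)]
```

The value of the GPU version is scale: the same groupby pattern applied to billions of events is where cuDF's acceleration pays off.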
The large GPU memory of NVIDIA RTX PRO Blackwell GPUs can further assist with handling massive datasets and spikes in usage without sacrificing performance.

And the video search and summarization blueprint integrates vision language models and large language models, providing cloud-native building blocks for video analytics, search and summarization applications.

Breathing Life Into Live Media

With NVIDIA RTX PRO Blackwell GPUs, broadcasters can achieve higher performance than ever in high-resolution video processing, real-time augmented reality, and AI-driven content production and video analytics. New features include:

Ninth-generation NVIDIA NVENC: Adds support for 4:2:2 encoding, accelerating video encoding speed and improving quality for broadcast and live media applications while reducing the costs of storing uncompressed video.

Sixth-generation NVIDIA NVDEC: Provides up to double the H.264 decoding throughput and adds support for 4:2:2 H.264 and HEVC decode. Professionals can benefit from high-quality video playback, accelerated video data ingestion and advanced AI-powered video editing features.

Fifth-generation PCIe: Provides double the bandwidth of the previous generation, improving data transfer speeds from CPU memory and unlocking faster performance for data-intensive tasks.

DisplayPort 2.1: Drives high-resolution displays at up to 8K at 240Hz and 16K at 60Hz. Increased bandwidth enables seamless multi-monitor setups, while high dynamic range and higher color depth support deliver more precise color accuracy for tasks like video editing and live broadcasting.

"The NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition GPU is a transformative force in Cosm's mission to redefine immersive entertainment," said Devin Poolman, chief product and technology officer at Cosm, a global immersive technology, media and entertainment company.
"With its unparalleled performance, we can push the boundaries of real-time rendering, unlocking the ultra-high resolution and fluid frame rates needed to make our live, immersive experiences feel nearly indistinguishable from reality."

As a key component of Cosm's CX System 12K LED dome displays, the RTX PRO 6000 Max-Q enables seamless merging of the physical and digital worlds to deliver shared reality experiences, letting audiences engage with sports, live events and cinematic content in entirely new ways. Cosm's shared reality experience features an 87-foot-diameter LED dome display in 12K resolution, with millions of pixels shining 10x brighter than the brightest cinematic display.

To learn more about NVIDIA Media2, watch the GTC keynote and register to attend sessions from NVIDIA and industry leaders at the show, which runs through Friday, March 21. Try NVIDIA NIM microservices and AI Blueprints on build.nvidia.com.
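To see why the 4:2:2 encoding support mentioned above reduces the cost of storing uncompressed video, a quick back-of-the-envelope calculation helps. The sketch below assumes 8-bit samples and counts samples per 4-pixel group for each common chroma subsampling scheme; the figures are standard video arithmetic, not NVENC-specific numbers.

```python
def frame_bytes(width, height, subsampling, bits=8):
    # Bytes per uncompressed frame. Per 4-pixel group, luma always
    # contributes 4 samples; chroma contributes 8 (4:4:4), 4 (4:2:2)
    # or 2 (4:2:0) samples.
    chroma = {"4:4:4": 8, "4:2:2": 4, "4:2:0": 2}[subsampling]
    samples_per_pixel = (4 + chroma) / 4  # averaged over the group
    return int(width * height * samples_per_pixel * bits / 8)

# Uncompressed 4K UHD frame sizes under each scheme.
for s in ("4:4:4", "4:2:2", "4:2:0"):
    print(s, frame_bytes(3840, 2160, s))
```

So 4:2:2 stores a 4K frame in two-thirds the bytes of 4:4:4 while keeping twice the chroma resolution of 4:2:0, which is why broadcasters favor it for production workflows.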
  • NVIDIA Honors Americas Partners Advancing Agentic and Physical AI
    blogs.nvidia.com
NVIDIA this week recognized 14 partners leading the way across the Americas for their work advancing agentic and physical AI across industries.

The 2025 Americas NVIDIA Partner Network (NPN) awards, announced at the GTC 2025 global AI conference, represent key efforts by industry leaders to help customers become experts in using AI to solve many of today's greatest challenges. The awards honor the diverse contributions of NPN members fostering AI-driven innovation and growth.

This year, NPN introduced three new award categories that reflect how AI is driving economic growth and opportunities:

Trailblazer, which honors a visionary partner spearheading AI adoption and setting new industry standards.
Rising Star, which celebrates an emerging talent helping industries harness AI to drive transformation.
Innovation, which recognizes a partner that's demonstrated exceptional creativity and forward thinking.

This year's NPN ecosystem winners have helped companies across industries use AI to adapt to new challenges and prioritize energy-efficient accelerated computing. NPN partners help customers implement a broad range of AI technologies, including NVIDIA-accelerated AI factories, as well as large language models and generative AI chatbots, to transform business operations.

The 2025 NPN award winners for the Americas are:

Global Consulting Partner of the Year: Accenture is recognized for its impact and depth of engineering with its AI Refinery platform for industries, simulation and robotics, marketing and sovereignty, which helps organizations enhance innovation and growth with custom-built approaches to AI-driven enterprise reinvention.

Trailblazer Partner of the Year: Advizex is recognized for its commitment to driving innovation in AI and high-performance computing, helping industries like healthcare, manufacturing, retail and government seamlessly integrate advanced AI technologies into existing business frameworks.
This enables organizations to achieve significant operational efficiencies, enhanced decision-making and accelerated digital transformation.

Rising Star Partner of the Year: AHEAD is recognized for its leadership, technical expertise and deployment of NVIDIA software, NVIDIA DGX systems, NVIDIA HGX and networking technologies to advance AI, benefiting customers across healthcare, financial services, life sciences and higher education.

Networking Partner of the Year: Computacenter is recognized for advancing high-performance computing and data centers with NVIDIA networking technologies. The company achieved this by using the NVIDIA AI Enterprise software platform, DGX platforms and NVIDIA networking to drive innovation and growth throughout industries with efficient, accelerated data centers.

Solution Integration Partner of the Year: EXXACT is recognized for its efforts in helping research institutions and businesses tap into generative AI, large language models and high-performance computing. The company harnesses NVIDIA GPUs and networking technologies to deliver powerful computing platforms that accelerate innovation and tackle complex computational challenges across industries.

Enterprise Partner of the Year: World Wide Technology (WWT) is recognized for its leadership in advancing AI adoption among customers across industry verticals worldwide. The company expanded its end-to-end AI capabilities by integrating NVIDIA Blueprints into its AI Proving Ground and has made a $500 million commitment to AI development over three years to help speed enterprise generative AI deployments.

Software Partner of the Year: Mark III is recognized for the work of its cross-functional team spanning data scientists, developers, 3D artists, systems engineers, and HPC and AI architects, as well as its close collaborations with enterprises and institutions, to deploy NVIDIA software, including NVIDIA AI Enterprise and NVIDIA Omniverse, across industries.
These efforts have helped many customers build software-powered pipelines and data flywheels with machine learning, generative AI, high-performance computing and digital twins.

Higher Education Research Partner of the Year: Mark III is recognized for its close engagement with universities, academic institutions and research organizations to cultivate the next generation of leaders across AI, machine learning, generative AI, high-performance computing and digital twins.

Healthcare Partner of the Year: Lambda is recognized for empowering healthcare and biotech organizations with AI training, fine-tuning and inferencing solutions at every scale, from individual workstations to comprehensive AI factories, helping healthcare providers seamlessly integrate NVIDIA accelerated computing and software into their infrastructure to speed innovation and drive breakthroughs in AI-driven drug discovery.

Financial Services Partner of the Year: WWT is recognized for driving the digital transformation of the world's largest banks and financial institutions. The company harnesses NVIDIA AI technologies to optimize data management, enhance cybersecurity and deliver transformative generative AI solutions, helping financial services clients navigate rapid technological change and evolving customer expectations.

Innovation Partner of the Year: Cambridge Computer is recognized for supporting customers deploying transformative technologies, including NVIDIA Grace Hopper, NVIDIA Blackwell and the NVIDIA Omniverse platform for physical AI.

Service Delivery Partner of the Year: SoftServe is recognized for its impact in driving enterprise adoption of NVIDIA AI and Omniverse with custom NVIDIA Blueprints that tap into NVIDIA NIM microservices and NVIDIA NeMo and Riva software.
SoftServe helps customers create generative AI services for industries spanning manufacturing, retail, financial services, automotive, healthcare and life sciences.

Distribution Partner of the Year: TD SYNNEX is recognized for the second consecutive year for supporting customers in accelerating AI growth through rapid delivery of NVIDIA accelerated computing and software as part of its Destination AI initiative.

Rising Star Consulting Partner of the Year: Tata Consultancy Services (TCS) is recognized for its growth and commitment to providing industry-specific solutions that help customers adopt AI faster and at scale. Through its recently launched business unit and center of excellence built on NVIDIA AI Enterprise and Omniverse, TCS is poised to accelerate adoption of agentic AI and physical AI solutions to speed innovation for customers worldwide.

Canadian Partner of the Year: Hypertec is recognized for its advancement of high-performance computing and generative AI across Canada. The company has employed the full-stack NVIDIA platform to accelerate AI for financial services, higher education and research.

Public Sector Partner of the Year: Government Acquisitions (GAI) is recognized for its rapid AI deployment and robust customer relationships, helping serve the unique needs of the federal government by adding AI to operations to improve public safety and efficiency.

Learn more about the NPN program.
  • NVIDIA Accelerates Science and Engineering With CUDA-X Libraries Powered by GH200 and GB200 Superchips
    blogs.nvidia.com
Scientists and engineers of all kinds are equipped to solve tough problems a lot faster with NVIDIA CUDA-X libraries powered by NVIDIA GB200 and GH200 superchips.

As announced today at the NVIDIA GTC global AI conference, developers can now take advantage of tighter automatic integration and coordination between CPU and GPU resources, enabled by CUDA-X working with these latest superchip architectures, resulting in up to 11x speedups for computational engineering tools and 5x larger calculations compared with traditional accelerated computing architectures. This greatly accelerates and improves workflows in engineering simulation, design optimization and more, helping scientists and researchers reach groundbreaking results faster.

NVIDIA released CUDA in 2006, opening up a world of applications to the power of accelerated computing. Since then, NVIDIA has built more than 900 domain-specific NVIDIA CUDA-X libraries and AI models, making it easier to adopt accelerated computing and driving incredible scientific breakthroughs. Now, CUDA-X brings accelerated computing to a broad new set of engineering disciplines, including astronomy, particle physics, quantum physics, automotive, aerospace and semiconductor design.

The NVIDIA Grace CPU architecture delivers a significant boost to memory bandwidth while reducing power consumption. And NVIDIA NVLink-C2C interconnects provide such high bandwidth that the GPU and CPU can share memory, allowing developers to write less-specialized code, run larger problems and improve application performance.

Accelerating Engineering Solvers With NVIDIA cuDSS

NVIDIA's superchip architectures allow users to extract greater performance from the same underlying GPU by making more efficient use of CPU and GPU processing capabilities. The NVIDIA cuDSS library is used to solve large engineering simulation problems involving sparse matrices for applications such as design optimization, electromagnetic simulation workflows and more.
cuDSS uses Grace CPU memory and the high-bandwidth NVLink-C2C interconnect to factorize and solve large matrices that normally wouldn't fit in device memory, enabling users to solve extremely large problems in a fraction of the time.

The coherent shared memory between the GPU and the Grace CPU minimizes data movement, significantly reducing overhead for large systems. For a range of large computational engineering problems, tapping the Grace CPU memory and superchip architecture with cuDSS hybrid memory accelerated the most heavy-duty solution steps by up to 4x on the same GPU.

Ansys has integrated cuDSS into its HFSS solver, delivering significant performance enhancements for electromagnetic simulations. With cuDSS, HFSS software achieves up to an 11x speed improvement in the matrix solver. Altair OptiStruct has also adopted the cuDSS direct sparse solver library, substantially accelerating its finite element analysis workloads.

These performance gains are achieved by optimizing key operations on the GPU while intelligently using CPUs for shared memory and heterogeneous CPU-GPU execution. cuDSS automatically detects areas where CPU utilization provides additional benefits, further enhancing efficiency.

Scaling Up at Warp Speed With Superchip Memory

Scaling memory-limited applications on a single GPU becomes possible with the GB200 and GH200 architectures' NVLink-C2C interconnects, which provide CPU and GPU memory coherency. Many engineering simulations are limited by scale and require massive runs to produce the resolution necessary to design equipment with intricate components, such as aircraft engines.
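The sparse matrices cuDSS operates on store only their nonzero entries, which is why very large engineering systems can fit in combined CPU and GPU memory at all. A minimal sketch of the standard CSR (compressed sparse row) layout and the matrix-vector product built on it, in plain Python for illustration:

```python
# CSR storage: nonzero values, their column indices, and row pointers
# marking where each row's entries begin in the value array.
# Encoded matrix:  [[4, 0, 1],
#                   [0, 3, 0],
#                   [2, 0, 5]]
vals = [4, 1, 3, 2, 5]
cols = [0, 2, 1, 0, 2]
rowp = [0, 2, 3, 5]

def csr_matvec(vals, cols, rowp, x):
    # y = A @ x using only the stored nonzeros: each row's dot product
    # touches rowp[r+1] - rowp[r] entries instead of the full row.
    y = []
    for r in range(len(rowp) - 1):
        y.append(sum(vals[i] * x[cols[i]]
                     for i in range(rowp[r], rowp[r + 1])))
    return y

print(csr_matvec(vals, cols, rowp, [1, 1, 1]))  # [5, 3, 7]
```

A direct solver like cuDSS goes much further, factorizing such matrices, but the storage idea is the same: memory scales with nonzeros, not with the full dense dimension.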
By tapping the ability to seamlessly read and write between CPU and GPU memories, engineers can easily implement out-of-core solvers to process larger data. For example, using NVIDIA Warp, a Python-based framework for accelerating data generation and spatial computing applications, Autodesk performed simulations of up to 48 billion cells using eight GH200 nodes, more than 5x larger than the simulations possible with eight NVIDIA H100 nodes.

Powering Quantum Computing Research With NVIDIA cuQuantum

Quantum computers promise to accelerate problems that are core to many science and industry disciplines. Shortening the time to useful quantum computing rests heavily on the ability to simulate extremely complex quantum systems. Simulations allow researchers to develop new algorithms today that will run at scales suitable for tomorrow's quantum computers. They also play a key role in improving quantum processors, running complex simulations of the performance and noise characteristics of new qubit designs.

So-called state vector simulations of quantum algorithms require matrix operations to be performed on exponentially large vector objects that must be stored in memory. Tensor network simulations, on the other hand, simulate quantum algorithms through tensor contractions and can enable hundreds or thousands of qubits to be simulated for certain important classes of applications.

The NVIDIA cuQuantum library accelerates these workloads. cuQuantum is integrated with every leading quantum computing framework, so all quantum researchers can tap into simulation performance with no code changes. Simulations of quantum algorithms are generally limited in scale by memory requirements, and the GB200 and GH200 architectures provide an ideal platform for scaling them up, as they enable large CPU memory to be used without bottlenecking performance.
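A state-vector simulator of the kind cuQuantum accelerates stores all 2^n amplitudes of an n-qubit register and applies each gate as a small matrix acting across that vector, which is why memory, not arithmetic, caps the qubit count. A minimal pure-Python sketch applying a Hadamard gate to qubit 0 of a 2-qubit register:

```python
import math

def apply_single_qubit_gate(state, gate, target):
    # Apply a 2x2 gate to the `target` qubit of a full state vector by
    # pairing each index with target bit 0 against its bit-1 partner.
    new = state[:]
    for i in range(len(state)):
        if not (i >> target) & 1:
            j = i | (1 << target)
            a, b = state[i], state[j]
            new[i] = gate[0][0] * a + gate[0][1] * b
            new[j] = gate[1][0] * a + gate[1][1] * b
    return new

h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]           # Hadamard gate

state = [1.0, 0.0, 0.0, 0.0]    # |00>
state = apply_single_qubit_gate(state, H, 0)
print([round(a, 3) for a in state])  # [0.707, 0.707, 0.0, 0.0]
```

The exponential cost is visible in the data structure itself: each added qubit doubles the length of `state`, so at double-precision complex amplitudes, 30 qubits already require 16 GiB, which is why coherent access to large CPU memory matters for these simulations.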
A GH200 system is up to 3x faster than an x86-based H100 system on quantum computing benchmarks.

Learn more about CUDA-X libraries, attend the GTC session on how math libraries can help accelerate applications on NVIDIA Blackwell GPUs, and watch NVIDIA founder and CEO Jensen Huang's GTC keynote.
  • Where AI and Graphics Converge: NVIDIA Blackwell Universal Data Center GPU Accelerates Demanding Enterprise Workloads
    blogs.nvidia.com
The first NVIDIA Blackwell-powered data center GPU built for both enterprise AI and visual computing, the NVIDIA RTX PRO 6000 Blackwell Server Edition, is designed to accelerate the most demanding AI and graphics applications for every industry.

Compared with the previous-generation NVIDIA Ada Lovelace architecture L40S GPU, the RTX PRO 6000 Blackwell Server Edition delivers a multifold increase in performance across a wide array of enterprise workloads: up to 5x higher large language model (LLM) inference throughput for agentic AI applications, nearly 7x faster genomics sequencing, 3.3x speedups for text-to-video generation, nearly 2x faster inference for recommender systems and over 2x speedups for rendering.

It's part of the NVIDIA RTX PRO Blackwell series of workstation and server GPUs announced today at NVIDIA GTC, the global AI conference taking place through Friday, March 21, in San Jose, California. The RTX PRO lineup includes desktop, laptop and data center GPUs that support AI and creative workloads across industries.

With the RTX PRO 6000 Blackwell Server Edition, enterprises across sectors including architecture, automotive, cloud services, financial services, game development, healthcare, manufacturing, media and entertainment, and retail can achieve breakthrough performance for workloads such as multimodal generative AI, data analytics, engineering simulation and visual computing. Content creation, semiconductor manufacturing and genomics analysis companies are already set to harness its capabilities to accelerate compute-intensive, AI-enabled workflows.

Universal GPU Delivers Powerful Capabilities for AI and Graphics

The RTX PRO 6000 Blackwell Server Edition packages powerful RTX AI and graphics capabilities in a passively cooled form factor designed to run 24/7 in data center environments.
With 96GB of ultrafast GDDR7 memory and support for Multi-Instance GPU, or MIG, each RTX PRO 6000 can be partitioned into as many as four fully isolated instances with 24GB each to run simultaneous AI and graphics workloads.

RTX PRO 6000 is the first universal GPU to enable secure AI with NVIDIA Confidential Computing, which protects AI models and sensitive data from unauthorized access with strong, hardware-based security, providing a physically isolated trusted execution environment that secures the entire workload while data is in use.

To support enterprise-scale deployments, the RTX PRO 6000 can be configured in high-density accelerated computing platforms for distributed inference workloads, or used to deliver virtual workstations with NVIDIA vGPU software to power AI development and graphics-intensive applications.

The RTX PRO 6000 GPU delivers supercharged inferencing performance across a broad range of AI models and accelerates real-time, photorealistic ray tracing of complex virtual environments.
It includes the latest Blackwell hardware and software innovations, such as fifth-generation Tensor Cores, fourth-generation RT Cores, DLSS 4, a fully integrated media pipeline and a second-generation Transformer Engine with support for FP4 precision.

Enterprises can run the NVIDIA Omniverse and NVIDIA AI Enterprise platforms at scale on RTX PRO 6000 Blackwell Server Edition GPUs to accelerate the development and deployment of agentic and physical AI applications, such as image and video generation, LLM inference, recommender systems, computer vision, digital twins and robotics simulation.

Accelerated AI Inference and Visual Computing for Any Industry

Black Forest Labs, creator of the popular FLUX image generation AI, aims to develop and optimize state-of-the-art text-to-image models using RTX PRO 6000 Server Edition GPUs.

"With the powerful multimodal inference capabilities of the RTX PRO 6000 Server Edition, our customers will be able to significantly reduce latency for image generation workflows," said Robin Rombach, CEO of Black Forest Labs. "We anticipate that, with the server edition GPU's support for FP4 precision, our FLUX models will run faster, enabling interactive, AI-accelerated content creation."

Cloud graphics company OTOY will optimize its OctaneRender real-time rendering application for NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.

"The new NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs unlock brand-new workflows that were previously out of reach for 3D content creators," said Jules Urbach, CEO of OTOY and founder of the Render Network.
"With 96GB of VRAM, the new server-edition GPUs can run complex neural rendering models within OctaneRender's GPU path-tracer, enabling artists to tap into incredible new features and tools that blend the precision of traditional CGI with frontier generative AI technology."

Semiconductor equipment manufacturer KLA plans to use the RTX PRO 6000 Blackwell Server Edition to accelerate the inference workloads powering the wafer manufacturing process: the creation of the thin discs of semiconductor material that are core to integrated circuits.

KLA and NVIDIA have worked together since 2008 to advance KLA's physics-based AI with optimized high-performance computing solutions. KLA's industry-leading inspection and metrology systems capture and process images by running complex AI algorithms at lightning-fast speeds to find the most critical semiconductor defects.

"Based on early results, we expect great performance from the RTX PRO 6000 Blackwell Server Edition," said Kris Bhaskar, senior fellow and vice president of AI initiatives at KLA.
"The increased memory capacity, FP4 reduced precision and new computational capabilities of NVIDIA Blackwell are going to be particularly helpful to KLA and its customers."

Boosting Genomics and Drug Discovery Workloads

The RTX PRO 6000 Blackwell Server Edition also demonstrates game-changing acceleration for genomic analysis and drug discovery inference workloads, enabled by a new class of dynamic programming instructions.

On a single RTX PRO 6000 Blackwell Server Edition GPU, the Fastq2bam and DeepVariant elements of the NVIDIA Parabricks pipeline for germline analysis run up to 1.5x faster than on an L40S GPU and 1.75x faster than on an NVIDIA H100 GPU.

For Smith-Waterman, a core algorithm used in many sequence alignment and variant calling applications, RTX PRO 6000 Blackwell Server Edition GPUs accelerate throughput by up to 6.8x compared with L40S GPUs. And for OpenFold2, an AI model that predicts protein structures for drug discovery research, they boost inference performance by up to 4.8x compared with L40S GPUs.

Genomics company Oxford Nanopore Technologies is collaborating with NVIDIA to bring the latest AI and accelerated computing technologies to its sequencing systems.

"The NVIDIA Blackwell architecture will help us drive the real-time sequencing analysis of anything, by anyone, anywhere," said Chris Seymour, vice president of advanced platform development at Oxford Nanopore Technologies.
"With the RTX PRO 6000 Blackwell Server Edition, we have seen up to a 2x improvement in basecalling speed across our Dorado platform."

Availability via Global Network of Cloud Providers and System Partners

Platforms featuring the RTX PRO 6000 Blackwell Server Edition will be available from a global ecosystem of partners starting in May. AWS, Google Cloud, Microsoft Azure, IBM Cloud, CoreWeave, Crusoe, Lambda, Nebius and Vultr will be among the first cloud service providers and GPU cloud providers to offer instances featuring the GPU.

Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro are expected to deliver a wide range of servers featuring the RTX PRO 6000 Blackwell Server Edition, as are Advantech, Aetina, Aivres, ASRockRack, ASUS, Compal, Foxconn, GIGABYTE, Inventec, MSI, Pegatron, Quanta Cloud Technology (QCT), MiTAC Computing, NationGate, Wistron and Wiwynn.

To learn more about the NVIDIA RTX PRO Blackwell series and other advancements in AI, watch the GTC keynote by NVIDIA founder and CEO Jensen Huang.
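Smith-Waterman, cited above, fills a dynamic-programming matrix of local alignment scores; the abundant cell-level parallelism in that matrix is what GPU dynamic programming instructions exploit. A compact pure-Python scoring sketch, with illustrative parameters (match +2, mismatch -1, gap -2) rather than any Parabricks defaults:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    # H[i][j] holds the best local alignment score of any alignment
    # ending at a[i-1], b[j-1]; clamping at 0 is what makes it "local".
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("ACGT", "ACGT"))  # 8: four matches at +2 each
```

Each anti-diagonal of H depends only on the previous ones, so all of its cells can be computed simultaneously; that wavefront structure is what maps the algorithm so well onto thousands of GPU threads.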
  • New NVIDIA Software for Blackwell Infrastructure Runs AI Factories at Light Speed
    blogs.nvidia.com
The industrial age was fueled by steam. The digital age brought a shift through software. Now the AI age is marked by the development of generative AI, agentic AI and AI reasoning, which enables models to process more data, learn and reason to solve complex problems.

Just as industrial factories transform raw materials into goods, modern businesses require AI factories to quickly transform data into insights that are scalable, accurate and reliable. Orchestrating this new infrastructure is far more complex than building steam-powered factories: state-of-the-art models demand supercomputing-scale resources, and any downtime risks derailing weeks of progress and reducing GPU utilization.

To enable enterprises and developers to manage and run AI factories at light speed, NVIDIA today announced at the NVIDIA GTC global AI conference NVIDIA Mission Control, the only unified operations and orchestration software platform that automates the complex management of AI data centers and workloads.

NVIDIA Mission Control enhances every aspect of AI factory operations. From configuring deployments to validating infrastructure to operating developer workloads, its capabilities help enterprises get frontier models up and running faster. It is designed to smoothly transition NVIDIA Blackwell-based systems from pretraining to post-training, and now test-time scaling, with speed and efficiency.
The software enables enterprises to easily pivot between training and inference workloads on their Blackwell-based NVIDIA DGX systems and NVIDIA Grace Blackwell systems, dynamically reallocating cluster resources to match shifting priorities.

In addition, Mission Control includes NVIDIA Run:ai technology to streamline operations and job orchestration for development, training and inference, boosting infrastructure utilization by up to 5x.

Mission Control's autonomous recovery capabilities, supported by rapid checkpointing and automated tiered restart features, can deliver up to 10x faster job recovery compared with traditional methods that rely on manual intervention, boosting AI training and inference efficiency to keep AI applications in operation.

Built on decades of NVIDIA supercomputing expertise, Mission Control lets enterprises simply run models by minimizing time spent managing AI infrastructure. It automates the lifecycle of AI factory infrastructure for all NVIDIA Blackwell-based NVIDIA DGX systems and NVIDIA Grace Blackwell systems from Dell Technologies, Hewlett Packard Enterprise (HPE), Lenovo and Supermicro to make advanced AI infrastructure more accessible to the world's industries.

Enterprises can further simplify and speed deployments of NVIDIA DGX GB300 and DGX B300 systems by using Mission Control with the NVIDIA Instant AI Factory service, preconfigured in Equinix AI-ready data centers across 45 markets globally.

Advanced Software Provides Enterprises Uninterrupted Infrastructure Oversight

Mission Control automates end-to-end infrastructure management, including provisioning, monitoring and error diagnosis, to deliver uninterrupted operations.
Plus, it continuously monitors every layer of the application and infrastructure stack to predict and identify sources of downtime and inefficiency, saving time, energy and costs.

Additional NVIDIA Mission Control software benefits include:

- Simplified cluster setup and provisioning with new automation and standardized application programming interfaces to speed time to deployment, with integrated inventory management and visualizations.
- Seamless workload orchestration for simplified Slurm and Kubernetes workflows.
- Energy-optimized power profiles to balance power requirements and tune GPU performance for various workload types, with developer-selectable controls.
- Autonomous job recovery to identify, isolate and recover from inefficiencies without manual intervention, maximizing developer productivity and infrastructure resiliency.
- Customizable dashboards that track key performance indicators, with access to critical telemetry data about clusters.
- On-demand health checks to validate hardware and cluster performance throughout the infrastructure lifecycle.
- Building management integration for enhanced coordination with building management systems to provide more control for power and cooling events, including rapid leakage detection.

Leading System Makers Bring NVIDIA Mission Control to Grace Blackwell Servers

Leading system makers plan to offer NVIDIA GB200 NVL72 and GB300 NVL72 systems with NVIDIA Mission Control. Dell plans to offer NVIDIA Mission Control software as part of the Dell AI Factory with NVIDIA.

"The AI industrial revolution demands efficient infrastructure that adapts as fast as business evolves, and the Dell AI Factory with NVIDIA delivers with comprehensive compute, networking, storage and support," said Ihab Tarazi, chief technology officer and senior vice president at Dell Technologies.
"Pairing NVIDIA Mission Control software and Dell PowerEdge XE9712 and XE9680 servers helps enterprises scale models effortlessly to meet the demands of both training and inference, turning data into actionable insights faster than ever before."

HPE will offer the NVIDIA GB200 NVL72 by HPE and GB300 NVL72 by HPE systems with NVIDIA Mission Control software.

"We are helping service providers and cutting-edge enterprises to rapidly deploy, scale and optimize complex AI clusters capable of training trillion-parameter models," said Trish Damkroger, senior vice president and general manager, HPC & AI Infrastructure Solutions at HPE. "As part of our collaboration with NVIDIA, we will deliver NVIDIA Grace Blackwell rack-scale systems and Mission Control software with HPE's global services and direct liquid cooling expertise to power the new AI era."

Lenovo plans to update its Lenovo Hybrid AI Advantage with NVIDIA systems to include NVIDIA Mission Control software.

"Bringing NVIDIA Mission Control software to Lenovo Hybrid AI Advantage with NVIDIA systems empowers enterprises to navigate the demands of generative and agentic AI workloads with unmatched agility," said Brian Connors, worldwide vice president and general manager of enterprise and SMB segment and AI, infrastructure solutions group, at Lenovo. "By automating infrastructure orchestration and enabling seamless transitions between training and inference workloads, Lenovo and NVIDIA are helping customers scale AI innovation at the speed of business."

Supermicro plans to incorporate NVIDIA Mission Control software into its SuperCluster systems.

"Supermicro is proud to team with NVIDIA on a Grace Blackwell NVL72 system that is fully supported by NVIDIA Mission Control software," said Cenly Chen, chief growth officer at Supermicro.
"Running on Supermicro's AI SuperCluster systems with NVIDIA Grace Blackwell, NVIDIA Mission Control software provides customers with a seamless management software suite to maximize performance on both current NVIDIA GB200 NVL72 systems and future platforms such as NVIDIA GB300 NVL72."

Base Command Manager Offers Free Kickstart for AI Cluster Management

To help enterprises with infrastructure management, NVIDIA Base Command Manager software is expected to soon be available for free for up to eight accelerators per system, for any cluster size, with the option to purchase NVIDIA Enterprise Support separately.

Availability

NVIDIA Mission Control for NVIDIA DGX GB200 and DGX B200 systems is available now. NVIDIA GB200 NVL72 systems with Mission Control are expected to soon be available from Dell, HPE, Lenovo and Supermicro. NVIDIA Mission Control is expected to become available for the latest NVIDIA DGX GB300 and DGX B300 systems, as well as GB300 NVL72 systems from leading global providers, later this year.

See notice regarding software product information.
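Mission Control's recovery internals aren't detailed in this post, but the checkpoint-and-restart idea it describes (rapid checkpointing so an automated restart resumes from recent state rather than step zero) can be sketched generically. The following is a minimal, hypothetical illustration, not NVIDIA's implementation: the more frequently state is saved, the less work a recovery loses.

```python
import json
import os
import tempfile

CKPT = os.path.join(tempfile.gettempdir(), "train_ckpt.json")

def save_checkpoint(step):
    # Persist progress so a restart resumes here instead of from zero.
    with open(CKPT, "w") as f:
        json.dump({"step": step}, f)

def load_checkpoint():
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)["step"]
    return 0

def run(total_steps, crash_at=None, ckpt_every=10):
    step = load_checkpoint()
    while step < total_steps:
        step += 1                      # one unit of training work
        if step % ckpt_every == 0:
            save_checkpoint(step)
        if crash_at is not None and step == crash_at:
            raise RuntimeError("simulated node failure")
    return step

# First attempt fails mid-run; a retry loses at most ckpt_every - 1 steps.
if os.path.exists(CKPT):
    os.remove(CKPT)
try:
    run(100, crash_at=57)
except RuntimeError:
    pass
resumed_from = load_checkpoint()   # last saved checkpoint, step 50
final = run(100)                   # resumes from 50 and finishes
```

In this toy setup, automating the detect-and-restart loop (rather than waiting for a human to notice the failure) is what drives the kind of recovery-time reduction the article attributes to Mission Control.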
  • NVIDIA Unveils Open Physical AI Dataset to Advance Robotics and Autonomous Vehicle Development
    blogs.nvidia.com
    Teaching autonomous robots and vehicles how to interact with the physical world requires vast amounts of high-quality data. To give researchers and developers a head start, NVIDIA is releasing a massive, open-source dataset for building the next generation of physical AI.

Announced at NVIDIA GTC, a global AI conference taking place this week in San Jose, California, this commercial-grade, pre-validated dataset can help researchers and developers kickstart physical AI projects that can be prohibitively difficult to start from scratch. Developers can either directly use the dataset for model pretraining, testing and validation, or use it during post-training to fine-tune world foundation models, accelerating the path to deployment.

The initial dataset is now available on Hugging Face, offering developers 15 terabytes of data representing more than 320,000 trajectories for robotics training, plus up to 1,000 Universal Scene Description (OpenUSD) assets, including a SimReady collection. Dedicated data to support end-to-end autonomous vehicle (AV) development, which will include 20-second clips of diverse traffic scenarios spanning over 1,000 cities across the U.S. and two dozen European countries, is coming soon.

The NVIDIA Physical AI Dataset includes hundreds of SimReady assets for rich scenario building.

This dataset will grow over time to become the world's largest unified and open dataset for physical AI development.
It could be applied to develop AI models to power robots that safely maneuver warehouse environments, humanoid robots that support surgeons during procedures and AVs that can navigate complex traffic scenarios like construction zones.

The NVIDIA Physical AI Dataset is slated to contain a subset of the real-world and synthetic data NVIDIA uses to train, test and validate physical AI for the NVIDIA Cosmos world model development platform, the NVIDIA DRIVE AV software stack, the NVIDIA Isaac AI robot development platform and the NVIDIA Metropolis application framework for smart cities.

Early adopters include the Berkeley DeepDrive Center at the University of California, Berkeley; the Carnegie Mellon Safe AI Lab; and the Contextual Robotics Institute at the University of California, San Diego.

"We can do a lot of things with this dataset, such as training predictive AI models that help autonomous vehicles better track the movements of vulnerable road users like pedestrians to improve safety," said Henrik Christensen, director of multiple robotics and autonomous vehicle labs at UCSD. "A dataset that provides a diverse set of environments and longer clips than existing open-source resources will be tremendously helpful to advance robotics and AV research."

Addressing the Need for Physical AI Data

The NVIDIA Physical AI Dataset can help developers scale AI performance during pretraining, where more data helps build a more robust model, and during post-training, where an AI model is trained on additional data to improve its performance for a specific use case.

Collecting, curating and annotating a dataset that covers diverse scenarios and accurately represents the physics and variation of the real world is time-consuming, presenting a bottleneck for most developers.
For academic researchers and small enterprises, running a fleet of vehicles over months to gather data for autonomous vehicle AI is impractical and costly, and, since much of the footage collected is uneventful, typically just 10% of it is used for training.

But this scale of data collection is essential to building safe, accurate, commercial-grade models. NVIDIA Isaac GR00T robotics models take thousands of hours of video clips for post-training; the GR00T N1 model, for example, was trained on an expansive humanoid dataset of real and synthetic data. The NVIDIA DRIVE AV end-to-end AI model for autonomous vehicles requires tens of thousands of hours of driving data to develop.

https://blogs.nvidia.com/wp-content/uploads/2025/03/rgb_5sec-1.mp4

This open dataset, comprising thousands of hours of multicamera video at unprecedented diversity, scale and geography, will particularly benefit the field of safety research by enabling new work on identifying outliers and assessing model generalization performance. The effort contributes to NVIDIA Halos, the company's full-stack AV safety system.

In addition to harnessing the NVIDIA Physical AI Dataset to help meet their data needs, developers can further boost AI development with tools like NVIDIA NeMo Curator, which processes vast datasets efficiently for model training and customization. Using NeMo Curator, 20 million hours of video can be processed in just two weeks on NVIDIA Blackwell GPUs, compared with 3.4 years on unoptimized CPU pipelines.

Robotics developers can also tap the new NVIDIA Isaac GR00T blueprint for synthetic manipulation motion generation, a reference workflow built on NVIDIA Omniverse and NVIDIA Cosmos that uses a small number of human demonstrations to create massive amounts of synthetic motion trajectories for robot manipulation.

University Labs Set to Adopt Dataset for AI Development

The robotics labs at UCSD include teams focused on medical applications, humanoids and in-home assistive technology.
Christensen anticipates that the Physical AI Dataset's robotics data could help develop semantic AI models that understand the context of spaces like homes, hotel rooms and hospitals.

"One of our goals is to achieve a level of understanding where, if a robot was asked to put your groceries away, it would know exactly which items should go in the fridge and what goes in the pantry," he said.

In the field of autonomous vehicles, Christensen's lab could apply the dataset to train AI models to understand the intention of various road users and predict the best action to take. His research teams could also use the dataset to support the development of digital twins that simulate edge cases and challenging weather conditions. These simulations could be used to train and test autonomous driving models in situations that are rare in real-world environments.

At Berkeley DeepDrive, a leading research center on AI for autonomous systems, the dataset could support the development of policy models and world foundation models for autonomous vehicles.

"Data diversity is incredibly important to train foundation models," said Wei Zhan, co-director of Berkeley DeepDrive. "This dataset could support state-of-the-art research for public and private sector teams developing AI models for autonomous vehicles and robotics."

Researchers at Carnegie Mellon University's Safe AI Lab plan to use the dataset to advance their work evaluating and certifying the safety of self-driving cars. The team plans to test how a physical AI foundation model trained on this dataset performs in a simulation environment with rare conditions, and compare its performance to an AV model trained on existing datasets.

"This dataset covers different types of roads and geographies, different infrastructure, different weather environments," said Ding Zhao, associate professor at CMU and head of the Safe AI Lab.
"Its diversity could be quite valuable in helping us train a model with causal reasoning capabilities in the physical world that understands edge cases and long-tail problems."

Access the NVIDIA Physical AI Dataset on Hugging Face. Build foundational knowledge with courses such as the Learn OpenUSD learning path and the Robotics Fundamentals learning path. And to learn more about the latest advancements in physical AI, watch the GTC keynote by NVIDIA founder and CEO Jensen Huang.

See notice regarding software product information.
  • NVIDIA Unveils AI-Q Blueprint to Connect AI Agents for the Future of Work
    blogs.nvidia.com
    AI agents are the new digital workforce, transforming business operations, automating complex tasks and unlocking new efficiencies. Now, with the ability to collaborate, these agents can work together to solve complex problems and drive even greater impact.

Businesses across industries, including sports and finance, can more quickly harness these benefits with AI-Q, a new NVIDIA Blueprint for developing agentic systems that can use reasoning to unlock knowledge in enterprise data.

Smarter Agentic AI Systems With NVIDIA AI-Q and AgentIQ Toolkit

AI-Q provides an easy-to-follow reference for integrating NVIDIA accelerated computing, partner storage platforms, and software and tools, including the new NVIDIA Llama Nemotron reasoning models. AI-Q offers a powerful foundation for enterprises to build digital workforces that break down agentic silos and are capable of handling complex tasks with high accuracy and speed.

AI-Q integrates fast multimodal extraction and world-class retrieval using NVIDIA NeMo Retriever, NVIDIA NIM microservices and AI agents.

The blueprint is powered by the new NVIDIA AgentIQ toolkit for seamless, heterogeneous connectivity between agents, tools and data. Released today on GitHub, AgentIQ is an open-source software library for connecting, profiling and optimizing teams of AI agents, fueled by enterprise data, to create multi-agent, end-to-end systems. It can be easily integrated with existing multi-agent systems, either in parts or as a complete solution, with a simple onboarding process that's 100% opt-in.

The AgentIQ toolkit also enhances transparency with full system traceability and profiling, enabling organizations to monitor performance, identify inefficiencies and gain a fine-grained understanding of how business intelligence is generated.
This profiling data can be used with NVIDIA NIM and the NVIDIA Dynamo open-source library to optimize the performance of agentic systems.

The New Enterprise AI Agent Workforce

As AI agents become digital employees, IT teams will support onboarding and training. The AI-Q blueprint and AgentIQ toolkit support digital employees by enabling collaboration between agents and optimizing performance across different agentic frameworks.

Enterprises using these tools will be able to more easily connect AI agent teams across solutions like Salesforce's Agentforce, Atlassian Rovo in Confluence and Jira, and the ServiceNow AI platform for business transformation to break down silos, streamline tasks and cut response times from days to hours.

AgentIQ also integrates with frameworks and tools like CrewAI, LangGraph, Llama Stack, Microsoft Azure AI Agent Service and Letta, letting developers work in their preferred environment. Azure AI Agent Service is integrated with AgentIQ to enable more efficient AI agents and orchestration of multi-agent frameworks using Semantic Kernel, which is fully supported in AgentIQ.

A wide range of industries are integrating visual perception and interactive capabilities into their agents and copilots. Financial services leader Visa is using AI agents to streamline cybersecurity, automating phishing email analysis at scale.
Using the profiler feature of AI-Q, Visa can optimize agent performance and costs, maximizing AI's role in efficient threat response.

Get Started With AI-Q and AgentIQ

AI-Q integration into the NVIDIA Metropolis VSS blueprint is enabling multimodal agents, combining visual perception with speech, translation and data analytics for enhanced intelligence.

Developers can use the AgentIQ toolkit open-source library today and sign up for the hackathon to build hands-on skills for advancing agentic systems. Plus, learn how an NVIDIA solutions architect used the AgentIQ toolkit to improve AI code generation.

Agentic systems built with AI-Q require a powerful AI data platform. NVIDIA partners are delivering these customized platforms, which continuously process data to let AI agents quickly access knowledge to reason and respond to complex queries.

See notice regarding software product information.
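The post doesn't show AgentIQ's actual API, but the profiling idea it describes (tracing each agent or tool call to spot inefficiencies and cost hotspots) can be illustrated with a generic, hypothetical wrapper. This is not AgentIQ code, just a sketch of the underlying technique:

```python
import time
from collections import defaultdict
from functools import wraps

# Per-component call counts and cumulative latency, in the spirit of the
# traceability/profiling described above (generic sketch, not the AgentIQ API).
stats = defaultdict(lambda: {"calls": 0, "seconds": 0.0})

def profiled(name):
    """Decorator that records how often and how long a component runs."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                stats[name]["calls"] += 1
                stats[name]["seconds"] += time.perf_counter() - start
        return wrapper
    return deco

# Two toy "agents" standing in for retrieval and summarization steps.
@profiled("retriever")
def retrieve(query):
    return f"docs for {query!r}"

@profiled("summarizer")
def summarize(docs):
    return docs.upper()

answer = summarize(retrieve("quarterly fraud trends"))
```

After a run, `stats` shows which component dominates latency, which is the kind of fine-grained visibility the article says profiling provides.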
  • Driving Impact: NVIDIA Expands Automotive Ecosystem to Bring Physical AI to the Streets
    blogs.nvidia.com
    The autonomous vehicle (AV) revolution is here, and NVIDIA is at its forefront, bringing more than two decades of automotive computing, software and safety expertise to power innovation from the cloud to the car.

At NVIDIA GTC, a global AI conference taking place this week in San Jose, California, dozens of transportation leaders are showcasing their latest advancements with NVIDIA technologies that span passenger cars, trucks, commercial vehicles and more.

Mobility leaders are increasingly turning to NVIDIA's three core accelerated compute platforms: NVIDIA DGX systems for training the AI-based stack in the data center, NVIDIA Omniverse and NVIDIA Cosmos running on NVIDIA OVX systems for simulation and synthetic data generation, and the NVIDIA DRIVE AGX in-vehicle computer for processing real-time sensor data for safe, highly automated and autonomous driving capabilities.

For manufacturers and developers in the multitrillion-dollar auto industry, this unlocks new possibilities for designing, manufacturing and deploying functionally safe, intelligent mobility solutions, offering consumers safer, smarter and more enjoyable experiences.

Transforming Passenger Vehicles

The U.S.'s largest automaker, General Motors (GM), is collaborating with NVIDIA to develop and build its next-generation vehicles, factories and robots using NVIDIA's accelerated compute platforms. GM has been investing in NVIDIA GPU platforms for training AI models. The companies' collaboration now expands to include optimizing factory planning using Omniverse with Cosmos and deploying next-generation vehicles at scale, accelerated by NVIDIA DRIVE AGX.
This will help GM build physical AI systems tailored to its company vision, craft and know-how, and ultimately enable mobility that's safer, smarter and more accessible than ever.

Volvo Cars, which is using the NVIDIA DRIVE AGX in-vehicle computer in its next-generation electric vehicles, and its subsidiary Zenseact use the NVIDIA DGX platform to analyze and contextualize sensor data, unlock new insights and train future safety models that will enhance overall vehicle performance and safety.

Lenovo has teamed with robotics company Nuro to create a robust end-to-end system for level 4 autonomous vehicles that prioritizes safety, reliability and convenience. The system is built on NVIDIA DRIVE AGX in-vehicle compute.

Advancements in Trucking

NVIDIA's AI-driven technologies are also supercharging trucking, helping address pressing challenges like driver shortages, rising e-commerce demands and high operational costs. NVIDIA DRIVE AGX delivers the computational muscle needed for safe, reliable and efficient autonomous operations, improving road safety and logistics on a massive scale.

Gatik is integrating DRIVE AGX for the onboard AI processing necessary for its freight-only class 6 and 7 trucks, manufactured by Isuzu Motors, which offer driverless middle-mile delivery of a wide range of goods to Fortune 500 customers including Tyson Foods, Kroger and Loblaw.

Uber Freight is also adopting DRIVE AGX as the AI computing backbone of its current and future carrier fleets, sustainably enhancing efficiency and saving costs for shippers.

Torc is developing a scalable, physical AI compute system for autonomous trucks.
The system uses NVIDIA DRIVE AGX in-vehicle compute and the NVIDIA DriveOS operating system with Flex's Jupiter platform and manufacturing capabilities to support Torc's productization and scaled market entry in 2027.

Growing Demand for DRIVE AGX

The NVIDIA DRIVE AGX Orin platform is the AI brain behind today's intelligent fleets, and the next wave of mobility is already arriving as production vehicles built on the NVIDIA DRIVE AGX Thor centralized car computer start to hit the roads.

Magna is a key global automotive supplier helping to meet the surging demand for the NVIDIA Blackwell architecture-based DRIVE Thor platform, designed for the most demanding processing workloads, including those involving generative AI, vision language models and large language models (LLMs). Magna will develop driving systems built with DRIVE AGX Thor for integration in automakers' vehicle roadmaps, delivering active safety and comfort functions along with interior cabin AI experiences.

Simulation and Data: The Backbone of AV Development

Earlier this year, NVIDIA announced the Omniverse Blueprint for AV simulation, a reference workflow for creating rich 3D worlds for autonomous vehicle training, testing and validation. The blueprint is expanding to include NVIDIA Cosmos world foundation models (WFMs) to amplify photoreal data variation.

Unveiled at the CES trade show in January, Cosmos is already being adopted in automotive, including by Plus, which is embedding Cosmos physical AI models into its SuperDrive technology, accelerating the development of level 4 self-driving trucks.

Foretellix is extending its integration of the blueprint, using the Cosmos Transfer WFM to add conditions like weather and lighting to its sensor simulation scenarios to achieve greater situation diversity.
Mcity is integrating the blueprint into the digital twin of its AV testing facility to enable physics-based modeling of camera, lidar, radar and ultrasonic sensor data.

CARLA, which offers an open-source AV simulator, has integrated the blueprint to deliver high-fidelity sensor simulation. Global systems integrator Capgemini will be the first to use CARLA's Omniverse integration for enhanced sensor simulation in its AV development platform.

NVIDIA is using Nexar's extensive, high-quality, edge-case data to train and fine-tune NVIDIA Cosmos simulation capabilities. Nexar is tapping into Cosmos, neural infrastructure models and the NVIDIA DGX Cloud platform to supercharge its AI development, refining AV training, high-definition mapping and predictive modeling.

Enhancing In-Vehicle Experiences With NVIDIA AI Enterprise

Mobility leaders are integrating the NVIDIA AI Enterprise software platform, running on DRIVE AGX, to enhance in-vehicle experiences with generative and agentic AI.

At GTC, Cerence AI is showcasing Cerence xUI, its new LLM-based AI assistant platform that will advance the next generation of agentic in-vehicle user experiences. The Cerence xUI hybrid platform runs in the cloud as well as onboard the vehicle, optimized first on NVIDIA DRIVE AGX Orin.

As the foundation for Cerence xUI, the CaLLM family of language models is based on open-source foundation models and fine-tuned on Cerence AI's automotive dataset.
Tapping into NVIDIA AI Enterprise and bolstering inference performance, including through the NVIDIA TensorRT-LLM library and NVIDIA NeMo, Cerence AI has optimized CaLLM to serve as the central agentic orchestrator facilitating enriched driver experiences at the edge and in the cloud.

SoundHound will also be demonstrating its next-generation in-vehicle voice assistant, which uses generative AI at the edge with NVIDIA DRIVE AGX, enhancing the in-car experience by bringing cloud-based LLM intelligence directly to vehicles.

The Complexity of Autonomy and NVIDIA's Safety-First Solution

Safety is the cornerstone of deploying highly automated and autonomous vehicles on the roads at scale. But building AVs is one of today's most complex computing challenges. It demands immense computational power, precision and an unwavering commitment to safety.

AVs and highly automated cars promise to extend mobility to those who need it most, reducing accidents and saving lives. To help deliver on this promise, NVIDIA has developed NVIDIA Halos, a full-stack, comprehensive safety system that unifies vehicle architecture, AI models, chips, software, tools and services for the safe development of AVs from the cloud to the car.

NVIDIA will host its inaugural AV Safety Day at GTC today, featuring in-depth discussions on automotive safety frameworks and implementation. In addition, NVIDIA will host Automotive Developer Day on Thursday, March 20, offering sessions on the latest advancements in end-to-end AV development and beyond.

New Tools for AV Developers

NVIDIA also released new NVIDIA NIM microservices for automotive, designed to accelerate development and deployment of end-to-end stacks from cloud to car.
The new NIM microservices for in-vehicle applications, which use the nuScenes dataset by Motional, include:

- BEVFormer, a state-of-the-art transformer-based model that fuses multi-frame camera data into a unified bird's-eye-view representation for 3D perception.
- SparseDrive, an end-to-end autonomous driving model that performs motion prediction and planning simultaneously, outputting a safe planning trajectory.

For automotive enterprise applications, NVIDIA offers a variety of models, including NV-CLIP, a multimodal transformer model that generates embeddings from images and text; Cosmos Nemotron, a vision language model that queries and summarizes images and videos for multimodal understanding and AI-powered perception; and many more.

Learn more about NVIDIA's latest automotive news by watching the NVIDIA GTC keynote, and register for sessions from NVIDIA and industry leaders at the show, which runs through March 21.
  • Enterprises Ignite Big Savings With NVIDIA-Accelerated Apache Spark
    blogs.nvidia.com
    Tens of thousands of companies worldwide rely on Apache Spark to crunch massive datasets to support critical operations, as well as to predict trends, customer behavior, business performance and more. The faster a company can process and understand its data, the more it stands to make and save.

That's why companies with massive datasets, including the world's largest retailers and banks, have adopted the NVIDIA RAPIDS Accelerator for Apache Spark. The open-source software runs on top of the NVIDIA accelerated computing platform to significantly accelerate the processing of end-to-end data science and analytics pipelines without any code changes.

To make it even easier for companies to get value out of NVIDIA-accelerated Spark, NVIDIA today unveiled Project Aether, a collection of tools and processes that automatically qualify, test, configure and optimize Spark workloads for GPU acceleration at scale.

Project Aether Completes a Year's Worth of Work in Less Than a Week

Customers using Spark in production often manage tens of thousands of complex jobs, or more. Migrating from CPU-only to GPU-powered computing offers numerous and significant benefits, but it can be a manual and time-consuming process.

Project Aether automates the myriad steps that companies previously performed manually, including analyzing all of their Spark jobs to identify the best candidates for GPU acceleration, as well as staging and performing test runs of each job. It uses AI to fine-tune the configuration of each job to obtain maximum performance.

To understand the impact of Project Aether, consider an enterprise that has 100 Spark jobs to complete. With Project Aether, each of these jobs can be configured and optimized for NVIDIA GPU acceleration in as little as four days.
The same process done manually by a single data engineer could take up to an entire year.

CBA Drives AI Transformation With NVIDIA-Accelerated Apache Spark

Running Apache Spark on NVIDIA accelerated computing helps enterprises around the world complete jobs faster and with less hardware compared with using CPUs only, saving time, space, power and cooling, as well as on-premises capital and operational costs in the cloud.

Australia's largest financial institution, the Commonwealth Bank of Australia (CBA), is responsible for processing 60% of the continent's financial transactions. CBA was experiencing challenges from the latency and costs associated with running its Spark workloads. Using CPU-only computing clusters, the bank estimates it faced nearly nine years of processing time for its training backlog, on top of handling already taxing daily data demands.

"With 40 million inferencing transactions a day, it was critical we were able to process these in a timely, reliable manner," said Andrew McMullan, chief data and analytics officer at CBA.

Running RAPIDS Accelerator for Apache Spark on GPU-powered infrastructure provided CBA with a 640x performance boost, allowing the bank to process training on 6.3 billion transactions in just five days.
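Those two figures are consistent with each other: at a 640x speedup, a backlog of roughly nine CPU-years shrinks to about five days. A quick back-of-the-envelope check (figures taken from the article; the rounding is ours):

```python
# Quoted figures: ~9 years of CPU-only processing for the training
# backlog, and a 640x speedup from GPU acceleration.
cpu_days = 9 * 365          # ~3,285 days of CPU-only processing
speedup = 640
gpu_days = cpu_days / speedup
print(round(gpu_days, 1))   # prints 5.1, matching "just five days"
```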
Additionally, on its daily volume of 40 million transactions, CBA is now able to conduct inference in 46 minutes and reduce costs by more than 80% compared with using a CPU-based solution.

McMullan says another value of NVIDIA-accelerated Apache Spark is that it offers his team the compute-time efficiency needed to cost-effectively build models that can help CBA deliver better customer service, anticipate when customers may need assistance with home loans and more quickly detect fraudulent transactions.

CBA also plans to use NVIDIA-accelerated Apache Spark to better pinpoint where customers commonly end their digital journeys, enabling the bank to remediate when needed to reduce the rate of abandoned applications.

Global Ecosystem

RAPIDS Accelerator for Apache Spark is available through a global network of partners. It runs on Amazon Web Services, Cloudera, Databricks, Dataiku, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure. Dell Technologies today also announced the integration of RAPIDS Accelerator for Apache Spark with Dell Data Lakehouse.

To get assistance through NVIDIA Project Aether with a large-scale migration of Apache Spark workloads, apply for access.

To learn more, register for NVIDIA GTC and attend these key sessions featuring Walmart, Capital One, CBA and other industry leaders:

- How Walmart Uses RAPIDS to Improve Efficiency, and What We Have Learned Along the Way
- Accelerate Distributed Apache Spark Applications on Kubernetes With RAPIDS
- Build Lightning-Fast Data Science Pipelines in Industry With Accelerated Computing
- Advancing Transaction Fraud Detection in Financial Services With NVIDIA RAPIDS on AWS
- Accelerating Data Intelligence With GPUs and RAPIDS on Databricks
- Scale Your Apache Spark Data Processing With State-of-the-Art NVIDIA Blackwell GPUs for Cost Savings and Performance

See notice regarding software product information.
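For readers curious what "without any code changes" looks like in practice: the RAPIDS Accelerator is typically enabled through Spark configuration alone, by adding the plugin jar and a few properties. A minimal, illustrative configuration sketch follows; the jar path, version and GPU resource amounts are placeholders, so consult the RAPIDS Accelerator documentation for values that match your cluster:

```
# spark-defaults.conf (illustrative; jar path/version are placeholders)
spark.jars                          /opt/rapids/rapids-4-spark_2.12-XX.XX.X.jar
spark.plugins                       com.nvidia.spark.SQLPlugin
spark.rapids.sql.enabled            true
spark.executor.resource.gpu.amount  1
spark.task.resource.gpu.amount      0.125
```

With the plugin registered this way, supported SQL and DataFrame operations run on the GPU while unsupported ones fall back to the CPU, which is why existing job code can stay untouched.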
  • AI on the Menu: Yum! Brands and NVIDIA Partner to Accelerate Restaurant Industry Innovation
    blogs.nvidia.com
    The quick-service restaurant industry is a marvel of modern logistics, where speed, teamwork and kitchen operations are key ingredients for every order. Yum! Brands is now introducing AI-powered agents at select Pizza Hut and Taco Bell locations to assist and enhance the team member experience.

Today at the NVIDIA GTC conference, Yum! Brands announced a strategic partnership with NVIDIA, with a goal of deploying multiple AI solutions using NVIDIA technology in 500 restaurants this year.

World's Largest Restaurant Company Advances AI Adoption

Spanning more than 61,000 locations, Yum! operates more restaurants than any other company in the world. Globally, customers are drawn to the food, value, service and digital convenience of iconic brands like KFC, Taco Bell, Pizza Hut and Habit Burger & Grill.

Yum!'s industry-leading digital technology team continues to pioneer the company's AI-accelerated strategy with the recent announcement of Byte by Yum!, the company's proprietary, AI-driven digital restaurant technology platform.

Generative AI-powered, customer-facing experiences like automated ordering can help speed operations, but they're often difficult to scale because of complexity and costs.

To manage that complexity, developers at Byte by Yum! harnessed NVIDIA NIM microservices and NVIDIA Riva to build new AI-accelerated voice ordering agents in under four months. The voice AI is deployed on Amazon EC2 P4d instances accelerated by NVIDIA A100 GPUs, which enables the agents to understand natural speech, process complex menu orders and suggest add-ons, increasing accuracy and customer satisfaction and helping reduce bottlenecks in high-volume locations.

The new collaboration with NVIDIA will help Yum! advance its ongoing efforts to keep its engineering and data science teams in control of their own intelligence and deliver scalable inference costs, making large-scale deployments possible.

"At Yum!, we have a bold vision to deliver leading-edge, AI-powered technology capabilities to our customers and team members globally," said Joe Park, chief digital and technology officer of Yum! Brands, Inc. and president of Byte by Yum!. "We are thrilled to partner with a pioneering company like NVIDIA to help us accelerate this ambition. This partnership will enable us to harness the rich consumer and operational datasets on our Byte by Yum! integrated platform to build smarter AI engines that will create easier experiences for our customers and team members."

Rollout of AI Solutions Underway

Yum!'s voice AI agents are already being deployed across its brands, including in call centers to handle phone orders when demand surges during events like game days. An expanded rollout of AI solutions at up to 500 restaurants is expected this year.

Computer Vision and Restaurant Intelligence

Beyond AI-accelerated ordering, Yum! is also testing NVIDIA computer vision software to analyze drive-thru traffic and explore new use cases for AI to perceive, alert and adjust staffing, with the goal of optimizing service speed.

Another initiative focuses on NVIDIA AI-accelerated restaurant operational intelligence. Using NIM microservices, Yum! can deploy applications that analyze performance metrics across thousands of locations to generate customized recommendations for managers, identifying what top-performing stores do differently and applying those insights system-wide.

With the NVIDIA AI Enterprise software platform available on AWS Marketplace, Byte by Yum! is streamlining AI development and deployment through scalable NVIDIA infrastructure in the cloud.

The bottom line: AI is making restaurant operations and dining experiences easier, faster and more personal for the world's largest restaurant company.
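To make the ordering-agent behavior described above concrete, here is a deliberately toy sketch of the step that follows speech recognition: matching menu items in a transcript and suggesting an add-on. The menu, prices and pairing rules are invented for illustration; the production agents run on NIM and Riva models rather than string matching.

```python
# Toy post-speech-recognition step for a voice ordering agent.
# Menu data and add-on pairings are invented for illustration only.
MENU = {"crunchy taco": 1.99, "bean burrito": 2.49, "nacho fries": 2.29}
ADD_ONS = {"crunchy taco": "nacho fries", "bean burrito": "crunchy taco"}

def parse_order(transcript: str):
    """Return (matched menu items, suggested add-on) for a recognized utterance."""
    text = transcript.lower()
    items = [item for item in MENU if item in text]
    # Suggest the first paired add-on not already in the order.
    suggestion = next(
        (ADD_ONS[i] for i in items if i in ADD_ONS and ADD_ONS[i] not in items),
        None,
    )
    return items, suggestion

items, suggestion = parse_order("I'd like two crunchy tacos and a bean burrito")
```

A real deployment would replace the substring match with language-model intent parsing, but the agent's shape, recognize speech, extract items, upsell, is the same.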
  • Telecom Leaders Call Up Agentic AI to Improve Network Operations
    blogs.nvidia.com
    Global telecommunications networks can support millions of user connections per day, generating more than 3,800 terabytes of data per minute on average.

That massive, continuous flow of data generated by base stations, routers, switches and data centers, including network traffic information, performance metrics, configuration and topology, is unstructured and complex. Not surprisingly, traditional automation tools have often fallen short in handling massive, real-time workloads involving such data.

To help address this challenge, NVIDIA today announced at the GTC global AI conference that its partners are developing new large telco models (LTMs) and AI agents custom-built for the telco industry using NVIDIA NIM and NeMo microservices within the NVIDIA AI Enterprise software platform. These LTMs and AI agents enable the next generation of AI in network operations.

LTMs, customized multimodal large language models (LLMs) trained specifically on telco network data, are core elements in the development of network AI agents, which automate complex decision-making workflows, improve operational efficiency, boost employee productivity and enhance network performance.

SoftBank and Tech Mahindra have built new LTMs and AI agents, while Amdocs, BubbleRAN and ServiceNow are dialing up their network operations and optimization with new AI agents, all using NVIDIA AI Enterprise.

It's important work at a time when 40% of respondents in a recent NVIDIA-run telecom survey noted they're deploying AI into their network planning and operations.

LTMs Understand the Language of Networks

Just as LLMs understand and generate human language, and NVIDIA BioNeMo NIM microservices understand the language of biological data for drug discovery, LTMs now enable AI agents to master the language of telecom networks.

The new partner-developed LTMs powered by NVIDIA AI Enterprise are:

- Specialized in network intelligence: the LTMs can understand real-time network events, predict failures and automate resolutions.
- Optimized for telco workloads: tapping into NVIDIA NIM microservices, the LTMs are tuned for efficiency, accuracy and low latency.
- Suited for continuous learning and adaptation: with post-training scalability, the LTMs can use NVIDIA NeMo to learn from new events, alerts and anomalies to enhance future performance.

NVIDIA AI Enterprise provides additional tools and blueprints to build AI agents that simplify network operations and deliver cost savings and operational efficiency while improving network key performance indicators (KPIs), such as:

- Reduced downtime: AI agents can predict failures before they happen, delivering network resilience.
- Improved customer experiences: AI-driven optimizations lead to faster networks, fewer outages and seamless connectivity.
- Enhanced security: by continuously scanning for threats, AI can help mitigate cyber risks in real time.

Industry Leaders Launch LTMs and AI Agents

Leading companies across telecommunications are using NVIDIA AI Enterprise to advance their latest technologies.

SoftBank has developed a new LTM based on a large-scale LLM base model, trained on its own network data. Initially focused on network configuration, the model, which is available as an NVIDIA NIM microservice, can automatically reconfigure the network to adapt to changes in network traffic, including during mass events at stadiums and other venues. SoftBank is also introducing network agent blueprints to help accelerate AI adoption across telco operations.

Tech Mahindra has developed an LTM with the NVIDIA agentic AI tools to help address critical network operations. Tapping into this LTM, the company's Adaptive Network Insights Studio provides a 360-degree view of network issues, generating automated reports at various levels of detail to inform and assist IT teams, network engineers and company executives. In addition, Tech Mahindra's Proactive Network Anomaly Resolution Hub is powered by the LTM to automatically resolve a significant portion of its network events, lightening engineers' workloads and enhancing their productivity.

Amdocs' Network Assurance Agent, powered by amAIz Agents, automates repetitive tasks such as fault prediction. It also conducts impact analysis and prevention for network issues, providing step-by-step guidance on resolving any problems that occur. Plus, the company's Network Deployment Agent simplifies open radio access network (RAN) adoption by automating integration, deployment tasks and interoperability testing, and by providing insights to network engineers.

BubbleRAN is developing an autonomous multi-agent RAN intelligence platform on cloud-native infrastructure, where LTMs can observe the network state, configuration, availability and KPIs to facilitate monitoring and troubleshooting. The platform also automates network reconfiguration and policy enforcement through a high-level set of action tools. The company's AI agents satisfy user needs by tapping into advanced retrieval-augmented generation pipelines and telco-specific application programming interfaces, answering real-time, 5G deployment-specific questions.

ServiceNow's AI agents in telecom, built with NVIDIA AI Enterprise on NVIDIA DGX Cloud, drive productivity by generating resolution playbooks and predicting potential network disruptions before they occur. This helps communications service providers reduce resolution time and improve customer satisfaction. The new, ready-to-use AI agents also analyze network incidents, identifying root causes of disruptions so they can be resolved faster and avoided in the future.

Learn more about the latest agentic AI advancements at NVIDIA GTC, running through Friday, March 21, in San Jose, California.
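The failure-prediction agents described above rest on far richer models than any one-liner, but the core idea, flagging KPI samples that deviate sharply from normal behavior, can be sketched in a few lines. All data and thresholds below are invented for illustration.

```python
# Toy anomaly flagging for a network KPI stream (illustrative only; the
# partner systems above use trained LTMs, not a z-score).
from statistics import mean, stdev

def anomalies(samples, threshold=2.0):
    """Return indices of samples more than `threshold` std devs from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    return [i for i, x in enumerate(samples) if abs(x - mu) > threshold * sigma]

latency_ms = [12, 11, 13, 12, 11, 95, 12, 13]  # one obvious latency spike
print(anomalies(latency_ms))
```

In an agentic pipeline, such a detector would merely raise the event; the LTM-backed agent then diagnoses the root cause and proposes or executes a resolution.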
  • NVIDIA Aerial Expands With New Tools for Building AI-Native Wireless Networks
    blogs.nvidia.com
    The telecom industry is increasingly embracing AI to deliver seamless connections, even in conditions of poor signal strength, while maximizing sustainability and spectral efficiency, the amount of information that can be transmitted per unit of bandwidth.

Advancements in AI-RAN technology have set the course toward AI-native wireless networks for 6G, built using AI and accelerated computing from the start to meet the demands of billions of AI-enabled connected devices, sensors, robots, cameras and autonomous vehicles.

To help developers and telecom leaders pioneer these networks, NVIDIA today unveiled new tools in the NVIDIA Aerial Research portfolio. The expanded portfolio includes the Aerial Omniverse Digital Twin on NVIDIA DGX Cloud, the Aerial Commercial Test Bed on NVIDIA MGX, the NVIDIA Sionna 1.0 open-source library and the Sionna Research Kit on NVIDIA Jetson, helping accelerate AI-RAN and 6G research.

Industry leaders like Amdocs, Ansys, Capgemini, DeepSig, Fujitsu, Keysight, Kyocera, MathWorks, MediaTek, Samsung Research, SoftBank and VIAVI Solutions, along with more than 150 higher education and research institutions from the U.S. and around the world, including Northeastern University, Rice University, The University of Texas at Austin, ETH Zurich, the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI), the Singapore University of Technology and Design, and the University of Oulu, are harnessing the NVIDIA Aerial Research portfolio to develop, train, simulate and deploy groundbreaking AI-native wireless innovations.

New Tools for Research and Development

The Aerial Research portfolio provides exceptional flexibility and ease of use for developers at every stage of their research, from early experimentation to commercial deployment. Its offerings include:

- Aerial Omniverse Digital Twin (AODT): a simulation platform for testing and fine-tuning algorithms in physically precise digital replicas of entire wireless systems, now available on NVIDIA DGX Cloud. Developers can now access AODT everywhere, whether on premises, on laptops, via the public cloud or on an NVIDIA cloud service.
- Aerial Commercial Test Bed (also known as ARC-OTA): a full-stack AI-RAN deployment system that enables developers to deploy new AI models over the air and test them in real time, now available on NVIDIA MGX through manufacturers including Supermicro, or as a managed offering via Sterling Skywave. ARC-OTA integrates commercial-grade Aerial CUDA-accelerated RAN software with open-source L2+ and 5G core from OpenAirInterface (OAI) and O-RAN-compliant 7.2-split open radio units from WNC and LITEON Technology to enable an end-to-end system for AI-RAN commercial testing.
- Sionna 1.0: the most widely used GPU-accelerated open-source library for research in communication systems, with more than 135,000 downloads. The latest release features a lightning-fast ray tracer for radio propagation, a versatile link-level simulator and new system-level simulation capabilities.
- Sionna Research Kit: powered by the NVIDIA Jetson platform, it integrates accelerated computing for AI and machine learning workloads with a software-defined RAN built on OAI. With the kit, researchers can connect 5G equipment and begin prototyping AI-RAN algorithms for next-generation wireless networks in just a few hours.

NVIDIA Aerial Research Ecosystem for AI-RAN and 6G

The NVIDIA Aerial Research portfolio includes the NVIDIA 6G Developer Program, an open community of more than 2,000 members representing leading technology companies, academia, research institutions and telecom operators using NVIDIA technologies to complement their AI-RAN and 6G research.

Testing and simulation will play an essential role in developing AI-native wireless networks. Companies such as Amdocs, Ansys, Keysight, MathWorks and VIAVI are enhancing their simulation solutions with NVIDIA AODT, while operators have created digital twins of their radio access networks to optimize performance under changing traffic scenarios.

Nine out of 10 demonstrations chosen by the AI-RAN Alliance for Mobile World Congress were developed using the NVIDIA Aerial Research portfolio, leading to breakthrough results:

- SoftBank and Fujitsu demonstrated an up to 50% throughput gain in poor radio environments using AI-based uplink channel interpolation.
- DeepSig developed OmniPHY, an AI-native air interface that eliminates traditional pilot overhead, harnessing neural networks to achieve up to 70% throughput gains in certain scenarios. Using the NVIDIA AI Aerial platform, OmniPHY integrates machine learning into modulation, reception and demodulation to optimize spectral efficiency, reduce power consumption and enhance wireless network performance.

"AI-native signal processing is transforming wireless networks, delivering real-world results," said Jim Shea, cofounder and CEO of DeepSig. "By integrating deep learning into the air interface and leveraging NVIDIA's tools, we're redefining how AI-native wireless networks are designed and built."

In addition to the Aerial Research portfolio, developers can use the open ecosystem of NVIDIA CUDA-X libraries, built on CUDA, to create applications that deliver dramatically higher performance.

Join the NVIDIA 6G Developer Program to access NVIDIA Aerial Research platform tools.

See notice regarding software product information.
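Spectral efficiency, defined at the top of this piece as the information transmitted per unit of bandwidth, is upper-bounded by the Shannon capacity, C/B = log2(1 + SNR). A quick numeric sketch shows why throughput gains in poor radio environments matter so much:

```python
# Shannon bound on spectral efficiency (bits/s/Hz) at a given SNR in dB.
from math import log2

def spectral_efficiency_bound(snr_db: float) -> float:
    """Upper bound on bits per second per hertz at the given SNR."""
    snr_linear = 10 ** (snr_db / 10)  # convert dB to a linear power ratio
    return log2(1 + snr_linear)

# At 0 dB (signal equals noise) the bound is 1 bit/s/Hz;
# at 20 dB it rises to about 6.66 bits/s/Hz.
print(spectral_efficiency_bound(0), spectral_efficiency_bound(20))
```

Because capacity grows only logarithmically with SNR, techniques like the AI-based channel interpolation and pilot-free air interfaces above, which recover usable signal quality rather than just adding power, are the practical route to higher spectral efficiency.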
  • From AT&T to the United Nations, AI Agents Redefine Work With NVIDIA AI Enterprise
    blogs.nvidia.com
    AI agents are transforming work, delivering time and cost savings by helping people resolve complex challenges in new ways.

Whether developed for humanitarian aid, customer service or healthcare, AI agents built with the NVIDIA AI Enterprise software platform make up a new digital workforce, helping professionals accomplish their goals faster, at lower cost and for greater impact.

AI Agents Enable Growth and Education

AI can instantly translate, summarize and process multimodal content in hundreds of languages. Integrated into agentic systems, the technology enables international organizations to engage and educate global stakeholders more efficiently.

- The United Nations (UN) is working with Accenture to develop a multilingual research agent supporting over 150 languages to promote local economic sustainability. The agent will act like a researcher, answering questions about the UN's Sustainable Development Goals and fostering awareness of and engagement with its agenda of global peace and prosperity.
- Mercy Corps, in collaboration with Cloudera, has deployed an AI-driven Methods Matcher tool that supports humanitarian aid experts in more than 40 countries by providing research, summaries, best-practice guidelines and data-driven crisis responses, enabling faster aid delivery in disaster situations.
- Wikimedia Deutschland, using the DataStax AI Platform built with NVIDIA AI, can process and embed 10 million Wikidata items in just three days, with 30x faster ingestion performance.

AI Agents Provide Tailored Customer Service Across Industries

Agentic AI enhances customer service with real-time, highly accurate insights for more effective user experiences. AI agents provide 24/7 support, handling common inquiries with more personalized responses while freeing human agents to address more complex issues. Intelligent-routing capabilities categorize and prioritize requests so customers can be quickly directed to the right specialists. Plus, AI agents' predictive-analytics capabilities enable proactive support by anticipating issues and empowering human agents with data-driven insights.

Companies across industries, including telecommunications, finance, healthcare and sports, are already tapping into AI agents to achieve major benefits:

- AT&T, in collaboration with Quantiphi, developed and deployed a new Ask AT&T AI agent in its call center, leading to an 84% decrease in call center analytics costs.
- Southern California Edison, working with WWT, is driving Project Orca to enhance data processing and predictions for more than 100,000 network assets, using agents to reduce downtime, enhance network reliability and enable faster, more efficient ticket resolution.
- With the adoption of ServiceNow Dispute Management, built with Visa, banks can use AI agents to achieve up to a 28% reduction in call center volumes and a 30% decrease in time to resolution.
- The Ottawa Hospital, working with Deloitte, deployed a team of 24/7 patient-care agents to provide preoperative support and answer patient questions about upcoming procedures for over 1.2 million people in eastern Ontario, Canada.
- With the VAST Data Platform, the National Hockey League can unlock over 550,000 hours of historical game footage, supporting sponsorship analysis, helping video producers quickly create broadcast clips and enhancing personalized fan content.

State-of-the-Art AI Agents Built With NVIDIA AI Enterprise

AI agents have emerged as versatile tools that can be adapted and adopted across a wide range of industries. These agents connect to organizational knowledge bases to understand the business context they're deployed in. Their core functionalities, such as question-answering, translation, data processing, predictive analytics and automation, can be tailored by any organization, in any industry, to improve productivity and save time and costs.

NVIDIA AI Enterprise provides the building blocks for enterprise AI agents. It includes NVIDIA NIM microservices for efficient inference of state-of-the-art models, including the new NVIDIA Llama Nemotron reasoning model family, and NVIDIA NeMo tools to streamline data processing, model customization, system evaluation, retrieval-augmented generation and guardrailing.

NVIDIA Blueprints are reference workflows that showcase best practices for developing high-performance agentic systems. With the AI-Q NVIDIA AI Blueprint, developers can build AI agents into larger agentic systems that can reason, then connect these systems to enterprise data to tackle complex problems, harness other tools, collaborate and operate with greater autonomy.

Learn more about AI agent development by watching the NVIDIA GTC keynote and register for sessions from NVIDIA and industry leaders at the show, which runs through March 21.

See notice regarding software product information.
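Retrieval-augmented generation, one of the NeMo workflows named above, grounds an agent's answers in an organization's own documents. Here is a deliberately toy sketch of its shape, with naive keyword overlap standing in for embedding search and the assembled prompt shown in place of a real LLM call; the knowledge-base entries are invented.

```python
# Toy retrieval-augmented generation skeleton (illustrative only; production
# systems use embedding search and an inference service such as NIM).
def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank docs by naive keyword overlap with the query, return the top k."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt that grounds the model's answer in retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "Refunds are processed within 5 business days.",
    "Support is available 24/7 via chat.",
]
prompt = build_prompt("how long do refunds take", kb)
```

The same skeleton underlies the customer-service agents described above: retrieve the relevant policy or record, then let the model answer only from that context to keep responses accurate.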