NVIDIA
This is the Official NVIDIA Page
Recent updates
  • Nintendo Switch 2 Leveled Up With NVIDIA AI-Powered DLSS and 4K Gaming
    blogs.nvidia.com
    The Nintendo Switch 2, unveiled April 2, takes performance to the next level, powered by a custom NVIDIA processor featuring an NVIDIA GPU with dedicated RT Cores and Tensor Cores for stunning visuals and AI-driven enhancements.

    With 1,000 engineer-years of effort across every element, from system and chip design to a custom GPU, APIs and world-class development tools, the Nintendo Switch 2 brings major upgrades. The new console enables up to 4K gaming in TV mode and up to 120 FPS at 1080p in handheld mode. The Nintendo Switch 2 also supports HDR and AI upscaling to sharpen visuals and smooth gameplay.

    AI and Ray Tracing for Next-Level Visuals

    The new RT Cores bring real-time ray tracing, delivering lifelike lighting, reflections and shadows for more immersive worlds. Tensor Cores power AI-driven features like Deep Learning Super Sampling (DLSS), boosting resolution for sharper details without sacrificing image quality. Tensor Cores also enable AI-powered face tracking and background removal in video chat use cases, enhancing social gaming and streaming.

    With millions of players worldwide, the Nintendo Switch has become a gaming powerhouse and home to Nintendo's storied franchises. Its hybrid design redefined console gaming, bridging TV and handheld play.

    More Power, Smoother Gameplay

    With 10x the graphics performance of the Nintendo Switch, the Nintendo Switch 2 delivers smoother gameplay and sharper visuals. Tensor Cores boost AI-powered graphics while keeping power consumption efficient. RT Cores enhance in-game realism with dynamic lighting and natural reflections. Variable Refresh Rate (VRR) via NVIDIA G-SYNC in handheld mode ensures ultra-smooth, tear-free gameplay.

    Tools for Developers, Upgrades for Players

    Developers get improved game engines, better physics and optimized APIs for faster, more efficient game creation. Powered by NVIDIA, the Nintendo Switch 2 delivers for both players and developers.
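The resolution and frame-rate figures above translate directly into raw pixel throughput, which is why AI upscaling such as DLSS matters: rendering at a lower internal resolution and reconstructing the final frame cuts shading work dramatically. A back-of-the-envelope sketch (illustrative arithmetic only, not a measurement of actual Switch 2 GPU load):

```python
# Rough pixel-throughput arithmetic for the two display modes mentioned above.
def pixels_per_second(width, height, fps):
    return width * height * fps

tv_4k60 = pixels_per_second(3840, 2160, 60)          # 4K at 60 FPS (TV mode)
handheld_1080p120 = pixels_per_second(1920, 1080, 120)  # 1080p at 120 FPS

print(f"4K/60 TV mode:      {tv_4k60:,} px/s")
print(f"1080p/120 handheld: {handheld_1080p120:,} px/s")

# DLSS-style AI upscaling lets the GPU shade at a lower internal resolution,
# e.g. 1080p, and reconstruct a 4K image: a 4x reduction in shaded pixels.
print(f"shaded-pixel savings rendering 1080p -> 4K: {(3840 * 2160) // (1920 * 1080)}x")
```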
  • NVIDIA Showcases Real-Time AI and Intelligent Media Workflows at NAB
    blogs.nvidia.com
    Real-time AI is unlocking new possibilities in media and entertainment, improving viewer engagement and advancing intelligent content creation.

    At NAB Show, a premier conference for media and entertainment running April 5-9 in Las Vegas, NVIDIA will showcase how emerging AI tools and the technologies underpinning them help streamline workflows for streamers, content creators, sports leagues and broadcasters.

    Attendees can experience the power of the NVIDIA Blackwell platform, which serves as the foundation of NVIDIA Media2, a collection of NVIDIA technologies including NVIDIA NIM microservices and NVIDIA AI Blueprints for live video analysis, accelerated computing platforms and generative AI software.

    Attendees can also see NVIDIA Holoscan for Media, an advanced real-time AI platform designed for live media workflows and applications, in action at the Dell and Verizon booths, as well as experience the NVIDIA AI Blueprint for video search and summarization, which makes it easy to build and customize video analytics AI agents.

    NVIDIA will also present in these sessions:
    • Agentic Conversational AI: Transforming Engagement in Media and Entertainment
    • Asynchronous Sharing of Media Essence Data in Software-Defined Workflows
    • AI on Location: Deploying Private Networks and Edge Compute for Next-Gen Production Workflows
    • Redefining Entertainment: AI, Advanced Workflows and the Future of Media With Dell Technologies and NVIDIA

    Driving Innovation With Partners

    Partners across the industry are showcasing innovative solutions using NVIDIA technologies to accelerate live media.

    Amazon Web Services (booth W1701) will collaborate with NVIDIA to showcase an esports racing challenge through a live cloud production. The professional-grade racing simulator allows users to analyze their performance through cutting-edge AI-powered insights and step into the spotlight for their own post-race interview. Other demos will offer a peek into the future of live cloud production and generative AI in sports broadcasting.

    Beamr (booth SL1730MR) will demonstrate how it's driving AV1 adoption with GPU-accelerated video processing. Beamr's technology, powered by the NVIDIA NVENC encoder, enables cost-efficient, high-quality and scalable AV1 transformation.

    Dell (booth SL4616) is collaborating with a wide range of partners to highlight their latest innovations in the media industry. Autodesk will feature its Flame visual effects software for AI-driven compositing; Avid will demonstrate real-time editing and AI metadata tagging on Dell Pro Max high-performance PCs; and Boris FX and RE:Vision Effects will showcase their motion-tracking, slow-motion interpolation and object-removal technologies, all running on NVIDIA accelerated computing. In addition, Speed Read AI will showcase the use of NVIDIA RTX-powered workstations to analyze scripts in seconds, while Arcitecta and Elements will demonstrate high-speed media collaboration and post-production workflows on Dell PowerScale storage.

    HP (booth SL3723) will showcase its desktop and mobile workstation portfolio with NVIDIA RTX PRO Blackwell GPUs, delivering cutting-edge AI performance in a variety of use cases. Attendees can also find HP's newly announced AI solutions, the HP ZGX Nano AI Station G1n and HP ZGX Fury AI Station G1n, developed in collaboration with NVIDIA.

    Qvest (booth W2055) will spotlight two new AI solutions that help clients increase audience engagement, simplify insight gathering and streamline workflows. The Agentic Live Multi-Camera Video Event Extractor identifies, detects and extracts near-real-time events into structured outputs in an easily configurable, natural-language, no-code interface, and the No-Code Media-Centric AI Agent Builder extracts meaningful structured data from unstructured media formats including video, images and complex documents. Both use NVIDIA NIM microservices, NVIDIA NeMo, NVIDIA Holoscan for Media, the NVIDIA AI Blueprint for video search and summarization and more.

    Monks (booth W2530) will announce its complete suite of products and services for the media and entertainment industry, designed to drive innovation, monetization and efficiency. Monks uses tools under NVIDIA Media2, such as NVIDIA NIM microservices and Holoscan for Media, to enable real-time audience feedback, AI-powered selective encoding and contextual content analysis for large archives. The company will also launch a new suite of vision language model service offerings with its strategic partner TwelveLabs.

    Supermicro (booth W3713) will demonstrate the ease of setting up and running a complete AI video pipeline with WAN 2.1 and Adobe Premiere Pro, all running on the new high-performance Supermicro AS-531AW-TC workstation with an NVIDIA RTX PRO 6000 Blackwell Workstation Edition GPU. With RAVEL Orchestrate handling workstation and AI cluster orchestration, everything can run smoothly, from setup and deployment to user access and workload management.

    Speechmatics (booth W2317) will demonstrate its speech-to-text technology, which taps into NVIDIA accelerated computing to deliver highly accurate, real-time transcription across multiple languages and use cases, from media production to broadcast captioning.

    Telestream (booth W1501) will showcase its waveform monitoring solution, which seeks to bridge the gap for cloud-native workflows with a microservices architecture that taps into NVIDIA Holoscan for Media. In collaboration with NVIDIA, Telestream will demonstrate the ability to introduce cloud-native waveform monitoring to replicate broadcast center and master control room capabilities for engineering and creative teams.

    TwelveLabs (booth W3921) will showcase its newest models, which are being trained in part on NVIDIA DGX Cloud, to bring state-of-the-art video understanding to the world's largest sports teams, clubs and leagues. The company is currently developing models based on NVIDIA NIM microservices to bring media and entertainment customers highly efficient inference and easy integration with leading software frameworks and agentic applications.

    VAST Data (booth SL9213) will spotlight the VAST InsightEngine, a solution that securely ingests, processes and retrieves all enterprise data in real time, in a demo powered by the NVIDIA AI Enterprise software platform. Developed in collaboration with the National Hockey League, the demo showcases instant access to an archive of over 550,000 hours of hockey game footage. The work is set to redefine sponsorship analytics and empower video producers to instantly search, edit and deliver dynamic broadcast clips, fueling hyper-personalized fan experiences.

    Verizon (booth W2530) will showcase its Private 5G Network with Enterprise AI, which taps into NVIDIA Holoscan for Media for intelligent video prioritization and dynamic bitrate tuning. Directors can now use real-time AI inference to select the best camera angles based on preset criteria and, once the video feeds are selected, use AI for dynamic bitrate optimization to give viewers optimal experiences.

    Vizrt (booth W3031) will present its solution portfolio, which, when matched with NVIDIA accelerated computing and NVIDIA Maxine technology, simplifies complex processes to support the immersive talent reflections, shadow casting and 3D pose tracking of Reality Connect, in addition to Particle Effects, Talent Gesture Control, XR Draw and the AI Gaze Correction feature available in the TriCaster Vizion.

    V-Nova (booths W1252 and W1454) will spotlight its 6DoF virtual-reality experiences with new immersive content, Sharkarma and Weightless, in booth W1252, and AI-accelerated optimization in booth W1454, demonstrating how NVIDIA NVENC and NVIDIA GPUs unlock incredible video quality, efficiency and performance for critical video, AI and VR streaming cloud applications.

    Join NVIDIA at NAB Show 2025.
  • From Browsing to Buying: How AI Agents Enhance Online Shopping
    blogs.nvidia.com
    Editor's note: This post is part of the AI On blog series, which explores the latest techniques and real-world applications of agentic AI, chatbots and copilots. The series also highlights the NVIDIA software and hardware powering advanced AI agents, which form the foundation of AI query engines that gather insights and perform tasks to transform everyday experiences and reshape industries.

    Online shopping puts a world of choices at people's fingertips, making it convenient for them to purchase and receive orders all from the comfort of their homes. But too many choices can turn experiences from exciting to exhausting, leaving shoppers struggling to cut through the noise and find exactly what they need.

    By tapping into AI agents, retailers can deepen their customer engagement, enhance their offerings and maintain a competitive edge in a rapidly shifting digital marketplace.

    Every digital interaction results in new data being captured. This valuable customer data can be used to fuel generative AI and agentic AI tools that provide personalized recommendations and boost online sales. According to NVIDIA's latest State of AI in Retail and Consumer-Packaged Goods report, 64% of respondents investing in AI for digital retail are prioritizing hyper-personalized recommendations.

    [Video: https://blogs.nvidia.com/wp-content/uploads/2025/04/NVIDIA-AI-Blueprint-for-Retail-Shopping-Assistants-Fashion-Demo-SM-202504.mp4]

    Smart, Seamless and Personalized: The Future of Customer Experience

    AI agents offer a range of benefits that significantly improve the retail customer experience, including:
    • Personalized Experiences: Using customer insights and product information, these digital assistants can deliver the expertise of a company's best sales associate, stylist or designer, providing tailored product recommendations, enhancing decision-making, and boosting conversion rates and customer satisfaction.
    • Product Knowledge: AI agents enrich product catalogs with explanatory titles, enhanced descriptions and detailed attributes like size, warranty, sustainability and lifestyle uses. This makes products more discoverable and recommendations more personalized and informative, which increases consumer confidence.
    • Omnichannel Support: AI provides seamless integration of online and offline experiences, facilitating smooth transitions between digital and physical retail environments.
    • Virtual Try-On Capabilities: Customers can easily visualize products on themselves or in their homes in real time, helping improve product expectations and potentially lowering return rates.
    • 24/7 Availability: AI agents offer around-the-clock customer support across time zones and languages.

    Real-World Applications of AI Agents in Retail

    [Video: https://blogs.nvidia.com/wp-content/uploads/2025/04/NVIDIA-AI-Blueprint-for-Retail-Shopping-Assistants-Furniture-Demo-SM-202504.mp4]

    AI is redefining digital commerce, empowering retailers to deliver richer, more intuitive shopping experiences. From enhancing product catalogs with accurate, high-quality data to improving search relevance and offering personalized shopping assistance, AI agents are transforming how customers discover, engage with and purchase products online.

    AI agents for catalog enrichment automatically enhance product information with consumer-focused attributes. These attributes can range from basic details like size, color and material to technical details such as warranty information and compatibility. They also include contextual attributes, like sustainability, and lifestyle attributes, such as "for hiking." AI agents can also integrate service attributes, including delivery times and return policies, making items more discoverable and relevant to customers while addressing common concerns to improve purchase results.

    Amazon faced the challenge of ensuring complete and accurate product information for shoppers while reducing the effort and time required for sellers to create product listings. To address this, the company implemented generative AI using the NVIDIA TensorRT-LLM library. This technology allows sellers to input a product description or URL, and the system automatically generates a complete, enriched listing. The work helps sellers reach more customers and expand their businesses effectively while making the catalog more responsive and energy efficient.

    AI agents for search tap into enriched data to deliver more accurate and contextually relevant search results. By employing semantic understanding and personalization, these agents better match customer queries with the right products, making the overall search experience faster and more intuitive.

    Amazon Music has optimized its search capabilities using the Amazon SageMaker platform with NVIDIA Triton Inference Server and the NVIDIA TensorRT software development kit. This includes implementing vector search and transformer-based spell-correction models. As a result, when users search for music, even with typos or vague terms, they can quickly find what they're looking for. These optimizations, which make the search bar more effective and user friendly, have led to faster search times and 73% lower costs for Amazon Music.

    AI agents for shopping assistants build on the enriched catalog and improved search functionality. They offer personalized recommendations and answer queries in a detailed, relevant, conversational manner, guiding shoppers through their buying journeys with a comprehensive understanding of products and user intent.

    SoftServe, a leading IT advisor, has launched the SoftServe Gen AI Shopping Assistant, developed using the NVIDIA AI Blueprint for retail shopping assistants. SoftServe's shopping assistant offers seamless and engaging shopping experiences by helping customers discover products and access detailed product information quickly and efficiently. One of its standout features is the virtual try-on capability, which allows customers to visualize how clothing and accessories look on them in real time.

    Defining the Essential Traits of a Powerful AI Shopping Agent

    Highly skilled AI shopping assistants are designed to be multimodal, understanding text- and image-based prompts, voice and more through large language models (LLMs) and vision language models.
    These AI agents can search for multiple items simultaneously, complete complicated tasks such as creating a travel wardrobe and answer contextual questions, like whether a product is waterproof or requires dry cleaning. This high level of sophistication offers experiences akin to engaging with a company's best sales associate, delivering information to customers in a natural, intuitive way.

    With software building blocks, developers can design an AI agent with various features. The building blocks of a powerful retail shopping agent include:
    • Multimodal and Multi-Query Capabilities: These agents can process and respond to queries that combine text and images, making search processes more versatile and user friendly. They can also easily be extended to support other modalities, such as voice.
    • Integration With LLMs: Advanced LLMs, such as the NVIDIA Llama Nemotron family, bring reasoning capabilities to AI shopping assistants, enabling them to engage in natural, humanlike interactions. NVIDIA NIM microservices provide industry-standard application programming interfaces for simple integration into AI applications, development frameworks and workflows.
    • Management of Structured and Unstructured Data: NVIDIA NeMo Retriever microservices provide the ability to ingest, embed and understand retailers' suites of relevant data sources, such as customer preferences and purchases, product catalog text and image data, and more, helping ensure AI agent responses are relevant, accurate and context-aware.
    • Guardrails for Brand-Safe, On-Topic Conversations: NVIDIA NeMo Guardrails are implemented to help ensure that conversations with the shopping assistant remain safe and on topic, ultimately protecting brand values and bolstering customer trust.
    • State-of-the-Art Simulation Tools: The NVIDIA Omniverse platform and partner simulation technologies can help visualize products in physically accurate spaces. For example, customers looking to buy a couch could preview how the furniture would look in their own living room.

    By using these key technologies, retailers can design AI shopping agents that exceed customer expectations, driving higher satisfaction and improved operational efficiency. Retail organizations that harness AI agents are poised to experience evolving capabilities, such as enhanced predictive analytics for further personalized recommendations. And integrating AI with augmented- and virtual-reality technologies is expected to create even more immersive and engaging shopping environments, delivering a future where shopping experiences are more immersive, convenient and customer-focused than ever.

    Learn more about the AI Blueprint for retail shopping assistants.
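The building blocks described above, retrieval over catalog data, a guardrail check, and a response step, can be sketched as a minimal agent loop. Everything here is a hypothetical stand-in for illustration: real deployments would call NIM, NeMo Retriever and NeMo Guardrails services, not these toy functions, and the product names are invented.

```python
# Minimal sketch of a shopping-agent loop (all names and data hypothetical).
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    description: str

CATALOG = [
    Product("TrailLite Jacket", "waterproof shell for hiking, recycled fabric"),
    Product("CityFlex Sneaker", "breathable everyday sneaker"),
]

BLOCKED_TOPICS = {"politics", "medical advice"}  # toy guardrail policy

def retrieve(query: str) -> list[Product]:
    # Keyword matching stands in for embedding-based retrieval here.
    terms = query.lower().split()
    return [p for p in CATALOG if any(t in p.description for t in terms)]

def guardrail_ok(query: str) -> bool:
    return not any(topic in query.lower() for topic in BLOCKED_TOPICS)

def shopping_agent(query: str) -> str:
    if not guardrail_ok(query):
        return "I can help with shopping questions -- let's stay on topic."
    hits = retrieve(query)
    if not hits:
        return "No matching products found."
    return "You might like: " + ", ".join(p.name for p in hits)

print(shopping_agent("waterproof jacket for hiking"))
```

In a production system, each of these functions would be a service call: the retriever an embedding search over the enriched catalog, and the guardrail a policy model rather than a keyword list.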
  • No Foolin': GeForce NOW Gets 21 Games in April
    blogs.nvidia.com
    GeForce NOW isn't fooling around. This month, 21 games are joining the cloud gaming library of over 2,000 titles. Whether chasing epic adventures, testing skills in competitive battles or diving into immersive worlds, members can dive into April's arrivals, which are truly no joke.

    Get ready to stream, play and conquer the eight games available this week. Members can also get ahead of the pack with advanced access to South of Midnight, streaming soon before launch.

    Unleash the Magic

    South of Midnight, an action-adventure game developed by Compulsion Games, offers advanced access for gamers who purchase its Premium Edition. Dive into the title's hauntingly beautiful world before launch, exploring its rich Southern gothic setting and unique magical combat system while balancing magic with melee attacks. Step into the shadows.

    Set in a mystical version of the American South, the game combines elements of magic, mystery and adventure, weaving a compelling story that draws players in. The endless opportunities for exploration and combat, along with deep lore and engaging characters, make the game a must-play for fans of the action-adventure genre. With its blend of dark fantasy and historical influences, South of Midnight is poised to deliver a unique gaming experience that will leave players spellbound.

    GeForce NOW members can be among the first to get advanced access to the game without the hassle of downloads or updates. With an Ultimate or Performance membership, experience the game's haunting landscapes and cryptid encounters with the highest frame rates and lowest latency, no need for the latest hardware.

    April Is Calling

    Verdansk is back! Catch it in the cloud. Verdansk, the original and iconic map from Call of Duty: Warzone, is making its highly anticipated return in the game's third season, available to stream on GeForce NOW. Known for its sprawling urban areas, rugged wilderness and points of interest like Dam and Superstore, Verdansk offers a dynamic battleground for intense combat. The map has been rebuilt from the ground up with key enhancements across audio, visuals and gameplay, getting back to basics and delivering nostalgia for fans.

    Look for the following games available to stream in the cloud this week:
    • South of Midnight Advanced Access (Steam and Xbox, coming soon before launch)
    • Cat Quest (Epic Games Store)
    • Dark Deity 2 (Steam)
    • Hero Siege (Steam)
    • KARMA: The Dark World (Steam)
    • Sky: Children of the Light (Steam)
    • Train Sim World 5 (Steam)
    • Vivat Slovakia (Steam)

    Here's what to expect for April:
    • South of Midnight (New release on Steam and Xbox, available on PC Game Pass, April 8)
    • Commandos Origins (New release on Steam and Xbox, available on PC Game Pass, April 9)
    • The Talos Principle: Reawakened (New release on Steam, April 10)
    • Night Is Coming (New release on Steam, April 14)
    • Mandragora: Whispers of the Witch Tree (New release on Steam, April 17)
    • Sunderfolk (New release on Steam, April 23)
    • Clair Obscur: Expedition 33 (New release on Steam and Xbox, available on PC Game Pass, April 24)
    • Tempest Rising (New release on Steam, April 24)
    • Aimlabs (Steam)
    • Backrooms: Escape Together (Steam)
    • Blood Strike (Steam)
    • ContractVille (Steam)
    • EXFIL (Steam)

    March Madness

    In addition to the 14 games announced last month, 26 more joined the GeForce NOW library:
    • Aliens: Dark Descent (Xbox, available on PC Game Pass)
    • Beholder (Epic Games Store)
    • Bus Simulator 21 (Epic Games Store)
    • Citizen Sleeper 2: Starward Vector (Xbox, available on PC Game Pass)
    • Crime Boss: Rockay City (Epic Games Store)
    • Eternal Strands (Xbox, available on PC Game Pass)
    • Fable Anniversary (Steam)
    • FragPunk (New release on Steam, March 6)
    • Galacticare (Xbox, available on PC Game Pass)
    • Ghostrunner 2 (Epic Games Store)
    • Heroes of the Storm (Battle.net)
    • Kingdom Come: Deliverance II (Epic Games Store)
    • Microtopia (Steam)
    • Monster Hunter Wilds (Steam)
    • Nine Sols (Xbox, available on PC Game Pass)
    • One Lonely Outpost (Xbox, available on PC Game Pass)
    • Orcs Must Die! Deathtrap (Xbox, available on PC Game Pass)
    • Prey (Epic Games Store, Steam and Xbox, available on PC Game Pass)
    • Quake Live (Steam)
    • Skydrift Infinity (Epic Games Store)
    • To the Rescue! (Epic Games Store)
    • Undying (Epic Games Store)
    • Warcraft I: Remastered (Battle.net)
    • Warcraft II: Remastered (Battle.net)
    • Warcraft III: Reforged (Battle.net)
    • Warcraft Rumble (Battle.net)

    What are you planning to play this weekend? Let us know on X or in the comments below.

    "What's a gaming moment that made you laugh out loud?" NVIDIA GeForce NOW (@NVIDIAGFN), April 2, 2025
  • NVIDIA's Jacob Liberman on Bringing Agentic AI to Enterprises
    blogs.nvidia.com
    AI is rapidly transforming how organizations solve complex challenges. The early stages of enterprise AI adoption focused on using large language models to create chatbots. Now, enterprises are using agentic AI to create intelligent systems that reason, act and execute complex tasks with a degree of autonomy.

    Jacob Liberman, director of product management at NVIDIA, joined the NVIDIA AI Podcast to explain how agentic AI bridges the gap between powerful AI models and practical enterprise applications.

    Enterprises are deploying AI agents to free human workers from time-consuming and error-prone tasks. This allows people to spend more time on high-value work that requires creativity and strategic thinking. Liberman anticipates it won't be long before teams of AI agents and human workers collaborate to tackle complex tasks requiring reasoning, intuition and judgment. For example, enterprise software developers will work with AI agents to develop more efficient algorithms, and medical researchers will collaborate with AI agents to design and test new drugs.

    NVIDIA AI Blueprints help enterprises build their own AI agents, including many of the use cases listed above. "Blueprints are reference architectures implemented in code that show you how to take NVIDIA software and apply it to some productive task in an enterprise to solve a real business problem," Liberman said.

    The blueprints are entirely open source. A developer or service provider can deploy a blueprint directly, or customize it by integrating their own technology. Liberman highlighted the versatility of the AI Blueprint for customer service, for example, which features digital humans. "The digital human can be made into a bedside digital nurse, a sportscaster or a bank teller with just some verticalization," he said.

    Other popular NVIDIA Blueprints include a video search and summarization agent, an enterprise multimodal PDF chatbot and a generative virtual screening pipeline for drug discovery.

    Time Stamps:
    • 1:14 What is an AI agent?
    • 17:25 How software developers are early adopters of agentic AI.
    • 19:50 Explanation of test-time compute and reasoning models.
    • 23:05 Using AI agents in cybersecurity and risk management applications.

    You Might Also Like

    Imbue CEO Kanjun Qiu on Transforming AI Agents Into Personal Collaborators
    Kanjun Qiu, CEO of Imbue, discusses the emerging era of personal AI agents, drawing a parallel to the PC revolution and explaining how modern AI systems are evolving to enhance user capabilities through collaboration.

    Telenor's Kaaren Hilsen on Launching Norway's First AI Factory
    Kaaren Hilsen, chief innovation officer and head of the AI factory at Telenor, highlights Norway's first AI factory, which securely processes sensitive data within the country while promoting data sovereignty and environmental sustainability through green computing initiatives, including a renewable energy-powered data center in Oslo.

    Firsthand's Jon Heller Shares How AI Agents Enhance Consumer Journeys in Retail
    Jon Heller of Firsthand explains how the company's AI Brand Agents are boosting retail and digital marketing by personalizing customer experiences and converting marketing interactions into valuable research data.
  • NVIDIA GeForce RTX 50 Series Accelerates Adobe Premiere Pro and Media Encoder's 4:2:2 Color Sampling
    blogs.nvidia.com
    Video editing workflows are getting a lot more colorful. Adobe recently announced massive updates to Adobe Premiere Pro (beta) and Adobe Media Encoder, including PC support for 4:2:2 video color editing.

    The 4:2:2 color format is a game changer for professional video editors, as it retains nearly as much color information as 4:4:4 while greatly reducing file size. This improves color grading and chroma keying (using color information to isolate a specific range of hues) while maximizing efficiency and quality.

    In addition, new NVIDIA GeForce RTX 5090 and 5080 laptops built on the NVIDIA Blackwell architecture are out now, accelerating 4:2:2 and advanced AI-powered features across video-editing workflows.

    Adobe and other industry partners are attending NAB Show, a premier gathering of over 100,000 leaders in the broadcast, media and entertainment industries, running April 5-9 in Las Vegas. Professionals in these fields will come together for education, networking and exploring the latest technologies and trends.

    Shed Some Color on 4:2:2

    Consumer cameras that are limited to 4:2:0 color compression capture a limited amount of color information. 4:2:0 is acceptable for video playback on browsers, but professional video editors often rely on cameras that capture 4:2:2 color depth with precise color accuracy to ensure higher color fidelity.

    Premiere Pro's beta with 4:2:2 means video data can now provide double the color information with just a 1.3x increase in raw file size over 4:2:0.
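The "double the color information at 1.3x the raw size" figures follow directly from how chroma subsampling counts samples per 2x2 pixel block, which this quick sanity check works through:

```python
# Samples in a 2x2 pixel block: 4 luma (Y) samples always, plus one
# (Cb, Cr) chroma pair at each position the subsampling scheme keeps.
CHROMA_PAIRS_PER_2X2 = {"4:4:4": 4, "4:2:2": 2, "4:2:0": 1}

def samples_per_2x2(scheme: str) -> int:
    return 4 + 2 * CHROMA_PAIRS_PER_2X2[scheme]

s420, s422, s444 = (samples_per_2x2(s) for s in ("4:2:0", "4:2:2", "4:4:4"))

# Raw size vs 4:2:0 -- 8/6 samples, i.e. the ~1.3x increase cited above.
print(f"4:2:2 / 4:2:0 raw size: {s422 / s420:.2f}x")
# Chroma samples per block: 4 vs 2, i.e. double the color information.
print(f"4:2:2 / 4:2:0 chroma info: {(s422 - 4) / (s420 - 4):.0f}x")
# And 4:2:2 remains a third smaller than full 4:4:4.
print(f"4:2:2 / 4:4:4 raw size: {s422 / s444:.2f}x")
```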
    This unlocks several key benefits within professional video-production workflows:
    • Increased Color Accuracy: 10-bit 4:2:2 retains more color information compared with 8-bit 4:2:0, leading to more accurate color representation and better color grading results.
    • More Flexibility: The extra color data allows for increased flexibility during color correction and grading, enabling more nuanced adjustments and corrections.
    • Improved Keying: 4:2:2 is particularly beneficial for keying, including green screening, as it enables cleaner, more accurate extraction of the subject from the background, as well as cleaner edges of small keyed objects like hair.
    • Smaller File Sizes: Compared with 4:4:4, 4:2:2 reduces file sizes without significantly impacting picture quality, offering an optimal balance between quality and storage.

    Combining 4:2:2 support with NVIDIA hardware increases creative possibilities.

    Advanced Video Editing

    Prosumer-grade cameras from most major brands support HEVC and H.264 10-bit 4:2:2 formats to deliver superior image quality, manageable file sizes and the flexibility needed for professional video production. GeForce RTX 50 Series GPUs paired with Microsoft Windows 11 come with GPU-powered decode acceleration in HEVC and H.264 10-bit 4:2:2 formats.

    GPU-powered decode enables faster-than-real-time playback without stuttering, the ability to work with original camera media instead of proxies, smoother timeline responsiveness and reduced CPU load, freeing system resources for multi-app workflows and creative tasks. RTX 50 Series 4:2:2 hardware can decode up to six 4K 60-frames-per-second video sources on an RTX 5090-enabled Studio PC, enabling smooth multi-camera video-editing workflows in Adobe Premiere Pro.

    Video exports are also accelerated with NVIDIA's ninth-generation encoder and sixth-generation decoder.

    [Image: NVIDIA and GeForce RTX Laptop GPU encoders and decoders.]

    In GeForce RTX 50 Series GPUs, the ninth-generation NVIDIA video encoder, NVENC, offers an 8% BD-BR upgrade in video encoding efficiency when exporting to HEVC in Premiere Pro.

    Adobe AI Accelerated

    Adobe delivers an impressive array of advanced AI features for idea generation, enabling streamlined processes, improved productivity and opportunities to explore new artistic avenues, all accelerated by NVIDIA RTX GPUs.

    For example, Adobe Media Intelligence, a feature in Premiere Pro (beta) and After Effects (beta), uses AI to analyze footage and apply semantic tags to clips. This lets users more easily and quickly find specific footage by describing its content, including objects, locations, camera angles and even transcribed spoken words. Media Intelligence runs 30% faster on the GeForce RTX 5090 Laptop GPU compared with the GeForce RTX 4090 Laptop GPU.

    In addition, the Enhance Speech feature in Premiere Pro (beta) improves the quality of recorded speech by filtering out unwanted noise and making the audio sound clearer and more professional. Enhance Speech runs 7x faster on GeForce RTX 5090 Laptop GPUs compared with the MacBook Pro M4 Max.

    Visit Adobe's Premiere Pro page to download a free trial of the beta and explore the slew of AI-powered features across the Adobe Creative Cloud and Substance 3D apps.

    Unleash (AI)nfinite Possibilities

    GeForce RTX 5090 and 5080 Series laptops deliver the largest-ever generational leap in portable performance for creating, gaming and all things AI. They can run creative generative AI models such as Flux up to 2x faster in a smaller memory footprint compared with the previous generation.

    The previously mentioned ninth-generation NVIDIA encoders elevate video editing and livestreaming workflows, and the laptops come with NVIDIA DLSS 4 technology and up to 24GB of VRAM to tackle massive 3D projects. NVIDIA Max-Q hardware technologies use AI to optimize every aspect of a laptop (the GPU, CPU, memory, thermals, software, display and more) to deliver incredible performance and battery life in thin and quiet devices.

    All GeForce RTX 50 Series laptops include NVIDIA Studio platform optimizations, with over 130 GPU-accelerated content creation apps and exclusive Studio tools, including NVIDIA Studio Drivers, tested extensively to enhance performance and maximize stability in popular creative apps. The game-changing NVIDIA GeForce RTX 5090 and 5080 GPU laptops are available now.

    Adobe will participate in the Creator Lab at NAB Show, offering hands-on training for editors to elevate their skills with Adobe tools. Attend a 30-minute session and try out Puget Systems laptops equipped with GeForce RTX 5080 Laptop GPUs to experience blazing-fast performance and demo new generative AI features.

    Use NVIDIA's product finder to explore available GeForce RTX 50 Series laptops with complete specifications. New creative app updates and optimizations are powered by the NVIDIA Studio platform. Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

    See notice regarding software product information.
    0 Comentários ·0 Compartilhamentos ·13 Visualizações
  • Speed Demon: NVIDIA Blackwell Takes Pole Position in Latest MLPerf Inference Results
    blogs.nvidia.com
    In the latest MLPerf Inference V5.0 benchmarks, which reflect some of the most challenging inference scenarios, the NVIDIA Blackwell platform set records and marked NVIDIA's first MLPerf submission using the NVIDIA GB200 NVL72 system, a rack-scale solution designed for AI reasoning.

    Delivering on the promise of cutting-edge AI takes a new kind of compute infrastructure, called AI factories. Unlike traditional data centers, AI factories do more than store and process data: they manufacture intelligence at scale by transforming raw data into real-time insights. The goal for AI factories is simple: deliver accurate answers to queries quickly, at the lowest cost and to as many users as possible.

    The complexity of pulling this off is significant and takes place behind the scenes. As AI models grow to billions and trillions of parameters to deliver smarter replies, the compute required to generate each token increases. This requirement reduces the number of tokens that an AI factory can generate and increases cost per token. Keeping inference throughput high and cost per token low requires rapid innovation across every layer of the technology stack, spanning silicon, network systems and software.

    The latest updates to MLPerf Inference, a peer-reviewed industry benchmark of inference performance, include the addition of Llama 3.1 405B, one of the largest and most challenging-to-run open-weight models. The new Llama 2 70B Interactive benchmark features much stricter latency requirements compared with the original Llama 2 70B benchmark, better reflecting the constraints of production deployments in delivering the best possible user experiences.

    In addition to the Blackwell platform, the NVIDIA Hopper platform demonstrated exceptional performance across the board, with performance increasing significantly over the last year on Llama 2 70B thanks to full-stack optimizations.

    NVIDIA Blackwell Sets New Records

    The GB200 NVL72 system, connecting 72 NVIDIA Blackwell GPUs to act as a single, massive GPU, delivered up to 30x higher throughput on the Llama 3.1 405B benchmark over the NVIDIA H200 NVL8 submission this round. This feat was achieved through more than triple the performance per GPU and a 9x larger NVIDIA NVLink interconnect domain.

    While many companies run MLPerf benchmarks on their hardware to gauge performance, only NVIDIA and its partners submitted and published results on the Llama 3.1 405B benchmark.

    Production inference deployments often have latency constraints on two key metrics. The first is time to first token (TTFT), or how long it takes for a user to begin seeing a response to a query given to a large language model. The second is time per output token (TPOT), or how quickly tokens are delivered to the user.

    The new Llama 2 70B Interactive benchmark has a 5x shorter TPOT and 4.4x lower TTFT, modeling a more responsive user experience. On this test, NVIDIA's submission using an NVIDIA DGX B200 system with eight Blackwell GPUs tripled performance over using eight NVIDIA H200 GPUs, setting a high bar for this more challenging version of the Llama 2 70B benchmark.

    Combining the Blackwell architecture and its optimized software stack delivers new levels of inference performance, paving the way for AI factories to deliver higher intelligence, increased throughput and faster token rates.

    NVIDIA Hopper AI Factory Value Continues Increasing

    The NVIDIA Hopper architecture, introduced in 2022, powers many of today's AI inference factories and continues to power model training. Through ongoing software optimization, NVIDIA increases the throughput of Hopper-based AI factories, leading to greater value.

    On the Llama 2 70B benchmark, first introduced a year ago in MLPerf Inference v4.0, H100 GPU throughput has increased by 1.5x. The H200 GPU, based on the same Hopper GPU architecture with larger and faster GPU memory, extends that increase to 1.6x.

    Hopper also ran every benchmark, including the newly added Llama 3.1 405B, Llama 2 70B Interactive and graph neural network tests. This versatility means Hopper can run a wide range of workloads and keep pace as models and usage scenarios grow more challenging.

    It Takes an Ecosystem

    This MLPerf round, 15 partners submitted stellar results on the NVIDIA platform, including ASUS, Cisco, CoreWeave, Dell Technologies, Fujitsu, Giga Computing, Google Cloud, Hewlett Packard Enterprise, Lambda, Lenovo, Oracle Cloud Infrastructure, Quanta Cloud Technology, Supermicro, Sustainable Metal Cloud and VMware.

    The breadth of submissions reflects the reach of the NVIDIA platform, which is available across all cloud service providers and server makers worldwide.

    MLCommons' work to continuously evolve the MLPerf Inference benchmark suite, keeping pace with the latest AI developments and providing the ecosystem with rigorous, peer-reviewed performance data, is vital to helping IT decision makers select optimal AI infrastructure.

    Learn more about MLPerf.

    Images and video taken at an Equinix data center in Silicon Valley.
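The two latency metrics above can be computed directly from per-token arrival timestamps. A minimal sketch in Python; the trace values are illustrative, not MLPerf data:

```python
def latency_metrics(request_time, token_times):
    """Compute time to first token (TTFT) and mean time per output
    token (TPOT) from a request timestamp and per-token arrival times."""
    ttft = token_times[0] - request_time
    if len(token_times) > 1:
        # average gap between consecutive output tokens
        tpot = (token_times[-1] - token_times[0]) / (len(token_times) - 1)
    else:
        tpot = float("nan")
    return ttft, tpot

# Illustrative trace: request at t=0 s, first token after 200 ms,
# then one token every 50 ms.
ttft, tpot = latency_metrics(0.0, [0.20, 0.25, 0.30, 0.35])
print(f"TTFT={ttft:.3f}s TPOT={tpot:.3f}s")  # TTFT=0.200s TPOT=0.050s
```

An interactive benchmark like Llama 2 70B Interactive simply tightens the thresholds these two numbers must stay under.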
  • NVIDIA GeForce RTX 50 Series Accelerates Adobe Premiere Pro and Media Encoders 4:2:2 Color Sampling
    blogs.nvidia.com
    Video editing workflows are getting a lot more colorful.

    Adobe recently announced massive updates to Adobe Premiere Pro (beta) and Adobe Media Encoder, including PC support for 4:2:2 video color editing.

    The 4:2:2 color format is a game changer for professional video editors, as it retains nearly as much color information as 4:4:4 while greatly reducing file size. This improves color grading and chroma keying (using color information to isolate a specific range of hues) while maximizing efficiency and quality.

    In addition, new NVIDIA GeForce RTX 5090 and 5080 laptops built on the NVIDIA Blackwell architecture are out now, accelerating 4:2:2 and advanced AI-powered features across video-editing workflows.

    Adobe and other industry partners are attending NAB Show, a premier gathering of over 100,000 leaders in the broadcast, media and entertainment industries, running April 5-9 in Las Vegas. Professionals in these fields will come together for education, networking and exploring the latest technologies and trends.

    Shed Some Color on 4:2:2

    Consumer cameras that are limited to 4:2:0 color compression capture a limited amount of color information. 4:2:0 is acceptable for video playback on browsers, but professional video editors often rely on cameras that capture 4:2:2 color depth with precise color accuracy to ensure higher color fidelity.

    Adobe Premiere Pro's beta with 4:2:2 means video data can now provide double the color information with just a 1.3x increase in raw file size over 4:2:0. This unlocks several key benefits within professional video-production workflows:

    Increased Color Accuracy: 10-bit 4:2:2 retains more color information compared with 8-bit 4:2:0, leading to more accurate color representation and better color grading results.

    More Flexibility: The extra color data allows for increased flexibility during color correction and grading, enabling more nuanced adjustments and corrections.

    Improved Keying: 4:2:2 is particularly beneficial for keying, including green screening, as it enables cleaner, more accurate extraction of the subject from the background, as well as cleaner edges of small keyed objects like hair.

    Smaller File Sizes: Compared with 4:4:4, 4:2:2 reduces file sizes without significantly impacting picture quality, offering an optimal balance between quality and storage.

    Combining 4:2:2 support with NVIDIA hardware increases creative possibilities.

    Advanced Video Editing

    Prosumer-grade cameras from most major brands support HEVC and H.264 10-bit 4:2:2 formats to deliver superior image quality, manageable file sizes and the flexibility needed for professional video production.

    GeForce RTX 50 Series GPUs paired with Microsoft Windows 11 come with GPU-powered decode acceleration in HEVC and H.264 10-bit 4:2:2 formats. GPU-powered decode enables faster-than-real-time playback without stuttering, the ability to work with original camera media instead of proxies, smoother timeline responsiveness and reduced CPU load, freeing system resources for multi-app workflows and creative tasks.

    RTX 50 Series 4:2:2 hardware can decode up to six 4K 60-frames-per-second video sources on an RTX 5090-enabled Studio PC, enabling smooth multi-camera video-editing workflows in Adobe Premiere Pro.

    Video exports are also accelerated with NVIDIA's ninth-generation encoder and sixth-generation decoder. In GeForce RTX 50 Series GPUs, the ninth-generation NVIDIA video encoder, NVENC, offers an 8% BD-BR upgrade in video encoding efficiency when exporting to HEVC in Premiere Pro.

    Adobe AI Accelerated

    Adobe delivers an impressive array of advanced AI features for idea generation, enabling streamlined processes, improved productivity and opportunities to explore new artistic avenues, all accelerated by NVIDIA RTX GPUs.

    For example, Adobe Media Intelligence, a feature in Premiere Pro (beta) and After Effects (beta), uses AI to analyze footage and apply semantic tags to clips. This lets users more easily and quickly find specific footage by describing its content, including objects, locations, camera angles and even transcribed spoken words. Media Intelligence runs 30% faster on the GeForce RTX 5090 Laptop GPU compared with the GeForce RTX 4090 Laptop GPU.

    In addition, the Enhance Speech feature in Premiere Pro (beta) improves the quality of recorded speech by filtering out unwanted noise and making the audio sound clearer and more professional. Enhance Speech runs 7x faster on the GeForce RTX 5090 Laptop GPU compared with the MacBook Pro M4 Max.

    Visit Adobe's Premiere Pro page to download a free trial of the beta and explore the slew of AI-powered features across the Adobe Creative Cloud and Substance 3D apps.

    Unleash (AI)nfinite Possibilities

    GeForce RTX 5090 and 5080 Series laptops deliver the largest-ever generational leap in portable performance for creating, gaming and all things AI. They can run creative generative AI models such as Flux up to 2x faster in a smaller memory footprint, compared with the previous generation. The previously mentioned ninth-generation NVIDIA encoders elevate video editing and livestreaming workflows, and the laptops come with NVIDIA DLSS 4 technology and up to 24GB of VRAM to tackle massive 3D projects.

    NVIDIA Max-Q hardware technologies use AI to optimize every aspect of a laptop (the GPU, CPU, memory, thermals, software, display and more) to deliver incredible performance and battery life in thin and quiet devices.

    All GeForce RTX 50 Series laptops include NVIDIA Studio platform optimizations, with over 130 GPU-accelerated content creation apps and exclusive Studio tools, including NVIDIA Studio Drivers, tested extensively to enhance performance and maximize stability in popular creative apps.

    The game-changing NVIDIA GeForce RTX 5090 and 5080 GPU laptops are available now.

    Adobe will participate in the Creator Lab at NAB Show, offering hands-on training for editors to elevate their skills with Adobe tools. Attend a 30-minute session and try out Puget Systems laptops equipped with GeForce RTX 5080 Laptop GPUs to experience blazing-fast performance and demo new generative AI features.

    Use NVIDIA's product finder to explore available GeForce RTX 50 Series laptops with complete specifications.

    New creative app updates and optimizations are powered by the NVIDIA Studio platform. Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

    See notice regarding software product information.
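The color-information and file-size figures above fall out of basic chroma-subsampling arithmetic. A short sketch (standard J:a:b sampling math, not Adobe or NVIDIA code) reproducing the roughly 1.3x and 2x numbers:

```python
def samples_per_pixel(a, b):
    """Average stored samples per pixel for 4:a:b chroma subsampling,
    measured over a 4-pixel-wide, 2-row reference block:
    8 luma samples, plus (a + b) positions each holding a Cb and a Cr sample."""
    luma = 8
    chroma = 2 * (a + b)  # Cb + Cr
    return (luma + chroma) / 8

s444 = samples_per_pixel(4, 4)  # 3.0 samples/pixel
s422 = samples_per_pixel(2, 2)  # 2.0 samples/pixel
s420 = samples_per_pixel(2, 0)  # 1.5 samples/pixel

print(round(s422 / s420, 2))   # 1.33 -> ~1.3x the raw size of 4:2:0
print((2 + 2) // (2 + 0))      # 2 -> twice the chroma samples of 4:2:0
print(round(s422 / s444, 2))   # 0.67 -> two-thirds the size of 4:4:4
```

The same ratios hold at any bit depth, since every sample grows by the same factor.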
  • RT NVIDIA AI PC: AI at your fingertips. NVIDIA NIM microservices arrive on RTX AI PCs & workstations making AI tool creation easier than ever. Plus...
    x.com
    RT NVIDIA AI PC: AI at your fingertips. NVIDIA NIM microservices arrive on RTX AI PCs & workstations, making AI tool creation easier than ever. Plus, Project G-Assist System Assistant expands PC AI abilities with a custom plugin builder. #RTXAIGarage: https://nvda.ws/4iZLlRn
  • Bubbles that look real enough to pop! Master the fine art of realistic 3D bubbles in Part 4 of our Studio Sessions tutorial series hosted by Alek...
    x.com
    Bubbles that look real enough to pop! Master the fine art of realistic 3D bubbles in Part 4 of our Studio Sessions tutorial series, hosted by Aleksandr Eskin. Watch now: https://nvda.ws/4iWy8c2
  • Every castle holds a story. @OOsteras "Edrugarth" is an awe-inspiring blend of medieval fantasy and digital artistry. Share your own artwork ...
    x.com
    Every castle holds a story. @OOsteras' "Edrugarth" is an awe-inspiring blend of medieval fantasy and digital artistry. Share your own artwork made with an NVIDIA GPU using #StudioShare for a chance to be featured!
  • Which part of your creative process do you enjoy most: concept, creation, or final polish?
    x.com
    Which part of your creative process do you enjoy most: concept, creation, or final polish?
  • Industrial Ecosystem Adopts Mega NVIDIA Omniverse Blueprint to Train Physical AI in Digital Twins
    blogs.nvidia.com
    Advances in physical AI are enabling organizations to embrace embodied AI across their operations, bringing unprecedented intelligence, automation and productivity to the world's factories, warehouses and industrial facilities.

    Humanoid robots can work alongside human teams, autonomous mobile robots (AMRs) can navigate complex warehouse environments, and intelligent cameras and visual AI agents can monitor and optimize entire facilities. In these ways, physical AI is becoming integral to today's industrial operations.

    Helping industrial enterprises accelerate the development, testing and deployment of physical AI, the Mega NVIDIA Omniverse Blueprint for testing multi-robot fleets in digital twins is now available in preview on build.nvidia.com.

    At Hannover Messe, a trade show on industrial development running through April 4 in Germany, manufacturing, warehousing and supply chain leaders such as Accenture and Schaeffler are showcasing their adoption of the blueprint to simulate Digit, a humanoid robot from Agility Robotics, and discussing how they use industrial AI and digital twins to optimize facility layouts, material flow and collaboration between humans and robots inside complex production environments.

    In addition, NVIDIA ecosystem partners including Delta Electronics, Rockwell Automation and Siemens are announcing further integrations with NVIDIA Omniverse and NVIDIA AI technologies at the event.

    Digital Twins: The Training Ground for Physical AI

    Industrial facility digital twins are physically accurate virtual replicas of real-world facilities that serve as critical testing grounds for simulating and validating physical AI, and for how robots and autonomous fleets interact, collaborate and tackle complex tasks before deployment.

    Developers can use NVIDIA Omniverse platform technologies and the Universal Scene Description (OpenUSD) framework to develop digital twins of their facilities and processes. This simulation-first approach dramatically accelerates development cycles while reducing the costs and risks associated with real-world testing.

    Built for a Diversity of Robots and AI Agents

    The Mega blueprint equips industrial enterprises with a reference workflow for combining sensor simulation and synthetic data generation to simulate complex human-robot interactions and verify the performance of autonomous systems in industrial digital twins.

    Enterprises can use Mega to test various robot brains and policies at scale for mobility, navigation, dexterity and spatial reasoning. This enables fleets comprising different types of robots to work together as a coordinated system.

    As robot brains execute their missions in simulation, they perceive the results of their actions through sensor simulation and plan their next action. This cycle continues until the policies are refined and ready for deployment. Once validated, these policies are deployed to real robots, which continue to learn from their environment, sending sensor information back through the entire loop and creating a continuous learning and improvement cycle.

    Transforming Industrial Operations With Visual AI Agents

    In addition to AMRs and humanoid robots, advanced visual AI agents extract information from live and recorded video data, enabling new levels of intelligence and automation. These visual AI agents bring real-time contextual awareness to robots and help to improve worker safety, maintain warehouse compliance, support visual inspection and maximize space utilization.

    To support developers building visual AI agents, which can be integrated with the Mega blueprint, NVIDIA last year announced an AI Blueprint for video search and summarization (VSS). At Hannover Messe, leading partners are featuring how they use the VSS blueprint to improve productivity and operational efficiency.

    Accelerating Industrial Digitalization

    The industrial world is now experiencing its software-defined moment, with visual AI agents and digital twins as the training ground for physical AI.

    Join NVIDIA and its partners at Hannover Messe to discover how AI agents and real-time simulation, powered by NVIDIA's Three Computer Solution, are reshaping industrial workflows and driving innovation, automation and efficiency in manufacturing.

    Read the technical blog to learn more about the Mega blueprint for industrial robot fleets. See the blueprint in action on this interactive demo page.

    Stay up to date by subscribing to NVIDIA news, joining the Omniverse community and following NVIDIA Omniverse on Instagram, LinkedIn, Medium and X.

    Explore the new self-paced Learn OpenUSD training curriculum, which includes free NVIDIA Deep Learning Institute courses for 3D practitioners and developers.

    See notice regarding software product information.
  • The Dream Life Awaits: Play inZOI on GeForce NOW Anytime, Anywhere
    blogs.nvidia.com
    A new resident is moving into the cloud: KRAFTON's inZOI joins the 2,000+ games in the GeForce NOW cloud gaming library.

    Plus, members can get ready for an exclusive sneak peek as the Sunderfolk First Look Demo comes to the cloud. The demo is exclusively available for players on GeForce NOW until April 7, including Performance and Ultimate members as well as free users.

    And explore the world of Atomfall, part of 12 games joining the cloud this week.

    Cloud of Possibilities

    Live the life of your dreams in the cloud. In inZOI, a groundbreaking life simulation game by KRAFTON that pushes the genre's boundaries, take on the role of an intern at AR COMPANY, managing virtual beings called Zois in a simulated city.

    The game features over 400 mental elements influencing Zois' behaviors. Experience the game's dynamic weather system, open-world environments inspired by real locations and cinematic cut scenes for key life events, and even create in-game objects. inZOI lets players craft unique stories and live out their dreams in a meticulously designed virtual world.

    Dive into the world of Zois without the need for high-end hardware. Members can manage their virtual homes, customize characters and explore the game's dynamic environments from various devices, streaming its detailed graphics and complex simulations with ease.

    A Magical Gateway

    Sunderfolk's First Look Demo has arrived on GeForce NOW, offering a tantalizing look into the magical realm of the Sunderlands. Designed as a TV-first experience, this shared-turn-based tactical role-playing game (RPG) enables using a mobile phone as the gameplay controller. Up to four players can gather around the big screen and embark on a journey filled with strategic battles.

    This second-screen approach keeps players engaged in real time, adding new layers of immersion. With all six unique character classes unlocked from the start, players can experience the early hours of the game, experimenting with different team compositions and tactics to overcome the challenges that await.

    Let the magic begin. Accessing the demo is a breeze: head to the GeForce NOW app, select Sunderfolk and jump right in. Explore the Sunderlands, engage in flexible turn-based combat and help rebuild the village of Arden to get a taste of the full game's depth and camaraderie.

    Gather the gaming squad, grab a phone and prepare to write a completely new legend in this RPG adventure. The First Look Demo is only available on GeForce NOW, where members can enjoy high-quality graphics and seamless gameplay on their phones and tablets, along with the innovative mobile-as-controller mechanic that makes Sunderfolk's couch co-op experience so engaging.

    Epic Adventures Await

    Enter a world where danger lurks in every shadow. Blending folk horror and intense combat, Atomfall is a survival-action game set in an alternate 1960s Britain, where the Windscale nuclear disaster has left Northern England a radioactive wasteland. Players explore eerie open zones filled with mutated creatures, cultists and Cold War mysteries while scavenging resources, crafting weapons and uncovering the truth behind the disaster. GeForce NOW members can stream it today across their devices of choice.

    Look for the following games available to stream in the cloud this week:

    Sunderfolk First Look Demo (New release, March 25)
    Atomfall (New release on Steam and Xbox, available on PC Game Pass, March 27)
    The First Berserker: Khazan (New release on Steam, March 27)
    inZOI (New release on Steam, March 27)
    Beholder (Epic Games Store)
    Bus Simulator 21 (Epic Games Store)
    Galacticare (Xbox, available on PC Game Pass)
    Half-Life 2 RTX Demo (Steam)
    The Legend of Heroes: Trails through Daybreak II (Steam)
    One Lonely Outpost (Xbox, available on PC Game Pass)
    Psychonauts (Xbox, available on PC Game Pass)
    Undying (Epic Games Store)

    What are you planning to play this weekend? Let us know on X or in the comments below.

    Which game do you think deserves a sequel? NVIDIA GeForce NOW (@NVIDIAGFN) March 26, 2025
  • Buzz Solutions Uses Vision AI to Supercharge the Electric Grid
    blogs.nvidia.com
    The reliability of the electric grid is critical. From handling demand surges and evolving power needs to preventing infrastructure failures that can cause wildfires, utility companies have a lot to keep tabs on.

    Buzz Solutions, a member of the NVIDIA Inception program for cutting-edge startups, is helping by using AI to improve how utilities monitor and maintain their infrastructure. Kaitlyn Albertoli, CEO and cofounder of Buzz Solutions, joined the AI Podcast to explain how the company's vision AI technology helps utilities spot potential problems faster.

    Buzz Solutions helps utility companies analyze the massive amounts of inspection data collected by drones and helicopters. The company's proprietary machine learning algorithms identify potential issues, ranging from broken and rusted components to encroaching vegetation and unwelcome wildlife visits, before they cause outages or wildfires.

    To help address substation issues, Buzz Solutions built PowerGUARD, a container-based application pipeline that uses AI to analyze video streams from substation cameras in real time. It detects security, safety, fire, smoke and equipment issues, annotates the video, then sends alerts via email or to a dashboard.

    PowerGUARD uses the NVIDIA DeepStream software development kit for processing and inference of video streams used in real-time video analytics. DeepStream runs within the NVIDIA Metropolis framework on the NVIDIA Jetson edge AI platform or on cloud-based virtual machines to improve performance, reduce costs and save time.

    Albertoli believes AI is just getting started in the utility industry, as it enables workers to take action rather than spend months reviewing images manually. "We are just at the tip of the iceberg of seeing AI enter into the energy sector and start to provide real value," she said.

    Time Stamps

    05:15: How Buzz Solutions saw an opportunity in the massive amounts of inspection data utility companies were collecting but not analyzing.
    12:25: The importance of modernizing energy infrastructure with actionable intelligence.
    16:27: How AI identifies critical risks like rusted components, vegetation encroachment and sparking issues before they cause wildfires.
    20:00: Buzz Solutions' innovative use of synthetic data to train algorithms for rare events.

    You Might Also Like

    Telenor Builds Norway's First AI Factory, Offering Sustainable and Sovereign Data Processing
    Telenor opened Norway's first AI factory in November 2024, enabling organizations to process sensitive data securely on Norwegian soil while prioritizing environmental responsibility. Telenor's Chief Innovation Officer and Head of the AI Factory, Kaaren Hilsen, discusses the AI factory's rapid development, going from concept to reality in under a year.

    NVIDIA's Josh Parker on How AI and Accelerated Computing Drive Sustainability
    AI isn't just about building smarter machines. It's about building a greener world. AI and accelerated computing are helping industries tackle some of the world's toughest environmental challenges. Joshua Parker, senior director of corporate sustainability at NVIDIA, explains how these technologies are powering a new era of energy efficiency.

    Currents of Change: ITIF's Daniel Castro on Energy-Efficient AI and Climate Change
    AI is everywhere. So, too, are concerns about advanced technology's environmental impact. Daniel Castro, vice president of the Information Technology and Innovation Foundation and director of its Center for Data Innovation, discusses his AI energy use report, which addresses misconceptions about AI's energy consumption. He also talks about the need for continued development of energy-efficient technology.

    Subscribe to the AI Podcast

    Get the AI Podcast through Amazon Music, Apple Podcasts, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, SoundCloud, Spotify, Stitcher and TuneIn.
  • NVIDIA NIM Microservices Now Available to Streamline Agentic Workflows on RTX AI PCs and Workstations
    blogs.nvidia.com
    Generative AI is unlocking new capabilities for PCs and workstations, including game assistants, enhanced content-creation and productivity tools and more.NVIDIA NIM microservices, available now, and AI Blueprints, in the coming weeks, accelerate AI development and improve its accessibility. Announced at the CES trade show in January, NVIDIA NIM provides prepackaged, state-of-the-art AI models optimized for the NVIDIA RTX platform, including the NVIDIA GeForce RTX 50 Series and, now, the new NVIDIA Blackwell RTX PRO GPUs. The microservices are easy to download and run. They span the top modalities for PC development and are compatible with top ecosystem applications and tools.The experimental System Assistant feature of Project G-Assist was also released today. Project G-Assist showcases how AI assistants can enhance apps and games. The System Assistant allows users to run real-time diagnostics, get recommendations on performance optimizations, or control system software and peripherals all via simple voice or text commands. Developers and enthusiasts can extend its capabilities with a simple plug-in architecture and new plug-in builder.Amid a pivotal moment in computing where groundbreaking AI models and a global developer community are driving an explosion in AI-powered tools and workflows NIM microservices, AI Blueprints and G-Assist are helping bring key innovations to PCs. This RTX AI Garage blog series will continue to deliver updates, insights and resources to help developers and enthusiasts build the next wave of AI on RTX AI PCs and workstations.Ready, Set, NIM!Though the pace of innovation with AI is incredible, it can still be difficult for the PC developer community to get started with the technology.Bringing AI models from research to the PC requires curation of model variants, adaptation to manage all of the input and output data, and quantization to optimize resource usage. 
In addition, models must be converted to work with optimized inference backend software and connected to new AI application programming interfaces (APIs). This takes substantial effort, which can slow AI adoption.NVIDIA NIM microservices help solve this issue by providing prepackaged, optimized, easily downloadable AI models that connect to industry-standard APIs. Theyre optimized for performance on RTX AI PCs and workstations, and include the top AI models from the community, as well as models developed by NVIDIA.NIM microservices support a range of AI applications, including large language models (LLMs), vision language models, image generation, speech processing, retrieval-augmented generation (RAG)-based search, PDF extraction and computer vision. Ten NIM microservices for RTX are available, supporting a range of applications, including language and image generation, computer vision, speech AI and more. Get started with these NIM microservices today:Language and Reasoning: Deepseek-R1-distill-llama-8B, Mistral-nemo-12B-instruct, Llama3.1-8B-instructImage Generation: Flux.devAudio: Riva Parakeet-ctc-0.6B-asr, Maxine Studio VoiceRAG: Llama-3.2-NV-EmbedQA-1B-v2Computer Vision and Understanding: NV-CLIP, PaddleOCR, Yolo-X-v1NIM microservices are also available through top AI ecosystem tools and frameworks.For AI enthusiasts, AnythingLLM and ChatRTX now support NIM, making it easy to chat with LLMs and AI agents through a simple, user-friendly interface. With these tools, users can create personalized AI assistants and integrate their own documents and data, helping automate tasks and enhance productivity.For developers looking to build, test and integrate AI into their applications, FlowiseAI and Langflow now support NIM and offer low- and no-code solutions with visual interfaces to design AI workflows with minimal coding expertise. Support for ComfyUI is coming soon. 
With these tools, developers can easily create complex AI applications like chatbots, image generators and data analysis systems.In addition, Microsoft VS Code AI Toolkit, CrewAI and Langchain now support NIM and provide advanced capabilities for integrating the microservices into application code, helping ensure seamless integration and optimization.Visit the NVIDIA technical blog and build.nvidia.com to get started.NVIDIA AI Blueprints Will Offer Pre-Built WorkflowsNVIDIA AI Blueprints give AI developers a head start in building generative AI workflows with NVIDIA NIM microservices.Blueprints are ready-to-use, extensible reference samples that bundle everything needed source code, sample data, documentation and a demo app to create and customize advanced AI workflows that run locally. Developers can modify and extend AI Blueprints to tweak their behavior, use different models or implement completely new functionality.PDF to podcast AI Blueprint coming soon.The PDF to podcast AI Blueprint will transform documents into audio content so users can learn on the go. By extracting text, images and tables from a PDF, the workflow uses AI to generate an informative podcast. For deeper dives into topics, users can then have an interactive discussion with the AI-powered podcast hosts.The AI Blueprint for 3D-guided generative AI will give artists finer control over image generation. While AI can generate amazing images from simple text prompts, controlling image composition using only words can be challenging. With this blueprint, creators can use simple 3D objects laid out in a 3D renderer like Blender to guide AI image generation. The artist can create 3D assets by hand or generate them using AI, place them in the scene and set the 3D viewport camera. 
Then, a prepackaged workflow powered by the FLUX NIM microservice will use the current composition to generate high-quality images that match the 3D scene.

NVIDIA NIM on RTX With Windows Subsystem for Linux

One of the key technologies that enables NIM microservices to run on PCs is Windows Subsystem for Linux (WSL). Microsoft and NVIDIA collaborated to bring CUDA and RTX acceleration to WSL, making it possible to run optimized, containerized microservices on Windows. This allows the same NIM microservice to run anywhere, from PCs and workstations to the data center and cloud. Get started with NVIDIA NIM on RTX AI PCs at build.nvidia.com.

Project G-Assist Expands PC AI Features With Custom Plug-Ins

As part of Project G-Assist, an experimental version of the System Assistant feature for GeForce RTX desktop users is now available via the NVIDIA App, with laptop support coming soon. G-Assist helps users control a broad range of PC settings (optimizing game and system settings, charting frame rates and other key performance statistics, and controlling select peripheral settings such as lighting) via basic voice or text commands.

G-Assist is built on NVIDIA ACE, the same AI technology suite game developers use to breathe life into non-player characters. Unlike AI tools that rely on massive cloud-hosted models requiring online access and paid subscriptions, G-Assist runs locally on a GeForce RTX GPU, so it's responsive, free and works without an internet connection. Manufacturers and software providers are already using ACE to create custom AI assistants like G-Assist, including MSI's AI Robot engine, the Streamlabs Intelligent AI Assistant and upcoming capabilities in HP's Omen Gaming Hub.

G-Assist was built for community-driven expansion. Get started with the NVIDIA GitHub repository, which includes samples and instructions for creating plug-ins that add new functionality.
Developers can define functions in simple JSON formats and drop configuration files into a designated directory, allowing G-Assist to automatically load and interpret them. Developers can even submit plug-ins to NVIDIA for review and potential inclusion.

Currently available sample plug-ins include Spotify, which enables hands-free music and volume control, and Google Gemini, which lets G-Assist invoke a much larger cloud-based AI for more complex conversations, brainstorming sessions and web searches using a free Google AI Studio API key. In the clip below, G-Assist asks Gemini which Legend to pick in Apex Legends when solo queueing, and whether it's wise to jump into Nightmare mode at level 25 in Diablo IV.

For even more customization, follow the instructions in the GitHub repository to generate G-Assist plug-ins using a ChatGPT-based Plug-in Builder. With this tool, users can write and export code, then integrate it into G-Assist, enabling quick, AI-assisted functionality that responds to text and voice commands. Watch how a developer used the Plug-in Builder to create a Twitch plug-in for G-Assist that checks whether a streamer is live.

More details on how to build, share and load plug-ins are available in the NVIDIA GitHub repository. Check out the G-Assist article for system requirements and additional information.

Build, Create, Innovate

NVIDIA NIM microservices for RTX are available at build.nvidia.com, providing developers and AI enthusiasts with powerful, ready-to-use tools for building AI applications.

Download Project G-Assist through the NVIDIA App's Home tab, in the Discovery section. G-Assist currently supports GeForce RTX desktop GPUs and a variety of voice and text commands in English. Future updates will add support for GeForce RTX laptop GPUs, new and enhanced G-Assist capabilities, and additional languages.
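To make the JSON-based plug-in idea concrete, here is a minimal sketch of what a function manifest and a loader-side sanity check might look like. The field names and the `get_stream_status` function are illustrative assumptions, not the documented G-Assist schema; consult the samples in the NVIDIA GitHub repository for the real format.

```python
import json

# Hypothetical plug-in manifest: one function G-Assist could expose to
# voice/text commands. Schema is invented for illustration only.
manifest = {
    "functions": [
        {
            "name": "get_stream_status",
            "description": "Check whether a given Twitch streamer is live.",
            "parameters": {
                "streamer_name": {"type": "string", "description": "Twitch handle"}
            },
        }
    ]
}

def validate_manifest(m: dict) -> bool:
    """Minimal sanity check: every declared function needs a name and description."""
    return all("name" in f and "description" in f for f in m.get("functions", []))

# A loader would read this file from the designated plug-in directory.
print(json.dumps(manifest, indent=2))
assert validate_manifest(manifest)
```

Dropping a file like this into the plug-in directory is what lets G-Assist discover new capabilities without recompilation, which is the core of the community-driven expansion model described above.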
Press Alt+G after installation to activate G-Assist.

Each week, RTX AI Garage features community-driven AI innovations and content for those looking to learn more about NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations.

Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X, and stay informed by subscribing to the RTX AI PC newsletter. Follow NVIDIA Workstation on LinkedIn and X.

See notice regarding software product information.
  • Lights, camera, render! In Part 3 of our Studio Sessions tutorial series, Aleksandr Eskin takes the next step in his 3D photorealistic dropper scene workflow with initial rendering. Watch now: https://nvda.ws/4iEzlEQ
    x.com
  • Multiple monitors or one ultrawide? What's your preference and why?
    x.com
  • Assassin's Creed Shadows Emerges From the Mist on GeForce NOW
    blogs.nvidia.com
Time to sharpen the blade. GeForce NOW brings a legendary addition to the cloud: Ubisoft's highly anticipated Assassin's Creed Shadows is now available for members to stream. Plus, dive into the updated version of the iconic Fable Anniversary, part of 11 games joining the cloud this week.

Silent as a Shadow

Take the Leap of Faith from the cloud. Explore 16th-century Japan, uncover conspiracies and shape the destiny of a nation, all from the cloud.

Assassin's Creed Shadows unfolds in 1579, during the turbulent Azuchi-Momoyama period of feudal Japan, a time of civil war and cultural exchange. Step into the roles of Naoe, a fictional shinobi assassin and daughter of Fujibayashi Nagato, and Yasuke, a character based on the historical African samurai. Their stories intertwine as they find themselves on opposite sides of a conflict.

The game's dynamic stealth system enables players to hide in shadows and use a new Observe mechanic to identify targets, tag enemies and highlight objectives. Yasuke and Naoe each have unique abilities and playstyles: Naoe excels in stealth, equipped with classic Assassin techniques and shinobi skills, while Yasuke offers a more combat-focused approach.

Navigate the turbulent Sengoku period on GeForce NOW, and experience the game's breathtaking landscapes and intense combat at up to 4K resolution and 120 frames per second with an Ultimate membership. Every sword clash and sweeping vista is delivered with exceptional smoothness and clarity.

A Classic Reborn

Fable Anniversary revitalizes the original Fable: The Lost Chapters with enhanced graphics, a new save system and Xbox achievements. This action role-playing game invites players to shape their heroes' destinies in the whimsical world of Albion. Make every choice from the cloud.

Fable Anniversary weaves an epic tale of destiny and choice, following the journey of a young boy whose life is forever changed when bandits raid his peaceful village of Oakvale.
Recruited to the Heroes' Guild, he embarks on a quest to uncover the truth about his family and confront the mysterious Jack of Blades. Players shape their hero's destiny through a series of moral choices. These decisions influence the story's progression and even manifest physically on the character.

Stream the title with a GeForce NOW membership across PCs that may not be game-ready, Macs, mobile devices, and Samsung and LG smart TVs. GeForce NOW transforms these devices into powerful gaming rigs, with up to eight-hour gaming sessions for Ultimate members.

Unleash the Games

Crash, smash, repeat. Wreckfest 2, the highly anticipated sequel by Bugbear Entertainment to the original demolition derby racing game, promises an even more intense and chaotic experience. The game features a range of customizable cars, from muscle cars to novelty vehicles, each with a story to tell. Play around with multiple modes, including traditional racing with physics-driven handling, and explore demolition derby arenas where the goal is to cause maximum destruction. With enhanced multiplayer features, including skills-based matchmaking and split-screen mode, Wreckfest 2 is the ultimate playground for destruction-racing enthusiasts.

Look for the following games available to stream in the cloud this week:

Assassin's Creed Shadows (New release on Steam and Ubisoft Connect, March 20)
Wreckfest 2 (New release on Steam, March 20)
Aliens: Dark Descent (Xbox, available on PC Game Pass)
Crime Boss: Rockay City (Epic Games Store)
Eternal Strands (Xbox, available on PC Game Pass)
Fable Anniversary (Steam)
Motor Town: Behind the Wheel (Steam)
Nine Sols (Xbox, available on PC Game Pass)
Quake Live (Steam)
Skydrift Infinity (Epic Games Store)
To the Rescue! (Epic Games Store)

What are you planning to play this weekend? Let us know on X or in the comments below.

"If you could go on a vacation to any video game realm, where would you go?" NVIDIA GeForce NOW (@NVIDIAGFN), March 19, 2025
  • EPRI, NVIDIA and Collaborators Launch Open Power AI Consortium to Transform the Future of Energy
    blogs.nvidia.com
The power and utilities sector keeps the lights on for the world's populations and industries. As the global energy landscape evolves, so must the tools it relies on.

To advance the next generation of electricity generation and distribution, many of the industry's members are joining forces through the creation of the Open Power AI Consortium. The consortium includes energy companies, technology companies and researchers developing AI applications to tackle domain-specific challenges, such as adapting to increased deployment of distributed energy resources and significant load growth on electric grids.

Led by independent, nonprofit energy R&D organization EPRI, the consortium aims to spur AI adoption in the power sector through a collaborative effort to build open models using curated, industry-specific data. The initiative was launched today at NVIDIA GTC, a global AI conference taking place through Friday, March 21, in San Jose, California.

"Over the next decade, AI has the great potential to revolutionize the power sector by delivering the capability to enhance grid reliability, optimize asset performance, and enable more efficient energy management," said Arshad Mansoor, EPRI's president and CEO.
"With the Open Power AI Consortium, EPRI and its collaborators will lead this transformation, driving innovation toward a more resilient and affordable energy future."

As part of the consortium, EPRI, NVIDIA and Articul8, a member of the NVIDIA Inception program for cutting-edge startups, are developing a set of domain-specific, multimodal large language models trained on massive libraries of proprietary energy and electrical engineering data from EPRI. These models can help utilities streamline operations, boost energy efficiency and improve grid resiliency.

The first version of an industry-first open AI model for electric and power systems was developed using hundreds of NVIDIA H100 GPUs and is expected to soon be available in early access as an NVIDIA NIM microservice.

"Working with EPRI, we aim to leverage advanced AI tools to address today's unique industry challenges, positioning us at the forefront of innovation and operational excellence," said Vincent Sorgi, CEO of PPL Corporation and EPRI board chair. PPL is a leading U.S. energy company that provides electricity and natural gas to more than 3.6 million customers in Pennsylvania, Kentucky, Rhode Island and Virginia.

The Open Power AI Consortium's Executive Advisory Committee includes executives from over 20 energy companies, such as Duke Energy, Pacific Gas & Electric Company and Portland General Electric, as well as leading tech companies such as AWS, Oracle and Microsoft. The consortium plans to further expand its global member base.

Powering Up AI to Energize Operations, Drive Innovation

Global energy consumption is projected to grow by nearly 4% annually through 2027, according to the International Energy Agency.
To support this surge in demand, electricity providers are looking to enhance the resiliency of power infrastructure, balance diverse energy sources and expand the grid's capacity.

AI agents trained on thousands of documents specific to this sector, including academic research, industry regulations and standards, and technical documents, can enable utility and energy companies to more quickly assess energy needs and prepare the studies and permits required to improve infrastructure.

"We can bring AI to the global power sector in a much more accelerated way by working together to develop foundation models for the industry, and collaborating with the power sector to apply solutions tailored to its unique needs," Mansoor said.

Utilities could tap the consortium's model to help accelerate interconnection studies, which analyze the feasibility and potential impact of connecting new generators to the existing electric grid. The process varies by region but can take up to four years to complete. By introducing AI agents that can support the analysis, the consortium aims to cut this timeline down by at least 5x.

The AI model could also be used to support the preparation of licenses, permits, environmental studies and utility rate cases, where energy companies seek regulatory approval and public comment on proposed changes to electricity rates.

Beyond releasing datasets and models, the consortium also aims to develop a standardized framework of benchmarks to help utilities, researchers and other energy sector stakeholders evaluate the performance and reliability of AI technologies.

Learn more about the Open Power AI Consortium online and in EPRI's sessions at GTC:

Accelerate Energy Transformation With Industry Domain AI Models, with Arshad Mansoor, president and CEO of EPRI
Energy Transition: Impact of Generative AI in the Power Ecosystem of Generation, Transmission and Distribution, with Swati Daji, executive vice president and chief financial, risk and operations officer at EPRI

To learn more about advancements in AI across industries, watch the GTC keynote by NVIDIA founder and CEO Jensen Huang.

See notice regarding software product information.
  • Innovation to Impact: How NVIDIA Research Fuels Transformative Work in AI, Graphics and Beyond
    blogs.nvidia.com
The roots of many of NVIDIA's landmark innovations, the foundational technology that powers AI, accelerated computing, real-time ray tracing and seamlessly connected data centers, can be found in the company's research organization, a global team of around 400 experts in fields including computer architecture, generative AI, graphics and robotics.

Established in 2006 and led since 2009 by Bill Dally, former chair of Stanford University's computer science department, NVIDIA Research is unique among corporate research organizations: it was set up with a mission to pursue complex technological challenges while having a profound impact on the company and the world.

"We make a deliberate effort to do great research while being relevant to the company," said Dally, chief scientist and senior vice president of NVIDIA Research. "It's easy to do one or the other. It's hard to do both."

Dally is among the NVIDIA Research leaders sharing the group's innovations at NVIDIA GTC, the premier developer conference at the heart of AI, taking place this week in San Jose, California.

While many research organizations may describe their mission as pursuing projects with a longer time horizon than those of a product team, NVIDIA researchers seek out projects with a larger risk horizon and a huge potential payoff if they succeed.

"Our mission is to do the right thing for the company. It's not about building a trophy case of best paper awards or a museum of famous researchers," said David Luebke, vice president of graphics research and NVIDIA's first researcher. "We are a small group of people who are privileged to be able to work on ideas that could fail."
"And so it is incumbent upon us to not waste that opportunity and to do our best on projects that, if they succeed, will make a big difference."

Innovating as One Team

One of NVIDIA's core values is "one team," a deep commitment to collaboration that helps researchers work closely with product teams and industry stakeholders to transform their ideas into real-world impact.

"Everybody at NVIDIA is incentivized to figure out how to work together because the accelerated computing work that NVIDIA does requires full-stack optimization," said Bryan Catanzaro, vice president of applied deep learning research at NVIDIA. "You can't do that if each piece of technology exists in isolation and everybody's staying in silos. You have to work together as one team to achieve acceleration."

When evaluating potential projects, NVIDIA researchers consider whether the challenge is a better fit for a research or product team, whether the work merits publication at a top conference, and whether there's a clear potential benefit to NVIDIA. If they decide to pursue the project, they do so while engaging with key stakeholders.

"We work with people to make something real, and often, in the process, we discover that the great ideas we had in the lab don't actually work in the real world," Catanzaro said. "It's a tight collaboration where the research team needs to be humble enough to learn from the rest of the company what they need to do to make their ideas work."

The team shares much of its work through papers, technical conferences and open-source platforms like GitHub and Hugging Face.
But its focus remains on industry impact. "We think of publishing as a really important side effect of what we do, but it's not the point of what we do," Luebke said.

NVIDIA Research's first effort focused on ray tracing, which after a decade of sustained work led directly to the launch of NVIDIA RTX and redefined real-time computer graphics. The organization now includes teams specializing in chip design, networking, programming systems, large language models, physics-based simulation, climate science, humanoid robotics and self-driving cars, and it continues expanding to tackle additional areas of study and tap expertise across the globe.

Transforming NVIDIA and the Industry

NVIDIA Research didn't just lay the groundwork for some of the company's most well-known products; its innovations have propelled and enabled today's era of AI and accelerated computing.

It began with CUDA, a parallel computing software platform and programming model that enables researchers to tap GPU acceleration for myriad applications. Launched in 2006, CUDA made it easy for developers to harness the parallel processing power of GPUs to speed up scientific simulations, gaming applications and the creation of AI models.

"Developing CUDA was the single most transformative thing for NVIDIA," Luebke said. "It happened before we had a formal research group, but it happened because we hired top researchers and had them work with top architects."

Making Ray Tracing a Reality

Once NVIDIA Research was founded, its members began working on GPU-accelerated ray tracing, spending years developing the algorithms and the hardware to make it possible.
In 2009, the project, led by the late Steven Parker, a real-time ray tracing pioneer who was vice president of professional graphics at NVIDIA, reached the product stage with the NVIDIA OptiX application framework, detailed in a 2010 SIGGRAPH paper.

The researchers' work expanded and, in collaboration with NVIDIA's architecture group, eventually led to the development of NVIDIA RTX ray-tracing technology, including the RT Cores that enabled real-time ray tracing for gamers and professional creators.

Unveiled in 2018, NVIDIA RTX also marked the launch of another NVIDIA Research innovation: NVIDIA DLSS, or Deep Learning Super Sampling. With DLSS, the graphics pipeline no longer needs to draw all the pixels in a video. Instead, it draws a fraction of the pixels and gives an AI pipeline the information needed to create the image in crisp, high resolution.

Accelerating AI for Virtually Any Application

NVIDIA's research contributions in AI software kicked off with the NVIDIA cuDNN library for GPU-accelerated neural networks, which was developed as a research project when the deep learning field was still in its initial stages, then released as a product in 2014.

As deep learning soared in popularity and evolved into generative AI, NVIDIA Research was at the forefront, exemplified by NVIDIA StyleGAN, a groundbreaking visual generative AI model that demonstrated how neural networks could rapidly generate photorealistic imagery.

"While generative adversarial networks, or GANs, were first introduced in 2014, StyleGAN was the first model to generate visuals that could completely pass muster as a photograph," Luebke said. "It was a watershed moment."

NVIDIA researchers introduced a slew of popular GAN models, such as the AI painting tool GauGAN, which later developed into the NVIDIA Canvas application.
And with the rise of diffusion models, neural radiance fields and Gaussian splatting, they're still advancing visual generative AI, including in 3D with recent models like Edify 3D and 3DGUT.

In the field of large language models, Megatron-LM was an applied research initiative that enabled the efficient training and inference of massive LLMs for language-based tasks such as content generation, translation and conversational AI. It's integrated into the NVIDIA NeMo platform for developing custom generative AI, which also features speech recognition and speech synthesis models that originated in NVIDIA Research.

Achieving Breakthroughs in Chip Design, Networking, Quantum and More

AI and graphics are only some of the fields NVIDIA Research tackles; several teams are achieving breakthroughs in chip architecture, electronic design automation, programming systems, quantum computing and more.

In 2012, Dally submitted a research proposal to the U.S. Department of Energy for a project that would become NVIDIA NVLink and NVSwitch, the high-speed interconnect that enables rapid communication between GPU and CPU processors in accelerated computing systems.

In 2013, the circuit research team published work on chip-to-chip links that introduced a signaling system co-designed with the interconnect to enable a high-speed, low-area and low-power link between dies. The project eventually became the link between the NVIDIA Grace CPU and NVIDIA Hopper GPU.

In 2021, the ASIC and VLSI Research group developed a software-hardware codesign technique for AI accelerators called VS-Quant that enabled many machine learning models to run with 4-bit weights and 4-bit activations at high accuracy.
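To see why 4-bit inference is hard, consider the simplest baseline: symmetric int4 quantization with one shared scale factor. The sketch below is NOT the VS-Quant algorithm (which adds finer-grained per-vector scale factors to recover accuracy); it is a plain illustration of mapping floats into the signed 4-bit range that VS-Quant improves on.

```python
def quantize_int4(values):
    """Map floats to the signed 4-bit integer range [-8, 7] with one shared scale."""
    scale = max(abs(v) for v in values) / 7.0 or 1.0  # avoid zero scale
    q = [max(-8, min(7, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats from 4-bit codes."""
    return [x * scale for x in q]

weights = [0.12, -0.7, 0.33, 0.05]
q, s = quantize_int4(weights)
approx = dequantize(q, s)
# Round-trip error is bounded by half a quantization step per element.
assert all(abs(a - w) <= s / 2 + 1e-9 for a, w in zip(weights, approx))
```

With only 16 representable levels, one outlier inflates the shared scale and crushes small weights toward zero, which is exactly the accuracy problem finer-grained scaling schemes like VS-Quant, and hardware FP4 formats, are designed to address.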
Their work influenced the development of FP4 precision support in the NVIDIA Blackwell architecture.

And unveiled this year at the CES trade show was NVIDIA Cosmos, a platform created by NVIDIA Research to accelerate the development of physical AI for next-generation robots and autonomous vehicles. Read the research paper and check out the AI Podcast episode on Cosmos for details.

Learn more about NVIDIA Research at GTC, and watch the keynote by NVIDIA founder and CEO Jensen Huang.

See notice regarding software product information.
  • NVIDIA Blackwell Powers Real-Time AI for Entertainment Workflows
    blogs.nvidia.com
AI has been shaping the media and entertainment industry for decades, from early recommendation engines to AI-driven editing and visual effects automation. Real-time AI, which lets companies actively drive content creation, personalize viewing experiences and rapidly deliver data insights, marks the next wave of that transformation.

With the NVIDIA RTX PRO Blackwell GPU series, announced yesterday at the NVIDIA GTC global AI conference, media companies can now harness real-time AI for media workflows with unprecedented speed, efficiency and creative potential.

NVIDIA Blackwell serves as the foundation of NVIDIA Media2, an initiative that enables real-time AI by bringing together NVIDIA technologies, including NVIDIA NIM microservices, NVIDIA AI Blueprints, accelerated computing platforms and generative AI software, to transform all aspects of production workflows and experiences, starting with content creation, streaming and live media.

Powering Intelligent Content Creation

Accelerated computing enables AI-driven workflows to process massive datasets in real time, unlocking faster rendering, simulation and content generation. NVIDIA RTX PRO Blackwell series GPUs include new features that enable unprecedented graphics and AI performance. The NVIDIA Streaming Multiprocessor offers up to 1.5x faster throughput over the NVIDIA Ada generation, plus new neural shaders that integrate AI inside programmable shaders for advanced content creation. Fourth-generation RT Cores deliver up to 2x the performance of the previous generation, enabling the creation of massive photoreal and physically accurate animated scenes. Fifth-generation Tensor Cores deliver up to 4,000 trillion AI operations per second and add support for FP4 precision.
And up to 96GB of GDDR7 memory boosts GPU bandwidth and capacity, allowing applications to run faster and work with larger, more complex datasets for massive 3D and AI projects, large-scale virtual-reality environments and more.

"One of the most exciting aspects of new technology is how it empowers our artists with tools to enhance their creative workflows," said Steve May, chief technology officer of Pixar Animation Studios. "With Pixar's next-generation renderer, RenderMan XPU, optimized for the NVIDIA Blackwell platform, 99% of Pixar shots can now fit within the 96GB of memory on the NVIDIA RTX PRO 6000 Blackwell GPUs. This breakthrough will fundamentally improve the way we make movies."

"Our artists were frequently maxing out our 48GB cards with ILM StageCraft environments and having to battle performance issues on set for 6K and 8K real-time renders," said Stephen Hill, principal rendering engineer at Lucasfilm. "The new NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition GPU lifts these limitations. We're seeing upwards of a 2.5x performance increase over our current production GPUs, and with 96GB of VRAM we now have twice as much memory to play with."

In addition, neural rendering with NVIDIA RTX Kit brings cinematic-quality ray tracing and AI-enhanced graphics to real-time engines, elevating visual fidelity in film, TV and interactive media. Including neural texture compression, neural shaders, RTX Global Illumination and Mega Geometry, RTX Kit is a suite of neural rendering technologies that enhance graphics for games, animation, virtual production scenes and immersive experiences.

Fueling the Future of Streaming and Data Analytics

Data analytics is transforming raw audience insights into actionable intelligence faster than ever.
NVIDIA accelerated computing and AI-powered frameworks enable studios to analyze viewer behavior, predict engagement patterns and optimize content in real time, driving hyper-personalized experiences and smarter creative decisions. With the new GPUs, users can achieve real-time ingestion and data transformation with GPU-accelerated data loading and cleansing at scale.

The NVIDIA technologies accelerating streaming and data analytics include a suite of NVIDIA CUDA-X data processing libraries that enable immediate insights from continuous data streams and reduce latency, such as:

NVIDIA cuML: Enables GPU-accelerated training and inference for recommendation models using scikit-learn algorithms, providing real-time personalization capabilities and up-to-date, relevant content recommendations that boost viewer engagement while reducing churn.

NVIDIA cuDF: Offers pandas DataFrame operations on GPUs, enabling faster and more efficient NVIDIA-accelerated extract, transform and load (ETL) operations and analytics. cuDF helps optimize content delivery by analyzing user data to predict demand and adjust content distribution in real time, improving overall user experiences.

Along with cuML and cuDF, accelerated data science libraries provide seamless integration with the open-source Dask library for multi-GPU or multi-node clusters.
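Because cuDF mirrors the pandas API, a streaming-analytics aggregation can be written once in ordinary pandas and accelerated unchanged: with cuDF installed, running the script via `python -m cudf.pandas` routes the same DataFrame calls to the GPU. The column names and toy data below are illustrative, not from any real dataset.

```python
import pandas as pd  # unchanged under cuDF's cudf.pandas accelerator mode

# Toy viewer-event ETL of the kind the cuDF bullet describes.
events = pd.DataFrame({
    "user_id":  [1, 1, 2, 2, 3],
    "minutes":  [42, 13, 95, 7, 60],
    "finished": [True, False, True, False, True],
})

# Per-viewer watch time and completion rate: the sort of aggregate a
# recommendation or churn model would consume downstream.
summary = (
    events.groupby("user_id")
          .agg(total_minutes=("minutes", "sum"),
               completion_rate=("finished", "mean"))
          .reset_index()
)
print(summary)
```

On GPU-scale event volumes, the same groupby/agg pattern is where cuDF's acceleration pays off, since no code changes are needed to move from a laptop prototype to a GPU-backed pipeline.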
NVIDIA RTX PRO Blackwell GPUs' large memory can further assist with handling massive datasets and spikes in usage without sacrificing performance. And the video search and summarization blueprint integrates vision language models and large language models, providing cloud-native building blocks for video analytics, search and summarization applications.

Breathing Life Into Live Media

With NVIDIA RTX PRO Blackwell GPUs, broadcasters can achieve higher performance than ever in high-resolution video processing, real-time augmented reality, and AI-driven content production and video analytics. New features include:

Ninth-Generation NVIDIA NVENC: Adds support for 4:2:2 encoding, accelerating video encoding speed and improving quality for broadcast and live media applications while reducing the cost of storing uncompressed video.

Sixth-Generation NVIDIA NVDEC: Provides up to double the H.264 decoding throughput and offers support for 4:2:2 H.264 and HEVC decode. Professionals can benefit from high-quality video playback, accelerated video data ingestion and advanced AI-powered video editing features.

Fifth-Generation PCIe: Provides double the bandwidth over the previous generation, improving data transfer speeds from CPU memory and unlocking faster performance for data-intensive tasks.

DisplayPort 2.1: Drives high-resolution displays at up to 8K at 240Hz and 16K at 60Hz. Increased bandwidth enables seamless multi-monitor setups, while high dynamic range and higher color depth support deliver more precise color accuracy for tasks like video editing and live broadcasting.

"The NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition GPU is a transformative force in Cosm's mission to redefine immersive entertainment," said Devin Poolman, chief product and technology officer at Cosm, a global immersive technology, media and entertainment company.
"With its unparalleled performance, we can push the boundaries of real-time rendering, unlocking the ultra-high resolution and fluid frame rates needed to make our live, immersive experiences feel nearly indistinguishable from reality."

As a key component of Cosm's CX System 12K LED dome displays, the RTX PRO 6000 Max-Q enables seamless merging of the physical and digital worlds to deliver shared-reality experiences, letting audiences engage with sports, live events and cinematic content in entirely new ways. Cosm's shared-reality experience features an 87-foot-diameter LED dome display in stunning 12K resolution, with millions of pixels shining 10x brighter than the brightest cinematic display.

To learn more about NVIDIA Media2, watch the GTC keynote and register to attend sessions from NVIDIA and industry leaders at the show, which runs through Friday, March 21. Try NVIDIA NIM microservices and AI Blueprints on build.nvidia.com.
  • NVIDIA Honors Americas Partners Advancing Agentic and Physical AI
    blogs.nvidia.com
NVIDIA this week recognized 14 partners leading the way across the Americas for their work advancing agentic and physical AI across industries.

The 2025 Americas NVIDIA Partner Network (NPN) awards, announced at the GTC 2025 global AI conference, represent key efforts by industry leaders to help customers become experts in using AI to solve many of today's greatest challenges. The awards honor the diverse contributions of NPN members fostering AI-driven innovation and growth.

This year, NPN introduced three new award categories that reflect how AI is driving economic growth and opportunities:

Trailblazer, which honors a visionary partner spearheading AI adoption and setting new industry standards.
Rising Star, which celebrates an emerging talent helping industries harness AI to drive transformation.
Innovation, which recognizes a partner that's demonstrated exceptional creativity and forward thinking.

This year's NPN ecosystem winners have helped companies across industries use AI to adapt to new challenges and prioritize energy-efficient accelerated computing. NPN partners help customers implement a broad range of AI technologies, including NVIDIA-accelerated AI factories, as well as large language models and generative AI chatbots, to transform business operations.

The 2025 NPN award winners for the Americas are:

Global Consulting Partner of the Year: Accenture is recognized for its impact and depth of engineering with its AI Refinery platform for industries, simulation and robotics, marketing and sovereignty, which helps organizations enhance innovation and growth with custom-built approaches to AI-driven enterprise reinvention.

Trailblazer Partner of the Year: Advizex is recognized for its commitment to driving innovation in AI and high-performance computing, helping industries like healthcare, manufacturing, retail and government seamlessly integrate advanced AI technologies into existing business frameworks.
This enables organizations to achieve significant operational efficiencies, enhanced decision-making and accelerated digital transformation.
Rising Star Partner of the Year: AHEAD is recognized for its leadership, technical expertise and deployment of NVIDIA software, NVIDIA DGX systems, NVIDIA HGX and networking technologies to advance AI, benefiting customers across healthcare, financial services, life sciences and higher education.
Networking Partner of the Year: Computacenter is recognized for advancing high-performance computing and data centers with NVIDIA networking technologies. The company achieved this by using the NVIDIA AI Enterprise software platform, DGX platforms and NVIDIA networking to drive innovation and growth throughout industries with efficient, accelerated data centers.
Solution Integration Partner of the Year: EXXACT is recognized for its efforts in helping research institutions and businesses tap into generative AI, large language models and high-performance computing. The company harnesses NVIDIA GPUs and networking technologies to deliver powerful computing platforms that accelerate innovation and tackle complex computational challenges across various industries.
Enterprise Partner of the Year: World Wide Technology (WWT) is recognized for its leadership in advancing AI adoption of customers across industry verticals worldwide. The company expanded its end-to-end AI capabilities by integrating NVIDIA Blueprints into its AI Proving Ground and has made a $500 million commitment to AI development over three years to help speed enterprise generative AI deployments.
Software Partner of the Year: Mark III is recognized for the work of its cross-functional team spanning data scientists, developers, 3D artists, systems engineers, and HPC and AI architects, as well as its close collaborations with enterprises and institutions, to deploy NVIDIA software, including NVIDIA AI Enterprise and NVIDIA Omniverse, across industries.
These efforts have helped many customers build software-powered pipelines and data flywheels with machine learning, generative AI, high-performance computing and digital twins.
Higher Education Research Partner of the Year: Mark III is recognized for its close engagement with universities, academic institutions and research organizations to cultivate the next generation of leaders across AI, machine learning, generative AI, high-performance computing and digital twins.
Healthcare Partner of the Year: Lambda is recognized for empowering healthcare and biotech organizations with AI training, fine-tuning and inferencing solutions to speed innovation and drive breakthroughs in AI-driven drug discovery. The company provides AI training, fine-tuning and inferencing solutions at every scale, from individual workstations to comprehensive AI factories, that help healthcare providers seamlessly integrate NVIDIA accelerated computing and software into their infrastructure.
Financial Services Partner of the Year: WWT is recognized for driving the digital transformation of the world's largest banks and financial institutions. The company harnesses NVIDIA AI technologies to optimize data management, enhance cybersecurity and deliver transformative generative AI solutions, helping financial services clients navigate rapid technological changes and evolving customer expectations.
Innovation Partner of the Year: Cambridge Computer is recognized for supporting customers deploying transformative technologies, including NVIDIA Grace Hopper, NVIDIA Blackwell and the NVIDIA Omniverse platform for physical AI.
Service Delivery Partner of the Year: SoftServe is recognized for its impact in driving enterprise adoption of NVIDIA AI and Omniverse with custom NVIDIA Blueprints that tap into NVIDIA NIM microservices and NVIDIA NeMo and Riva software.
SoftServe helps customers create generative AI services for industries spanning manufacturing, retail, financial services, auto, healthcare and life sciences.
Distribution Partner of the Year: TD SYNNEX is recognized for the second consecutive year for supporting customers in accelerating AI growth through rapid delivery of NVIDIA accelerated computing and software as part of its Destination AI initiative.
Rising Star Consulting Partner of the Year: Tata Consultancy Services (TCS) is recognized for its growth and commitment to providing industry-specific solutions that help customers adopt AI faster and at scale. Through its recently launched business unit and center of excellence built on NVIDIA AI Enterprise and Omniverse, TCS is poised to accelerate adoption of agentic AI and physical AI solutions to speed innovation for customers worldwide.
Canadian Partner of the Year: Hypertec is recognized for its advancement of high-performance computing and generative AI across Canada. The company has employed the full-stack NVIDIA platform to accelerate AI for financial services, higher education and research.
Public Sector Partner of the Year: Government Acquisitions (GAI) is recognized for its rapid AI deployment and robust customer relationships, helping serve the unique needs of the federal government by adding AI to operations to improve public safety and efficiency.
Learn more about the NPN program.
  • NVIDIA Accelerates Science and Engineering With CUDA-X Libraries Powered by GH200 and GB200 Superchips
    blogs.nvidia.com
    Scientists and engineers of all kinds are equipped to solve tough problems a lot faster with NVIDIA CUDA-X libraries powered by NVIDIA GB200 and GH200 superchips.
Announced today at the NVIDIA GTC global AI conference, developers can now take advantage of tighter automatic integration and coordination between CPU and GPU resources, enabled by CUDA-X working with these latest superchip architectures, resulting in up to 11x speedups for computational engineering tools and 5x larger calculations compared with traditional accelerated computing architectures. This greatly accelerates and improves workflows in engineering simulation, design optimization and more, helping scientists and researchers reach groundbreaking results faster.
NVIDIA released CUDA in 2006, opening up a world of applications to the power of accelerated computing. Since then, NVIDIA has built more than 900 domain-specific NVIDIA CUDA-X libraries and AI models, making it easier to adopt accelerated computing and driving incredible scientific breakthroughs. Now, CUDA-X brings accelerated computing to a broad new set of engineering disciplines, including astronomy, particle physics, quantum physics, automotive, aerospace and semiconductor design.
The NVIDIA Grace CPU architecture delivers a significant boost to memory bandwidth while reducing power consumption. And NVIDIA NVLink-C2C interconnects provide such high bandwidth that the GPU and CPU can share memory, allowing developers to write less-specialized code, run larger problems and improve application performance.
Accelerating Engineering Solvers With NVIDIA cuDSS
NVIDIA's superchip architectures allow users to extract greater performance from the same underlying GPU by making more efficient use of CPU and GPU processing capabilities.
The NVIDIA cuDSS library is used to solve large engineering simulation problems involving sparse matrices for applications such as design optimization, electromagnetic simulation workflows and more.
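The factor-then-solve pattern that cuDSS applies to huge sparse systems can be illustrated on the simplest sparse structure, a tridiagonal matrix, using the Thomas algorithm. This is a pure-Python conceptual sketch of the two-phase pattern, not the cuDSS API:

```python
# Minimal illustration of the factor-then-solve pattern used by sparse direct
# solvers, shown for a tridiagonal system via the Thomas algorithm.
# (Conceptual sketch only; not the cuDSS API.)

def solve_tridiagonal(lower, diag, upper, rhs):
    """Solve A x = rhs where A is tridiagonal with the given bands."""
    n = len(diag)
    # Forward elimination ("factorization" phase): reduce to upper bidiagonal.
    d = list(diag)
    b = list(rhs)
    for i in range(1, n):
        m = lower[i - 1] / d[i - 1]
        d[i] -= m * upper[i - 1]
        b[i] -= m * b[i - 1]
    # Back substitution ("solve" phase).
    x = [0.0] * n
    x[-1] = b[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] - upper[i] * x[i + 1]) / d[i]
    return x

# 4x4 example: A = tridiag(-1, 2, -1); rhs chosen so the solution is all ones.
x = solve_tridiagonal([-1.0, -1.0, -1.0], [2.0, 2.0, 2.0, 2.0],
                      [-1.0, -1.0, -1.0], [1.0, 0.0, 0.0, 1.0])
print([round(v, 6) for v in x])  # → [1.0, 1.0, 1.0, 1.0]
```

Real engineering matrices have irregular sparsity and need the pivoting, reordering and hybrid CPU/GPU memory management that cuDSS provides, but the two-phase structure is the same.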
cuDSS uses Grace CPU memory and the high-bandwidth NVLink-C2C interconnect to factorize and solve large matrices that normally wouldn't fit in device memory. This enables users to solve extremely large problems in a fraction of the time.
The coherent shared memory between the GPU and Grace CPU minimizes data movement, significantly reducing overhead for large systems. For a range of large computational engineering problems, tapping the Grace CPU memory and superchip architecture with cuDSS hybrid memory accelerated the most heavy-duty solution steps by up to 4x on the same GPU.
Ansys has integrated cuDSS into its HFSS solver, delivering significant performance enhancements for electromagnetic simulations. With cuDSS, HFSS software achieves up to an 11x speed improvement for the matrix solver.
Altair OptiStruct has also adopted the cuDSS direct sparse solver library, substantially accelerating its finite element analysis workloads.
These performance gains are achieved by optimizing key operations on the GPU while intelligently using CPUs for shared memory and heterogeneous CPU-GPU execution. cuDSS automatically detects areas where CPU utilization provides additional benefits, further enhancing efficiency.
Scaling Up at Warp Speed With Superchip Memory
Scaling memory-limited applications on a single GPU becomes possible with the GB200 and GH200 architectures' NVLink-C2C interconnects, which provide CPU and GPU memory coherency.
Many engineering simulations are limited by scale and require massive simulations to produce the resolution necessary to design equipment with intricate components, such as aircraft engines.
By tapping into the ability to seamlessly read and write between CPU and GPU memories, engineers can easily implement out-of-core solvers to process larger data.
For example, using NVIDIA Warp, a Python-based framework for accelerating data generation and spatial computing applications, Autodesk performed simulations of up to 48 billion cells using eight GH200 nodes. This is more than 5x larger than the simulations possible using eight NVIDIA H100 nodes.
Powering Quantum Computing Research With NVIDIA cuQuantum
Quantum computers promise to accelerate problems that are core to many science and industry disciplines. Shortening the time to useful quantum computing rests heavily on the ability to simulate extremely complex quantum systems.
Simulations allow researchers to develop new algorithms today that will run at scales suitable for tomorrow's quantum computers. They also play a key role in improving quantum processors, running complex simulations of performance and noise characteristics of new qubit designs.
So-called state vector simulations of quantum algorithms require matrix operations to be performed on exponentially large vector objects that must be stored in memory. Tensor network simulations, on the other hand, simulate quantum algorithms through tensor contractions and can enable hundreds or thousands of qubits to be simulated for certain important classes of applications.
The NVIDIA cuQuantum library accelerates these workloads. cuQuantum is integrated with every leading quantum computing framework, so all quantum researchers can tap into simulation performance with no code changes.
Simulations of quantum algorithms are generally limited in scale by memory requirements. The GB200 and GH200 architectures provide an ideal platform for scaling up quantum simulations, as they enable large CPU memory to be used without bottlenecking performance.
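Why memory is the limiting factor is easy to see in a toy state-vector simulator: an n-qubit state holds 2**n complex amplitudes, and every gate touches the whole vector. The following is a pure-Python sketch of the idea, not the cuQuantum API:

```python
# Toy state-vector simulator illustrating why memory dominates: an n-qubit
# state is a vector of 2**n complex amplitudes, and each gate reads and
# writes all of them. (Conceptual sketch; not the cuQuantum API.)
import math

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to the `target` qubit of an n-qubit state vector."""
    new_state = state[:]
    step = 1 << target
    for i in range(1 << n):
        if i & step == 0:  # pair amplitude i with its partner i | step
            a, b = state[i], state[i | step]
            new_state[i] = gate[0][0] * a + gate[0][1] * b
            new_state[i | step] = gate[1][0] * a + gate[1][1] * b
    return new_state

# Hadamard gate: puts a single qubit into an equal superposition.
H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

n = 3
state = [0j] * (1 << n)
state[0] = 1 + 0j                                 # start in |000>
state = apply_single_qubit_gate(state, H, 0, n)   # superposition on qubit 0
print(len(state))  # → 8  (2**3 amplitudes; doubles with every added qubit)
```

At 30 qubits the vector already holds over a billion amplitudes, which is why coherent access to large CPU memory matters for scaling these simulations.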
A GH200 system is up to 3x faster than an x86-based H100 system on quantum computing benchmarks.
Learn more about CUDA-X libraries, attend the GTC session on how math libraries can help accelerate applications on NVIDIA Blackwell GPUs and watch NVIDIA founder and CEO Jensen Huang's GTC keynote.
  • Where AI and Graphics Converge: NVIDIA Blackwell Universal Data Center GPU Accelerates Demanding Enterprise Workloads
    blogs.nvidia.com
    The first NVIDIA Blackwell-powered data center GPU built for both enterprise AI and visual computing, the NVIDIA RTX PRO 6000 Blackwell Server Edition, is designed to accelerate the most demanding AI and graphics applications for every industry.
Compared to the previous-generation NVIDIA Ada Lovelace architecture L40S GPU, the RTX PRO 6000 Blackwell Server Edition GPU will deliver a multifold increase in performance across a wide array of enterprise workloads: up to 5x higher large language model (LLM) inference throughput for agentic AI applications, nearly 7x faster genomics sequencing, 3.3x speedups for text-to-video generation, nearly 2x faster inference for recommender systems and over 2x speedups for rendering.
It's part of the NVIDIA RTX PRO Blackwell series of workstation and server GPUs announced today at NVIDIA GTC, the global AI conference taking place through Friday, March 21, in San Jose, California. The RTX PRO lineup includes desktop, laptop and data center GPUs that support AI and creative workloads across industries.
With the RTX PRO 6000 Blackwell Server Edition, enterprises across various sectors, including architecture, automotive, cloud services, financial services, game development, healthcare, manufacturing, media and entertainment and retail, can enable breakthrough performance for workloads such as multimodal generative AI, data analytics, engineering simulation and visual computing.
Content creation, semiconductor manufacturing and genomics analysis companies are already set to harness its capabilities to accelerate compute-intensive, AI-enabled workflows.
Universal GPU Delivers Powerful Capabilities for AI and Graphics
The RTX PRO 6000 Blackwell Server Edition packages powerful RTX AI and graphics capabilities in a passively cooled form factor designed to run 24/7 in data center environments.
With 96GB of ultrafast GDDR7 memory and support for Multi-Instance GPU, or MIG, each RTX PRO 6000 can be partitioned into as many as four fully isolated instances with 24GB each to run simultaneous AI and graphics workloads.
RTX PRO 6000 is the first universal GPU to enable secure AI with NVIDIA Confidential Computing, which protects AI models and sensitive data from unauthorized access with strong, hardware-based security, providing a physically isolated trusted execution environment that secures the entire workload while data is in use.
To support enterprise-scale deployments, the RTX PRO 6000 can be configured in high-density accelerated computing platforms for distributed inference workloads, or used to deliver virtual workstations with NVIDIA vGPU software to power AI development and graphics-intensive applications.
The RTX PRO 6000 GPU delivers supercharged inferencing performance across a broad range of AI models and accelerates real-time, photorealistic ray tracing of complex virtual environments.
It includes the latest Blackwell hardware and software innovations, like fifth-generation Tensor Cores, fourth-generation RT Cores, DLSS 4, a fully integrated media pipeline and a second-generation Transformer Engine with support for FP4 precision.
Enterprises can run the NVIDIA Omniverse and NVIDIA AI Enterprise platforms at scale on RTX PRO 6000 Blackwell Server Edition GPUs to accelerate the development and deployment of agentic and physical AI applications, such as image and video generation, LLM inference, recommender systems, computer vision, digital twins and robotics simulation.
Accelerated AI Inference and Visual Computing for Any Industry
Black Forest Labs, creator of the popular FLUX image generation AI, aims to develop and optimize state-of-the-art text-to-image models using RTX PRO 6000 Server Edition GPUs.
"With the powerful multimodal inference capabilities of the RTX PRO 6000 Server Edition, our customers will be able to significantly reduce latency for image generation workflows," said Robin Rombach, CEO of Black Forest Labs. "We anticipate that, with the server edition GPU's support for FP4 precision, our Flux models will run faster, enabling interactive, AI-accelerated content creation."
Cloud graphics company OTOY will optimize its OctaneRender real-time rendering application for NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.
"The new NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs unlock brand-new workflows that were previously out of reach for 3D content creators," said Jules Urbach, CEO of OTOY and founder of the Render Network.
"With 96GB of VRAM, the new server-edition GPUs can run complex neural rendering models within OctaneRender's GPU path tracer, enabling artists to tap into incredible new features and tools that blend the precision of traditional CGI with frontier generative AI technology."
Semiconductor equipment manufacturer KLA plans to use the RTX PRO 6000 Blackwell Server Edition to accelerate inference workloads powering the wafer manufacturing process: the creation of the thin discs of semiconductor material that are core to integrated circuits.
KLA and NVIDIA have worked together since 2008 to advance KLA's physics-based AI with optimized high-performance computing solutions. KLA's industry-leading inspection and metrology systems capture and process images by running complex AI algorithms at lightning-fast speeds to find the most critical semiconductor defects.
"Based on early results, we expect great performance from the RTX PRO 6000 Blackwell Server Edition," said Kris Bhaskar, senior fellow and vice president of AI initiatives at KLA.
"The increased memory capacity, FP4 reduced precision and new computational capabilities of NVIDIA Blackwell are going to be particularly helpful to KLA and its customers."
Boosting Genomics and Drug Discovery Workloads
The RTX PRO 6000 Blackwell Server Edition also demonstrates game-changing acceleration for genomic analysis and drug discovery inference workloads, enabled by a new class of dynamic programming instructions.
On a single RTX PRO 6000 Blackwell Server Edition GPU, the Fastq2bam and DeepVariant elements of the NVIDIA Parabricks pipeline for germline analysis run up to 1.5x faster compared with using an L40S GPU, and 1.75x faster compared with using an NVIDIA H100 GPU.
For Smith-Waterman, a core algorithm used in many sequence alignment and variant calling applications, RTX PRO 6000 Blackwell Server Edition GPUs accelerate throughput up to 6.8x compared with L40S GPUs.
And for OpenFold2, an AI model that predicts protein structures for drug discovery research, RTX PRO 6000 Blackwell Server Edition GPUs boost inference performance by up to 4.8x compared with L40S GPUs.
Genomics company Oxford Nanopore Technologies is collaborating with NVIDIA to bring the latest AI and accelerated computing technologies to its sequencing systems.
"The NVIDIA Blackwell architecture will help us drive the real-time sequencing analysis of anything, by anyone, anywhere," said Chris Seymour, vice president of advanced platform development at Oxford Nanopore Technologies.
"With the RTX PRO 6000 Blackwell Server Edition, we have seen up to a 2x improvement in basecalling speed across our Dorado platform."
Availability via Global Network of Cloud Providers and System Partners
Platforms featuring the RTX PRO 6000 Blackwell Server Edition will be available from a global ecosystem of partners starting in May.
AWS, Google Cloud, Microsoft Azure, IBM Cloud, CoreWeave, Crusoe, Lambda, Nebius and Vultr will be among the first cloud service providers and GPU cloud providers to offer instances featuring the RTX PRO 6000 Blackwell Server Edition.
Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro are expected to deliver a wide range of servers featuring the RTX PRO 6000 Blackwell Server Edition, as are Advantech, Aetina, Aivres, ASRockRack, ASUS, Compal, Foxconn, GIGABYTE, Inventec, MSI, Pegatron, Quanta Cloud Technology (QCT), MiTAC Computing, NationGate, Wistron and Wiwynn.
To learn more about the NVIDIA RTX PRO Blackwell series and other advancements in AI, watch the GTC keynote by NVIDIA founder and CEO Jensen Huang.
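The Smith-Waterman dynamic-programming recurrence mentioned above can be sketched in a few lines. This is a conceptual pure-Python scorer; the match, mismatch and gap values are illustrative, not Parabricks defaults:

```python
# Minimal Smith-Waterman local alignment scorer, illustrating the
# dynamic-programming recurrence that the new Blackwell instructions
# accelerate. (Conceptual sketch; scoring values are illustrative.)

def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Return the best local alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    score = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(0, diag,
                              score[i - 1][j] + gap,   # gap in b
                              score[i][j - 1] + gap)   # gap in a
            best = max(best, score[i][j])
    return best

print(smith_waterman_score("GATTACA", "GATTACA"))  # → 14 (7 matches x 2)
```

Each cell depends only on three neighbors, which is exactly the dependency pattern that dedicated dynamic-programming hardware instructions can exploit at very high throughput.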
  • New NVIDIA Software for Blackwell Infrastructure Runs AI Factories at Light Speed
    blogs.nvidia.com
    The industrial age was fueled by steam. The digital age brought a shift through software. Now, the AI age is marked by the development of generative AI, agentic AI and AI reasoning, which enables models to process more data to learn and reason to solve complex problems.
Just as industrial factories transform raw materials into goods, modern businesses require AI factories to quickly transform data into insights that are scalable, accurate and reliable.
Orchestrating this new infrastructure is far more complex than building steam-powered factories. State-of-the-art models demand supercomputing-scale resources. Any downtime risks derailing weeks of progress and reducing GPU utilization.
To enable enterprises and developers to manage and run AI factories at light speed, NVIDIA today announced at the NVIDIA GTC global AI conference NVIDIA Mission Control, the only unified operations and orchestration software platform that automates the complex management of AI data centers and workloads.
NVIDIA Mission Control enhances every aspect of AI factory operations. From configuring deployments to validating infrastructure to operating developer workloads, its capabilities help enterprises get frontier models up and running faster.
It is designed to easily transition NVIDIA Blackwell-based systems from pretraining to post-training and now test-time scaling with speed and efficiency.
The software enables enterprises to easily pivot between training and inference workloads on their Blackwell-based NVIDIA DGX systems and NVIDIA Grace Blackwell systems, dynamically reallocating cluster resources to match shifting priorities.
In addition, Mission Control includes NVIDIA Run:ai technology to streamline operations and job orchestration for development, training and inference, boosting infrastructure utilization by up to 5x.
Mission Control's autonomous recovery capabilities, supported by rapid checkpointing and automated tiered restart features, can deliver up to 10x faster job recovery compared with traditional methods that rely on manual intervention, boosting AI training and inference efficiency to keep AI applications in operation.
Built on decades of NVIDIA supercomputing expertise, Mission Control lets enterprises simply run models by minimizing time spent managing AI infrastructure. It automates the lifecycle of AI factory infrastructure for all NVIDIA Blackwell-based NVIDIA DGX systems and NVIDIA Grace Blackwell systems from Dell Technologies, Hewlett Packard Enterprise (HPE), Lenovo and Supermicro to make advanced AI infrastructure more accessible to the world's industries.
Enterprises can further simplify and speed deployments of NVIDIA DGX GB300 and DGX B300 systems by using Mission Control with the NVIDIA Instant AI Factory service, preconfigured in Equinix AI-ready data centers across 45 markets globally.
Advanced Software Provides Enterprises Uninterrupted Infrastructure Oversight
Mission Control automates end-to-end infrastructure management, including provisioning, monitoring and error diagnosis, to deliver uninterrupted operations.
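The checkpoint-and-resume pattern behind the autonomous job recovery described above can be sketched generically. This is a pure-Python illustration of the pattern; the file name and stand-in job logic are hypothetical, not the Mission Control API:

```python
# Generic checkpoint-and-resume pattern behind fast job recovery: save
# progress periodically, and on restart continue from the latest checkpoint
# instead of from scratch. (Conceptual sketch; not the Mission Control API.)
import json
import os
import tempfile

CKPT = os.path.join(tempfile.gettempdir(), "job_checkpoint.json")  # hypothetical path

def save_checkpoint(step, state):
    # Write to a temp file and rename atomically, so a crash mid-write
    # never leaves a corrupted checkpoint behind.
    tmp = CKPT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "state": state}, f)
    os.replace(tmp, CKPT)

def load_checkpoint():
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            data = json.load(f)
        return data["step"], data["state"]
    return 0, {"loss": None}  # fresh start

def run_job(total_steps=10, checkpoint_every=2):
    step, state = load_checkpoint()  # resume where we left off, if possible
    while step < total_steps:
        step += 1
        state["loss"] = 1.0 / step   # stand-in for real training work
        if step % checkpoint_every == 0:
            save_checkpoint(step, state)
    return step, state

print(run_job())
```

After an interruption, rerunning `run_job()` picks up at the last saved step, which is the core idea that rapid checkpointing scales up to cluster-level recovery.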
Plus, it continuously monitors every layer of the application and infrastructure stack to predict and identify sources of downtime and inefficiency, saving time, energy and costs.
Additional NVIDIA Mission Control software benefits include:
Simplified cluster setup and provisioning with new automation and standardized application programming interfaces to speed time to deployment with integrated inventory management and visualizations.
Seamless workload orchestration for simplified Slurm and Kubernetes workflows.
Energy-optimized power profiles to balance power requirements and tune GPU performance for various workload types with developer-selectable controls.
Autonomous job recovery to identify, isolate and recover from inefficiencies without manual intervention to maximize developer productivity and infrastructure resiliency.
Customizable dashboards that track key performance indicators with access to critical telemetry data about clusters.
On-demand health checks to validate hardware and cluster performance throughout the infrastructure lifecycle.
Building management integration for enhanced coordination with building management systems to provide more control for power and cooling events, including rapid leakage detection.
Leading System Makers Bring NVIDIA Mission Control to Grace Blackwell Servers
Leading system makers plan to offer NVIDIA GB200 NVL72 and GB300 NVL72 systems with NVIDIA Mission Control.
Dell plans to offer NVIDIA Mission Control software as part of the Dell AI Factory with NVIDIA.
"The AI industrial revolution demands efficient infrastructure that adapts as fast as business evolves, and the Dell AI Factory with NVIDIA delivers with comprehensive compute, networking, storage and support," said Ihab Tarazi, chief technology officer and senior vice president at Dell Technologies.
"Pairing NVIDIA Mission Control software with Dell PowerEdge XE9712 and XE9680 servers helps enterprises scale models effortlessly to meet the demands of both training and inference, turning data into actionable insights faster than ever before."
HPE will offer the NVIDIA GB200 NVL72 by HPE and GB300 NVL72 by HPE systems with NVIDIA Mission Control software.
"We are helping service providers and cutting-edge enterprises rapidly deploy, scale and optimize complex AI clusters capable of training trillion-parameter models," said Trish Damkroger, senior vice president and general manager, HPC & AI Infrastructure Solutions at HPE. "As part of our collaboration with NVIDIA, we will deliver NVIDIA Grace Blackwell rack-scale systems and Mission Control software with HPE's global services and direct liquid cooling expertise to power the new AI era."
Lenovo plans to update its Lenovo Hybrid AI Advantage with NVIDIA systems to include NVIDIA Mission Control software.
"Bringing NVIDIA Mission Control software to Lenovo Hybrid AI Advantage with NVIDIA systems empowers enterprises to navigate the demands of generative and agentic AI workloads with unmatched agility," said Brian Connors, worldwide vice president and general manager of enterprise and SMB segment and AI, infrastructure solutions group, at Lenovo. "By automating infrastructure orchestration and enabling seamless transitions between training and inference workloads, Lenovo and NVIDIA are helping customers scale AI innovation at the speed of business."
Supermicro plans to incorporate NVIDIA Mission Control software into its SuperCluster systems.
"Supermicro is proud to team with NVIDIA on a Grace Blackwell NVL72 system that is fully supported by NVIDIA Mission Control software," said Cenly Chen, chief growth officer at Supermicro.
"Running on Supermicro's AI SuperCluster systems with NVIDIA Grace Blackwell, NVIDIA Mission Control software provides customers with a seamless management software suite to maximize performance on both current NVIDIA GB200 NVL72 systems and future platforms such as NVIDIA GB300 NVL72."
Base Command Manager Offers Free Kickstart for AI Cluster Management
To help enterprises with infrastructure management, NVIDIA Base Command Manager software is expected to soon be available for free for up to eight accelerators per system, for any cluster size, with the option to purchase NVIDIA Enterprise Support separately.
Availability
NVIDIA Mission Control for NVIDIA DGX GB200 and DGX B200 systems is available now. NVIDIA GB200 NVL72 systems with Mission Control are expected to soon be available from Dell, HPE, Lenovo and Supermicro.
NVIDIA Mission Control is expected to become available for the latest NVIDIA DGX GB300 and DGX B300 systems, as well as GB300 NVL72 systems from leading global providers, later this year.
See notice regarding software product information.
  • NVIDIA Unveils Open Physical AI Dataset to Advance Robotics and Autonomous Vehicle Development
    blogs.nvidia.com
    Teaching autonomous robots and vehicles how to interact with the physical world requires vast amounts of high-quality data. To give researchers and developers a head start, NVIDIA is releasing a massive, open-source dataset for building the next generation of physical AI.
Announced at NVIDIA GTC, a global AI conference taking place this week in San Jose, California, this commercial-grade, pre-validated dataset can help researchers and developers kickstart physical AI projects that can be prohibitively difficult to start from scratch. Developers can either directly use the dataset for model pretraining, testing and validation, or use it during post-training to fine-tune world foundation models, accelerating the path to deployment.
The initial dataset is now available on Hugging Face, offering developers 15 terabytes of data representing more than 320,000 trajectories for robotics training, plus up to 1,000 Universal Scene Description (OpenUSD) assets, including a SimReady collection. Dedicated data to support end-to-end autonomous vehicle (AV) development, which will include 20-second clips of diverse traffic scenarios spanning over 1,000 cities across the U.S. and two dozen European countries, is coming soon.
The NVIDIA Physical AI Dataset includes hundreds of SimReady assets for rich scenario building.
This dataset will grow over time to become the world's largest unified and open dataset for physical AI development.
It could be applied to develop AI models to power robots that safely maneuver warehouse environments, humanoid robots that support surgeons during procedures and AVs that can navigate complex traffic scenarios like construction zones.
The NVIDIA Physical AI Dataset is slated to contain a subset of the real-world and synthetic data NVIDIA uses to train, test and validate physical AI for the NVIDIA Cosmos world model development platform, the NVIDIA DRIVE AV software stack, the NVIDIA Isaac AI robot development platform and the NVIDIA Metropolis application framework for smart cities.
Early adopters include the Berkeley DeepDrive Center at the University of California, Berkeley, the Carnegie Mellon Safe AI Lab and the Contextual Robotics Institute at the University of California, San Diego.
"We can do a lot of things with this dataset, such as training predictive AI models that help autonomous vehicles better track the movements of vulnerable road users like pedestrians to improve safety," said Henrik Christensen, director of multiple robotics and autonomous vehicle labs at UCSD. "A dataset that provides a diverse set of environments and longer clips than existing open-source resources will be tremendously helpful to advance robotics and AV research."
Addressing the Need for Physical AI Data
The NVIDIA Physical AI Dataset can help developers scale AI performance during pretraining, where more data helps build a more robust model, and during post-training, where an AI model is trained on additional data to improve its performance for a specific use case.
Collecting, curating and annotating a dataset that covers diverse scenarios and accurately represents the physics and variation of the real world is time-consuming, presenting a bottleneck for most developers.
For academic researchers and small enterprises, running a fleet of vehicles over months to gather data for autonomous vehicle AI is impractical and costly, and, since much of the footage collected is uneventful, typically just 10% of the data is used for training.
But this scale of data collection is essential to building safe, accurate, commercial-grade models. NVIDIA Isaac GR00T robotics models take thousands of hours of video clips for post-training; the GR00T N1 model, for example, was trained on an expansive humanoid dataset of real and synthetic data. The NVIDIA DRIVE AV end-to-end AI model for autonomous vehicles requires tens of thousands of hours of driving data to develop.
This open dataset, comprising thousands of hours of multicamera video at unprecedented diversity, scale and geography, will particularly benefit the field of safety research by enabling new work on identifying outliers and assessing model generalization performance. The effort contributes to the NVIDIA Halos full-stack AV safety system.
In addition to harnessing the NVIDIA Physical AI Dataset to help meet their data needs, developers can further boost AI development with tools like NVIDIA NeMo Curator, which processes vast datasets efficiently for model training and customization. Using NeMo Curator, 20 million hours of video can be processed in just two weeks on NVIDIA Blackwell GPUs, compared with 3.4 years on unoptimized CPU pipelines.
Robotics developers can also tap the new NVIDIA Isaac GR00T blueprint for synthetic manipulation motion generation, a reference workflow built on NVIDIA Omniverse and NVIDIA Cosmos that uses a small number of human demonstrations to create massive amounts of synthetic motion trajectories for robot manipulation.
University Labs Set to Adopt Dataset for AI Development
The robotics labs at UCSD include teams focused on medical applications, humanoids and in-home assistive technology.
Christensen anticipates that the Physical AI Dataset's robotics data could help develop semantic AI models that understand the context of spaces like homes, hotel rooms and hospitals.

"One of our goals is to achieve a level of understanding where, if a robot was asked to put your groceries away, it would know exactly which items should go in the fridge and what goes in the pantry," he said.

In the field of autonomous vehicles, Christensen's lab could apply the dataset to train AI models to understand the intention of various road users and predict the best action to take. His research teams could also use the dataset to support the development of digital twins that simulate edge cases and challenging weather conditions. These simulations could be used to train and test autonomous driving models in situations that are rare in real-world environments.

At Berkeley DeepDrive, a leading research center on AI for autonomous systems, the dataset could support the development of policy models and world foundation models for autonomous vehicles.

"Data diversity is incredibly important to train foundation models," said Wei Zhan, codirector of Berkeley DeepDrive. "This dataset could support state-of-the-art research for public and private sector teams developing AI models for autonomous vehicles and robotics."

Researchers at Carnegie Mellon University's Safe AI Lab plan to use the dataset to advance their work evaluating and certifying the safety of self-driving cars. The team plans to test how a physical AI foundation model trained on this dataset performs in a simulation environment with rare conditions and compare its performance to an AV model trained on existing datasets.

"This dataset covers different types of roads and geographies, different infrastructure, different weather environments," said Ding Zhao, associate professor at CMU and head of the Safe AI Lab.
"Its diversity could be quite valuable in helping us train a model with causal reasoning capabilities in the physical world that understands edge cases and long-tail problems."

Access the NVIDIA Physical AI Dataset on Hugging Face. Build foundational knowledge with courses such as the Learn OpenUSD learning path and Robotics Fundamentals learning path. And to learn more about the latest advancements in physical AI, watch the GTC keynote by NVIDIA founder and CEO Jensen Huang.

See notice regarding software product information.
  • NVIDIA Unveils AI-Q Blueprint to Connect AI Agents for the Future of Work
    blogs.nvidia.com
AI agents are the new digital workforce, transforming business operations, automating complex tasks and unlocking new efficiencies. Now, with the ability to collaborate, these agents can work together to solve complex problems and drive even greater impact.

Businesses across industries, including sports and finance, can more quickly harness these benefits with AI-Q, a new NVIDIA Blueprint for developing agentic systems that can use reasoning to unlock knowledge in enterprise data.

Smarter Agentic AI Systems With NVIDIA AI-Q and AgentIQ Toolkit

AI-Q provides an easy-to-follow reference for integrating NVIDIA accelerated computing, partner storage platforms, and software and tools, including the new NVIDIA Llama Nemotron reasoning models. AI-Q offers a powerful foundation for enterprises to build digital workforces that break down agentic silos and are capable of handling complex tasks with high accuracy and speed.

AI-Q integrates fast multimodal extraction and world-class retrieval using NVIDIA NeMo Retriever, NVIDIA NIM microservices and AI agents.

The blueprint is powered by the new NVIDIA AgentIQ toolkit for seamless, heterogeneous connectivity between agents, tools and data. Released today on GitHub, AgentIQ is an open-source software library for connecting, profiling and optimizing teams of AI agents, fueled by enterprise data, to create multi-agent, end-to-end systems. It can be easily integrated with existing multi-agent systems, either in parts or as a complete solution, with a simple onboarding process that's 100% opt-in.

The AgentIQ toolkit also enhances transparency with full system traceability and profiling, enabling organizations to monitor performance, identify inefficiencies and gain a fine-grained understanding of how business intelligence is generated.
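AgentIQ's actual interfaces live in its GitHub repository; purely as a framework-agnostic sketch of the profiling idea described above (recording latency per agent call across a team of agents), with every name below hypothetical rather than taken from the toolkit, the core mechanism can be illustrated like this:

```python
import time
from dataclasses import dataclass, field


# Illustrative only: a minimal stand-in for the kind of per-agent
# profiling and traceability AgentIQ describes. All names here are
# hypothetical, not the AgentIQ API.
@dataclass
class Profiler:
    records: list = field(default_factory=list)

    def run(self, agent_name, fn, *args):
        """Call one agent, recording its name and wall-clock latency."""
        start = time.perf_counter()
        result = fn(*args)
        elapsed = time.perf_counter() - start
        self.records.append({"agent": agent_name, "seconds": elapsed})
        return result


def retriever_agent(query):   # hypothetical agent: fetches documents
    return f"docs for: {query}"


def summarizer_agent(docs):   # hypothetical agent: summarizes documents
    return docs.upper()


profiler = Profiler()
docs = profiler.run("retriever", retriever_agent, "Q3 revenue")
summary = profiler.run("summarizer", summarizer_agent, docs)

print(summary)  # DOCS FOR: Q3 REVENUE
for r in profiler.records:
    print(r["agent"], f"{r['seconds']:.6f}s")
```

A real toolkit adds distributed tracing, token accounting and framework adapters on top of this basic pattern, but the per-call record is what makes it possible to spot which agent in a pipeline is the bottleneck.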
This profiling data can be used with NVIDIA NIM and the NVIDIA Dynamo open-source library to optimize the performance of agentic systems.

The New Enterprise AI Agent Workforce

As AI agents become digital employees, IT teams will support onboarding and training. The AI-Q blueprint and AgentIQ toolkit support digital employees by enabling collaboration between agents and optimizing performance across different agentic frameworks.

Enterprises using these tools will be able to more easily connect AI agent teams across solutions like Salesforce's Agentforce, Atlassian Rovo in Confluence and Jira, and the ServiceNow AI platform for business transformation to break down silos, streamline tasks and cut response times from days to hours.

AgentIQ also integrates with frameworks and tools like CrewAI, LangGraph, Llama Stack, Microsoft Azure AI Agent Service and Letta, letting developers work in their preferred environment.

Azure AI Agent Service is integrated with AgentIQ to enable more efficient AI agents and orchestration of multi-agent frameworks using Semantic Kernel, which is fully supported in AgentIQ.

A wide range of industries are integrating visual perception and interactive capabilities into their agents and copilots.

Financial services leader Visa is using AI agents to streamline cybersecurity, automating phishing email analysis at scale.
Using the profiler feature of AI-Q, Visa can optimize agent performance and costs, maximizing AI's role in efficient threat response.

Get Started With AI-Q and AgentIQ

AI-Q integration into the NVIDIA Metropolis VSS blueprint is enabling multimodal agents, combining visual perception with speech, translation and data analytics for enhanced intelligence.

Developers can use the AgentIQ toolkit open-source library today and sign up for this hackathon to build hands-on skills for advancing agentic systems.

Plus, learn how an NVIDIA solutions architect used the AgentIQ toolkit to improve AI code generation.

Agentic systems built with AI-Q require a powerful AI data platform. NVIDIA partners are delivering these customized platforms that continuously process data to let AI agents quickly access knowledge to reason and respond to complex queries.

See notice regarding software product information.
  • Driving Impact: NVIDIA Expands Automotive Ecosystem to Bring Physical AI to the Streets
    blogs.nvidia.com
The autonomous vehicle (AV) revolution is here, and NVIDIA is at its forefront, bringing more than two decades of automotive computing, software and safety expertise to power innovation from the cloud to the car.

At NVIDIA GTC, a global AI conference taking place this week in San Jose, California, dozens of transportation leaders are showcasing their latest advancements with NVIDIA technologies that span passenger cars, trucks, commercial vehicles and more.

Mobility leaders are increasingly turning to NVIDIA's three core accelerated compute platforms: NVIDIA DGX systems for training the AI-based stack in the data center, NVIDIA Omniverse and NVIDIA Cosmos running on NVIDIA OVX systems for simulation and synthetic data generation, and the NVIDIA DRIVE AGX in-vehicle computer to process real-time sensor data for safe, highly automated and autonomous driving capabilities.

For manufacturers and developers in the multitrillion-dollar auto industry, this unlocks new possibilities for designing, manufacturing and deploying functionally safe, intelligent mobility solutions, offering consumers safer, smarter and more enjoyable experiences.

Transforming Passenger Vehicles

The U.S.'s largest automaker, General Motors (GM), is collaborating with NVIDIA to develop and build its next-generation vehicles, factories and robots using NVIDIA's accelerated compute platforms. GM has been investing in NVIDIA GPU platforms for training AI models.

The companies' collaboration now expands to include optimizing factory planning using Omniverse with Cosmos and deploying next-generation vehicles at scale, accelerated by NVIDIA DRIVE AGX.
This will help GM build physical AI systems tailored to its company vision, craft and know-how, and ultimately enable mobility that's safer, smarter and more accessible than ever.

Volvo Cars, which is using the NVIDIA DRIVE AGX in-vehicle computer in its next-generation electric vehicles, and its subsidiary Zenseact use the NVIDIA DGX platform to analyze and contextualize sensor data, unlock new insights and train future safety models that will enhance overall vehicle performance and safety.

Lenovo has teamed with robotics company Nuro to create a robust end-to-end system for level 4 autonomous vehicles that prioritizes safety, reliability and convenience. The system is built on NVIDIA DRIVE AGX in-vehicle compute.

Advancements in Trucking

NVIDIA's AI-driven technologies are also supercharging trucking, helping address pressing challenges like driver shortages, rising e-commerce demands and high operational costs. NVIDIA DRIVE AGX delivers the computational muscle needed for safe, reliable and efficient autonomous operations, improving road safety and logistics on a massive scale.

Gatik is integrating DRIVE AGX for the onboard AI processing necessary for its freight-only class 6 and 7 trucks, manufactured by Isuzu Motors, which offer driverless middle-mile delivery of a wide range of goods to Fortune 500 customers including Tyson Foods, Kroger and Loblaw.

Uber Freight is also adopting DRIVE AGX as the AI computing backbone of its current and future carrier fleets, sustainably enhancing efficiency and saving costs for shippers.

Torc is developing a scalable, physical AI compute system for autonomous trucks.
The system uses NVIDIA DRIVE AGX in-vehicle compute and the NVIDIA DriveOS operating system with Flex's Jupiter platform and manufacturing capabilities to support Torc's productization and scaled market entry in 2027.

Growing Demand for DRIVE AGX

The NVIDIA DRIVE AGX Orin platform is the AI brain behind today's intelligent fleets, and the next wave of mobility is already arriving, as production vehicles built on the NVIDIA DRIVE AGX Thor centralized car computer start to hit the roads.

Magna is a key global automotive supplier helping to meet the surging demand for the NVIDIA Blackwell architecture-based DRIVE Thor platform, designed for the most demanding processing workloads, including those involving generative AI, vision language models and large language models (LLMs). Magna will develop driving systems built with DRIVE AGX Thor for integration in automakers' vehicle roadmaps, delivering active safety and comfort functions along with interior cabin AI experiences.

Simulation and Data: The Backbone of AV Development

Earlier this year, NVIDIA announced the Omniverse Blueprint for AV simulation, a reference workflow for creating rich 3D worlds for autonomous vehicle training, testing and validation. The blueprint is expanding to include NVIDIA Cosmos world foundation models (WFMs) to amplify photoreal data variation.

Unveiled at the CES trade show in January, Cosmos is already being adopted in automotive, including by Plus, which is embedding Cosmos physical AI models into its SuperDrive technology, accelerating the development of level 4 self-driving trucks.

Foretellix is extending its integration of the blueprint, using the Cosmos Transfer WFM to add conditions like weather and lighting to its sensor simulation scenarios to achieve greater situation diversity.
Mcity is integrating the blueprint into the digital twin of its AV testing facility to enable physics-based modeling of camera, lidar, radar and ultrasonic sensor data.

CARLA, which offers an open-source AV simulator, has integrated the blueprint to deliver high-fidelity sensor simulation. Global systems integrator Capgemini will be the first to use CARLA's Omniverse integration for enhanced sensor simulation in its AV development platform.

NVIDIA is using Nexar's extensive, high-quality, edge-case data to train and fine-tune NVIDIA Cosmos simulation capabilities. Nexar is tapping into Cosmos, neural infrastructure models and the NVIDIA DGX Cloud platform to supercharge its AI development, refining AV training, high-definition mapping and predictive modeling.

Enhancing In-Vehicle Experiences With NVIDIA AI Enterprise

Mobility leaders are integrating the NVIDIA AI Enterprise software platform, running on DRIVE AGX, to enhance in-vehicle experiences with generative and agentic AI.

At GTC, Cerence AI is showcasing Cerence xUI, its new LLM-based AI assistant platform that will advance the next generation of agentic in-vehicle user experiences. The Cerence xUI hybrid platform runs in the cloud as well as onboard the vehicle, optimized first on NVIDIA DRIVE AGX Orin.

As the foundation for Cerence xUI, the CaLLM family of language models is based on open-source foundation models and fine-tuned on Cerence AI's automotive dataset.
Tapping into NVIDIA AI Enterprise and bolstering inference performance, including through the NVIDIA TensorRT-LLM library and NVIDIA NeMo, Cerence AI has optimized CaLLM to serve as the central agentic orchestrator facilitating enriched driver experiences at the edge and in the cloud.

SoundHound will also be demonstrating its next-generation in-vehicle voice assistant, which uses generative AI at the edge with NVIDIA DRIVE AGX, enhancing the in-car experience by bringing cloud-based LLM intelligence directly to vehicles.

The Complexity of Autonomy and NVIDIA's Safety-First Solution

Safety is the cornerstone of deploying highly automated and autonomous vehicles to the roads at scale. But building AVs is one of today's most complex computing challenges. It demands immense computational power, precision and an unwavering commitment to safety.

AVs and highly automated cars promise to extend mobility to those who need it most, reducing accidents and saving lives. To help deliver on this promise, NVIDIA has developed NVIDIA Halos, a comprehensive full-stack safety system that unifies vehicle architecture, AI models, chips, software, tools and services for the safe development of AVs, from the cloud to the car.

NVIDIA will host its inaugural AV Safety Day at GTC today, featuring in-depth discussions on automotive safety frameworks and implementation.

In addition, NVIDIA will host Automotive Developer Day on Thursday, March 20, offering sessions on the latest advancements in end-to-end AV development and beyond.

New Tools for AV Developers

NVIDIA also released new NVIDIA NIM microservices for automotive, designed to accelerate development and deployment of end-to-end stacks from cloud to car.
The new NIM microservices for in-vehicle applications, which use the nuScenes dataset by Motional, include:

- BEVFormer, a state-of-the-art transformer-based model that fuses multi-frame camera data into a unified bird's-eye-view representation for 3D perception.
- SparseDrive, an end-to-end autonomous driving model that performs motion prediction and planning simultaneously, outputting a safe planning trajectory.

For automotive enterprise applications, NVIDIA offers a variety of models, including NV-CLIP, a multimodal transformer model that generates embeddings from images and text; Cosmos Nemotron, a vision language model that queries and summarizes images and videos for multimodal understanding and AI-powered perception; and many more.

Learn more about NVIDIA's latest automotive news by watching the NVIDIA GTC keynote, and register for sessions from NVIDIA and industry leaders at the show, which runs through March 21.
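As a toy illustration of the bird's-eye-view (BEV) representation that models like BEVFormer produce, the sketch below simply bins 3D points expressed in a hypothetical ego-vehicle frame (x forward, y left, in meters) into a 2D occupancy grid. The ranges, cell size and points are assumptions for illustration; this shows the representation only, not how a transformer computes it:

```python
# Toy BEV rasterization: mark grid cells occupied by 3D points in the
# ego-vehicle frame. Illustrates the representation, not BEVFormer.
def points_to_bev(points, x_range=(0.0, 50.0), y_range=(-25.0, 25.0), cell=0.5):
    nx = int((x_range[1] - x_range[0]) / cell)  # cells along x (forward)
    ny = int((y_range[1] - y_range[0]) / cell)  # cells along y (lateral)
    grid = [[0] * ny for _ in range(nx)]
    for x, y, _z in points:  # height (z) is flattened away in a BEV view
        if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
            i = int((x - x_range[0]) / cell)
            j = int((y - y_range[0]) / cell)
            grid[i][j] = 1  # mark this 0.5 m x 0.5 m cell as occupied
    return grid


# Two nearby points plus one beyond the 50 m range (dropped).
pts = [(10.0, 0.0, 1.2), (12.0, 3.0, 0.8), (60.0, 0.0, 1.0)]
bev = points_to_bev(pts)
occupied = sum(map(sum, bev))
print(len(bev), len(bev[0]), occupied)  # 100 100 2
```

Real perception stacks build such grids from fused multi-camera features rather than raw points, but downstream planners consume the same kind of top-down spatial layout.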
  • Enterprises Ignite Big Savings With NVIDIA-Accelerated Apache Spark
    blogs.nvidia.com
Tens of thousands of companies worldwide rely on Apache Spark to crunch massive datasets to support critical operations, as well as to predict trends, customer behavior, business performance and more. The faster a company can process and understand its data, the more it stands to make and save.

That's why companies with massive datasets, including the world's largest retailers and banks, have adopted NVIDIA RAPIDS Accelerator for Apache Spark. The open-source software runs on top of the NVIDIA accelerated computing platform to significantly accelerate the processing of end-to-end data science and analytics pipelines without any code changes.

To make it even easier for companies to get value out of NVIDIA-accelerated Spark, NVIDIA today unveiled Project Aether, a collection of tools and processes that automatically qualify, test, configure and optimize Spark workloads for GPU acceleration at scale.

Project Aether Completes a Year's Worth of Work in Less Than a Week

Customers using Spark in production often manage tens of thousands of complex jobs, or more. Migrating from CPU-only to GPU-powered computing offers numerous significant benefits but can be a manual and time-consuming process.

Project Aether automates the myriad steps that companies previously have done manually, including analyzing all of their Spark jobs to identify the best candidates for GPU acceleration, as well as staging and performing test runs of each job. It uses AI to fine-tune the configuration of each job to obtain the maximum performance.

To understand the impact of Project Aether, consider an enterprise that has 100 Spark jobs to complete. With Project Aether, each of these jobs can be configured and optimized for NVIDIA GPU acceleration in as little as four days.
The same process done manually by a single data engineer could take up to an entire year.

CBA Drives AI Transformation With NVIDIA-Accelerated Apache Spark

Running Apache Spark on NVIDIA accelerated computing helps enterprises around the world complete jobs faster and with less hardware compared with using CPUs only, saving time, space, power and cooling, as well as capital costs on premises and operational costs in the cloud.

Australia's largest financial institution, the Commonwealth Bank of Australia (CBA), is responsible for processing 60% of the continent's financial transactions. CBA was experiencing challenges from the latency and costs associated with running its Spark workloads. Using CPU-only computing clusters, the bank estimates it faced nearly nine years of processing time for its training backlog, on top of handling already taxing daily data demands.

"With 40 million inferencing transactions a day, it was critical we were able to process these in a timely, reliable manner," said Andrew McMullan, chief data and analytics officer at CBA.

Running RAPIDS Accelerator for Apache Spark on GPU-powered infrastructure provided CBA with a 640x performance boost, allowing the bank to process a training workload of 6.3 billion transactions in just five days.
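The CBA figures are internally consistent: compressing nearly nine years of estimated CPU processing into five days lands at the same order of magnitude as the cited 640x speedup, as a quick check shows:

```python
# Sanity-check the CBA claim: "nearly nine years" of CPU processing
# time compressed into five days, versus the cited 640x speedup.
cpu_days = 9 * 365.25  # "nearly nine years", expressed in days
gpu_days = 5
implied_speedup = cpu_days / gpu_days
print(f"Implied speedup: ~{implied_speedup:.0f}x")  # ~657x, same ballpark as 640x
```

Equivalently, a 640x speedup over five days of GPU time corresponds to about 8.8 years of CPU time, matching the "nearly nine years" estimate.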
Additionally, on its daily volume of 40 million transactions, CBA is now able to conduct inference in 46 minutes and reduce costs by more than 80% compared with using a CPU-based solution.

McMullan says another benefit of NVIDIA-accelerated Apache Spark is that it offers his team the compute-time efficiency needed to cost-effectively build models that can help CBA deliver better customer service, anticipate when customers may need assistance with home loans and more quickly detect fraudulent transactions.

CBA also plans to use NVIDIA-accelerated Apache Spark to better pinpoint where customers commonly end their digital journeys, enabling the bank to remediate when needed to reduce the rate of abandoned applications.

Global Ecosystem

RAPIDS Accelerator for Apache Spark is available through a global network of partners. It runs on Amazon Web Services, Cloudera, Databricks, Dataiku, Google Cloud, Microsoft Azure and Oracle Cloud Infrastructure.

Dell Technologies today also announced the integration of RAPIDS Accelerator for Apache Spark with the Dell Data Lakehouse.

To get assistance through NVIDIA Project Aether with a large-scale migration of Apache Spark workloads, apply for access.

To learn more, register for NVIDIA GTC and attend these key sessions featuring Walmart, Capital One, CBA and other industry leaders:

- How Walmart Uses RAPIDS to Improve Efficiency, and What We Have Learned Along the Way
- Accelerate Distributed Apache Spark Applications on Kubernetes With RAPIDS
- Build Lightning-Fast Data Science Pipelines in Industry With Accelerated Computing
- Advancing Transaction Fraud Detection in Financial Services With NVIDIA RAPIDS on AWS
- Accelerating Data Intelligence With GPUs and RAPIDS on Databricks
- Scale Your Apache Spark Data Processing With State-of-the-Art NVIDIA Blackwell GPUs for Cost Savings and Performance

See notice regarding software product information.
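Consistent with the "without any code changes" claim above, RAPIDS Accelerator for Apache Spark is enabled through Spark configuration alone. A minimal sketch of the relevant properties follows, assuming the rapids-4-spark jar is already on the cluster classpath; the resource values are illustrative and workload-dependent, so consult the RAPIDS Accelerator documentation for the exact jar and tuning guidance:

```properties
# Load the RAPIDS Accelerator SQL plugin (requires the rapids-4-spark jar)
spark.plugins=com.nvidia.spark.SQLPlugin
spark.rapids.sql.enabled=true

# Illustrative GPU scheduling values; tune per cluster and workload
spark.executor.resource.gpu.amount=1
spark.task.resource.gpu.amount=0.25
spark.rapids.sql.concurrentGpuTasks=2
```

With these properties set, for example via spark-defaults.conf or --conf flags on spark-submit, existing Spark SQL and DataFrame jobs run unmodified, with supported operations executed on the GPU and the rest falling back to the CPU.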