This is the Official NVIDIA Page
Recent updates
-
X.COM
Incredible art. One shared planet. Celebrate #EarthDay with our latest Studio Standouts community art showcase video featuring talented digital artists working on RTX GPUs from around the world. Dive in: https://nvda.ws/3YafAgC
-
BLOGS.NVIDIA.COM
NVIDIA Research at ICLR - Pioneering the Next Wave of Multimodal Generative AI
Advancing AI requires a full-stack approach, with a powerful foundation of computing infrastructure - including accelerated processors and networking technologies - connected to optimized compilers, algorithms and applications. NVIDIA Research is innovating across this spectrum, supporting virtually every industry in the process. At this week's International Conference on Learning Representations (ICLR), taking place April 24-28 in Singapore, more than 70 NVIDIA-authored papers introduce AI developments with applications in autonomous vehicles, healthcare, multimodal content creation, robotics and more.
"ICLR is one of the world's most impactful AI conferences, where researchers introduce important technical innovations that move every industry forward," said Bryan Catanzaro, vice president of applied deep learning research at NVIDIA. "The research we're contributing this year aims to accelerate every level of the computing stack to amplify the impact and utility of AI across industries."
Research That Tackles Real-World Challenges
Several NVIDIA-authored papers at ICLR cover groundbreaking work in multimodal generative AI and novel methods for AI training and synthetic data generation, including:
- Fugatto: The world's most flexible audio generative AI model, Fugatto generates or transforms any mix of music, voices and sounds described with prompts using any combination of text and audio files. Other NVIDIA models at ICLR improve audio large language models (LLMs) to better understand speech.
- HAMSTER: This paper demonstrates that a hierarchical design for vision-language-action models can improve their ability to transfer knowledge from off-domain fine-tuning data - inexpensive data that doesn't need to be collected on actual robot hardware - to improve a robot's skills in testing scenarios.
- Hymba: This family of small language models uses a hybrid architecture that blends the benefits of transformer models and state space models, enabling high-resolution recall, efficient context summarization and common-sense reasoning. With its hybrid approach, Hymba improves throughput by 3x and reduces cache by almost 4x without sacrificing performance.
- LongVILA: This training pipeline enables efficient visual language model training and inference for long video understanding. Training AI models on long videos is compute- and memory-intensive, so this paper introduces a system that efficiently parallelizes long video training and inference, with training scalability up to 2 million tokens on 256 GPUs. LongVILA achieves state-of-the-art performance across nine popular video benchmarks.
- LLaMaFlex: This paper introduces a new zero-shot generation technique to create a family of compressed LLMs based on one large model. The researchers found that LLaMaFlex can generate compressed models that are as accurate as or better than state-of-the-art pruned, flexible and trained-from-scratch models - a capability that could significantly reduce the cost of training model families compared with techniques like pruning and knowledge distillation.
- Proteina: This model can generate diverse and designable protein backbones, the framework that holds a protein together. It uses a transformer architecture with up to 5x as many parameters as previous models.
- SRSA: This framework addresses the challenge of teaching robots new tasks using a preexisting skill library, so instead of learning from scratch, a robot can apply and adapt its existing skills to a new task. By developing a framework to predict which preexisting skill would be most relevant to a new task, the researchers improved zero-shot success rates on unseen tasks by 19%.
- STORM: This model can reconstruct dynamic outdoor scenes - like cars driving or trees swaying in the wind - with a precise 3D representation inferred from just a few snapshots. The model, which can reconstruct large-scale outdoor scenes in 200 milliseconds, has potential applications in autonomous vehicle development.
Discover the latest work from NVIDIA Research, a global team of around 400 experts in fields including computer architecture, generative AI, graphics, self-driving cars and robotics.
-
BLOGS.NVIDIA.COM
All Roads Lead Back to Oblivion: Bethesda's 'The Elder Scrolls IV: Oblivion Remastered' Arrives on GeForce NOW
Get the controllers ready and clear the calendar - it's a jam-packed GFN Thursday. Time to revisit a timeless classic for a dose of remastered nostalgia. GeForce NOW is bringing members a surprise from Bethesda: The Elder Scrolls IV: Oblivion Remastered is now available in the cloud. Clair Obscur: Expedition 33, the spellbinding turn-based role-playing game, is ready to paint its adventure across GeForce NOW for members to stream in style. Sunderfolk, from Dreamhaven's Secret Door studio, launches on GeForce NOW, following an exclusive First Look Demo for members. And get ready to crack the case with the sharpest minds in the business - Capcom's Ace Attorney Investigations Collection heads to the cloud this week, offering members the thrilling adventures of prosecutor Miles Edgeworth. Stream it all across devices, along with eight other games added to the cloud this week, including Zenless Zone Zero's latest update.
A Legendary Quest
Forge your path in the cloud. Step back into the world of Cyrodiil in style with the award-winning The Elder Scrolls IV: Oblivion Remastered. This revitalization of the iconic 2006 role-playing game offers updated visuals, updated gameplay and plenty more content. Explore a meticulously recreated world, navigate story paths as diverse character archetypes and engage in an epic quest to save Tamriel from a Daedric invasion. The remaster includes all previously released expansions - Shivering Isles, Knights of the Nine and additional downloadable content - providing a comprehensive experience for new and returning fans. Rediscover the vast landscape of Cyrodiil like never before with a GeForce NOW membership and stop the forces of Oblivion from overtaking the land. Ultimate and Performance members enjoy higher resolutions and longer gaming sessions for immersive gaming anytime, anywhere.
A Whole New World
Sunderfolk is a turn-based tactical role-playing adventure for up to four players that offers an engaging couch co-op experience. Control characters using a smartphone app, which serves as both a controller and a hub for cards, inventory and rules. Make game night unforgettable with the cloud. In the underground fantasy world of Arden, take on the roles of anthropomorphic animal heroes tasked with defending their town from the corruption of shadowstone. Six unique classes - from the fiery Pyromancer salamander to the tactical Bard bat - are equipped with distinct skill cards. Missions range from combat and exploration to puzzles and rescues, requiring teamwork and coordination. Get into the mischief by streaming it on GeForce NOW. Gather the squad and rekindle the spirit of game night from the comfort of the couch, streaming on the big screen with GeForce NOW and using a mobile device as a controller for a unique, immersive co-op experience.
No Objections Here
Channel your inner Miles Edgeworth. Experience both Ace Attorney Investigations games in one gorgeous collection, stepping into the shoes of Miles Edgeworth, the prosecutor of prosecutors from the mainline Ace Attorney games. Leave the courtroom behind and walk with Edgeworth around the crime scene to gather evidence and clues, including by talking with persons of interest. Solve tough, intriguing cases through wit, logic and deduction. Members can level up their detective work across devices with a premium GeForce NOW membership.
Ultimate and Performance members get extended session times to crack cases without interruptions.
Tears, Fears and Parasol Spears
Zeroing in on secrets. Zenless Zone Zero v1.7, "Bury Your Tears With the Past," marks the dramatic conclusion of the first season's storyline. Team with a special investigator to infiltrate enemy ranks, uncover the truth behind the Exaltists' conspiracy and explore the mysteries of the Sacrifice Core, adding new depth to the game's lore and characters. The update also introduces two new S-Rank Agents - Vivian, a versatile Ether Anomaly fighter, and Hugo, an Ice Attack specialist - each bringing unique combat abilities to the roster. Alongside limited-time events, quality-of-life improvements and more, the update offers fresh gameplay modes and exclusive rewards.
Quest for Fresh Adventures
Defy the monolith. Clair Obscur: Expedition 33 is a visually stunning, dark fantasy role-playing game available now for members to stream. A mysterious entity called the Paintress erases everyone of a certain age each year after painting their number on a monolith. Join a desperate band of survivors - most with only a year left to live - on the 33rd expedition to end this cycle of death by confronting the Paintress and her monstrous creations. Dodge, parry and counterattack in battle while exploring a richly imagined world inspired by French Belle Époque art and filled with complex, emotionally driven characters.
Look for the following games available to stream in the cloud this week:
- The Elder Scrolls IV: Oblivion Remastered (New release on Steam and Xbox, available on PC Game Pass, April 22)
- Sunderfolk (New release on Steam, April 23)
- Clair Obscur: Expedition 33 (New release on Steam and Xbox, available on PC Game Pass, April 24)
- Ace Attorney Investigations Collection (Steam and Xbox, available on the Microsoft Store)
- Ace Attorney Investigations Collection Demo (Steam and Xbox, available on the Microsoft Store)
- Dead Rising Deluxe Remaster Demo (Steam)
- EXFIL (Steam)
- Sands of Aura (Epic Games Store)
What are you planning to play this weekend? Let us know on X or in the comments below.
"What's a game on GFN that deserves more love?" - NVIDIA GeForce NOW (@NVIDIAGFN), April 22, 2025
-
X.COM
Warning: may cause hunger... Check out this deliciously stylized BBQ scene built in Blender on an NVIDIA RTX GPU by Rabia Türkoğlu. Share your digital art creations using #StudioShare for a chance to be featured.
-
BLOGS.NVIDIA.COM
How the Economics of Inference Can Maximize AI Value
As AI models evolve and adoption grows, enterprises must perform a delicate balancing act to achieve maximum value. That's because inference - the process of running data through a model to get an output - presents a different computational challenge than training a model. Pretraining a model - the process of ingesting data, breaking it down into tokens and finding patterns - is essentially a one-time cost. But in inference, every prompt to a model generates tokens, each of which incurs a cost. That means that as AI model performance and use increase, so do the number of tokens generated and their associated computational costs. For companies looking to build AI capabilities, the key is generating as many tokens as possible - with maximum speed, accuracy and quality of service - without sending computational costs skyrocketing.
As such, the AI ecosystem has been working to make inference cheaper and more efficient. Inference costs have been trending down for the past year thanks to major leaps in model optimization, leading to increasingly advanced, energy-efficient accelerated computing infrastructure and full-stack solutions. According to the Stanford University Institute for Human-Centered AI's 2025 AI Index Report, "the inference cost for a system performing at the level of GPT-3.5 dropped over 280-fold between November 2022 and October 2024. At the hardware level, costs have declined by 30% annually, while energy efficiency has improved by 40% each year. Open-weight models are also closing the gap with closed models, reducing the performance difference from 8% to just 1.7% on some benchmarks in a single year. Together, these trends are rapidly lowering the barriers to advanced AI."
As models evolve, generating more demand and creating more tokens, enterprises need to scale their accelerated computing resources to deliver the next generation of AI reasoning tools or risk rising costs and energy consumption. What follows is a primer on the economics of inference that enterprises can use to position themselves to achieve efficient, cost-effective and profitable AI solutions at scale.
Key Terminology for the Economics of AI Inference
Knowing the key terms of the economics of inference helps set the foundation for understanding its importance.
Tokens are the fundamental unit of data in an AI model. They're derived from data during training as text, images, audio clips and videos. Through a process called tokenization, each piece of data is broken down into smaller constituent units. During training, the model learns the relationships between tokens so it can perform inference and generate an accurate, relevant output.
Throughput refers to the amount of data - typically measured in tokens - that the model can output in a specific amount of time, which itself is a function of the infrastructure running the model. Throughput is often measured in tokens per second, with higher throughput meaning greater return on infrastructure.
Latency is a measure of the amount of time between inputting a prompt and the start of the model's response. Lower latency means faster responses. The two main ways of measuring latency are:
- Time to First Token: A measurement of the initial processing time required by the model to generate its first output token after a user prompt.
- Time per Output Token: The average time between consecutive tokens, or the time it takes to generate a completion token for each user querying the model at the same time. It's also known as "inter-token latency" or token-to-token latency.
Time to first token and time per output token are helpful benchmarks, but they're just two pieces of a larger equation. Focusing solely on them can still lead to deteriorating performance or rising costs. To account for other interdependencies, IT leaders are starting to measure "goodput," defined as the throughput achieved by a system while maintaining target time to first token and time per output token levels. This metric allows organizations to evaluate performance in a more holistic manner, ensuring that throughput, latency and cost are aligned to support both operational efficiency and an exceptional user experience.
Energy efficiency is the measure of how effectively an AI system converts power into computational output, expressed as performance per watt. By using accelerated computing platforms, organizations can maximize tokens per watt while minimizing energy consumption.
How the Scaling Laws Apply to Inference Cost
The three AI scaling laws are also core to understanding the economics of inference:
- Pretraining scaling: The original scaling law, which demonstrated that by increasing training dataset size, model parameter count and computational resources, models can achieve predictable improvements in intelligence and accuracy.
- Post-training: A process where models are fine-tuned for accuracy and specificity so they can be applied to application development. Techniques like retrieval-augmented generation can be used to return more relevant answers from an enterprise database.
- Test-time scaling (also known as "long thinking" or "reasoning"): A technique by which models allocate additional computational resources during inference to evaluate multiple possible outcomes before arriving at the best answer.
Even as post-training and test-time scaling techniques become more sophisticated, pretraining isn't disappearing and remains an important way to scale models. Pretraining will still be needed to support post-training and test-time scaling.
Profitable AI Takes a Full-Stack Approach
In comparison to inference from a model that's only gone through pretraining and post-training, models that harness test-time scaling generate multiple tokens to solve a complex problem. This results in more accurate and relevant model outputs - but is also much more computationally expensive. Smarter AI means generating more tokens to solve a problem, and a quality user experience means generating those tokens as fast as possible. The smarter and faster an AI model is, the more utility it will have for companies and customers. Enterprises need to scale their accelerated computing resources to deliver the next generation of AI reasoning tools that can support complex problem-solving, coding and multistep planning without skyrocketing costs.
This requires both advanced hardware and a fully optimized software stack. NVIDIA's AI factory product roadmap is designed to deliver the computational demand and help solve for the complexity of inference, while achieving greater efficiency. AI factories integrate high-performance AI infrastructure, high-speed networking and optimized software to produce intelligence at scale.
These components are designed to be flexible and programmable, allowing businesses to prioritize the areas most critical to their models or inference needs. To further streamline operations when deploying massive AI reasoning models, AI factories run on a high-performance, low-latency inference management system that ensures the speed and throughput required for AI reasoning are met at the lowest possible cost to maximize token revenue generation.
Learn more by reading the ebook "AI Inference: Balancing Cost, Latency and Performance."
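To make the goodput definition above concrete, here is a minimal Python sketch of how a benchmarking harness might compute it. The latency targets and request timings are hypothetical, and the formula (tokens from requests that met both latency targets, divided by the shared measurement window) is one reasonable reading of the definition rather than an official NVIDIA metric.

```python
from dataclasses import dataclass

@dataclass
class RequestTiming:
    ttft_s: float        # time to first token, seconds
    tpot_s: float        # average time per output token, seconds
    tokens_out: int      # completion tokens generated
    duration_s: float    # wall-clock time for the request

def goodput_tokens_per_s(requests, ttft_target_s=0.5, tpot_target_s=0.05):
    """Throughput counted only over requests that met both latency targets."""
    good = [r for r in requests
            if r.ttft_s <= ttft_target_s and r.tpot_s <= tpot_target_s]
    window = max(r.duration_s for r in requests)  # shared measurement window
    return sum(r.tokens_out for r in good) / window

# Three hypothetical concurrent requests measured over the same window:
reqs = [
    RequestTiming(ttft_s=0.31, tpot_s=0.04, tokens_out=420, duration_s=17.1),
    RequestTiming(ttft_s=0.42, tpot_s=0.06, tokens_out=512, duration_s=31.1),  # misses TPOT target
    RequestTiming(ttft_s=0.28, tpot_s=0.03, tokens_out=256, duration_s=8.0),
]
print(f"goodput: {goodput_tokens_per_s(reqs):.1f} tokens/s")  # counts requests 1 and 3 only
```

Note how the second request still contributes raw throughput but is excluded from goodput because it violates the time-per-output-token target, which is exactly the gap between the two metrics.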
-
BLOGS.NVIDIA.COM
Capital One Banks on AI for Financial Services
Financial services has long been at the forefront of adopting technological innovations. Today, generative AI and agentic systems are redefining the industry, from customer interactions to enterprise operations. Prem Natarajan, executive vice president, chief scientist and head of AI at Capital One, joined the NVIDIA AI Podcast to discuss how his organization is building proprietary AI systems that deliver value to over 100 million customers.
"AI is at its best when it transfers cognitive burden from the human to the system," Natarajan said. "It allows the human to have that much more fun and experience that magic."
Capital One's strategy centers on a "test, iterate, refine" approach that balances innovation with rigorous risk management. The company's first agentic AI deployment is a chat concierge that helps customers navigate the car-buying process, such as by scheduling test drives. Rather than simply integrating third-party solutions, Capital One builds proprietary AI technologies that tap into its vast data repositories. "Your data advantage is your AI advantage," Natarajan emphasized. "Proprietary data allows you to build proprietary AI that provides enduring differentiated services for your customers."
Capital One's AI architecture combines open-weight foundation models with deep customizations using proprietary data. This approach, Natarajan explained, supports the creation of specialized models that excel at financial services tasks and integrate into multi-agent workflows that can take actions. Natarajan stressed that responsible AI is fundamental to Capital One's design process. His teams take a "responsibility through design" approach, implementing robust guardrails - both technological and human-in-the-loop - to ensure safe deployment.
The concept of an AI factory - where raw data is processed and refined to produce actionable intelligence - aligns naturally with Capital One's cloud-native technology stack. AI factories incorporate all the components required for financial institutions to generate intelligence, combining hardware, software, networking and development tools for AI applications in financial services.
Time Stamps
- 1:10 - Natarajan's background and journey to Capital One.
- 4:50 - Capital One's approach to generative AI and agentic systems.
- 15:56 - Challenges in implementing responsible AI in financial services.
- 28:46 - AI factories and Capital One's cloud-native advantage.
You Might Also Like:
NVIDIA's Jacob Liberman on Bringing Agentic AI to Enterprises
Agentic AI enables developers to create intelligent multi-agent systems that reason, act and execute complex tasks with a degree of autonomy. Jacob Liberman, director of product management at NVIDIA, explains how agentic AI bridges the gap between powerful AI models and practical enterprise applications.
Telenor Builds Norway's First AI Factory, Offering Sustainable and Sovereign Data Processing
Telenor opened Norway's first AI factory in November 2024, enabling organizations to process sensitive data securely on Norwegian soil while prioritizing environmental responsibility. Telenor's Chief Innovation Officer and Head of the AI Factory Kaaren Hilsen discusses the AI factory's rapid development, going from concept to reality in under a year.
Imbue CEO Kanjun Qiu on Transforming AI Agents Into Personal Collaborators
Kanjun Qiu, CEO of Imbue, explores the emerging era where individuals can create and use their own AI agents.
Drawing a parallel to the PC revolution of the late 1970s and '80s, Qiu discusses how modern AI systems are evolving to work collaboratively with users, enhancing their capabilities rather than just automating tasks.
-
X.COM
RT NVIDIA GeForce: GeForce RTX 5090 and 5080 laptops are available now! Experience breakthrough performance and next-gen AI built for gamers and creators, all in a thin, portable design. See what press are saying about the all-new RTX 50 Series laptops. Learn more: https://nvda.ws/41Zpwfe
-
X.COM
The grand finale is here! Part 5 of our photorealistic 3D render tutorial hosted by the talented Aleksandr Eskin wraps up with final touches & rendering in Houdini & Octane. Learn something new: https://nvda.ws/3Yb2yiz
-
BLOGS.NVIDIA.COM
Project G-Assist Plug-In Builder Lets Anyone Customize AI on GeForce RTX AI PCs
AI is rapidly reshaping what's possible on a PC - whether for real-time image generation or voice-controlled workflows. As AI capabilities grow, so does their complexity: tapping into the power of AI can entail navigating a maze of system settings, software and hardware configurations.
Enabling users to explore how on-device AI can simplify and enhance the PC experience, Project G-Assist - an AI assistant that helps tune, control and optimize GeForce RTX systems - is now available as an experimental feature in the NVIDIA app. Developers can try out AI-powered voice and text commands for tasks like monitoring performance, adjusting settings and interacting with supported peripherals. Users can even summon other AIs powered by GeForce RTX AI PCs.
And it doesn't stop there. For those looking to expand Project G-Assist capabilities in creative ways, the assistant supports custom plug-ins. With the new ChatGPT-based G-Assist Plug-In Builder, developers and enthusiasts can create and customize G-Assist's functionality, adding new commands, connecting external tools and building AI workflows tailored to specific needs. With the Plug-In Builder, users can generate properly formatted code with AI, then integrate the code into G-Assist - enabling quick, AI-assisted functionality that responds to text and voice commands.
Teaching PCs New Tricks: Plug-Ins and APIs Explained
Plug-ins are lightweight add-ons that give software new capabilities. G-Assist plug-ins can control music, connect with large language models and much more. Under the hood, these plug-ins tap into application programming interfaces (APIs), which allow different software and services to talk to each other. Developers can define functions in simple JSON formats, write logic in Python and quickly integrate new tools or features into G-Assist.
With the G-Assist Plug-In Builder, users can:
- Use a responsive small language model running locally on GeForce RTX GPUs for fast, private inference.
- Extend G-Assist's capabilities with custom functionality tailored to specific workflows, games and tools.
- Interact with G-Assist directly from the NVIDIA overlay, without tabbing out of an application or workflow.
- Invoke AI-powered GPU and system controls from applications using C++ and Python bindings.
- Integrate with agentic frameworks using tools like Langflow, letting G-Assist function as a component in larger AI pipelines and multi-agent systems.
Built for Builders: Using Free APIs to Expand AI PC Capabilities
NVIDIA's GitHub repository provides everything needed to get started on developing with G-Assist - including sample plug-ins, step-by-step instructions and documentation for building custom functionalities. Developers can define functions in JSON and drop config files into a designated directory, where G-Assist can automatically load and interpret them; a sketch of this pattern follows below. Users can even submit plug-ins for review and potential inclusion in the NVIDIA GitHub repository to make new capabilities available for others.
Hundreds of free, developer-friendly APIs are available today to extend G-Assist capabilities - from automating workflows to optimizing PC setups to boosting online shopping. For ideas, find searchable indices of free APIs for use across entertainment, productivity, smart home, hardware and more on publicapis.dev, free-apis.github.io, apilist.fun and APILayer.
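To illustrate the manifest-plus-logic pattern described above, here is a minimal, hypothetical sketch of a plug-in's two halves: a JSON function definition and the Python logic behind it. The field names, directory layout and entry-point convention are illustrative assumptions; the authoritative schema and loading rules live in NVIDIA's GitHub repository.

```python
import json
from pathlib import Path

# Hypothetical manifest describing one voice/text command. The real G-Assist
# schema is documented in NVIDIA's GitHub repo; these field names are assumptions.
manifest = {
    "name": "fan_speed",
    "description": "Reports or adjusts the GPU fan speed.",
    "functions": [
        {
            "name": "set_fan_speed",
            "description": "Set GPU fan speed to a percentage.",
            "parameters": {"percent": {"type": "integer", "description": "0-100"}},
        }
    ],
}

def set_fan_speed(percent: int) -> str:
    """Plug-in logic in Python; a real plug-in would call a vendor API here."""
    percent = max(0, min(100, percent))
    return f"Fan speed set to {percent}%."

# Drop the config into the directory G-Assist scans for plug-ins
# (hypothetical path) so the assistant can load it and route matching commands.
plugin_dir = Path("plugins/fan_speed")
plugin_dir.mkdir(parents=True, exist_ok=True)
(plugin_dir / "manifest.json").write_text(json.dumps(manifest, indent=2))
```

The split mirrors the article's description: the JSON tells the assistant what commands exist and what parameters they take, while the Python function does the actual work when a matching voice or text command arrives.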
Available sample plug-ins include Spotify, which enables hands-free music and volume control, and Google Gemini, which allows G-Assist to invoke a much larger cloud-based AI for more complex conversations, brainstorming and web searches using a free Google AI Studio API key. In the clip below, G-Assist asks Gemini for advice on which Legend to pick in the hit game Apex Legends when solo queueing, as well as whether it's wise to jump into Nightmare mode for level 25 in Diablo IV.
And in the following clip, a developer uses the new Plug-In Builder to create a Twitch plug-in for G-Assist that checks whether a streamer is live. After generating the necessary JSON manifest and Python files, the developer simply drops them into the G-Assist directory to enable voice commands like, "Hey, Twitch, is [streamer] live?"
In addition, users can customize G-Assist to control select peripherals and software applications with simple commands, such as to benchmark or adjust fan speeds, or to change lighting on supported Logitech G, Corsair, MSI and Nanoleaf devices. Other examples include a Stock Checker plug-in that lets users quickly look up real-time stock prices and performance data, and a Weather plug-in that lets users ask G-Assist for current weather conditions in any city. Details on how to build, share and load plug-ins are available on the NVIDIA GitHub repository.
Start Building Today
With the G-Assist Plug-In Builder and open API support, anyone can extend G-Assist to fit their exact needs. Explore the GitHub repository and submit features for review to help shape the next wave of AI-powered PC experiences.
Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X, and stay informed by subscribing to the RTX AI PC newsletter. Follow NVIDIA Workstation on LinkedIn and X. See notice regarding software product information.
-
BLOGS.NVIDIA.COM
Enterprises Onboard AI Teammates Faster With NVIDIA NeMo Tools to Scale Employee Productivity
An AI agent is only as accurate, relevant and timely as the data that powers it. Now generally available, NVIDIA NeMo microservices are helping enterprise IT quickly build AI teammates that tap into data flywheels to scale employee productivity. The microservices provide an end-to-end developer platform for creating state-of-the-art agentic AI systems and continually optimizing them with data flywheels informed by inference and business data, as well as user preferences.
With a data flywheel, enterprise IT can onboard AI agents as digital teammates. These agents can tap into user interactions and data generated during AI inference to continuously improve model performance - turning usage into insight and insight into action.
Building Powerful Data Flywheels for Agentic AI
Without a constant stream of high-quality inputs - from databases, user interactions or real-world signals - an agent's understanding can weaken, making responses less reliable and agents less productive. Maintaining and improving the models that power AI agents in production requires three types of data: inference data to gather insights and adapt to evolving data patterns, up-to-date business data to provide intelligence, and user feedback data to indicate whether the model and application are performing as expected. NeMo microservices help developers tap into all three data types, speeding AI agent development with end-to-end tools for curating, customizing, evaluating and guardrailing the models that drive their agents.
NVIDIA NeMo microservices - including NeMo Customizer, NeMo Evaluator and NeMo Guardrails - can be used alongside NeMo Retriever and NeMo Curator to ease enterprises' experiences building, optimizing and scaling AI agents through custom enterprise data flywheels. For example:
- NeMo Customizer accelerates large language model fine-tuning, delivering up to 1.8x higher training throughput. This high-performance, scalable microservice uses popular post-training techniques including supervised fine-tuning and low-rank adaptation.
- NeMo Evaluator simplifies the evaluation of AI models and workflows on custom and industry benchmarks with just five application programming interface (API) calls.
- NeMo Guardrails improves compliance protection by up to 1.4x with only half a second of additional latency, helping organizations implement robust safety and security measures that align with organizational policies and guidelines.
With NeMo microservices, developers can build data flywheels that boost AI agent accuracy and efficiency. Deployed through the NVIDIA AI Enterprise software platform, NeMo microservices are easy to operate and can run on any accelerated computing infrastructure, on premises or in the cloud, with enterprise-grade security, stability and support.
The microservices have become generally available at a time when enterprises are building large-scale multi-agent systems, where hundreds of specialized agents - with distinct goals and workflows - collaborate to tackle complex tasks as digital teammates, working alongside employees to assist, augment and accelerate work across functions.
This enterprise-wide impact positions AI agents as a trillion-dollar opportunity - with applications spanning automated fraud detection, shopping assistants, predictive machine maintenance and document review - and underscores the critical role data flywheels play in transforming business data into actionable insights. Data flywheels built with NVIDIA NeMo microservices constantly curate data, retrain models and evaluate their performance, all with minimal human interaction and maximum autonomy.
Industry Pioneers Boost AI Agent Accuracy With NeMo Microservices
NVIDIA partners and industry pioneers are using NeMo microservices to build responsive AI agent platforms so that digital teammates can help get more done.
- Working with Arize and Quantiphi, AT&T has built an advanced AI-powered agent using NVIDIA NeMo, designed to process a knowledge base of nearly 10,000 documents, refreshed weekly. The scalable, high-performance agent is fine-tuned for three key business priorities: speed, cost efficiency and accuracy - all increasingly critical as adoption scales. AT&T boosted AI agent accuracy by up to 40% using NeMo Customizer and Evaluator to fine-tune a Mistral 7B model that helps deliver personalized services, prevent fraud and optimize network performance.
- BlackRock is working with NeMo microservices for agentic AI capabilities in its Aladdin tech platform, which unifies the investment management process through a common data language.
- Teaming with Galileo, Cisco's Outshift team is using NVIDIA NeMo microservices to power a coding assistant that delivers 40% fewer tool selection errors and achieves up to 10x faster response times.
- Nasdaq is accelerating its Nasdaq Gen AI Platform with NeMo Retriever microservices and NVIDIA NIM microservices. NeMo Retriever enhanced the platform's search capabilities, leading to up to 30% improved accuracy and response times, in addition to cost savings.
Broad Model and Partner Ecosystem Support for NeMo Microservices
NeMo microservices support a broad range of popular open models, including Llama, the Microsoft Phi family of small language models, Google Gemma, Mistral and Llama Nemotron Ultra, currently the top open model on scientific reasoning, coding and complex math benchmarks.
Meta has tapped NVIDIA NeMo microservices through new connectors for Meta Llamastack. Users can access the same capabilities - including Customizer, Evaluator and Guardrails - via APIs, enabling them to run the full suite of agent-building workflows within their environment. "With Llamastack integration, agent builders can implement data flywheels powered by NeMo microservices," said Raghotham Murthy, software engineer, GenAI, at Meta. "This allows them to continuously optimize models to improve accuracy, boost efficiency and reduce total cost of ownership."
Leading AI software providers such as Cloudera, Datadog, Dataiku, DataRobot, DataStax, SuperAnnotate, Weights & Biases and more have integrated NeMo microservices into their platforms. Developers can use NeMo microservices in popular AI frameworks including CrewAI, Haystack by deepset, LangChain, LlamaIndex and Llamastack.
Enterprises can build data flywheels with NeMo Retriever microservices using NVIDIA AI Data Platform offerings from NVIDIA-Certified Storage partners including DDN, Dell Technologies, Hewlett Packard Enterprise, Hitachi Vantara, IBM, NetApp, Nutanix, Pure Storage, VAST Data and WEKA.
Leading enterprise platforms including Amdocs, Cadence, Cohesity, SAP, ServiceNow and Synopsys are using NeMo Retriever microservices in their AI agent solutions. Enterprises can run AI agents on NVIDIA-accelerated infrastructure, networking and software from leading system providers including Cisco, Dell, Hewlett Packard Enterprise and Lenovo. Consulting giants including Accenture, Deloitte and EY are building AI agent platforms for enterprises using NeMo microservices.
Developers can download NeMo microservices from the NVIDIA NGC catalog. The microservices can be deployed as part of NVIDIA AI Enterprise with extended-life software branches for API stability, proactive security remediation and enterprise-grade support.
-
X.COM
Steel meets serenity in this dreamy NVIDIA RTX accelerated render by @_xvni. The reflections? The textures? Perfection. Show us your art with #StudioShare for a chance to be featured!
-
X.COM
What software did you first learn digital art on?
-
X.COM
Cinematic lighting. Moody reflections. A story behind every pixel... Rendered on an NVIDIA RTX GPU by @naridarbandi. Share your digital art with #StudioShare to be featured next!
-
X.COM
Step into the world of stylized fantasy... Learn how to create a stunning 3D environment from scratch with the talented @bosse_ton_art in our latest Studio Sessions tutorial. Watch now: https://nvda.ws/43HFV9e
-
X.COM
RT ASUS North America: Transform any room into a home studio with #NVIDIABroadcast & a @NVIDIAStudio RTX GPU. Take your livestreams, voice chats, and video conference calls to the next level with AI-enhanced voice and video - check out the demo at our #NABShow booth!
-
X.COM
What do you think they're listening to? This feel-good short created by loek.3d (IG) on an RTX GPU is pure animation joy. Share your creations with #StudioShare for a chance to be featured!
-
BLOGS.NVIDIA.COM
Keeping AI on the Planet: NVIDIA Technologies Make Every Day About Earth Day
Whether at sea, on land or in the sky - even in outer space - NVIDIA technology is helping research scientists and developers alike explore and understand oceans, wildlife, the climate and far-out existential risks like asteroids. These increasingly intelligent developments are helping to analyze environmental pollutants, damage to habitats and natural disaster risks at an accelerated pace. This, in turn, enables partnerships with local governments to take climate mitigation steps like pollution prevention and proactive planting.
Sailing the Seas of AI
Amphitrite, based in France, uses satellite data with AI to simulate and predict ocean currents and weather. Its AI models, driven by the NVIDIA AI and Earth-2 platforms, offer insights for positioning vessels to best harness the power of ocean currents. This helps determine when it's best to travel, as well as the optimal course, reducing travel times, fuel consumption and carbon emissions. Amphitrite is a member of the NVIDIA Inception program for cutting-edge startups.
Watching Over Wildlife With AI
Munich, Germany-based OroraTech monitors animal poaching and wildfires with NVIDIA CUDA and Jetson. The NVIDIA Inception program member uses the EarthRanger platform to offer a wildfire detection and monitoring service that uses satellite imagery and AI to safeguard the environment and prevent poaching.
Keeping AI on the Weather
Weather agencies and climate scientists worldwide are using NVIDIA CorrDiff, a generative AI weather model enabling kilometer-scale forecasts of wind, temperature and precipitation type and amount. CorrDiff is part of the NVIDIA Earth-2 platform for simulating weather and climate conditions. It's available as an easy-to-deploy NVIDIA NIM microservice. In another climate effort, NVIDIA Research announced a new generative AI model, called StormCast, for reliable weather prediction at a scale larger than storms. The model, outlined in a paper, can help with disaster and mitigation planning, saving lives.
Avoiding Mass Extinction Events
Researchers reported in Nature how a new method was able to spot 10-meter asteroids within the main asteroid belt located between Jupiter and Mars. Such space rocks can range from bus-sized to several Costco stores in width and can deliver destruction to cities. The method drew on views of these asteroids captured by NASA's James Webb Space Telescope (JWST) for previous research, with the analysis enabled by NVIDIA accelerated computing.
Boosting Energy Efficiency With Liquid-Cooled Blackwell
NVIDIA GB200 NVL72 rack-scale, liquid-cooled systems, built on the Blackwell platform, offer exceptional performance while balancing energy costs and heat. They deliver 40x higher revenue potential, 30x higher throughput, 25x more energy efficiency and 300x more water efficiency than air-cooled architectures. NVIDIA GB300 NVL72 systems, built on the Blackwell Ultra platform, offer 50x higher revenue potential and 35x higher throughput with 30x more energy efficiency.
Enroll in the free new NVIDIA Deep Learning Institute course Applying AI Weather Models With NVIDIA Earth-2. Learn more about NVIDIA Earth-2 and NVIDIA Blackwell.
-
X.COM
RT Gerald Undone: NEW VIDEO! The 50 Series "Blackwell" GPUs from NVIDIA added decode/encode support for 4:2:2, which means editing mirrorless camera footage just got faster. Link: https://youtu.be/CPL67Kc-0X8
-
X.COM
Which do you prefer: creating something entirely from scratch or remixing/enhancing existing work?
-
BLOGS.NVIDIA.COM
Chill Factor: NVIDIA Blackwell Platform Boosts Water Efficiency by Over 300x
Traditionally, data centers have relied on air cooling, where mechanical chillers circulate chilled air to absorb heat from servers, helping them maintain optimal conditions. But as AI models increase in size and the use of AI reasoning models rises, maintaining those optimal conditions is not only getting harder and more expensive, but more energy-intensive. While data centers once operated at 20 kW per rack, today's hyperscale facilities can support over 135 kW per rack, making it an order of magnitude harder to dissipate the heat generated by high-density racks. To keep AI servers running at peak performance, a new approach is needed for efficiency and scalability.
One key solution is liquid cooling: by reducing dependence on chillers and enabling more efficient heat rejection, liquid cooling is driving the next generation of high-performance, energy-efficient AI infrastructure. The NVIDIA GB200 NVL72 and the NVIDIA GB300 NVL72 are rack-scale, liquid-cooled systems designed to handle the demanding tasks of trillion-parameter large language model inference. Their architecture is also specifically optimized for test-time scaling accuracy and performance, making them an ideal choice for running AI reasoning models while efficiently managing energy costs and heat.
Liquid-cooled NVIDIA Blackwell compute tray.
Driving Unprecedented Water Efficiency and Cost Savings in AI Data Centers
Historically, cooling alone has accounted for up to 40% of a data center's electricity consumption, making it one of the most significant areas where efficiency improvements can drive down both operational expenses and energy demands.
Liquid cooling helps mitigate costs and energy use by capturing heat directly at the source. Instead of relying on air as an intermediary, direct-to-chip liquid cooling transfers heat in a technology cooling system loop. That heat is then cycled through a coolant distribution unit via a liquid-to-liquid heat exchanger and ultimately transferred to a facility cooling loop. Because of the higher efficiency of this heat transfer, data centers and AI factories can operate effectively with warmer water temperatures, reducing or eliminating the need for mechanical chillers in a wide range of climates.
The NVIDIA GB200 NVL72 rack-scale, liquid-cooled system, built on the NVIDIA Blackwell platform, offers exceptional performance while balancing energy costs and heat. It packs unprecedented compute density into each server rack, delivering 40x higher revenue potential, 30x higher throughput, 25x more energy efficiency and 300x more water efficiency than traditional air-cooled architectures. Newer NVIDIA GB300 NVL72 systems, built on the Blackwell Ultra platform, boast 50x higher revenue potential and 35x higher throughput with 30x more energy efficiency.
Data centers spend an estimated $1.9-2.8M per megawatt (MW) per year, of which nearly $500,000 goes annually to cooling-related energy and water costs. By deploying the liquid-cooled GB200 NVL72 system, hyperscale data centers and AI factories can achieve up to 25x cost savings, leading to over $4 million in annual savings for a 50 MW hyperscale data center. For data center and AI factory operators, this means lower operational costs, enhanced energy efficiency metrics and a future-proof infrastructure that scales AI workloads efficiently, without the unsustainable water footprint of legacy cooling methods.
Moving Heat Outside the Data Center
As compute density rises and AI workloads drive unprecedented thermal loads, data centers and AI factories must rethink how they remove heat from their infrastructure. The traditional methods of heat rejection that supported predictable CPU-based scaling are no longer sufficient on their own. Today, there are multiple options for moving heat outside the facility, but four major categories dominate current and emerging deployments.
Key Cooling Methods in a Changing Landscape
- Mechanical chillers: Mechanical chillers use a vapor compression cycle to cool water, which is then circulated through the data center to absorb heat. These systems are typically air-cooled or water-cooled, with the latter often paired with cooling towers to reject heat. While chillers are reliable and effective across diverse climates, they are also highly energy-intensive. In AI-scale facilities where power consumption and sustainability are top priorities, reliance on chillers can significantly impact both operational costs and carbon footprint.
- Evaporative cooling: Evaporative cooling uses the evaporation of water to absorb and remove heat. This can be achieved through direct or indirect systems, or hybrid designs. These systems are much more energy-efficient than chillers but come with high water consumption. In large facilities, they can consume millions of gallons of water per megawatt annually. Their performance is also climate-dependent, making them less effective in humid or water-restricted regions.
- Dry coolers: Dry coolers remove heat by transferring it from a closed liquid loop to the ambient air using large finned coils, much like an automotive radiator. These systems don't rely on water and are ideal for facilities aiming to reduce water usage or operating in dry climates. However, their effectiveness depends heavily on the temperature of the surrounding air. In warmer environments, they may struggle to keep up with high-density cooling demands unless paired with liquid-cooled IT systems that can tolerate higher operating temperatures.
- Pumped refrigerant systems: Pumped refrigerant systems use liquid refrigerants to move heat from the data center to outdoor heat exchangers. Unlike chillers, these systems don't rely on large compressors inside the facility, and they operate without the use of water. This method offers a thermodynamically efficient, compact and scalable solution that works especially well for edge deployments and water-constrained environments. Proper refrigerant handling and monitoring are required, but the benefits in power and water savings are significant.
Each of these methods offers different advantages depending on factors like climate, rack density, facility design and sustainability goals. As liquid cooling becomes more common and servers are designed to operate with warmer water, the door opens to more efficient and environmentally friendly cooling strategies, reducing both energy and water use while enabling higher compute performance.
Optimizing Data Centers for AI Infrastructure
As AI workloads grow exponentially, operators are reimagining data center design with infrastructure built specifically for high-performance AI and energy efficiency. Whether they're transforming their entire setup into dedicated AI factories or upgrading modular components, optimizing inference performance is crucial for managing costs and operational efficiency.
To get the best performance, high-compute-capacity GPUs aren't enough - they need to be able to communicate with each other at lightning speed. NVIDIA NVLink boosts communication, enabling GPUs to operate as a massive, tightly integrated processing unit for maximum performance at a full-rack power density of 120 kW. This tight, high-speed communication is crucial for today's AI tasks, where every second saved on transferring data can mean more tokens per second and more efficient AI models.
Traditional air cooling struggles at these power levels. To keep up, data center air would need to be either cooled to below-freezing temperatures or moved at near-gale speeds to carry the heat away, making it increasingly impractical to cool dense racks with air alone. At nearly 1,000x the density of air, liquid excels at carrying heat away thanks to its superior heat capacitance and thermal conductivity. By efficiently transferring heat away from high-performance GPUs, liquid cooling reduces reliance on energy-intensive and noisy cooling fans, allowing more power to be allocated to computation rather than cooling overhead.
Liquid Cooling in Action
Innovators across the industry are leveraging liquid cooling to slash energy costs, improve density and drive AI efficiency:
- Vertiv's reference architecture for NVIDIA GB200 NVL72 servers reduces annual energy consumption by 25%, cuts rack space requirements by 75% and shrinks the power footprint by 30%.
- Schneider Electric's liquid-cooling infrastructure supports up to 132 kW per rack, improving energy efficiency, scalability and overall performance for GB200 NVL72 AI data centers.
- CoolIT Systems' high-density CHx2000 liquid-to-liquid coolant distribution units provide 2 MW of cooling capacity at a 5°C approach temperature, ensuring reliable thermal management for GB300 NVL72 deployments. CoolIT's OMNI All-Metal Coldplates, with patented Split-Flow technology, provide targeted cooling of over 4,000 W of thermal design power while reducing pressure drop.
- Boyd's advanced liquid-cooling solutions, which draw on the company's more than two decades of high-performance computing industry experience, include coolant distribution units, liquid-cooling loops and cold plates that further maximize energy efficiency and system reliability for high-density AI workloads.
Cloud service providers are also adopting cutting-edge cooling and power innovations. Next-generation AWS data centers, featuring jointly developed liquid-cooling solutions, increase compute power by 12% while reducing energy consumption by up to 46% - all while maintaining water efficiency.
Cooling the AI Infrastructure of the Future
As AI continues to push the limits of computational scale, innovations in cooling will be essential to meeting the thermal management challenges of the post-Moore's law era. NVIDIA is leading this transformation through initiatives like the COOLERCHIPS program, a U.S. Department of Energy-backed effort to develop modular data centers with next-generation cooling systems that are projected to reduce costs by at least 5% and improve efficiency by 20% over traditional air-cooled designs.
Looking ahead, data centers must evolve not only to support AI's growing demands but to do so sustainably - maximizing energy and water efficiency while minimizing environmental impact. By embracing high-density architectures and advanced liquid cooling, the industry is paving the way for a more efficient AI-powered future.
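The air-versus-liquid comparison above can be made concrete with back-of-the-envelope heat transfer arithmetic. The sketch below uses the 120 kW rack density cited earlier, textbook fluid properties and an assumed 10 K coolant temperature rise; the resulting flow rates are illustrative estimates, not NVIDIA figures.

```python
# Back-of-the-envelope comparison: coolant flow needed to remove one rack's
# heat, from Q = m_dot * cp * dT. Fluid properties are standard textbook values.

RACK_HEAT_W = 120_000.0   # heat to remove, watts (120 kW rack cited above)
DELTA_T_K = 10.0          # assumed coolant temperature rise, kelvin

fluids = {
    # name: (density kg/m^3, specific heat J/(kg*K))
    "air":   (1.2, 1005.0),
    "water": (997.0, 4186.0),
}

for name, (rho, cp) in fluids.items():
    mass_flow = RACK_HEAT_W / (cp * DELTA_T_K)   # kg/s
    vol_flow = mass_flow / rho                   # m^3/s
    print(f"{name:>5}: {mass_flow:6.2f} kg/s = {vol_flow:8.4f} m^3/s")

# air:    11.94 kg/s =   9.9502 m^3/s  -> roughly 10 cubic meters of air per
#                                         second, per rack
# water:   2.87 kg/s =   0.0029 m^3/s  -> about 2.9 liters per second
```

The roughly 3,500x gap in volumetric flow is the practical meaning of liquid's higher density and heat capacity: a modest pumped water loop replaces a near-gale of chilled air.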
Learn more about breakthrough solutions for data center energy and water efficiency presented at NVIDIA GTC 2025 and discover how accelerated computing is driving a more efficient future with NVIDIA Blackwell.
-
BLOGS.NVIDIA.COM
Making Brain Waves: AI Startup Speeds Disease Research With Lab in the Loop
About 15% of the world's population - over a billion people - are affected by neurological disorders, from commonly known diseases like Alzheimer's and Parkinson's to hundreds of lesser-known, rare conditions. BrainStorm Therapeutics, a San Diego-based startup, is accelerating the development of cures for these conditions using AI-powered computational drug discovery paired with lab experiments using organoids: tiny, 3D bundles of brain cells created from patient-derived stem cells. This hybrid, iterative method, where clinical data and AI models inform one another to accelerate drug development, is known as lab in the loop.
"The brain is the last frontier in modern biology," said BrainStorm's founder and CEO Robert Fremeau, who was previously a scientific director in neuroscience at Amgen and a faculty member at Duke University and the University of California, San Francisco. "By combining our organoid disease models with the power of generative AI, we now have the ability to start to unravel the underlying complex biology of disease networks."
The company aims to lower the failure rate of drug candidates for brain diseases during clinical trials - currently over 93% - and identify therapeutics that can be applied to multiple diseases. Achieving these goals would make it faster and more economically viable to develop treatments for rare and common conditions.
"This alarmingly high clinical trial failure rate is mainly due to the inability of traditional preclinical models with rodents or 2D cells to predict human efficacy," said Jun Yin, cofounder and chief technology officer at BrainStorm. "By integrating human-derived brain organoids with AI-driven analysis, we're building a platform that better reflects the complexity of human neurobiology and improves the likelihood of clinical success."
Fremeau and Yin believe that BrainStorm's platform has the potential to accelerate development timelines, reduce research and development costs, and significantly increase the probability of bringing effective therapies to patients. BrainStorm Therapeutics' AI models, which run on NVIDIA GPUs in the cloud, were developed using the NVIDIA BioNeMo Framework, a set of programming tools, libraries and models for computational drug discovery. The company is a member of NVIDIA Inception, a global network of cutting-edge startups.
Clinical Trial in a Dish
BrainStorm Therapeutics uses AI models to develop gene maps of brain diseases, which it can use to identify promising targets for potential drugs and clinical biomarkers. Organoids allow the team to screen thousands of drug molecules per day directly on human brain cells, testing the effectiveness of potential therapies before starting clinical trials.
"Brains have brain waves that can be picked up in a scan like an EEG, or electroencephalogram, which measures the electrical activity of neurons," said Maya Gosztyla, the company's cofounder and chief operating officer. "Our organoids also have spontaneous brain waves, allowing us to model the complex activity that you would see in the human brain in this much smaller system. We treat it like a clinical trial in a dish for studying brain diseases."
BrainStorm Therapeutics is currently using patient-derived organoids for its work on drug discovery for Parkinson's disease, a condition tied to the loss of neurons that produce dopamine, a neurotransmitter that helps with physical movement and cognition.
"In Parkinson's disease, multiple genetic variants contribute to dysfunction across different cellular pathways, but they converge on a common outcome - the loss of dopamine neurons," Fremeau said. "By using AI models to map and analyze the biological effects of these variants, we can discover disease-modifying treatments that have the potential to slow, halt or even reverse the progression of Parkinson's."
The BrainStorm team used single-cell sequencing data from brain organoids to fine-tune foundation models available through the BioNeMo Framework, including the Geneformer model for gene expression analysis. The organoids were derived from patients with mutations in the GBA1 gene, the most common genetic risk factor for Parkinson's disease. BrainStorm is also collaborating with the NVIDIA BioNeMo team to help optimize open-source access to the Geneformer model.
Accelerating Drug Discovery Research
With its proprietary platform, BrainStorm can mirror human brain biology and simulate how different treatments might work in a patient's brain. "This can be done thousands of times, much quicker and much cheaper than can be done in a wet lab - so we can narrow down therapeutic options very quickly," Gosztyla said. "Then we can go in with organoids and test the subset of drugs the AI model thinks will be effective. Only after it gets through those steps will we actually test these drugs in humans."
View of an organoid using Fluorescence Imaging Plate Reader, or FLIPR - a technique used to study the effect of compounds on cells during drug screening.
This technology led to the discovery that Donepezil, a drug prescribed for Alzheimer's disease, could also be effective in treating Rett syndrome, a rare genetic neurodevelopmental disorder. Within nine months, the BrainStorm team was able to go from organoid screening to applying for a phase 2 clinical trial of the drug in Rett patients. This application was recently cleared by the U.S. Food and Drug Administration.
BrainStorm also plans to develop multimodal AI models that integrate data from cell sequencing, cell imaging, EEG scans and more. "You need high-quality, multimodal input data to design the right drugs," said Yin. "AI models trained on this data will help us understand disease better, find more effective drug candidates and, eventually, find prognostic biomarkers for specific patients that enable the delivery of precision medicine."
The company's next project is an initiative with the CURE5 Foundation to conduct the most comprehensive repurposed drug screen to date for CDKL5 Deficiency Disorder, another rare genetic neurodevelopmental disorder.
"Rare disease research is transforming from a high-risk niche to a dynamic frontier," said Fremeau. "The integration of BrainStorm's AI-powered organoid technology with NVIDIA accelerated computing resources and the NVIDIA BioNeMo platform is dramatically accelerating the pace of innovation while reducing the cost - so what once required a decade and billions of dollars can now be investigated with significantly leaner resources in a matter of months."
Get started with NVIDIA BioNeMo for AI-accelerated drug discovery.
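As a rough sketch of the Geneformer fine-tuning step described above, the snippet below adapts a BERT-style single-cell foundation model for a two-class task with Hugging Face Transformers. The checkpoint name points at the public Geneformer repository on Hugging Face, but the dummy dataset, label scheme and hyperparameters are illustrative assumptions; BrainStorm's actual pipeline runs through the NVIDIA BioNeMo Framework.

```python
import torch
from datasets import Dataset
from transformers import BertForSequenceClassification, Trainer, TrainingArguments

# Geneformer encodes each cell as a rank-ordered sequence of gene token IDs.
# This tiny stand-in dataset mimics that shape; real work would tokenize
# single-cell RNA-seq profiles (e.g., GBA1-mutant organoid cells vs. controls).
dummy = Dataset.from_dict({
    "input_ids": [[101, 57, 42, 88, 13]] * 8,   # hypothetical gene-rank tokens
    "attention_mask": [[1, 1, 1, 1, 1]] * 8,
    "labels": [0, 1] * 4,                        # e.g., 0 = control, 1 = disease
})

# Load the pretrained model with a freshly initialized classification head.
model = BertForSequenceClassification.from_pretrained(
    "ctheodoris/Geneformer",  # public checkpoint; exact revision is an assumption
    num_labels=2,
)

args = TrainingArguments(
    output_dir="geneformer-organoid-ft",
    per_device_train_batch_size=4,
    num_train_epochs=3,
    learning_rate=5e-5,
    fp16=torch.cuda.is_available(),  # mixed precision on NVIDIA GPUs
)

Trainer(model=model, args=args, train_dataset=dummy).train()
```

The general pattern, pretrained gene-expression encoder plus a small supervised head trained on labeled cells, is what lets a modest organoid dataset steer a foundation model toward a disease-specific readout.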
-
X.COM
The fight is over. But this knight's legacy echoes in every arrow... This epic Blender render by @blendreams captures the beauty in defeat, or maybe... a final victory? Share your creations with #StudioShare for a chance to be featured.
-
X.COM
Morning artist or midnight creator?
-
X.COM
RT Twinmotion: Twinmotion's 2025.1.1 release brings support for @NVIDIAStudio DLSS 4, the latest version of NVIDIA's neural rendering tech. DLSS 4's features include Super Resolution, Deep Learning Anti-Aliasing, Frame Generation, and Multi Frame Generation! Download: http://twinmotion.com/download
-
X.COM
Learn how to turn a concept into a storybook-worthy scene. @bosse_ton_art walks through his 2D narrative painting workflow in @Photoshop, full of atmosphere, character, and fantasy vibes. Dive in to the newest Studio Sessions tutorial: https://nvda.ws/3Rle07F
-
X.COM
The April #NVIDIAStudio Driver is here. Download now for the latest optimizations: https://nvda.ws/42x5JTn
-
X.COM
What's the best way to get creative inspiration?
-
X.COM
Level up your video editing workflow! 80 LEVEL: Discover how NVIDIA's GeForce RTX 50 Series GPUs unlock AI-powered capabilities in video editing workflows. Details: https://80.lv/articles/discover-how-nvidia-s-geforce-rtx-50-series-gpus-unlock-ai-powered-capabilities-in-video-editing-workflows/ #ad #sponsored @nvidia
-
BLOGS.NVIDIA.COM
AI Bites Back: Researchers Develop Model to Detect Malaria Amid Venezuelan Gold Rush

Gold prospecting in Venezuela has led to a malaria resurgence, but researchers have developed AI to take a bite out of the problem. In Venezuela's Bolivar state, deforestation for gold mining has disturbed the waters where mosquitoes breed, and the insects are biting miners and infecting them with the deadly parasite. Venezuela was certified as malaria-free in 1961 by the World Health Organization. Worldwide, the WHO estimates there were 263 million cases of malaria and 597,000 deaths in 2023.

The area affected by the Venezuelan outbreak is rural and has limited access to medical clinics, so detection with microscopy by trained professionals is lacking. But researchers at the intersection of medicine and technology have tapped AI and NVIDIA GPUs to come up with a solution. They recently published a paper in Nature describing the development of a convolutional neural network (CNN) for automatically detecting malaria parasites in blood samples.

"At some point in Venezuela, malaria was almost eradicated," said 25-year-old Diego Ramos-Briceño, who has a bachelor's in engineering that he earned while also pursuing a doctorate in medicine. "I believe it was around 135,000 cases last year."

Identifying Malaria Parasites in Blood Samples

The researchers, Ramos-Briceño, Alessandro Flammia-D'Aleo, Gerardo Fernández-López, Fhabián Carrión-Nessi and David Forero-Peña, used the CNN to identify Plasmodium falciparum and Plasmodium vivax in thick blood smears, achieving 99.51% accuracy. To develop the model, the team acquired a dataset of 5,941 labeled thick blood smear microscope images from the Chittagong Medical College Hospital in Bangladesh, then processed it into nearly 190,000 labeled images.

"What we wanted the neural network to learn is the morphology of the parasite, so from the nearly 6,000 microscope-level images, we extracted every single parasite, and from all that data augmentation and segmentation, we ended up having almost 190,000 images for model training," said Ramos-Briceño.

The model comes as traditional microscopy methods are also challenged by limitations in accuracy and consistency, according to the research paper.

Harnessing Gaming GPUs and CUDA for Model Training, Inference

To run model training, the team tapped into an RTX 3060 GPU from a computer science teacher mentoring their research. "We used PyTorch Lightning with NVIDIA CUDA acceleration that enabled us to do efficient parallel computation that significantly sped up the matrix operations and the preparation of the neural network compared with what a CPU would have done," said Ramos-Briceño.

For inference, malaria determinations from blood samples can be made within several seconds using such GPUs, he said. Clinics lacking trained microscopists could use the model and introduce their own data for transfer learning, so that the model performs optimally with the types of images they submit, handling their lighting conditions and other factors.

"For communities that are far away from the urban setting, where there's more access to resources, this could be a way to approach the malaria problem," said Ramos-Briceño.
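The training setup the team describes, a CNN trained with PyTorch Lightning on a CUDA-capable GeForce GPU, maps onto a few dozen lines of code. The sketch below is a minimal illustration of that pattern for binary parasite/no-parasite image patches; the architecture, patch size and random stand-in data are assumptions for demonstration, not the published model.

```python
# Minimal PyTorch Lightning sketch of a CNN patch classifier in the spirit
# of the malaria paper. Architecture and data are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class ParasiteCNN(pl.LightningModule):
    def __init__(self, lr=1e-3):
        super().__init__()
        self.lr = lr
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 2),  # parasite vs. no parasite
        )

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)

# Random tensors standing in for the ~190,000 augmented smear patches.
x = torch.randn(128, 3, 64, 64)
y = torch.randint(0, 2, (128,))
loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

# accelerator="auto" picks up CUDA when an NVIDIA GPU (e.g. an RTX 3060)
# is present, which is where the speedup over CPU training comes from.
trainer = pl.Trainer(max_epochs=1, accelerator="auto", devices=1)
trainer.fit(ParasiteCNN(), loader)
```

The transfer learning the researchers suggest for clinics would, under these assumptions, amount to reloading trained weights and continuing `fit` on locally labeled patches that reflect each clinic's staining and lighting conditions.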
-
BLOGS.NVIDIA.COM
Spring Into Action With 11 New Games on GeForce NOW

As the days grow longer and the flowers bloom, GFN Thursday brings a fresh lineup of games to brighten the week. Dive into thrilling hunts and dark fantasy adventures with the arrivals of titles like Hunt: Showdown 1896, now available on Xbox and PC Game Pass, and Mandragora: Whispers of the Witch Tree on GeForce NOW. Whether chasing bounties in the Colorado Rockies or battling chaos in a cursed land, players will gain unforgettable experiences with these games in the cloud.

Plus, roll with the punches in Capcom's MARVEL vs. CAPCOM Fighting Collection: Arcade Classics, part of the 11 games GeForce NOW is adding to its cloud gaming library, which features over 2,000 titles playable with GeForce RTX 4080 performance.

Spring Into Gaming Anywhere

With the arrivals of Hunt: Showdown 1896 and Mandragora: Whispers of the Witch Tree in the cloud, GeForce NOW members can take their gaming journeys anywhere, from the wild frontiers of the American West to the shadowy forests of a dark fantasy realm.

It's the wild, wild west. Hunt: Showdown 1896 transports players to the untamed Rockies, where danger lurks behind every pine and in every abandoned mine. PC Game Pass members, and those who own the game on Xbox, can stream the action instantly. Whether players are tracking monstrous bounties solo or teaming up with friends, the game's tense player vs. player vs. environment action and new map, Mammon's Gulch, are ideal for springtime exploration. Jump into the hunt from the living room, in the backyard or even on the go; with GeForce NOW, no high-end PC is required.

Every whisper is a warning. Step into a beautifully hand-painted world teetering on the edge of chaos in Mandragora: Whispers of the Witch Tree. As an Inquisitor, battle nightmarish creatures and uncover secrets beneath the budding canopies of Faelduum. With deep role-playing game mechanics and challenging combat, Mandragora is ideal for players seeking a fresh adventure this season. GeForce NOW members can continue their quest wherever spring takes them, including on laptops, tablets and smartphones.

Time for New Games

Catch MARVEL vs. CAPCOM Fighting Collection: Arcade Classics in the cloud this week. In this legendary collection of arcade classics from the fan-favorite Marvel and Capcom crossover games, dive into an action-packed lineup of seven titles, including heavy hitters X-MEN vs. STREET FIGHTER and MARVEL vs. CAPCOM 2 New Age of Heroes, as well as THE PUNISHER. Each game in the collection can be played online or in co-op mode. Whether new to the series or returning from its arcade days, players of all levels can enjoy these timeless classics together in the cloud.

Look for the following games available to stream in the cloud this week:
- Forever Skies (New release on Steam, available April 14)
- Night Is Coming (New release on Steam, available April 14)
- Hunt: Showdown 1896 (New release on Xbox, available on PC Game Pass April 15)
- Crime Scene Cleaner (New release on Xbox, available on PC Game Pass April 17)
- Mandragora: Whispers of the Witch Tree (New release on Steam, available April 17)
- Tempest Rising (New release on Steam, Advanced Access starts April 17)
- Aimlabs (Steam)
- Blue Prince (Steam, Xbox)
- ContractVille (Steam)
- Gedonia 2 (Steam)
- MARVEL vs. CAPCOM Fighting Collection: Arcade Classics (Steam)
- Path of Exile 2 (Epic Games Store)

What are you planning to play this weekend? Let us know on X or in the comments below.