NVIDIA
This is the Official NVIDIA Page
204 people like this
28 Posts
0 Photos
0 Videos
Recent Updates
  • Fangs Out, Frames Up: Vampire: The Masquerade Bloodlines 2 Leads a Killer GFN Thursday
    blogs.nvidia.com
The nights grow longer and the shadows get bolder with Vampire: The Masquerade Bloodlines 2 on GeForce NOW, launching with GeForce RTX 5080 power. Members can sink their teeth into the action role-playing game from Paradox Interactive as part of nine games coming to the cloud this week, including NINJA GAIDEN 4. Be among the first to play The Outer Worlds 2 with early access available in the cloud starting tomorrow, Oct. 24.

Video: https://blogs.nvidia.com/wp-content/uploads/2025/10/gfn-server-rollout-map-sofia-server-light-up-16x9-1.mp4

Atlanta is the latest region to get GeForce RTX 5080-class power, with Sofia, Bulgaria, coming next. Stay tuned to GFN Thursday for updates as more regions upgrade to NVIDIA Blackwell RTX. Follow along with the latest progress on the server rollout page.

The Night Belongs to the Cloud
Don't invite them in; just stream.
Vampire: The Masquerade Bloodlines 2, Paradox's sequel to the cult classic, invites fans to sink their fangs into Seattle's dark nightlife, putting players in the immortal boots of Phyre, the newly awakened Elder vampire with a mysterious voice in their head. In every dark corner hides a new alliance or a rival, and every choice carves a path through the bloody politics of the night. Master Seattle's secrets and the supernatural abilities of any chosen clan: shadow hunting with the Banu Haqim, blood bending with the Tremere or going fists first as a Brujah. Every story twist and council drama is shaped by dialogue, alliances and whispers from both friends and the ever-present stranger in the protagonist's mind. Every neon-lit block streams beautifully on GeForce NOW, with GeForce RTX 5080-class power rolling out for the highest frame rates and sharpest graphics in the cloud, no waiting for downloads or mortal PC specs required.

Where Shadows Fall, Legends Rise
Slice first, ask questions never.
NINJA GAIDEN 4, a lightning-fast action-adventure title from Team NINJA, slices its way back into the spotlight, packed with brutal, razor-sharp combat and featuring a new protagonist, Yakumo. Known for its intensity, precision and stylish flair, the game demands focus and rewards mastery. This new entry throws Yakumo into a deadly conflict blending myth and modern chaos. Explore sprawling levels filled with relentless enemies, cinematic boss battles that punish hesitation and a fluid combat system that chains combos together like a storm of steel. Every encounter feels like a duel where speed, timing and finesse define survival. On GeForce NOW, NINJA GAIDEN 4 reaches its sharpest edge. No installs or massive downloads stand in the way: gain instant access with cloud-powered performance that keeps every slash crisp and every dodge responsive. Whether running on a high-end rig, a laptop or even a mobile device, the action always hits at peak intensity with GeForce NOW.

Boarding Pass to Chaos
Space has a new HR problem.
Early access for The Outer Worlds 2 is reaching the cloud, bringing all the offbeat charm, sharp wit and spacefaring chaos the series is loved for. From Obsidian Entertainment, the masters of branching stories and immersive worlds, this sequel leans even harder into meaningful choices and unexpected consequences. On GeForce NOW, hopping into early access means instant boarding with no installs or waiting around. Every quip, shootout and twist in the storyline streams smoothly, no matter the screen. It's the perfect way to jump into space mischief before the game fully launches in the cloud on Wednesday, Oct. 29.

Roaring Good Games
Life finds a way, again.
Frontier Developments returns with Jurassic World Evolution 3, the next installment of the park-management series that lets players design, build and wrangle their own prehistoric paradise. Bigger storms, smarter dinosaurs and even bolder decisions keep every moment thrilling, because in this world, control is just an illusion.

In addition, members can look for the following:
• NINJA GAIDEN 4 (New release on Steam and Xbox, available on PC Game Pass, Oct. 20)
• Jurassic World Evolution 3 (New release on Steam, Oct. 21)
• Painkiller (New release on Steam, Oct. 21)
• Vampire: The Masquerade Bloodlines 2 (New release on Steam, Oct. 21, GeForce RTX 5080-ready)
• The Outer Worlds 2 Advanced Access (New release on Steam, Battle.net and Xbox, available on PC Game Pass, Oct. 24, GeForce RTX 5080-ready)
• Tormented Souls 2 (New release on Steam and Xbox, available on PC Game Pass, Oct. 23)
• Super Fantasy Kingdom (New release on Steam, Oct. 24)
• VEIN (New release on Steam, Oct. 24)
• Tom Clancy's Splinter Cell: Pandora Tomorrow (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.

"What anime would you want to turn into a video game, or vice versa?" NVIDIA GeForce NOW (@NVIDIAGFN) October 22, 2025
  • Open Source AI Week: How Developers and Contributors Are Advancing AI Innovation
    blogs.nvidia.com
As Open Source AI Week comes to a close, we're celebrating the innovation, collaboration and community driving open-source AI forward. Catch up on the highlights and stay tuned for more announcements coming next week at NVIDIA GTC Washington, D.C.

Wrapping Up a Week of Open-Source Momentum
From the stages of the PyTorch Conference to workshops across Open Source AI Week, this week spotlighted the creativity and progress defining the future of open AI. Here are some highlights from the event:
• Honoring open-source contributions: Jonathan Dekhtiar, senior deep learning framework engineer at NVIDIA, received the PyTorch Contributor Award for his key role in designing the release mechanisms and packaging solutions for Python software and libraries that enable GPU-accelerated computing.
• CEO of Modular visits the NVIDIA booth: Chris Lattner, CEO of Modular and founder and chief architect of the open-source LLVM Compiler Infrastructure project, picks up the NVIDIA DGX Spark.
• Seven questions with a founding researcher at fast.ai: Jeremy Howard, founding researcher at fast.ai and advocate for accessible deep learning, shares his insights on the future of open-source AI.

In his keynote at the PyTorch Conference, Howard also highlighted the growing strength of open-source communities, recognizing NVIDIA for its leadership in advancing openly available, high-performing AI models. "The one company, actually, that has stood out, head and shoulders above the others, and that is two," he said. "One is Meta, the creators of PyTorch. The other is NVIDIA, who, just in recent months, has created some of the world's best models, and they are open source, and they are openly licensed."

vLLM Adds Upstream Support for NVIDIA Nemotron Models
Open-source innovation is accelerating. NVIDIA and the vLLM team are partnering to add vLLM upstream support for NVIDIA Nemotron models, transforming open large language model serving with lightning-fast performance, efficient scaling and simplified deployment across NVIDIA GPUs. vLLM's optimized inference engine empowers developers to run Nemotron models like the new Nemotron Nano 2, a highly efficient small language reasoning model with a hybrid Transformer-Mamba architecture and a configurable thinking budget (a minimal serving sketch appears at the end of this post). Learn more about how vLLM is accelerating open model innovation.

NVIDIA Expands Open Access to Nemotron RAG Models
NVIDIA is making eight NVIDIA Nemotron RAG models openly available on Hugging Face, expanding access beyond research to include the full suite of commercial models. This release gives developers a wider range of tools to build retrieval-augmented generation (RAG) systems, improve search and ranking accuracy, and extract structured data from complex documents. The newly released models include Llama-Embed-Nemotron-8B, which provides multilingual text embeddings built on Llama 3.1, and Omni-Embed-Nemotron-3B, which supports cross-modal retrieval for text, images, audio and video. Developers can also access six production-grade models for text embedding, reranking and PDF data extraction, key components for real-world retrieval and document intelligence applications. With these open-source models, developers, researchers and organizations can more easily integrate and experiment with RAG-based systems. Developers can get started with Nemotron RAG on Hugging Face.

Building and Training AI Models With the Latest Open Datasets
NVIDIA is expanding access to high-quality open datasets that help developers overcome the challenges of large-scale data collection and focus on building advanced
AI systems. The latest release includes a collection of Nemotron-Personas datasets for sovereign AI. Each dataset is fully synthetic and grounded in real-world demographic, geographic and cultural data with no personally identifiable information. The growing collection, which features personas from the U.S., Japan and India, enables model builders to design AI agents and systems that reflect the linguistic, social and contextual nuance of the nations they serve.

NVIDIA earlier this year released the NVIDIA Physical AI Open Datasets on Hugging Face, featuring more than 7 million robotics trajectories and 1,000 OpenUSD SimReady assets. Downloaded more than 6 million times, the datasets combine real-world and synthetic data from the NVIDIA Cosmos, Isaac, DRIVE and Metropolis platforms to kickstart physical AI development.

NVIDIA Inception Startups Highlight AI Innovation
At the PyTorch Conference's Startup Showcase, 11 startups, including members of the NVIDIA Inception program, are sharing their work developing practical AI applications and connecting with investors, potential customers and peers. Runhouse, an AI infrastructure startup optimizing model deployment and orchestration, was crowned the 2025 PyTorch Startup Showcase Award Winner. The Community Choice Award was presented to CuraVoice, with CEO Sakhi Patel, CTO Shrey Modi and advisor Rahul Vishwakarma accepting the award on behalf of the team. CuraVoice provides an AI-powered voice simulation platform for healthcare students and professionals, powered by NVIDIA Riva for speech recognition and text-to-speech and NVIDIA NeMo for conversational AI models, offering interactive exercises and adaptive feedback to improve patient communication skills.

Shrey Modi, CTO of CuraVoice, accepts the PyTorch Startup Showcase Community Choice Award.

In addition to CuraVoice, other Inception members, including Backfield AI, Graphsignal, Okahu AI, Snapshot AI and XOR, were featured participants in the Startup Showcase. Snapshot AI delivers actionable, real-time insights to engineering teams using recursive retrieval-augmented generation (RAG), transformers and multimodal AI. The company's platform taps into the NVIDIA CUDA Toolkit to deliver high-performance analysis and rapid insights at scale. XOR is a cybersecurity startup offering AI agents that automatically fix vulnerabilities in the supply chain of other AIs. The company helps enterprises eliminate vulnerabilities while complying with regulatory requirements. XOR's agentic technology uses NVIDIA cuVS vector search for indexing, real-time retrieval and code analysis.
The company also uses GPU-based machine learning to train models to detect hidden backdoor patterns and prioritize high-value security outcomes.

From left to right: Dmitri Melikyan (Graphsignal, Inc.), Tobias Heldt (XOR), Youssef Harkati (BrightOnLABS), Vidhi Kothari (Seer Systems), Jonah Sargent (Node One) and Scott Suchyta (NVIDIA) at the Startup Showcase.

Highlights From Open Source AI Week
Attendees of Open Source AI Week are getting a peek at the latest advancements and creative projects that are shaping the future of open technology. Here's a look at what's happening onsite:
• The world's smallest AI supercomputer: NVIDIA DGX Spark represents the cutting edge of AI computing hardware for enterprise and research applications.
• Humanoids and robot dogs, up close: Unitree robots are on display, captivating attendees with advanced mobility powered by the latest robotics technology.
• Why open source is important: Learn how it can empower developers to build stronger communities, iterate on features and seamlessly integrate the best of open-source AI.

Accelerating AI Research Through Open Models
A study from the Center for Security and Emerging Technology (CSET) published today shows how access to open model weights unlocks more opportunities for experimentation, customization and collaboration across the global research community. The report outlines seven high-impact research use cases where open models are making a difference, including fine-tuning, continued pretraining, model compression and interpretability. With access to weights, developers can adapt models for new domains, explore new architectures and extend functionality to meet their specific needs. This also supports trust and reproducibility: when teams can run experiments on their own hardware, share updates and revisit earlier versions, they gain control and confidence in their results. Additionally, the study found that nearly all open model users share their data, weights and code, building a fast-growing culture of collaboration. This open exchange of tools and knowledge strengthens partnerships between academia, startups and enterprises, facilitating innovation. NVIDIA is committed to empowering the research community through the NVIDIA Nemotron family of open models, featuring not just open weights but also pretraining and post-training datasets, detailed training recipes and research papers that share the latest breakthroughs. Read the full CSET study to learn how open models are helping the AI community move forward.

Advancing Embodied Intelligence Through Open-Source Innovation
At the PyTorch Conference, Jim Fan, director of robotics and distinguished research scientist at NVIDIA, discussed the Physical Turing Test, a way of measuring the performance of intelligent machines in the physical world. With conversational AI now capable of fluent, lifelike communication, Fan noted that the next challenge is enabling machines to act with similar naturalism.
The Physical Turing Test asks: can an intelligent machine perform a real-world task so fluidly that a human cannot tell whether a person or a robot completed it? Fan highlighted that progress in embodied AI and physical AI depends on generating large amounts of diverse data, plus access to open robot foundation models and simulation frameworks, and walked through a unified workflow for developing embodied AI.

With synthetic data workflows like NVIDIA Isaac GR00T-Dreams, built on NVIDIA Cosmos world foundation models, developers can generate virtual worlds from images and prompts, speeding the creation of large sets of diverse and physically accurate data. That data can then be used to post-train NVIDIA Isaac GR00T N open foundation models for generalized humanoid robot reasoning and skills. But before the models are deployed in the real world, these new robot skills need to be tested in simulation. Open simulation and learning frameworks such as NVIDIA Isaac Sim and Isaac Lab allow robots to practice countless times across millions of virtual environments before operating in the real world, dramatically accelerating learning and deployment cycles. Plus, with Newton, an open-source, differentiable physics engine built on NVIDIA Warp and OpenUSD, developers can bring high-fidelity simulation to complex robotic dynamics such as motion, balance and contact, reducing the simulation-to-real gap. This accelerates the creation of physically capable AI systems that learn faster, perform more safely and operate effectively in real-world environments.

However, scaling embodied intelligence isn't just about compute; it's about access. Fan reaffirmed NVIDIA's commitment to open source, emphasizing how the company's frameworks and foundation models are shared to empower developers and researchers globally. Developers can get started with NVIDIA's open embodied and physical AI models on Hugging Face.

Llama-Embed-Nemotron-8B Ranks Among Top Open Models for Multilingual Retrieval
NVIDIA's Llama-Embed-Nemotron-8B model has been recognized as the top open and portable model on the Multilingual Text Embedding Benchmark leaderboard. Built on the meta-llama/Llama-3.1-8B architecture, Llama-Embed-Nemotron-8B is a research text embedding model that converts text into 4,096-dimensional vector representations. Designed for flexibility, it supports a wide range of use cases, including retrieval, reranking, semantic similarity and classification across more than 1,000 languages. Trained on a diverse collection of 16 million query-document pairs, half from public sources and half synthetically generated, the model benefits from refined data generation techniques, hard-negative mining and model-merging approaches that contribute to its broad generalization capabilities. This result builds on NVIDIA's ongoing research in open, high-performing AI models. Following earlier leaderboard recognition for the Llama-NeMoRetriever-ColEmbed model, the success of Llama-Embed-Nemotron-8B highlights the value of openness, transparency and collaboration in advancing AI for the developer community. Check out Llama-Embed-Nemotron-8B on Hugging Face, and learn more about the model, including architectural highlights, training methodology and performance evaluation.

What Open Source Teaches Us About Making AI Better
Open models are shaping the future of AI, enabling developers, enterprises and governments to innovate with transparency, customization and trust.
In the latest episode of the NVIDIA AI Podcast, NVIDIA's Bryan Catanzaro and Jonathan Cohen discuss how open models, datasets and research are laying the foundation for shared progress across the AI ecosystem. The NVIDIA Nemotron family of open models represents a full-stack approach to AI development, connecting model design to the underlying hardware and software that power it. By releasing Nemotron models, data and training methodologies openly, NVIDIA aims to help others refine, adapt and build upon its work, resulting in a faster exchange of ideas and more efficient systems. "When we as a community come together, contributing ideas, data and models, we all move faster," said Catanzaro in the episode. "Open technologies make that possible."

There's more happening this week at Open Source AI Week, including the start of the PyTorch Conference, bringing together developers, researchers and innovators pushing the boundaries of open AI. Attendees can tune in to the special keynote address by Jim Fan, director of robotics and distinguished research scientist at NVIDIA, to hear the latest advancements in robotics, from simulation and synthetic data to accelerated computing. The keynote, titled "The Physical Turing Test: Solving General Purpose Robotics," will take place on Wednesday, Oct. 22, from 9:50-10:05 a.m. PT.

Andrej Karpathy's Nanochat Teaches Developers How to Train LLMs in Four Hours
Computer scientist Andrej Karpathy recently introduced Nanochat, calling it "the best ChatGPT that $100 can buy." Nanochat is an open-source, full-stack large language model (LLM) implementation built for transparency and experimentation. In about 8,000 lines of minimal, dependency-light code, Nanochat runs the entire LLM pipeline, from tokenization and pretraining to fine-tuning, inference and chat, all through a simple web user interface. NVIDIA is supporting Karpathy's open-source Nanochat project by releasing two NVIDIA Launchables, making it easy to deploy and experiment with Nanochat across various NVIDIA GPUs. With NVIDIA Launchables, developers can train and interact with their own conversational model in hours with a single click. The Launchables dynamically support different-sized GPUs, including NVIDIA H100 and L40S GPUs on various clouds, without need for modification. They also automatically work on any eight-GPU instance on NVIDIA Brev, so developers can get compute access immediately. The first 10 users to deploy these Launchables will also receive free compute access to NVIDIA H100 or L40S GPUs. Start training with Nanochat by deploying a Launchable:
• Nanochat Speedrun on NVIDIA H100
• Nanochat Speedrun on NVIDIA L40S

Andrej Karpathy's Next Experiments Begin With NVIDIA DGX Spark
Today, Karpathy received an NVIDIA DGX Spark, the world's smallest AI supercomputer, designed to bring the power of Blackwell right to a developer's desktop. With up to a petaflop of AI processing power and 128GB of unified memory in a compact form factor, DGX Spark empowers innovators like Karpathy to experiment, fine-tune and run massive models locally.

Building the Future of AI With PyTorch and NVIDIA
PyTorch, the fastest-growing AI framework, derives its performance from the NVIDIA CUDA platform and uses the Python programming language to unlock developer productivity.
This year, NVIDIA added Python as a first-class language to the CUDA platform, giving the PyTorch developer community greater access to CUDA. CUDA Python includes key components that make GPU acceleration in Python easier than ever, with built-in support for kernel fusion, extension module integration and simplified packaging for fast deployment. Following PyTorch's open collaboration model, CUDA Python is available on GitHub and PyPI. According to PyPI Stats, PyTorch averaged over two million daily downloads, peaking at 2,303,217 on October 14, and had 65 million total downloads last month.

Every month, developers worldwide download hundreds of millions of NVIDIA libraries, including CUDA, cuDNN, cuBLAS and CUTLASS, mostly within Python and PyTorch environments. CUDA Python provides nvmath-python, a new library that acts as the bridge between Python code and these highly optimized GPU libraries. Plus, kernel enhancements and support for next-generation frameworks make NVIDIA accelerated computing more efficient, adaptable and widely accessible.

NVIDIA maintains a long-standing collaboration with the PyTorch community through open-source contributions and technical leadership, as well as by sponsoring and participating in community events and activations. At PyTorch Conference 2025 in San Francisco, NVIDIA will host a keynote address, five technical sessions and nine poster presentations. NVIDIA's on the ground at Open Source AI Week. Stay tuned for a celebration highlighting the spirit of innovation, collaboration and community that drives open-source AI forward. Follow NVIDIA AI Developer on social channels for additional news and insights.

NVIDIA Spotlights Open Source Innovation
Open Source AI Week kicks off on Monday with a series of hackathons, workshops and meetups spotlighting the latest advances in AI, machine learning and open-source innovation. The event brings together leading organizations, researchers and open-source communities to share knowledge, collaborate on tools and explore how openness accelerates AI development. NVIDIA continues to expand access to advanced AI innovation by providing open-source tools, models and datasets designed to empower developers. With more than 1,000 open-source tools on NVIDIA GitHub repositories and over 500 models and 100 datasets in the NVIDIA Hugging Face collections, NVIDIA is accelerating the pace of open, collaborative AI development. Over the past year, NVIDIA has become the top contributor in Hugging Face repositories, reflecting a deep commitment to sharing models, frameworks and research that empower the community.

Video: https://blogs.nvidia.com/wp-content/uploads/2025/10/1016.mp4

Openly available models, tools and datasets are essential to driving innovation and progress. By empowering anyone to use, modify and share technology, open availability fosters transparency and accelerates discovery, fueling breakthroughs that benefit industry and communities alike. That's why NVIDIA is committed to supporting the open-source ecosystem. We're on the ground all week; stay tuned for a celebration highlighting the spirit of innovation, collaboration and community that drives open-source AI forward, with the PyTorch Conference serving as the flagship event.
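To make the vLLM integration mentioned above concrete, here is a minimal, hedged sketch of running a Nemotron checkpoint with vLLM's offline inference API. The Hugging Face model ID below is a placeholder assumption for illustration; substitute whichever open Nemotron checkpoint fits your hardware, and note that exact flags can differ by vLLM version.

```python
# Minimal sketch: offline inference with vLLM on an open Nemotron checkpoint.
# Assumption: the model ID is a placeholder; swap in the Nemotron model you intend to serve.
from vllm import LLM, SamplingParams

llm = LLM(
    model="nvidia/NVIDIA-Nemotron-Nano-9B-v2",  # placeholder Nemotron checkpoint
    trust_remote_code=True,                     # some Nemotron repos ship custom modeling code
)

params = SamplingParams(temperature=0.2, max_tokens=256)

prompts = ["Summarize in two sentences why open model weights help researchers."]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```

In recent vLLM releases, the same checkpoint could also be exposed as an OpenAI-compatible endpoint with `vllm serve <model>`, which is how most applications would consume it in production.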
  • NVIDIA GTC Washington, D.C.: Live Updates on What's Next in AI
    blogs.nvidia.com
Countdown to GTC Washington, D.C.: What to Watch Next Week
Next week, Washington, D.C., becomes the center of gravity for artificial intelligence. NVIDIA GTC Washington, D.C., lands at the Walter E. Washington Convention Center Oct. 27-29, and for those who care about where computing is headed, this is the moment to pay attention.

The headline act: NVIDIA founder and CEO Jensen Huang's keynote address on Tuesday, Oct. 28, at 12 p.m. ET. Expect more than product news; expect a roadmap for how AI will reshape industries, infrastructure and the public sector. Before that, the pregame show kicks off at 8:30 a.m. ET with Brad Gerstner, Patrick Moorhead and Kristina Partsinevelos offering sharp takes on what's coming.

But GTC offers more than a keynote. It provides full immersion: 70+ sessions, hands-on workshops and demos covering everything from agentic AI and robotics to quantum computing and AI-native telecom networks. It's where developers meet decision-makers, and ideas turn into action. Exhibits-only passes are still available.

Bookmark this space. Starting Monday, NVIDIA will live-blog the news, the color and the context, straight from the floor.
  • GeForce NOW Brings 18 Games to the Cloud in October for a Spooky Good Time
    blogs.nvidia.com
Editor's note: This blog has been updated to include an additional game for October, The Outer Worlds 2.

October is creeping in with plenty of gaming treats. From thrilling adventures to spine-tingling scares, the cloud gaming lineup is packed with 18 new games, including the highly anticipated shooter Battlefield 6, launching on GeForce NOW this month. But first, catch the six games coming this week.

Miami and Warsaw, Poland, are the latest regions to get GeForce RTX 5080-class power, with Portland and Ashburn coming up next. Stay tuned to GFN Thursday for updates as more regions upgrade to Blackwell RTX. Follow along with the latest progress on the server rollout page. Portland and Ashburn will be the next regions to light up with GeForce RTX 5080-class power.

This week, inZOI and Total War: Warhammer III join the lineup of GeForce RTX 5080-ready titles, both already available on the service. Look for the GeForce RTX 5080 Ready row in the app or check out the full list.

Falling for New Games
Catch the games ready to play today:
• Train Sim World 6 (New release on Steam, Sept. 30)
• Alien: Rogue Incursion Evolved Edition (New release on Steam, Sept. 30)
• Car Dealer Simulator (Steam)
• Nightingale (Epic Games Store)
• Ready or Not (Epic Games Store)
• STALCRAFT: X (Steam)

New GeForce RTX 5080-ready games:
• inZOI (Steam)
• Total War: Warhammer III (Steam and Epic Games Store)

Catch the full list of games coming to the cloud in October:
• King of Meat (New release on Steam, Oct. 7)
• Seafarer: The Ship Sim (New release on Steam, Oct. 7)
• Little Nightmares III (New release on Steam, Oct. 9)
• Battlefield 6 (New release on Steam and EA app, Oct. 10)
• Ball x Pit (New release on Steam, Oct. 15)
• Fellowship (New release on Steam, Oct. 16)
• Jurassic World Evolution 3 (New release on Steam, Oct. 21)
• Painkiller (New release on Steam, Oct. 21)
• Vampire: The Masquerade Bloodlines 2 (New release on Steam, Oct. 21)
• Tormented Souls 2 (New release on Steam, Oct. 23)
• Super Fantasy Kingdom (New release on Steam, Oct. 24)
• Earth vs. Mars (New release on Steam, Oct. 29)
• The Outer Worlds 2 (New release on Steam, Battle.net and Xbox, available on PC Game Pass, Oct. 29)
• ARC Raiders (New release on Steam, Oct. 30)

Stacked September
In addition to the 17 games announced in September, an extra dozen joined over the month, including the newly added Train Sim World 6 this week:
• Call of Duty: Modern Warfare III (Steam, Battle.net and Xbox, available on PC Game Pass)
• Field of Glory II: Medieval (Steam)
• Goblin Cleanup (New release on Steam)
• Phoenix Wright: Ace Attorney Trilogy (Steam)
• Professional Fishing 2 (New release on Steam)
• Project Winter (New release on Epic Games Store)
• Renown (New release on Steam)
• Sworn (New release on Xbox, available on PC Game Pass)
• Two Point Campus (Steam, Epic Games Store)
• Two Point Museum (Steam, Epic Games Store)
• Town to City (New release on Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.

"If you could only play one genre of game for the rest of your life, what would it be and why? Extra credit: screenshots or clips of that genre in action!" NVIDIA GeForce NOW (@NVIDIAGFN) October 1, 2025
  • Open Secret: How NVIDIA Nemotron Models, Datasets and Techniques Fuel AI Development
    blogs.nvidia.com
Open technologies made available to developers and businesses to adopt, modify and innovate with have been part of every major technology shift, from the birth of the internet to the early days of cloud computing. AI should follow the same path.

That's why the NVIDIA Nemotron family of multimodal AI models, datasets and techniques is openly available. Accessible for research and commercial use, from local PCs to enterprise-scale systems, Nemotron provides an open foundation for building AI applications. It's available for developers to get started on GitHub, Hugging Face and OpenRouter. Nemotron enables developers, startups and enterprises of any size to use models trained with transparent, open-source training data. It offers tools to accelerate every phase of development, from customization to deployment. The technology's transparency means that its adopters can understand how their models work and trust the results they provide. Nemotron's capabilities for generalized intelligence and agentic AI reasoning, and its adaptability to specialized AI use cases, have led to its widespread use today by AI innovators and leaders across industries such as manufacturing, healthcare, education and retail.

What's NVIDIA Nemotron?
NVIDIA Nemotron is a collection of open-source AI technologies designed for efficient AI development at every stage. It includes:
• Multimodal models: State-of-the-art AI models, delivered as open checkpoints, that excel at graduate-level scientific reasoning, advanced math, coding, instruction following, tool calling and visual reasoning.
• Pretraining, post-training and multimodal datasets: Collections of carefully chosen text, image and video data that teach AI models skills including language, math and problem-solving.
• Numerical precision algorithms and recipes: Advanced precision techniques that make AI faster and cheaper to run while keeping answers accurate.
• System software for scaling training efficiently on GPU clusters: Optimized software and frameworks that accelerate training and inference on NVIDIA GPUs at massive scale for the largest models.
• Post-training methodologies and software: Fine-tuning steps that make AI smarter, safer and better at specific jobs.

Nemotron is part of NVIDIA's wider efforts to provide open, transparent and adaptable AI platforms for developers, industry leaders and AI infrastructure builders across the private and public sectors.

What's the Difference Between Generalized Intelligence and Specialized Intelligence?
NVIDIA built Nemotron to raise the bar for generalized intelligence capabilities, including AI reasoning, while also accelerating specialization, helping businesses worldwide adopt AI for industry-specific challenges. Generalized intelligence refers to models trained on vast public datasets to perform a wide range of tasks. It serves as the engine needed for broad problem-solving and reasoning tasks.
Specialized intelligence learns the unique language, processes and priorities of an industry or organization, giving AI models the ability to adapt to specific real-world applications. To deliver AI at scale across every industry, both are essential. That's why Nemotron provides pretrained foundation models optimized for a range of computing platforms, as well as tools like NVIDIA NeMo and NVIDIA Dynamo to transform generalized AI models into custom models tailored for specialized intelligence.

How Are Developers and Enterprises Using Nemotron?
NVIDIA is building Nemotron to accelerate the work of developers everywhere and to inform the design of future AI systems. From researchers to startups and global enterprises, developers need flexible, trustworthy AI. Nemotron offers the tools to build, customize and integrate AI for virtually any field.

CrowdStrike is integrating its Charlotte AI AgentWorks no-code platform for security teams with Nemotron, helping to power and secure the agentic ecosystem. This collaboration redefines security operations by enabling analysts to build and deploy specialized AI agents at scale, leveraging trusted, enterprise-grade security with Nemotron models.

DataRobot is using Nemotron as the open foundation for training, customizing and managing AI agents at scale in the Agent Workforce Platform co-developed with NVIDIA, a solution for building, operating and governing a fully functional AI agent workforce in on-premises, hybrid and multi-cloud environments.

ServiceNow introduced the Apriel Nemotron 15B model earlier this year in partnership with NVIDIA. Post-trained with data from both companies, the model is purpose-built for real-time workflow execution and delivers advanced reasoning in a smaller size, making it faster, more efficient and cost-effective.

UK-LLM, a sovereign AI initiative led by University College London, used Nemotron open-source techniques and datasets to develop an AI reasoning model for English and Welsh.

NVIDIA also uses the insights gained from developing Nemotron to inform the design of its next-generation systems, including Grace Blackwell, Vera Rubin and Feynman. The latest innovations in AI models, including reduced precision, sparse arithmetic, new attention mechanisms and optimization algorithms, all shape GPU architectures. For example, NVFP4, a new data format that uses just four bits per parameter during large language model (LLM) training, was discovered with Nemotron. This advancement, which dramatically reduces energy use, is influencing the design of future NVIDIA systems.

NVIDIA also improves Nemotron with open technologies built by the broader AI community. Alibaba's Qwen open model has provided data augmentation that has improved Nemotron's pretraining and post-training datasets.
The latest Qwen3-Next architecture pushed the frontier of long-context AI; the model leverages Gated Delta Networks from NVIDIA research and MIT.

DeepSeek R1, a pioneer in AI reasoning, led to the development of Nemotron math, code and reasoning open datasets that can be used to teach models how to think.

OpenAI's gpt-oss open-weight models demonstrate incredible reasoning, math and tool-calling capabilities, including adjustable reasoning settings, that can be used to strengthen Nemotron post-training datasets.

The Llama collection of open models by Meta is the foundation for Llama-Nemotron, an open family of models that used Nemotron datasets and recipes to add advanced reasoning capabilities.

Start training and customizing AI models and agents with NVIDIA Nemotron models and data on Hugging Face, or try models for free on OpenRouter. Developers using NVIDIA RTX PCs can access Nemotron via the llama.cpp framework (a minimal loading sketch follows this post).

Join NVIDIA for Agentic AI Day at NVIDIA GTC Washington, D.C., on Wednesday, Oct. 29. The event will bring together developers, researchers and technology leaders to highlight how NVIDIA technologies are accelerating national AI priorities and powering the next generation of AI agents.

Stay up to date on agentic AI, Nemotron and more by subscribing to NVIDIA developer news, joining the developer community and following NVIDIA AI on LinkedIn, Instagram, X and Facebook.
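To ground the "get started on Hugging Face" pointer above, here is a minimal, hedged sketch of pulling an open Nemotron checkpoint with the Hugging Face transformers library and running a single chat turn. The model ID is a placeholder assumption; pick whichever Nemotron checkpoint fits your license needs and hardware, and expect some repos to require trust_remote_code or a different chat template.

```python
# Minimal sketch: load a Nemotron checkpoint from Hugging Face and generate one reply.
# Assumption: the model ID below is a placeholder; substitute the Nemotron variant you want.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Llama-3.1-Nemotron-Nano-8B-v1"  # placeholder Nemotron checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # half precision to fit consumer GPUs
    device_map="auto",            # place layers on the available GPU(s)
)

messages = [{"role": "user", "content": "In one paragraph, what is an open-weight model?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```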
  • Canada Goes All In on AI: NVIDIA Joins Nation's Technology Leaders in Montreal to Shape Sovereign AI Strategy
    blogs.nvidia.com
    Canadas role as a leader in artificial intelligence was on full display at this weeks All In Canada AI Ecosystem event.NVIDIA Vice President of Generative AI Software Kari Briski today joined Canadas Minister of Artificial Intelligence and Digital Innovation Evan Solomon and Aiden Gomez, cofounder and CEO of Cohere, in a special address moderated by SiriusXM host Amber Mac.Youre all here to deliver the next big thing, Solomon told the room full of founders, researchers, investors and students. The AI revolution is the birth not just of a new technology this is the birth of the age of the entrepreneur.The session comes as Canadian communications technology company and NVIDIA Cloud Partner TELUS announces the launch of Canadas first fully sovereign AI factory in Rimouski, Quebec, powered by the latest NVIDIA accelerated computing, and financial services company RBC Capital Markets continues building AI agents for capital markets using NVIDIA software.For our government, for our country, All In means building digital sovereignty the most pressing policy, democratic issue of our time, Solomon said.Every nation should develop its own AI not just outsource it, Briski said during the panel conversation following Solomons address. AI must reflect local values, understand cultural context and align with national norms and policies. It needs to speak and write in the nuanced patterns of your natural language. Digital intelligence isnt something you can simply outsource.From left to right, Amber Mac, SiriusXM podcast host and moderator; Kari Briski, vice president of generative AI software for enterprise at NVIDIA; Aiden Gomez, cofounder and CEO of Cohere; Evan Solomon, Canadas Minister of Artificial Intelligence.The event marks a pivotal moment in Canadas AI journey, bringing together public and private sector leaders to spotlight the national infrastructure, innovation and policy that shape the future of artificial intelligence. It underscores the countrys commitment to digital sovereignty, economic competitiveness and the responsible development of AI.Canada must own the tools and the rules that matter at this critical moment, Solomon said. We need our digital insurance policy and thats what were building.Canadas AI momentum is accelerating.TELUS new facility, powered by NVIDIAs computing and software, and built in collaboration with HPE, offers end-to-end AI capabilities from model training to inferencing while ensuring full data residency and control within Canadian borders.The factory is already serving clients including OpenText, and is powered by 99% renewable energy and TELUS PureFibre network.Accenture will develop and deploy industry-specific solutions on the TELUS sovereign AI platform, accelerating AI adoption across its Canadian clients.And League, Canadas leading healthcare consumer experience provider, will run its comprehensive suite of AI-powered healthcare solutions using the TELUS Sovereign AI Factory.This event is the latest in a global wave of initiatives as countries activate AI to supercharge their economies and research ecosystems.Over the past year, NVIDIA founder and CEO Jensen Huang has appeared at events in France, Germany, India, Japan and the U.K., joining heads of state and industry leaders to highlight national AI strategies, announce infrastructure investments and accelerate public-private collaboration.Leadership is not a birthright, Solomon said. 
It has to be earned again and again, and the competition is fierce.

And last year, during a visit to Canada, Huang highlighted Canada's pioneering role in modern AI, describing it as the epicenter of innovation in modern AI, building on the foundational work of pioneering Canadian AI researchers such as Geoffrey Hinton and Yoshua Bengio, who is also speaking at the conference.

RBC Capital Markets works with NVIDIA software to build enterprise-grade AI agents for capital markets, enabling global institutions to deploy intelligent systems tailored to local needs. These agents, customized with NVIDIA NeMo agent lifecycle tools and deployed using NVIDIA NIM microservices, are helping transform RBC Capital Markets research for faster delivery of insights to clients (a hedged sketch of calling a NIM endpoint follows this post).

RBC Capital Markets, TELUS and NVIDIA are sharing more on best practices for agentic AI development in a special session at All In on Wednesday from 4:15-5 p.m. ET.
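The post doesn't detail RBC's implementation, but NVIDIA NIM microservices generally expose an OpenAI-compatible HTTP API, so a hedged sketch of what calling such an endpoint looks like may help make the architecture concrete. The base URL, port and model name below are assumptions for a self-hosted NIM container; substitute the values from your own deployment.

```python
# Minimal sketch: query a self-hosted NVIDIA NIM microservice through its
# OpenAI-compatible endpoint. URL, port and model name are placeholder assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # common default for local NIM containers; verify yours
    api_key="not-needed-for-local",       # local NIM deployments often ignore the key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # placeholder; use the model your NIM serves
    messages=[
        {"role": "user", "content": "Summarize today's research note on rate moves in three bullets."}
    ],
    temperature=0.2,
    max_tokens=300,
)

print(response.choices[0].message.content)
```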
  • AI On: How Onboarding Teams of AI Agents Drives Productivity and Revenue for Businesses
    blogs.nvidia.com
AI is no longer solely a back-office tool. It's a strategic partner that can augment decision-making across every line of business. Whether users aim to reduce operational overhead or personalize customer experiences at scale, custom AI agents are key.

As AI agents are adopted across enterprises, managing their deployment will require a deliberate strategy. The first steps are architecting the enterprise AI infrastructure to optimize for fast, cost-efficient inference and creating a data pipeline that keeps agents continuously fed with timely, contextual information. Alongside human and hardware resourcing, onboarding AI agents will become a core strategic function for businesses as leaders orchestrate digital talent across the organization.

Here's how to onboard teams of AI agents:

1. Choose the Right AI Agent for the Task
Just as human employees are hired for specific roles, AI agents must be selected and trained based on the task they're meant to perform. Enterprises now have access to a variety of AI models, including for language, vision, speech and reasoning, each with unique strengths. For that reason, proper model selection is critical to achieving business outcomes:
• Choose a reasoning agent to solve complex problems that require puzzling through answers.
• Use a code-generation copilot to assist developers with writing, changing and merging code.
• Deploy a video analytics AI agent for analyzing site inspections or product defects.
• Onboard a customer service AI assistant that's grounded in a specific knowledge base rather than a generic foundation model.

Model selection affects agent performance, costs, security and business alignment. The right model enables the agent to accurately address business challenges, align with compliance requirements and safeguard sensitive data. Choosing an unsuitable model can lead to overconsumption of computing resources, higher operational costs and inaccurate predictions that negatively impact agent decision-making. With software like NVIDIA NIM and NeMo microservices, developers can swap in different models and connect tools based on their needs. The result: task-specific agents fine-tuned to meet a business's goals, data strategy and compliance requirements.

2. Upskill AI Agents by Connecting Them to Data
Onboarding AI agents requires building a strong data strategy. AI agents work best with a consistent stream of data that's specific to the task and the business they're operating within. Institutional knowledge, the accumulated wisdom and experience within an organization, is a crucial asset that can often be lost when employees leave or retire. AI agents can play a pivotal role in capturing and preserving this knowledge for employees to use.
• Connecting AI to data sources: To function at their best, AI agents must interpret a variety of data types, from structured databases to unstructured formats such as PDFs, images and videos. Such connection enables the agents to generate tailored, context-aware responses that go beyond the capabilities of a standalone foundation model, delivering more precise and valuable outcomes.
• AI as a knowledge repository: AI agents benefit from systems that capture, process and reuse data. A data flywheel continuously collects, processes and uses information to iteratively improve the underlying system. AI systems benefit from this flywheel, recording interactions, decisions and problem-solving approaches to self-optimize their model performance and efficiency.
For example, integrating AI into customer service operations allows the system to learn from every conversation, capturing valuable feedback and questions. This data is then used to refine responses and maintain a comprehensive repository of institutional knowledge. NVIDIA NeMo supports the development of powerful data flywheels, providing the tools for continuously curating, refining and evaluating data and models. This enables AI agents to improve accuracy and optimize performance through ongoing adaptation and learning.

3. Onboard AI Agents Into Lines of Business
Once enterprises create the cloud-based, on-premises or hybrid AI infrastructure to support AI agents and refine the data strategy to feed those agents timely and contextual information, the next step is to systematically deploy AI agents across business units, moving from pilot to scale. According to a recent IDC survey of 125 chief information officers, the top three areas where enterprises are looking to integrate agentic AI are IT processes, business operations and customer service. In each area, AI agents help enhance the productivity of existing employees, such as by automating the ticketing process for IT engineers or giving employees easy access to data to help serve customers.

AI agents in the enterprise could also be onboarded for domain-specific operations. For telecom operations, Amdocs builds verticalized AI agents using its amAIz platform to handle complex, multistep customer journeys spanning sales, billing and care, and to advance autonomous networks from optimized planning to efficient deployment. This helps ensure performance of the networks and the services they support. NVIDIA has partnered with various enterprises, such as enterprise software company ServiceNow, and global systems integrators, like Accenture and Deloitte, to build and deploy AI agents for maximum business impact across use cases and lines of business.

4. Provide Guardrails and Governance for AI Agents
Just like employees need clear guidelines to stay on track, AI models require well-defined guardrails to ensure they provide reliable, accurate outputs and operate within ethical boundaries.
• Topical guardrails: Topical guardrails prevent the AI from veering off into areas where it isn't equipped to provide accurate answers. For instance, a customer service AI assistant should focus on resolving customer queries and not drift into unrelated topics such as upsells and offerings.
• Content safety guardrails: Content safety guardrails moderate human-LLM interactions by classifying prompts and responses as safe or unsafe and tagging violations by category when unsafe. These guardrails filter out unwanted language and make sure references are made only to reliable sources, so the AI's output is trustworthy.
• Jailbreak guardrails: With a growing number of agents having access to sensitive information, the agents could become vulnerable to data breaches over time. Jailbreak guardrails are designed to help with adversarial threats as well as detect and block jailbreak and prompt injection attempts targeting LLMs.
These help ensure safer AI interactions by identifying malicious prompt manipulations in real time.

NVIDIA NeMo Guardrails empowers enterprises to set and enforce domain-specific guidelines by providing a flexible, programmable framework that keeps AI agents aligned with organizational policies, helping ensure they consistently operate within approved topics, maintain safety standards and comply with security requirements with the least latency added at inference. A minimal configuration sketch appears at the end of this post.

Get Started Onboarding AI Agents
The best AI agents are not one-size-fits-all. They're custom-trained, purpose-built and continuously learning. Business leaders can start their AI agent onboarding process by asking:
• What business outcomes do we want AI to drive?
• What knowledge and tools does the AI need access to?
• Who are the human collaborators or overseers?

In the near future, every line of business will have dedicated AI agents trained on its data, tuned to its goals and aligned with its compliance needs. The organizations that invest in thoughtful onboarding, secure data strategies and continuous learning are poised to lead the next phase of enterprise transformation.

Watch this on-demand webinar to learn how to create an automated data flywheel that continuously collects feedback to onboard, fine-tune and scale AI agents across enterprises. Stay up to date on agentic AI, NVIDIA Nemotron and more by subscribing to NVIDIA AI news, joining the community and following NVIDIA AI on LinkedIn, Instagram, X and Facebook. Explore the self-paced video tutorials and livestreams.
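As a concrete illustration of the guardrails described above, here is a minimal, hedged sketch using the open-source NeMo Guardrails toolkit to define a simple topical rail in Colang. The engine and model names in the YAML are assumptions for illustration only, and the exact Colang dialect and configuration keys may vary with the toolkit version you install.

```python
# Minimal sketch: a topical guardrail with NeMo Guardrails.
# Assumptions: the "openai" engine and "gpt-4o-mini" model are placeholders; point the
# config at whichever LLM backend your deployment actually uses.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_content(
    colang_content="""
define user ask account question
  "How do I update my payment method?"
  "Where can I see my invoices?"

define user ask off topic
  "Can you give me stock tips?"
  "What do you think about politics?"

define bot refuse off topic
  "I can only help with questions about your account and our services."

define flow off topic
  user ask off topic
  bot refuse off topic
""",
    yaml_content="""
models:
  - type: main
    engine: openai
    model: gpt-4o-mini
""",
)

rails = LLMRails(config)
response = rails.generate(messages=[{"role": "user", "content": "Any hot stock tips?"}])
print(response["content"])  # expected to trigger the off-topic refusal flow
```

In practice the same rails could live in a config directory (config.yml plus .co files) loaded with RailsConfig.from_path, which keeps policy definitions versioned alongside the agent's code.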
  • NVIDIA, OpenAI Announce the Biggest AI Infrastructure Deployment in History
    blogs.nvidia.com
    OpenAI and NVIDIA just announced a landmark AI infrastructure partnership an initiative that will scale OpenAIs compute with multi-gigawatt data centers powered by millions of NVIDIA GPUs.To discuss what this means for the next generation of AI development and deployment, the two companies CEOs, and the president of OpenAI, spoke this morning with CNBCs Jon Fortt.This is the biggest AI infrastructure project in history, said NVIDIA founder and CEO Jensen Huang in the interview. This partnership is about building an AI infrastructure that enables AI to go from the labs into the world.Through the partnership, OpenAI will deploy at least 10 gigawatts of NVIDIA systems for OpenAIs next-generation AI infrastructure, including the NVIDIA Vera Rubin platform. NVIDIA also intends to invest up to $100 billion in OpenAI progressively as each gigawatt is deployed.Theres no partner but NVIDIA that can do this at this kind of scale, at this kind of speed, said Sam Altman, CEO of OpenAI.The million-GPU AI factories built through this agreement will help OpenAI meet the training and inference demands of its next frontier of AI models.Building this infrastructure is critical to everything we want to do, Altman said. This is the fuel that we need to drive improvement, drive better models, drive revenue, drive everything.(L to R): OpenAI President Greg Brockman, NVIDIA Founder and CEO Jensen Huang, and OpenAI CEO Sam AltmanBuilding Million-GPU Infrastructure to Meet AI DemandSince the launch of OpenAIs ChatGPT which in 2022 became the fastest application in history to reach 100 million users the company has grown its user base to more than 700 million weekly active users and delivered increasingly advanced capabilities, including support for agentic AI, AI reasoning, multimodal data and longer context windows.To support its next phase of growth, the companys AI infrastructure must scale up to meet not only training but inference demands of the most advanced models for agentic and reasoning AI users worldwide.The cost per unit of intelligence will keep falling and falling and falling, and we think thats great, said Altman. But on the other side, the frontier of AI, maximum intellectual capability, is going up and up. And that enables more and more use and a lot of it.Without enough computational resources, Altman explained, people would have to choose between impactful use cases, for example either researching a cancer cure or offering free education.No one wants to make that choice, he said. And so increasingly, as we see this, the answer is just much more capacity so that we can serve the massive need and opportunity.In 2016, NVIDIA CEO Jensen Huang hand-delivered the first NVIDIA DGX system to OpenAIs headquarters in San Francisco.The first gigawatt of NVIDIA systems built with NVIDIA Vera Rubin GPUs will generate their first tokens in the second half of 2026.The partnership expands on a long-standing collaboration between NVIDIA and OpenAI, which began with Huang hand-delivering the first NVIDIA DGX system to the company in 2016.This is a billion times more computational power than that initial server, said Greg Brockman, president of OpenAI. 
We're able to actually create new breakthroughs, new models, to empower every individual and business, because we'll be able to reach the next level of scale.

Huang emphasized that though this is the start of a massive buildout of AI infrastructure around the world, it's just the beginning. We're literally going to connect intelligence to every application, to every use case, to every device, and we're just at the beginning, Huang said. This is the first 10 gigawatts, I assure you of that.

Watch the CNBC interview below.
  • At Climate Week NYC, NVIDIA Details AI's Key Role in the Sustainable Energy Transition
    blogs.nvidia.com
Energy efficiency in large language model inference has improved 100,000x in the past 10 years, demonstrating that accelerated computing is sustainable computing.

At Climate Week NYC, taking place through Sept. 26 in New York City, NVIDIA is showcasing how accelerated computing is propelling the sustainable energy transition and advancing climate research. The summit brings together researchers, startups, scientists, technologists, nonprofits and policymakers to discuss bold ideas for climate action. This year's theme is energy: where it comes from, how to scale it and how AI can optimize the grid in newfound ways. Throughout the week, NVIDIA will be highlighting its own innovative climate technologies, as well as a pair of recently published product carbon footprint reports on the emissions intensity of NVIDIA GPUs.

AI Usage Isn't Black and White; It Can Actually Be Green
AI can play a critical role in stabilizing energy grids by pinpointing anomalies at a rapid rate. These timely insights can allow operators to respond to issues efficiently before they affect the larger grid.

Forecasted AI-Induced Energy Savings Across Energy-Intensive Sectors by 2035

Sector          Subsector                   2035 AI Energy Savings (%)   2035 Demand, Reference Case (petajoules)
Industry        Iron and Steel              3                            1,160
Industry        Cement                      4                            500
Industry        Chemicals                   2                            10,440
Industry        Aluminum                    4                            260
Industry        Paper                       2                            1,860
Industry        Other                       8                            13,650
Transportation  Light commercial vehicles   6                            8,160
Transportation  Heavy duty trucks           3                            3,670
Transportation  Cars                        3                            4,460
Transportation  Buses                       6                            1,690
Transportation  Aviation                    4                            3,120
Transportation  Shipping                    4                            940
Transportation  Rail                        7                            530
Buildings       Residential                 1                            4,780
Buildings       Non-residential             4                            1,760

Data sources: Davide D'Ambrosio et al., Energy and AI (Paris: International Energy Agency, April 2025) and Net Zero America: Potential Pathways, Infrastructure, and Impacts, Princeton University, as featured in the CSIS report AI for the Grid.

According to the Net-Zero America Project's calculations, if AI applications are fully adopted, nearly 4.5% of projected energy demand in 2035 will be saved across the three most energy-intensive sectors: industry, transportation and buildings (a quick arithmetic check of this figure appears at the end of this post).

NVIDIA joined a Climate Week panel discussion on AI: Powering a More Productive Energy Future yesterday, following the release of Center for Strategic and International Studies findings on AI and energy. Panelists included Crusoe Energy Systems, a company that builds and operates clean computing infrastructure, and Emerald AI, a startup developing an AI solution to control data center power use during times of peak grid demand. This discussion centered around how AI will advance sustainability solutions at an unprecedented pace, from responsible grid and power infrastructure scaling to reliable transportation and nuclear energy optimization.

Startup Ecosystem Advances AI Energy Efficiency, Sustainability Projects
Emerald AI, an NVIDIA NVentures portfolio company, is collaborating with NVIDIA on a recently unveiled NVIDIA Omniverse Blueprint for building high-performance, grid-friendly and energy-efficient AI infrastructure. This new reference design enables the transformation of data centers into fully integrated AI factories optimized so that every watt of energy contributes to intelligence generation. As a collaborator on NVIDIA's reference design for giga-scale AI factories, we're helping prove that AI compute can be power-flexible, said Varun Sivaram, founder and CEO of Emerald AI.
Its a paradigm shift with a massive prize: unlocking 100 gigawatts of untapped power grid capacity and resolving AIs energy bottleneck while promoting affordable, reliable and clean power grids.Emerald AI is a member of the NVIDIA Inception program for startups, within the Sustainable Futures initiative. These companies are pioneering developments in fields such as green computing, sustainable infrastructure, wildlife conservation and more.Sustainable Futures members Vibrant Planet, FortyGuard, Pachama and Wherobots are also attending this weeks summit.Decreasing the Carbon Footprint of NVIDIA Products and OperationsNVIDIA is continuously working to decrease its own carbon footprint.Its first product carbon footprint summary comparison was recently released revealing a 24% reduction in embodied carbon emissions intensity between NVIDIA HGX H100 and HGX B200 baseboards.Scope and methodology of the product carbon footprint analysis for NVIDIA HGX B200.NVIDIA will continue to publish product carbon footprint summaries of newly released products to spotlight improvements in energy efficiency and sustainability.In terms of NVIDIAs physical footprint, all offices and data centers under the companys operational control run on 100% renewable energy and carbon-free electricity is purchased to cover 100% of the companys leased data centers footprint.NVIDIA headquarters in Santa Clara, CaliforniaApplying AI to Climate and Weather ResearchHigh-resolution, AI-powered weather models are helping strengthen energy systems and reduce vulnerability to unpredictable climate events.When used to support energy grid stability, these simulations can help utilities more precisely direct maintenance crews to remove obstacles close to power lines ahead of storms.AI-driven climate models are also poised to increase the adoption and usage of renewables across the energy grid by lowering costs and ramping up efficiency.Josh Parker, head of sustainability at NVIDIA, and Holly Paeper, president of commercial HVAC Americas for Trane Technologies, spoke during a Climate Week NYC fireside chat at the Nest Climate Campus.With these climate models, grid operators can accurately determine factors like the amount of power wind turbines will generate on a given day, or how much energy collected in solar batteries will need to be saved to compensate and keep a citys lights on and stable.These insights can help energy providers manage load and lower the cost of adopting renewables, presenting a path forward to decarbonize the grid.The NVIDIA Earth-2 platform offers tools, microservices, and reference implementations that help developers build applications to simulate and visualize weather and climate predictions at a global scale.Work powered by Earth-2 is featured on an array of panels at Climate Week, including Columbia Universitys GenAI for Climate Science, AI for Energy and Energy for AI, Where the Internet Lives, Presented by Google, and the AWS Climate Tech & AI Forum.Learn more about NVIDIA Earth-2 and sustainable computing solutions.
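As a sanity check on the roughly 4.5% savings figure cited above, the short script below recomputes it from the reconstructed table as a demand-weighted average of the per-subsector savings. This is editorial arithmetic rather than part of the original post, and it assumes the table values were split correctly from the flattened source layout.

```python
# Demand-weighted average of 2035 AI energy savings, using the table above.
# Each tuple is (savings %, 2035 demand in PJ) for one subsector, as reconstructed.
rows = [
    (3, 1160), (4, 500), (2, 10440), (4, 260), (2, 1860), (8, 13650),           # industry
    (6, 8160), (3, 3670), (3, 4460), (6, 1690), (4, 3120), (4, 940), (7, 530),  # transportation
    (1, 4780), (4, 1760),                                                        # buildings
]

saved = sum(pct / 100 * demand for pct, demand in rows)   # petajoules saved
total = sum(demand for _, demand in rows)                 # total projected demand

print(f"{saved:,.0f} PJ saved of {total:,} PJ -> {saved / total:.1%}")
# Prints roughly "2,556 PJ saved of 56,980 PJ -> 4.5%", matching the cited figure.
```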
  • The UK's 'Goldilocks Moment' for AI: NVIDIA, UK and US Leaders Highlight AI Infrastructure Investments
    blogs.nvidia.com
The U.K. was the center of the AI world this week as NVIDIA, U.K. and U.S. leaders announced new initiatives toward making the nation an AI superpower.

NVIDIA founder and CEO Jensen Huang's visit to the U.K. this week culminated in an NVIDIA AI ecosystem celebration today at Vision Hall in London, where he welcomed U.K. Prime Minister Keir Starmer and U.K. Secretary of State for Science, Innovation and Technology Liz Kendall on stage to discuss the nation's "Goldilocks moment," where world-class universities, researchers, startups and venture capitalists converge to create a golden network for building out the nation's AI. Huang also highlighted plans for the U.K.'s AI infrastructure on a panel with Peter Kyle, U.K. Secretary of State for Business and Trade, and Howard Lutnick, U.S. Secretary of Commerce.

Panel with U.K. Secretary of State for Business and Trade Peter Kyle, U.S. Secretary of Commerce Howard Lutnick and NVIDIA founder and CEO Jensen Huang.

"The United Kingdom was the birthplace of the industrial revolution," Huang said. Now, it's the world's third-largest AI market, home to 3,700 companies and 60,000 employees, a powerhouse of talent and enterprise, making it an ideal ecosystem for pushing forth the next industrial revolution driven by AI.

"This is the biggest-ever tech agreement between the United States and the United Kingdom," said Starmer. He highlighted that the U.K.'s work with NVIDIA and the U.S. government is focused on using AI for security and safety for both countries, in addition to business and trade.

"[AI], which some people find strange and confusing, can open doors," said Secretary Kendall. "The role of this government is to open doors and together with [NVIDIA], we can open those doors and give that hope to people in this country."

U.K. Secretary of State for Science, Innovation and Technology Liz Kendall with NVIDIA founder and CEO Jensen Huang.

"This isn't just about London," Secretary Kyle said, highlighting that the collaboration's reach will span Belfast, Manchester, Edinburgh and other parts of the country. "What you're seeing is the renewal, the refocus and crucially, the modernizing of the special relationship [between the U.K. and U.S.]," he said. That modernization taps into AI and digital infrastructure.

Secretary Lutnick emphasized the importance of building this infrastructure, highlighting how the U.S. is bringing more energy online to power the AI factories that will make this transformation possible. "One of the things we need to do is to build and that's part of our philosophy: to build sufficiently at home so that we have the capacity to care for ourselves and the rest of the world if we need to," Lutnick said.

Combining supercomputing hyperscale facilities and AI investments in Britain will result in "incredible scientific exploration and discovery and crucially, the commercialization of it," Kyle added.

"I believe the greatest natural resource of the U.K. is the incredible researchers and scientists," Huang said. "It is really quite extraordinary. Your universities are creating critical thinkers, independent thinkers, creative thinkers and they invent amazing things."

Over 500 attendees from startups, venture capital companies, higher education and industry came together at the event to forge connections, exchange ideas and catalyze new growth opportunities at the forefront of AI.

NVIDIA Announces Investment in the UK
Huang announced NVIDIA's £2 billion investment in partnership with venture capital firms that have a longstanding U.K. presence, including Accel, Air Street Capital, Balderton Capital, Hoxton Ventures and Phoenix Court, all of which attended the AI ecosystem celebration. Designed to foster the U.K. AI startup ecosystem and scale AI's impact across industries, the investment will act as an economic catalyst, bringing more innovative technologies and AI applications to market through new companies and jobs.

Read more about other AI makers leading innovation in the U.K.

More Key Announcements From the Week

- NVIDIA and partners are deploying 120,000 NVIDIA Blackwell GPUs, the largest AI infrastructure rollout in U.K. history, 100x more powerful than today's top supercomputers.
- By 2026, up to 60,000 GPUs will power new AI factories with Microsoft, Nscale, OpenAI and CoreWeave, enabling the next wave of U.K. innovation.
- NVIDIA and Oxford Quantum Circuits are building a quantum-GPU supercenter, advancing the U.K.'s leadership at the frontier of science.
- With techUK and QA, NVIDIA is launching a robotics R&D hub to train and upskill the next generation of AI talent.
- Built on NVIDIA Grace Hopper Superchips, Isambard-AI, the U.K.'s most powerful AI supercomputer, based at the University of Bristol, is accelerating national projects including for healthcare, climate science and public services.

Learn more about NVIDIA's collaboration with the U.K. to fuel innovation, economic growth and jobs.
  • 'Safety First, Always,' NVIDIA VP of Automotive Says, Unveiling the Future of AI-Defined Vehicles at IAA Mobility
    blogs.nvidia.com
At this week's IAA Mobility conference in Munich, NVIDIA Vice President of Automotive Ali Kani outlined how cloud-to-car AI platforms are bringing new levels of safety, intelligence and trust to the road.

NVIDIA and its partners didn't just show off cars at the conference; they showed off what cars are becoming: AI-defined machines, built as much in the data center as they are in the factory. Kani framed this shift during his IAA keynote today: vehicles are moving from being dependent on horsepower to compute power, from mechanical systems to software stacks. In Germany and around the world, automotive engineering is now infused with silicon acceleration, as automakers and suppliers adopt NVIDIA's cloud-to-car platform to drive safety, intelligence and efficiency into tomorrow's vehicles.

NVIDIA is the only company that offers an end-to-end compute stack for autonomous driving. Its three AI compute platforms critical for autonomy are:

- NVIDIA DGX for training AI in data centers
- NVIDIA Omniverse and Cosmos to simulate worlds and generate synthetic data for testing and validation
- NVIDIA DRIVE AGX in-vehicle computers to process sensor data in real time

Together, these platforms form a feedback loop for learning, testing and deployment that tightens the cycle of innovation while keeping safety front and center.

It's All About Safety: NVIDIA Halos Sets the Standard

Safety is a core theme at IAA. NVIDIA Halos is a full-stack, comprehensive safety system that unifies vehicle architecture, AI models, chips, software, tools and services to ensure the safe development of autonomous vehicles, from cloud to car. NVIDIA Halos brings together safety-assessed systems-on-a-chip, the safety-certified NVIDIA DriveOS operating system and the DRIVE AGX Hyperion architecture into a unified platform for autonomous driving.
This platform is backed by the NVIDIA Halos Certified Program and its AI Systems Inspection Lab, which deliver rigorous validation to ensure real-time AI operates with end-to-end reliability.

With AI-driven workflows and high-fidelity sensor simulations built with NVIDIA Omniverse and Cosmos, automakers can train, test and safely validate vehicle performance, even under conditions that are hard to experiment with in the real world, such as rare or hazardous traffic situations and edge-case events, and in complex environments. Simulation tools are increasingly critical for advancing safe, scalable autonomous vehicle development. The popular autonomous driving simulator CARLA now integrates the NVIDIA Cosmos Transfer world foundation model, along with NVIDIA Omniverse NuRec reconstruction libraries, to bring diverse, high-fidelity simulations directly into autonomous vehicle testing pipelines. Capgemini and TCS are already tapping into this integration to expand their simulation capabilities and push the boundaries of software-defined vehicle development.

Expanding the Ecosystem: Automotive Leaders Embrace Cloud-to-Car AI

Automotive leaders are embracing NVIDIA's cloud-to-car AI platform to transform their next-generation vehicles.

- Lucid headlined the IAA showcase with its all-electric Lucid Gravity SUV, which is accelerated by the NVIDIA DRIVE AGX platform, uses the NVIDIA Blackwell architecture and operates on NVIDIA DriveOS.
- Mercedes-Benz introduced its all-new GLC with EQ technology and announced expansions to its CLA family with the first fully electric shooting brake, all built on NVIDIA AI, DRIVE AV software and accelerated compute.
- Lotus is featuring the all-electric Eletre SUV, the hyper-GT Emeya and the Theory 1 concept, all accelerated by NVIDIA DRIVE AGX to deliver high-performance, AI-driven functions for intelligent and safer mobility.
- ZYT is showcasing its autonomous vehicle software platforms built on NVIDIA DRIVE AGX, highlighting how this advanced technology accelerates safer and smarter mobility.
- Volvo Cars highlighted its ES90 Single Motor Extended Range Ultra and EX90 Twin Motor Performance Ultra models, equipped with enhanced safety and driver-assistance capabilities and powered by NVIDIA DRIVE AGX and DriveOS for improved AI performance and safety.
- XPENG showcased how NVIDIA DRIVE AGX underpins its G6, G9 and X9 models, delivering XPILOT smart driving assistance, advanced autonomy and intelligent cockpit features.

Global Tech Leaders Accelerate Software-Defined Vehicles With NVIDIA

Beyond automakers, technology leaders across the globe are building on NVIDIA AI to accelerate the development of software-defined vehicles.

- MediaTek is working closely with NVIDIA to bring GPU-powered intelligence into its Dimensity Auto Cockpit solutions, enabling advanced in-car experiences through premium graphics and intelligent assistants.
- ThunderSoft introduced its new AI Box built on DRIVE AGX, designed to run large-scale AI models for intelligent cockpits, complete with personalized copilots, safety monitoring and immersive cabin experiences.
- Cerence is presenting its xUI AI assistant at IAA, built on CaLLM models and running on NVIDIA DRIVE AGX with DriveOS. With NVIDIA NeMo Guardrails, it ensures safe, context-aware, brand-specific voice interactions across both edge and cloud.
- ZF Group is showcasing its ProAI supercomputer, accelerated by NVIDIA DRIVE AGX. The supercomputer unifies advanced driver-assistance systems, automated driving and chassis control into a scalable architecture that unlocks capabilities from entry-level deployments to full autonomy.
- RoboSense is integrating its high-performance, automotive-grade digital lidar with the DRIVE AGX platform, enhancing system performance, while Desay SV is showcasing its NVIDIA DRIVE Thor-based domain controller, a next-generation smart mobility solution shaped by advanced AI.
- Magna is showcasing its future-ready, centralized advanced driver-assistance system platform, designed for flexibility and scalability. This advanced system integrates a comprehensive suite of sensors, accelerated by NVIDIA DRIVE AGX Thor.

Watch Kani's IAA keynote to see how NVIDIA is accelerating the future of autonomous driving with a cloud-to-car platform. Learn more about NVIDIA's work in autonomous vehicles and the NVIDIA automotive partner ecosystem. Follow NVIDIA DRIVE on LinkedIn and X.
  • Paint It Blackwell: GeForce RTX 5080 SuperPOD Rollout Begins
    blogs.nvidia.com
GeForce NOW Blackwell RTX 5080-class SuperPODs are now rolling out, unlocking a new level of ultra-high-performance, cinematic cloud gaming.

GeForce NOW Ultimate members will see GeForce RTX 5080 performance arriving at a server near them, enabling even richer experiences in blockbuster titles like DUNE: Awakening, Borderlands 4, Hell Is Us, Dying Light: The Beast, Cronos: The New Dawn, Clair Obscur: Expedition 33 and more. They all come with breathtaking graphics and the lowest-latency gameplay, thanks to NVIDIA DLSS 4 technology and next-generation AI features. Experience the new Cinematic-Quality Streaming mode for stunning color and fidelity across the latest devices.

Look at all the room for activities: gaming.

The new Install-to-Play feature is expanding the cloud library to nearly 4,500 games for Ultimate and Performance members. This week kicks it off with three new games, including the launch of Borderlands 4. GeForce NOW is the ultimate way to play the Borderlands franchise's latest entry, free with the purchase of a new 12-month Ultimate membership bundle. Make sure to follow along on GFN Thursdays for server updates.

Blackwell to the Future

Game-changer.

The future of cloud gaming has arrived. With the NVIDIA Blackwell RTX upgrade, GeForce NOW brings GeForce RTX 5080-class performance to the cloud for the first time. Ultimate members can harness DLSS 4 Multi-Frame Generation, cutting-edge AI enhancements and ultralow click-to-pixel latency, enabling up to 5K at 120 frames per second for premium, responsive gameplay.

Members will also see the GeForce NOW library instantly double. Over 2,200 Steam titles opted in by publishers for cloud streaming are hitting the cloud today, with more to come, letting members build and manage their own cloud gaming library. Alongside new Install-to-Play titles, GeForce NOW will continue to roll out ready-to-play titles each week.

Coming to a zone near you.

NVIDIA Blackwell RTX servers are starting to power up worldwide, so more members can start streaming with unprecedented performance on virtually any device, including PCs, Macs, Chromebooks, LG TVs (4K at 120Hz) and even Steam Decks (now up to 90 fps). Keep an eye on GFN Thursday updates and check the server rollout webpage for new regions going live.

Ultimate members will soon see GeForce RTX 5080 performance in their area, with AAA titles like DUNE: Awakening, Borderlands 4, Hell Is Us, Dying Light: The Beast, Cronos: The New Dawn, Clair Obscur: Expedition 33 and more playable at ultimate quality. Look for the new "GeForce RTX 5080 Ready" row in the app for the full list of GeForce RTX 5080-optimized games, updated weekly with fresh additions.

New row, who dis?

Don't let this cloud pass by. The Blackwell RTX upgrade is ready for gamers to secure their spots: no downloads, no hardware upgrades, just next-level gaming for the same $19.99 per month. Or members can choose to subscribe to the 12-month membership for $199.99, providing more value at less than $17 a month.

Break Free With Borderlands 4

Welcome to Border-LOL-lands.

Get ready for a blast of chaos, color and fun as Borderlands 4 launches on GeForce NOW. The galaxy's wildest looter shooter is back, sending four new Vault Hunters on a loot-crazed rampage jam-packed with quippy dialogue and sci-fi shenanigans, all wrapped in the franchise's signature mayhem. Gameplay turns the dial to 11 with new double jumps, dashes, grappling hooks and air-glide moves that make every firefight a circus act.
Experience a world with sprawling landscapes and nonstop dynamic events. Build the perfect Vault Hunter with deep skill trees and use different weapons, each with unique behaviors and effects, now with a revamped loot system where every Legendary feels special. Play it solo or with up to three friends.

Play the frantic co-op looter shooter on GeForce NOW with the NVIDIA Blackwell RTX upgrade for cinematic-quality visuals, ultrafast load times and stunning performance. Enjoy NVIDIA DLSS 4-fueled graphics and low-latency gameplay streaming from GeForce RTX 5080 gaming rigs in the cloud.

The cloud is the best way to play.

The title lands in the cloud on Thursday, Sept. 11. Gamers who upgrade to or purchase a 12-month GeForce NOW Ultimate membership between now and Tuesday, Sept. 25 will get the title for free, available to play as soon as it launches. Unleash chaos across the galaxy with outrageous weapons, irreverent humor and the signature co-op action that makes this iconic looter-shooter franchise a fan favorite.

Game On

Set sail for moonlit mystery in Nod-Krai.

Genshin Impact Version Luna I: Song of the Welkin Moon is available to play instantly on GeForce NOW, no need to wait for downloads or updates. Head on a new adventure through the magical new region of Nod-Krai, where the story, exploration and battles are all shaped by the mysterious power of the moon. Play as three new characters, animal-loving Lauma, energetic Flins and inventive Aino, as they face off against rival factions, unravel secrets and wield creative new abilities in a world full of quirky creatures and vibrant islands. Plus, anniversary rewards await for everyone who jumps in.

In addition, members can look for the following:

- Firefighting Simulator: Ignite (New release on Steam, Sept. 9)
- Borderlands 4 (New release on Steam and Epic Games Store, Sept. 11)
- Professional Fishing 2 (New release on Steam, Sept. 11)

What are you planning to play this weekend? Let us know on X or in the comments below.

From the @NVIDIAGFN post on X (Sept. 8, 2025): "Keep your responses coming: win this bundle! Steam Deck, @ASUS ROG Swift 27" 4K 240Hz G-SYNC Monitor, @Logitech G920 racing wheel, & more! How to enter: 1. Follow @NVIDIAGFN 2. Reply using #BlackwellonGFN 3. Share: What device are you turning into an RTX 5080 GPU?" pic.twitter.com/TwSpTyFUze
  • Reaching Across the Isles: UK-LLM Brings AI to UK Languages With NVIDIA Nemotron
    blogs.nvidia.com
Celtic languages, including Cornish, Irish, Scottish Gaelic and Welsh, are the U.K.'s oldest living languages. To empower their speakers, the UK-LLM sovereign AI initiative is building an AI model based on NVIDIA Nemotron that can reason in both English and Welsh, a language spoken by about 850,000 people in Wales today. Enabling high-quality AI reasoning in Welsh will support the delivery of public services, including healthcare, education and legal resources, in the language.

"I want every corner of the U.K. to be able to harness the benefits of artificial intelligence. By enabling AI to reason in Welsh, we're making sure that public services, from healthcare to education, are accessible to everyone, in the language they live by," said U.K. Prime Minister Keir Starmer. "This is a powerful example of how the latest AI technology, trained on the U.K.'s most advanced AI supercomputer in Bristol, can serve the public good, protect cultural heritage and unlock opportunity across the country."

The UK-LLM project, established in 2023 as BritLLM and led by University College London, has previously released two models for U.K. languages. Its new model for Welsh, developed in collaboration with Wales' Bangor University and NVIDIA, aligns with Welsh government efforts to boost the active use of the language, with the goal of achieving a million speakers by 2050, an initiative known as Cymraeg 2050. U.K.-based AI cloud provider Nscale will make the new model available to developers through its application programming interface.

"The aim is to ensure that Welsh remains a living, breathing language that continues to develop with the times," said Gruffudd Prys, senior terminologist and head of the Language Technologies Unit at Canolfan Bedwyr, the university's center for Welsh language services, research and technology. "AI shows enormous potential to help with second-language acquisition of Welsh as well as for enabling native speakers to improve their language skills."

This new model could also boost the accessibility of Welsh resources by enabling public institutions and businesses operating in Wales to translate content or provide bilingual chatbot services. This can help groups including healthcare providers, educators, broadcasters, retailers and restaurant owners ensure their written content is as readily available in Welsh as it is in English.

Beyond Welsh, the UK-LLM team aims to apply the same methodology used for its new model to develop AI models for other languages spoken across the U.K., such as Cornish, Irish, Scots and Scottish Gaelic, as well as work with international collaborators to build models for languages from Africa and Southeast Asia.

"This collaboration with NVIDIA and Bangor University enabled us to create new training data and train a new model in record time, accelerating our goal to build the best-ever language model for Welsh," said Pontus Stenetorp, professor of natural language processing and deputy director of the Centre for Artificial Intelligence at University College London. "Our aim is to take the insights gained from the Welsh model and apply them to other minority languages, in the U.K. and across the globe."

Tapping Sovereign AI Infrastructure for Model Development

The new model for Welsh is based on NVIDIA Nemotron, a family of open-source models that features open weights, datasets and recipes.
The UK-LLM development team has tapped the 49-billion-parameter Llama Nemotron Super model and the 9-billion-parameter Nemotron Nano model, post-training them on Welsh-language data.

Compared with languages like English or Spanish, there's less source data available in Welsh for AI training. So, to create a sufficiently large Welsh training dataset, the team used NVIDIA NIM microservices for gpt-oss-120b and DeepSeek-R1 to translate NVIDIA Nemotron open datasets with over 30 million entries from English to Welsh (a minimal sketch of this kind of translation step appears at the end of this post). The team used a GPU cluster through the NVIDIA DGX Cloud Lepton platform and is harnessing hundreds of NVIDIA GH200 Grace Hopper Superchips on Isambard-AI, the U.K.'s most powerful supercomputer, backed by £225 million in government investment and based at the University of Bristol, to accelerate its translation and training workloads. This new dataset supplements existing Welsh data from the team's previous efforts.

Capturing Linguistic Nuances With Careful Evaluation

Bangor University, located in Gwynedd, the county with the highest percentage of Welsh speakers, is supporting the new model's development with linguistic and cultural expertise.

Welsh translation of: "The aim is to ensure that Welsh remains a living, breathing language that continues to develop with the times." Gruffudd Prys, Bangor University.

Prys, from the university's Welsh-language center, brings to the collaboration about two decades of experience with language technology for Welsh. He and his team are helping to verify the accuracy of machine-translated training data and manually translated evaluation data, as well as assess how the model handles nuances of Welsh that AI typically struggles with, such as the way consonants at the beginning of Welsh words change based on neighboring words.

The model, as well as the Welsh training and evaluation datasets, are expected to be made available for enterprise and public sector use, supporting additional research, model training and application development.

"It's one thing to have this AI capability exist in Welsh, but it's another to make it open and accessible for everyone," Prys said. "That subtle distinction can be the difference between this technology being used or not being used."

Deploy Sovereign AI Models With NVIDIA Nemotron, NIM Microservices

The framework used to develop UK-LLM's model for Welsh can serve as a foundation for multilingual AI development around the world. Benchmark-topping Nemotron models, data and recipes are publicly available for developers to build reasoning models tailored to virtually any language, domain and workflow. Packaged as NVIDIA NIM microservices, Nemotron models are optimized for cost-effective compute and run anywhere, from laptop to cloud. Europe's enterprises will be able to run open, sovereign models on the Perplexity AI-powered search engine.

Get started with NVIDIA Nemotron.

Welsh translation:

Ymestyn Ar Draws yr Ynysoedd: Mae DU-LLM yn Dod Deallusrwydd Artiffisial i Ieithoedd y DU Gyda NVIDIA Nemotron

Wedii hyfforddi ar yr uwch gyfrifiadur Isambard-AI, mae model newydd a ddatblygwyd gan University College London, NVIDIA a Phrifysgol Bangor yn manteisio ar dechnegau a setiau data ffynhonnell agored NVIDIA Nemotron i alluogi rhesymu Deallusrwydd Artiffisial ar gyfer y Gymraeg ac ieithoedd eraill y DU ar gyfer gwasanaethau cyhoeddus gan gynnwys gofal iechyd, addysg ac adnoddau cyfreithiol.

Ieithoedd Celtaidd gan gynnwys Cernyweg, Gwyddeleg, Gaeleg yr Alban a Chymraeg yw ieithoedd byw hynaf y DU.
Er mwyn grymuso eu siaradwyr, mae menter Deallusrwydd Artiffisial sofran y DU-LLM yn adeiladu model Deallusrwydd Artiffisial yn seiliedig ar NVIDIA Nemotron a all resymu yn Saesneg a Chymraeg hefyd, iaith a siaredir gan tua 850,000 o bobl yng Nghymru heddiw.Bydd galluogi rhesymu Deallusrwydd Artiffisial o ansawdd uchel yn y Gymraeg yn cefnogir ddarpariaeth o wasanaethau cyhoeddus gan gynnwys gofal iechyd, addysg ac adnoddau cyfreithiol yn yr iaith.Rwyf am i bob cwr or DU allu harneisio manteision deallusrwydd artiffisial. Drwy alluogi deallusrwydd artiffisial i resymu yn y Gymraeg, rydym yn sicrhau bod gwasanaethau cyhoeddus o ofal iechyd i addysg yn hygyrch i bawb, yn yr iaith maen nhwn byw ynddi, meddai Prif Weinidog y DU, Keir Starmer. Mae hon yn enghraifft bwerus o sut y gall y dechnoleg dddiweddaraf, wedii hyfforddi ar uwch gyfrifiadur deallusrwydd artiffisial mwyaf datblygedig y DU ym Mryste, wasanaethu lles y cyhoedd, amddiffyn treftadaeth ddiwylliannol a datgloi cyfleoedd ledled y wlad.Mae prosiect DU-LLM, a sefydlwyd yn 2023 fel BritLLM ac a arweinir gan University College London, wedi rhyddhau dau fodel ar gyfer ieithoedd y DU yn flaenorol. Mae ei fodel newydd ar gyfer y Gymraeg, a ddatblygwyd mewn cydweithrediad Phrifysgol Bangor Cymru ac NVIDIA, yn cyd-fynd ag ymdrechion llywodraeth Cymru i hybu defnydd gweithredol or iaith, gydar nod o gyflawni miliwn o siaradwyr erbyn 2050 menter or enw Cymraeg 2050.Bydd darparwr cwmwl Deallusrwydd Artiffisial yn y DU, Nscale, yn sicrhau bod y model newydd ar gael i ddatblygwyr trwy ei ryngwyneb rhaglennu rhaglenni (API).Y nod yw sicrhau bod y Gymraeg yn parhau i fod yn iaith fyw, syn anadlu ac syn parhau i ddatblygu gydar oes, meddai Gruffudd Prys, uwch derminolegydd a phennaeth yr Uned Technolegau Iaith yng Nghanolfan Bedwyr, canolfan y brifysgol ar gyfer gwasanaethau, ymchwil a thechnoleg y Gymraeg. Mae deallusrwydd artiffisial yn dangos potensial aruthrol i helpu gyda chaffael y Gymraeg fel ail iaith yn ogystal galluogi siaradwyr brodorol i wella eu sgiliau iaith.Gallair model newydd hwn hefyd roi hwb i hygyrchedd adnoddau Cymraeg drwy alluogi sefydliadau cyhoeddus a busnesau syn gweithredu yng Nghymru i gyfieithu cynnwys neu ddarparu gwasanaethau sgwrsfot dwyieithog. Gall hyn helpu grwpiau gan gynnwys darparwyr gofal iechyd, addysgwyr, darlledwyr, manwerthwyr a pherchnogion bwytai i sicrhau bod eu cynnwys ysgrifenedig yr un mor hawdd ar gael yn y Gymraeg ag y mae yn Saesneg.Y tu hwnt ir Gymraeg, mae tm y DU-LLM yn anelu at gymhwysor un fethodoleg a ddefnyddiwyd ar gyfer ei fodel newydd i ddatblygu modelau Deallusrwydd Artiffisial ar gyfer ieithoedd eraill a siaredir ledled y DU fel Cernyweg, Gwyddeleg, Sgoteg a Gaeleg yr Alban yn ogystal gweithio gyda chydweithwyr rhyngwladol i adeiladu modelau ar gyfer ieithoedd o Affrica a De-ddwyrain Asia.Maer cydweithrediad hwn gydag NVIDIA a Phrifysgol Bangor wedi ein galluogi i greu data hyfforddi newydd a hyfforddi model newydd mewn amser record, gan gyflymu ein nod o adeiladur model iaith gorau erioed ar gyfer y Gymraeg, meddai Pontus Stenetorp, yr athro prosesu iaith naturiol a dirprwy gyfarwyddwr y Ganolfan Deallusrwydd Artiffisial yn University College London. 
Ein nod yw cymryd y mewnwelediadau a gafwyd or model Cymraeg au cymhwyso i ieithoedd lleiafrifol eraill, yn y DU ac ar draws y byd.Manteisio ar Seilwaith Deallusrwydd Artiffisial Sofran ar gyfer Datblygu ModelMaer model newydd ar gyfer y Gymraeg yn seiliedig ar NVIDIA Nemotron, teulu o fodelau ffynhonnell agored syn cynnwys pwysau, setiau data a ryseitiau agored.Maer tm datblygu DU-LLM wedi manteisio ar fodel 49-biliwn-paramedr Llama Nemotron Super a model 9-biliwn-paramedr Nemotron Nano, gan eu hl hyfforddi ar ddata iaith Gymraeg.Oi gymharu ag ieithoedd fel Saesneg neu Sbaeneg, mae llai o ddata ffynhonnell ar gael yn y Gymraeg ar gyfer hyfforddiant Deallusrwydd Artiffisial. Felly, er mwyn creu set ddata hyfforddi Cymraeg ddigon mawr, defnyddiodd y tm ficrowasanaethau NVIDIA NIM ar gyfer gpt-oss-120b a DeepSeek-R1 i gyfieithu setiau data agored NVIDIA gyda dros 30 miliwn o gofnodion or Saesneg ir Gymraeg.Defnyddion nhw glwstwr GPU drwy blatfform NVIDIA DGX Cloud Lepton ac yn harneisio cannoedd o Uwchsglodion NVIDIA GH200 Grace Hopper ar Isambard-AI uwchgyfrifiadur mwyaf pwerus y DU, gyda chefnogaeth 225 miliwn o fuddsoddiad gan y llywodraeth ac wedii leoli ym Mhrifysgol Bryste i gyflymu eu llwythi gwaith cyfieithu a hyfforddi.Maer set ddata newydd hon yn ategu data presennol yr iaith Gymraeg o ymdrechion blaenorol y tm.Cipio Naws Ieithyddol Gyda Gwerthusiad GofalusMae Prifysgol Bangor, sydd wedii lleoli yng Ngwynedd y sir gydar ganran uchaf o siaradwyr Cymraegs yn cefnogi datblygiad y model newydd gydag arbenigedd ieithyddol a diwylliannol.Mae Prys, o ganolfan Gymraeg y brifysgol, yn dod thua dau ddegawd o brofiad gyda thechnoleg iaith ar gyfer y Gymraeg ir cydweithrediad. Mae ef ai dm yn helpu i wirio cywirdeb data hyfforddi a gyfieithir gan beiriannau a data gwerthuso a gyfieithir llaw, yn ogystal ag asesu sut maer model yn ymdrin naws Gymraeg y mae Deallusrwydd Artiffisial fel arfer yn cael trafferth nhw megis y ffordd y mae cytseiniaid ar ddechrau geiriau Cymraeg yn newid yn seiliedig ar eiriau cyfagos.Disgwylir ir model, yn ogystal r setiau data hyfforddiant a gwerthusor Gymraeg, fod ar gael i fentrau ar sector cyhoeddus eu defnyddio, gan gefnogi ymchwil ychwanegol, hyfforddiant modelu a datblygu rhaglenni.Maen un peth cael y gallu Deallusrwydd Artiffisial hwn yn bodoli yn y Gymraeg, ond maen beth arall ei wneud yn agored ac yn hygyrch i bawb, meddai Prys. Gall y gwahaniaeth cynnil hwnnw fod y gwahaniaeth rhwng y dechnoleg hon yn cael ei defnyddio ai peidio.Defnyddio Modelau Deallusrwydd Artiffisial Sofran Gyda NVIDIA Nemotron, Microwasanaethau NIMGall y fframwaith a ddefnyddiwyd i ddatblygu model DU-LLM ar gyfer y Gymraeg fod yn sylfaen ar gyfer datblygu Deallusrwydd Artiffisial amlieithog ledled y byd.Mae modelau, data a ryseitiau Nemotron, syn cyrraedd y brig, ar gael yn gyhoeddus i ddatblygwyr er mwyn iddynt adeiladu modelau rhesymu sydd wediu teilwra i bron unrhyw iaith, parth a llif gwaith. Wediu pecynnu fel microgwasanaethau NVIDIA NIM, mae modelau Nemotron wediu hoptimeiddio ar gyfer cyfrifiadura cost-effeithiol a rhedeg yn unrhyw le, o liniadur ir cwmwl.Bydd mentrau Ewrop yn gallu rhedeg modelau agored, sofran ar y peiriant chwilio Perplexity wedii bweru gan Ddeallusrwydd Artiffisial.Dewch i ddechrau arni gyda NVIDIA Nemotron.
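To illustrate the kind of translation workload described above, here is a minimal, hypothetical sketch of calling a NIM microservice through an OpenAI-compatible endpoint to translate one dataset entry from English to Welsh. The endpoint URL, model identifier and prompt are illustrative assumptions, not the UK-LLM team's actual pipeline.

    # Hypothetical sketch: translate one dataset entry from English to Welsh by calling a
    # locally hosted NIM microservice through its OpenAI-compatible chat endpoint.
    # The base_url, model name and prompt are assumptions, not UK-LLM's actual pipeline.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-for-local-nim")

    def translate_to_welsh(text: str) -> str:
        response = client.chat.completions.create(
            model="openai/gpt-oss-120b",  # assumed model identifier served by the NIM container
            messages=[
                {"role": "system", "content": "Translate the user's text from English to Welsh. Return only the translation."},
                {"role": "user", "content": text},
            ],
            temperature=0.2,
        )
        return response.choices[0].message.content

    print(translate_to_welsh("The aim is to ensure that Welsh remains a living, breathing language."))

In a real pipeline, a call like this would be batched over the tens of millions of dataset entries, with the machine-translated output then checked against manually translated evaluation data as described above.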
  • AI On: 6 Ways AI Agents Are Raising Team Performance and How to Measure It
    blogs.nvidia.com
Editor's note: This post is part of the AI On blog series, which explores the latest techniques and real-world applications of agentic AI, chatbots and copilots. The series also highlights the NVIDIA software and hardware powering advanced AI agents, which form the foundation of AI query engines that gather insights and perform tasks to transform everyday experiences and reshape industries.

AI agents are expected to be involved in most business tasks within three years, with effective human-agent collaboration projected to increase human engagement in high-value tasks by 65%. AI agents can help achieve and exceed efficiency goals as they learn, reason and adjust based on context and outcomes. As they become increasingly central to business strategies, understanding where they deliver impact and justify investment is essential for leaders. Here are six ways agentic AI boosts team performance, along with practical tips for measuring its impact.

1. Accelerating Software Development With AI Agents

AI agents can act as intelligent copilots, helping automate code generation, testing and deployment. They can pinpoint errors early, resulting in higher-quality, faster releases, and speed onboarding of new engineers by providing AI-curated information and context on documentation.

For example, NVIDIA ChipNeMo, a team of specialized agents built on custom large language models (LLMs) and trained on NVIDIA's internal chip design data, helped 5,000 NVIDIA engineers in design, verification and documentation save 4,000 engineering days in just one year. Learn about building agents with NVIDIA Nemotron and improving AI code generation using the NVIDIA NeMo Agent Toolkit.

2. Driving Data-Backed Decision-Making

Agents can help businesses across industries easily glean insights from complex, time-sensitive data for critical decision-making, such as on investments or business strategy.

BlackRock's Aladdin Copilot, an embedded AI assistant serving thousands of users across hundreds of financial institutions, lets teams garner portfolio insights, assess investment research and monitor available cash balances through simple text prompts. It's helped reduce research time from minutes to seconds while enhancing data-driven investment decisions. VAST Data uses agents to rapidly gather and synthesize information from internal and external sources. For its sales teams, this means faster access to useful, up-to-date insights on client accounts.

3. Optimizing IT Operations

Agents excel at maintaining IT operations, including by proactively monitoring infrastructure and automating decision-making. In fast-paced telco environments, agents can help manage networks by analyzing real-time performance indicators and predicting service failures. For example, Telenor Group integrated the NVIDIA Blueprint for telco network configuration to deploy intelligent, autonomous networks that meet the performance demands of 5G and beyond.

4. Streamlining Industrial and Manufacturing Operations

Able to interact with the physical world, video analytics AI agents can monitor assembly lines for quality checks and anomaly detection. Pegatron developed the PEGA AI Factory platform to accelerate the development of AI agents across the company by 400% in the last four years.
In addition, the company's digital twin platform, PEGAVERSE, was built on the NVIDIA Omniverse platform and lets engineers virtually simulate, test and optimize production lines before they're built, cutting factory construction time by 40%. Pegatron also augmented its assembly process using video analytics AI agents, powered by the NVIDIA AI Blueprint for video search and summarization, and saw a 7% reduction in labor costs per assembly line and a 67% decrease in defect rates.

Siemens is bringing generative AI into its solutions with the Industrial Copilot, which taps real-time factory data to guide maintenance technicians and shopfloor operators. Interviews with maintenance engineers indicate that this could save, on average, 25% of reactive maintenance time. Foxconn uses digital twins and AI agents to optimize its production lines, reducing deployment time by 50%, as well as to simulate robots and monitor quality and safety in real time.

5. Enhancing Customer Service

Agents excel at handling customer service at scale, reducing customer wait times by handling thousands of inquiries simultaneously.

AT&T employees and contractors use a generative AI solution called Ask AT&T, which has over 100 solutions and agents in production. Built with LLMs served by NVIDIA NeMo and NIM microservices, Ask AT&T helps fetch relevant documentation and autonomously resolve routine inquiries. Offering 24/7 personalized support, Ask AT&T shares context-relevant suggestions by recalling organizational information from emails, meetings and past transactions. And to continuously improve agent performance, real-time feedback loops are built into the system using a data flywheel. These automated services resulted in 84% lower call center transcript analytics costs.

6. Delivering Personalized Education

AI agents are making individualized learning support more accessible, scalable and effective while freeing up instructors for more in-depth teaching.

Faced with surging class sizes and a shortage of teaching assistants, Clemson University developed an AI-powered TA built with the NVIDIA Blueprint for retrieval-augmented generation to guide students through challenging concepts. Rather than simply providing answers, the virtual TA walks students through problems step by step, encouraging active problem-solving and critical thinking to promote deeper understanding and academic integrity. The assistant also personalizes feedback and hints in alignment with course content, assignment deadlines and student submissions. It operates 24/7, giving every student timely, tailored support regardless of enrollment size.

How Can the Success of AI Agents Be Measured?

Measuring the impact of AI agents isn't just a box to check; it's essential to maximizing investment. The way users define success will directly shape how well these systems deliver value. Too often, businesses deploy agents without a clear measurement framework, making it difficult to prove return on investment or identify areas for improvement.

When setting up an evaluation strategy, users should consider which metrics matter most for their goals (a minimal sketch of such a metrics rollup follows this list). For example:

- Adoption and engagement: Track whether the technology is being embraced. Metrics include how many eligible users interact with the agent and how frequently, along with how long the sessions last. High engagement means the agent is routinely providing effective support.
- Task completion: Look beyond usage to outcomes. Measure how many tasks or requests the agent handles and what portion is fulfilled without human intervention. In software development, users can measure the automated code generation rate to see how much of the software is being developed by an agent. A high automated task completion rate means employees are freed up for higher-value work.
- Productivity and efficiency gains: Quantify time saved. Metrics like time to resolve IT issues, report generation time for decision-making and average handling time for customer service interactions help demonstrate clear efficiency improvements.
- Business outcomes: Connect agent performance to bottom-line results. This could mean cost per interaction in support, time to market in software development or unplanned downtime reduction in IT operations.
- High-quality user experience: Ensure the system is both trusted and effective. Consider a code quality score for developers, prediction accuracy in data-backed decision-making or customer satisfaction scores in service scenarios.
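As a concrete illustration of how a few of these metrics could be rolled up in practice, here is a small, hypothetical Python sketch. The field names, sample values and thresholds are invented for the example and are not drawn from any NVIDIA tooling.

    # Hypothetical example of rolling up basic AI-agent KPIs from interaction logs.
    # All field names and sample values are illustrative, not from a real deployment.
    from dataclasses import dataclass

    @dataclass
    class AgentInteraction:
        user_id: str
        resolved_without_human: bool   # task completion without escalation
        handling_time_sec: float       # time the agent spent on the request
        satisfaction: int              # post-interaction survey score, 1-5

    def summarize(interactions: list[AgentInteraction], eligible_users: int) -> dict:
        users = {i.user_id for i in interactions}
        total = len(interactions)
        return {
            "adoption_rate": len(users) / eligible_users,                     # adoption and engagement
            "automated_completion_rate":
                sum(i.resolved_without_human for i in interactions) / total,  # task completion
            "avg_handling_time_sec":
                sum(i.handling_time_sec for i in interactions) / total,       # efficiency gains
            "avg_satisfaction":
                sum(i.satisfaction for i in interactions) / total,            # user experience
        }

    logs = [
        AgentInteraction("alice", True, 42.0, 5),
        AgentInteraction("bob", False, 310.0, 3),
        AgentInteraction("alice", True, 55.0, 4),
    ]
    print(summarize(logs, eligible_users=10))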
The key takeaway: measuring AI agent success goes far beyond a single number. Adoption, efficiency, accuracy and business impact all matter. By choosing the right mix of metrics upfront, businesses can validate success while continually refining and improving how agents deliver value.

Read more stories on how customers are adopting AI applications to reshape their daily operations and increase their return on investment. Stay up to date on agentic AI, NVIDIA Nemotron and more by subscribing to NVIDIA news, joining the community and following NVIDIA AI on LinkedIn, Instagram, X and Facebook. Plus, explore self-paced video tutorials and livestreams.
  • NVIDIA Pledges AI Education Funding for K-12 Programs
    blogs.nvidia.com
NVIDIA today announced new AI education support for K-12 programs at a White House event celebrating public-private partnerships that advance artificial intelligence education for America's youth.

The commitment comes after recent NVIDIA announcements to support AI education and academic research, including a $30 million contribution to the National Artificial Intelligence Research Resource (NAIRR) pilot and a U.S. National Science Foundation partnership in support of academic research. Over the last five years, NVIDIA has invested $125 million in higher education and academic research in the United States.

Pledging $25 million in support of AI education programs, NVIDIA is partnering with Study Fetch and CK-12, two leading K-12 learning platforms, to tailor NVIDIA Deep Learning Institute (DLI) and NVIDIA Academy content offerings to meet the instructional needs of U.S. K-12 classrooms.

The NVIDIA effort aligns with the White House executive order "Advancing Artificial Intelligence Education for American Youth," announced in April. Additionally, in support of the executive order, NVIDIA signed the White House's Pledge to America's Youth: Investing in AI Education, committing to delivering AI literacy, credentialing and educator enablement.

Supporting America's Educators in Driving AI Literacy

In the first year, NVIDIA will support curriculum adaptation, platform integration, educator training, institutional engagement and ecosystem-wide outreach. The NVIDIA DLI program integrations with Study Fetch and CK-12 will make NVIDIA's industry-leading training materials available to help empower high school educators in applying DLI Teaching Kits.

NVIDIA DLI courses are geared toward teaching professional skills to developers. NVIDIA is partnering with Study Fetch and CK-12, which will curate the course material for high school students to get hands-on experience with AI, aiming to spark curiosity, build practical skills and prepare the next generation of job seekers to thrive in the AI-driven economy. The NVIDIA partnership aims to reach 1 million K-12 students within three years.

Preparing the Next Generation for AI Leadership

The White House initiative and NVIDIA commitments are united on a central mission: to drive American leadership in AI. "Winning the AI Race: America's AI Action Plan" was announced in July by the White House, supported with executive orders to accelerate federal permitting of data center infrastructure and promote exportation of the American AI technology stack. Aligned with the White House AI Action Plan, NVIDIA and the U.S. National Science Foundation recently committed $152 million in support to Ai2 for the development of open AI models to drive U.S. academic and nonprofit scientific leadership.
  • At Gamescom 2025, NVIDIA DLSS 4 and Ray Tracing Come to This Year's Biggest Titles
    blogs.nvidia.com
With over 175 games now supporting NVIDIA DLSS 4, a suite of advanced, AI-powered neural rendering technologies, gamers and tech enthusiasts everywhere can experience breakthrough performance in this year's most anticipated titles, including Borderlands 4, Hell Is Us and Fate Trigger.

Plus, path tracing is making its way to Resident Evil Requiem and Directive 8020, and ray tracing is coming to upcoming releases like Phantom Blade Zero, PRAGMATA and CINDER CITY, enabling crystal-clear visuals for more immersive gameplay.

"DLSS 4 and path tracing are no longer cutting-edge graphical experiments; they're the foundation of modern PC gaming titles," said Matt Wuebbling, vice president of global GeForce marketing at NVIDIA. "Developers are embracing AI-powered rendering to unlock stunning visuals and massive performance gains, enabling gamers everywhere to experience the future of real-time graphics today."

These announcements come alongside a new NVIDIA GeForce RTX 50 Series bundle for Borderlands 4 and updates to the NVIDIA app, a companion platform for content creators, gamers and AI enthusiasts using NVIDIA GeForce RTX GPUs.

DLSS 4 Now Accelerating Over 175 Games and Applications

Launched with the GeForce RTX 50 Series earlier this year, DLSS 4 with Multi Frame Generation uses AI to generate up to three frames for every traditionally rendered frame, delivering performance boosts of up to 8x over traditional rendering. In addition to Multi Frame Generation, DLSS 4 titles include support for DLSS Super Resolution, Ray Reconstruction and NVIDIA Reflex technology, unlocking incredible performance gains and responsive gameplay for every GeForce RTX 50 Series owner.
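To make the up-to-8x figure concrete, here is a rough, illustrative calculation; the 2x Super Resolution factor is an assumption for the example, not an official NVIDIA benchmark. If Super Resolution roughly doubles the rendered frame rate and Multi Frame Generation then adds up to three AI-generated frames per rendered frame, the displayed frame rate can approach eight times the native-rendering baseline.

    # Illustrative arithmetic only: how an up-to-8x frame-rate uplift can compose.
    # The 2x Super Resolution factor is an assumption for the example, not a measured value.
    native_fps = 30                     # frames per second with traditional rendering
    super_resolution_speedup = 2.0      # assumed uplift from rendering at a lower internal resolution
    generated_per_rendered = 3          # Multi Frame Generation adds up to 3 AI frames per rendered frame

    rendered_fps = native_fps * super_resolution_speedup
    displayed_fps = rendered_fps * (1 + generated_per_rendered)
    print(displayed_fps / native_fps)   # 8.0 -> up to 8x the native frame rate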
New titles announced at Gamescom that will support the latest RTX technologies include:

- Directive 8020 and Resident Evil Requiem, which are launching with DLSS 4 and path tracing
- Black State, CINDER CITY (formerly Project LLL), Cronos: The New Dawn, Dying Light: The Beast, Honeycomb: The World Beyond, Lost Soul Aside, The Outer Worlds 2, Phantom Blade Zero and PRAGMATA, which are launching with DLSS 4 and ray tracing
- Borderlands 4 and Fate Trigger, which are launching with DLSS 4 with Multi Frame Generation
- Indiana Jones and the Great Circle, which in September will add support for RTX Hair, a technology that uses new hardware capabilities in RTX 50 Series GPUs to model hair with greater path-traced detail and realism

Many of these RTX titles will also launch on the GeForce NOW cloud gaming platform, including Borderlands 4, CINDER CITY (formerly Project LLL), Hell Is Us and The Outer Worlds 2.

NVIDIA App Adds Global DLSS Overrides and Software Updates

The NVIDIA app is the essential companion for NVIDIA GeForce RTX GPU users, simplifying the process of keeping PCs updated with the latest GeForce Game Ready and NVIDIA Studio Drivers. New updates to the NVIDIA app include:

- Global DLSS Overrides: Easily enable DLSS Multi Frame Generation or DLSS Super Resolution profiles globally across hundreds of DLSS Override titles, instead of needing to configure them per title.
- Project G-Assist Upgrades: The latest update to Project G-Assist, an on-device AI assistant that lets users control and tune their RTX systems with voice and text commands, introduces a significantly more efficient AI model that uses 40% less memory. Despite its smaller footprint, it responds to queries faster and more accurately calls the right tools.
- Highly Requested Legacy 3D Settings: Use easily configurable control panel settings, including anisotropic filtering, anti-aliasing and ambient occlusion, to enhance classic games.

The NVIDIA app beta update launches Tuesday, Aug. 19, at 9 a.m. PT, with full availability coming the following week.

NVIDIA ACE Enhances Voice-Driven Gaming Experiences

NVIDIA ACE, a suite of generative AI technologies that power lifelike non-playable character interactions in games like Krafton's inZOI, now features in Iconic Interactive's The Oversight Bureau, a darkly comic, voice-driven puzzle game. Using speech-to-text technology powered by ACE, players can interact naturally with in-game characters using speech, with Iconic's Narrative Engine interpreting the input and determining and delivering the pre-recorded character dialogue that best fits the story and situation. This system keeps developers in creative control while offering players real agency in games, all running locally on RTX AI PCs with sub-second latency. The Oversight Bureau launches later this year and will be playable at NVIDIA's Gamescom B2B press suite.

NVIDIA RTX Remix Evolves With Community Expansions and New Particle System

NVIDIA RTX Remix, an open-source modding platform for remastering classic games with path tracing and neural rendering, continues to grow thanks to its passionate community. Modders have been using large language models to extend RTX Remix's capabilities. For example, one modder vibe coded a plug-in that connects RTX Remix to Adobe Substance 3D, the industry-standard tool for 3D texturing and materials. Another modder made it possible for RTX Remix to use classic game data to instantly make objects glow with emissive effects.

RTX Remix's open-source community has even expanded compatibility to allow many new titles to be remastered, including iconic games like Call of Duty 4: Modern Warfare, Knights of the Old Republic, Doom 3, Half-Life: Black Mesa and BioShock. Some of these games were featured in RTX Remix's $50K Mod Contest, which wrapped up at Gamescom. Painkiller RTX by Merry Pencil Studios won numerous awards, including Best Overall RTX Remix Mod. Explore all mod submissions on ModDB.com.

At Gamescom, NVIDIA also unveiled a new RTX Remix particle system that brings dynamic, realistically lit and physically accurate particles to 165 classic games, the majority of which have never had a particle editor. Modders can use the system to change the look, size, quantity, light emission, turbulence and even gravity of particles in games. The new particle system will be available in September.

Borderlands 4 GeForce RTX 50 Series Bundle Available Now

To celebrate Gearbox's Borderlands 4, which will be enhanced by DLSS 4 with Multi Frame Generation and NVIDIA Reflex, NVIDIA is introducing a new GeForce RTX 50 Series bundle. Players who purchase a GeForce RTX 5090, 5080, 5070 Ti or 5070 desktop system or graphics card, or a laptop with a GeForce RTX 5090, RTX 5080, RTX 5070 Ti or RTX 5070 Laptop GPU, from participating retailers will receive a copy of Borderlands 4 and The Gilded Glory Pack DLC. The offer is available through Monday, Sept. 22.

Learn more about GeForce announcements at Gamescom.
  • New Lightweight AI Model for Project G-Assist Brings Support for 6GB NVIDIA GeForce RTX and RTX PRO GPUs
    blogs.nvidia.com
At Gamescom, NVIDIA is releasing its first major update to Project G-Assist, an experimental on-device AI assistant that allows users to tune their NVIDIA RTX systems with voice and text commands.

The update brings a new AI model that uses 40% less VRAM, improves tool-calling intelligence and extends G-Assist support to all RTX GPUs with 6GB or more of VRAM, including laptops. Plus, a new G-Assist Plug-In Hub enables users to easily discover and download plug-ins that enable more G-Assist features.

NVIDIA also announced a new path-traced particle system, coming in September to the NVIDIA RTX Remix modding platform, that brings fully simulated physics, dynamic shadows and realistic reflections to visual effects. In addition, NVIDIA named the winners of the NVIDIA and ModDB RTX Remix Mod Contest. Check out the winners and finalist RTX mods in the RTX Remix GeForce article.

G-Assist Gets Smarter, Expands to More RTX PCs

The modern PC is a powerhouse, but unlocking its full potential means navigating a complex maze of settings across system software, GPU and peripheral utilities, control panels and more. Project G-Assist is a free, on-device AI assistant built to cut through that complexity. It acts as a central command center, providing easy access through voice or text commands to functions previously buried in menus. Users can ask the assistant to:

- Run diagnostics to optimize game performance
- Display or chart frame rates, latency and GPU temperatures
- Adjust GPU or even peripheral settings, such as keyboard lighting

The G-Assist update also introduces a new, significantly more efficient AI model that's faster and uses 40% less memory while maintaining response accuracy. The more efficient model means that G-Assist can now run on all RTX GPUs with 6GB or more of VRAM, including laptops.

Getting started is simple:

1. Install the latest Game Ready Driver (580.97 and above) from the NVIDIA app.
2. Open the NVIDIA app, go to Settings > About and opt in to "Beta and Experimental Features / Early Access." Then relaunch the app; it should be on version 11.0.5.
3. In the NVIDIA app, go to Home, scroll down to Discover and download the G-Assist 0.1.17 update.
4. Press Alt+G to activate.

Another G-Assist update coming in September will introduce support for laptop-specific commands for features like NVIDIA BatteryBoost and Battery OPS.

Introducing the G-Assist Plug-In Hub With Mod.io

NVIDIA is collaborating with mod.io to launch the G-Assist Plug-In Hub, which allows users to easily access G-Assist plug-ins, as well as discover and download community-created ones. With the latest update, thanks to a mod.io plug-in, users can also directly ask G-Assist what new plug-ins are available in the hub and install them using natural language.

The recent G-Assist Plug-In Hackathon showcased the incredible creativity of the G-Assist community. Some finalists include:

- Omniplay, which allows gamers to use G-Assist to research lore from online wikis or take notes in real time while gaming
- Launchpad, which lets gamers set, launch and toggle custom app groups on the fly to boost productivity
- Flux NIM Microservice for G-Assist, which allows gamers to easily generate AI images from within G-Assist, using on-device NVIDIA NIM microservices

The winners of the hackathon will be announced on Wednesday, Aug. 20.

Building custom plug-ins is simple. They're based on a foundation of JSON and Python scripts, and the Project G-Assist Plug-In Builder helps further simplify development by enabling users to code plug-ins with natural language. A minimal, hypothetical sketch of what such a plug-in could look like follows below.
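The following sketch illustrates the general JSON-plus-Python shape of such a plug-in. The manifest fields, file layout and invocation convention here are assumptions made for the example and do not reflect the actual G-Assist plug-in schema or Plug-In Builder output.

    # Hypothetical illustration of a G-Assist-style plug-in: a JSON manifest describing one
    # function, plus a Python handler. Field names and the invocation convention are invented
    # for this example and are not the real G-Assist plug-in schema.
    import json
    import sys

    MANIFEST = {
        "name": "fan_curve_helper",
        "description": "Example plug-in that reports a suggested GPU fan speed.",
        "functions": [
            {
                "name": "suggest_fan_speed",
                "description": "Suggest a fan speed percentage for a given GPU temperature.",
                "parameters": {"gpu_temp_c": "number"},
            }
        ],
    }

    def suggest_fan_speed(gpu_temp_c: float) -> dict:
        # Simple illustrative policy: ramp linearly from 30% at 40C to 100% at 85C.
        speed = min(100, max(30, 30 + (gpu_temp_c - 40) * (70 / 45)))
        return {"fan_speed_percent": round(speed)}

    if __name__ == "__main__":
        # Assumed convention: the assistant passes a JSON request on stdin and reads JSON on stdout.
        raw = sys.stdin.read() or '{"function": "suggest_fan_speed", "params": {"gpu_temp_c": 70}}'
        request = json.loads(raw)
        if request.get("function") == "suggest_fan_speed":
            print(json.dumps(suggest_fan_speed(**request.get("params", {}))))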
Mod It Like It's Hot With RTX Remix

Classic PC games remain beloved for their unforgettable stories, characters and gameplay, but their dated graphics can be a barrier for new and longtime players. NVIDIA RTX Remix enables modders to revitalize these timeless titles with the latest NVIDIA gaming technologies, bridging nostalgic gameplay with modern visuals.

Since the platform's release, the RTX Remix modding community has grown, with over 350 active projects and over 100 mods released. The mods span a catalog of beloved games like Half-Life 2, Need for Speed: Underground, Portal 2 and Deus Ex, and have amassed over 2 million downloads.

In May, NVIDIA invited modders to participate in the NVIDIA and ModDB RTX Remix Mod Contest for a chance to win $50,000 in cash prizes. At Gamescom, NVIDIA announced the winners:

- Best Overall RTX Mod Winner: Painkiller RTX Remix, by Binq_Adams
- Best Use of RTX in a Mod Winner: Painkiller RTX Remix, by Binq_Adams; Runner-Up: Vampire: The Masquerade Bloodlines RTX Remaster, by Safemilk
- Most Complete RTX Mod Winner: Painkiller RTX Remix, by Binq_Adams; Runner-Up: I-Ninja Remixed, by g.i.george333
- Community Choice RTX Mod Winner: Call of Duty 2 RTX Remix of Carentan, by tadpole3159

These modders tapped RTX Remix and generative AI to bring their creations to life, from enhancing textures to quickly creating images and 3D assets. For example, the Merry Pencil Studios modder team used a workflow that seamlessly connected RTX Remix and ComfyUI, allowing them to simply select textures in the RTX Remix viewport and, with a single click in ComfyUI, restore them. The results are stunning, with each texture meticulously recreated with physically based materials layered with grime and rust. With a fully path-traced lighting system, the game's gothic horror atmosphere has never felt more immersive to play through.

All mods submitted to the RTX Remix Modding Contest, as well as 100 more Remix mods, are available to download from ModDB. For a sneak peek at RTX Remix projects under active development, check out the RTX Remix Showcase Discord server.

Another RTX Remix update coming in September will allow modders to create new particles that match the look of those found in modern titles. This opens the door for over 165 RTX Remix-compatible games to have particles for the first time. To get started creating RTX mods, download NVIDIA RTX Remix from the home screen of the NVIDIA app. Read the RTX Remix article to learn more about the contest and winners.

Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, productivity apps and more on AI PCs and workstations.

Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X, and stay informed by subscribing to the RTX AI PC newsletter. Join NVIDIA's Discord server to connect with community developers and AI enthusiasts for discussions on what's possible with RTX AI. Follow NVIDIA Workstation on LinkedIn and X. See notice regarding software product information.