NVIDIA
This is the Official NVIDIA Page
Latest
  • BLOGS.NVIDIA.COM
    What Is Retrieval-Augmented Generation, aka RAG?
Editor's note: This article, originally published on November 15, 2023, has been updated.

To understand the latest advance in generative AI, imagine a courtroom.

Judges hear and decide cases based on their general understanding of the law. Sometimes a case, like a malpractice suit or a labor dispute, requires special expertise, so judges send court clerks to a law library to look for precedents and specific cases they can cite.

Like a good judge, large language models (LLMs) can respond to a wide variety of human queries. But to deliver authoritative answers that cite sources, the model needs an assistant to do some research.

The court clerk of AI is a process called retrieval-augmented generation, or RAG for short.

How It Got Named RAG

Patrick Lewis, lead author of the 2020 paper that coined the term, apologized for the unflattering acronym that now describes a growing family of methods across hundreds of papers and dozens of commercial services he believes represent the future of generative AI.

(Image caption: Patrick Lewis)

"We definitely would have put more thought into the name had we known our work would become so widespread," Lewis said in an interview from Singapore, where he was sharing his ideas with a regional conference of database developers.

"We always planned to have a nicer-sounding name, but when it came time to write the paper, no one had a better idea," said Lewis, who now leads a RAG team at AI startup Cohere.

So, What Is Retrieval-Augmented Generation (RAG)?

Retrieval-augmented generation (RAG) is a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from external sources.

In other words, it fills a gap in how LLMs work. Under the hood, LLMs are neural networks, typically measured by how many parameters they contain. An LLM's parameters essentially represent the general patterns of how humans use words to form sentences.

That deep understanding, sometimes called parameterized knowledge, makes LLMs useful in responding to general prompts at light speed. However, it does not serve users who want a deeper dive into a current or more specific topic.

Combining Internal, External Resources

Lewis and colleagues developed retrieval-augmented generation to link generative AI services to external resources, especially ones rich in the latest technical details.

The paper, with coauthors from the former Facebook AI Research (now Meta AI), University College London and New York University, called RAG "a general-purpose fine-tuning recipe" because it can be used by nearly any LLM to connect with practically any external resource.

Building User Trust

Retrieval-augmented generation gives models sources they can cite, like footnotes in a research paper, so users can check any claims. That builds trust.

What's more, the technique can help models clear up ambiguity in a user query. It also reduces the possibility a model will make a wrong guess, a phenomenon sometimes called hallucination.

Another great advantage of RAG is it's relatively easy. A blog by Lewis and three of the paper's coauthors said developers can implement the process with as few as five lines of code.

That makes the method faster and less expensive than retraining a model with additional datasets. And it lets users hot-swap new sources on the fly.

How People Are Using RAG

With retrieval-augmented generation, users can essentially have conversations with data repositories, opening up new kinds of experiences. This means the applications for RAG could be multiple times the number of available datasets.

For example, a generative AI model supplemented with a medical index could be a great assistant for a doctor or nurse. Financial analysts would benefit from an assistant linked to market data.

In fact, almost any business can turn its technical or policy manuals, videos or logs into resources called knowledge bases that can enhance LLMs. These sources can enable use cases such as customer or field support, employee training and developer productivity.

The broad potential is why companies including AWS, IBM, Glean, Google, Microsoft, NVIDIA, Oracle and Pinecone are adopting RAG.

Getting Started With Retrieval-Augmented Generation

To help users get started, NVIDIA developed an AI Blueprint for building virtual assistants. Organizations can use this reference architecture to quickly scale their customer service operations with generative AI and RAG, or get started building a new customer-centric solution.

The blueprint uses some of the latest AI-building methodologies and NVIDIA NeMo Retriever, a collection of easy-to-use NVIDIA NIM microservices for large-scale information retrieval. NIM eases the deployment of secure, high-performance AI model inferencing across clouds, data centers and workstations.

These components are all part of NVIDIA AI Enterprise, a software platform that accelerates the development and deployment of production-ready AI with the security, support and stability businesses need.

There is also a free hands-on NVIDIA LaunchPad lab for developing AI chatbots using RAG, so developers and IT teams can quickly and accurately generate responses based on enterprise data.

Getting the best performance for RAG workflows requires massive amounts of memory and compute to move and process data. The NVIDIA GH200 Grace Hopper Superchip, with its 288GB of fast HBM3e memory and 8 petaflops of compute, is ideal: it can deliver a 150x speedup over using a CPU.

Once companies get familiar with RAG, they can combine a variety of off-the-shelf or custom LLMs with internal or external knowledge bases to create a wide range of assistants that help their employees and customers.

RAG doesn't require a data center. LLMs are debuting on Windows PCs, thanks to NVIDIA software that enables all sorts of applications users can access even on their laptops.

(Image caption: An example application for RAG on a PC.)

PCs equipped with NVIDIA RTX GPUs can now run some AI models locally. By using RAG on a PC, users can link to a private knowledge source, whether that be emails, notes or articles, to improve responses. The user can then feel confident that their data source, prompts and response all remain private and secure.

A recent blog provides an example of RAG accelerated by TensorRT-LLM for Windows to get better results fast.

The History of RAG

The roots of the technique go back at least to the early 1970s. That's when researchers in information retrieval prototyped what they called question-answering systems, apps that use natural language processing (NLP) to access text, initially in narrow topics such as baseball.

The concepts behind this kind of text mining have remained fairly constant over the years. But the machine learning engines driving them have grown significantly, increasing their usefulness and popularity.

In the mid-1990s, the Ask Jeeves service, now Ask.com, popularized question answering with its mascot of a well-dressed valet. IBM's Watson became a TV celebrity in 2011 when it handily beat two human champions on the Jeopardy! game show.

Today, LLMs are taking question-answering systems to a whole new level.

Insights From a London Lab

The seminal 2020 paper arrived as Lewis was pursuing a doctorate in NLP at University College London and working for Meta at a new London AI lab. The team was searching for ways to pack more knowledge into an LLM's parameters, using a benchmark it developed to measure its progress.

Building on earlier methods and inspired by a paper from Google researchers, the group "had this compelling vision of a trained system that had a retrieval index in the middle of it, so it could learn and generate any text output you wanted," Lewis recalled.

(Image caption: The IBM Watson question-answering system became a celebrity when it won big on the TV game show Jeopardy!)

When Lewis plugged into the work in progress a promising retrieval system from another Meta team, the first results were unexpectedly impressive.

"I showed my supervisor and he said, 'Whoa, take the win.' This sort of thing doesn't happen very often, because these workflows can be hard to set up correctly the first time," he said.

Lewis also credits major contributions from team members Ethan Perez and Douwe Kiela, then of New York University and Facebook AI Research, respectively.

When complete, the work, which ran on a cluster of NVIDIA GPUs, showed how to make generative AI models more authoritative and trustworthy. It's since been cited by hundreds of papers that amplified and extended the concepts in what continues to be an active area of research.

How Retrieval-Augmented Generation Works

At a high level, here's how an NVIDIA technical brief describes the RAG process.

When users ask an LLM a question, the AI model sends the query to another model that converts it into a numeric format so machines can read it. The numeric version of the query is sometimes called an embedding or a vector.

(Image caption: Retrieval-augmented generation combines LLMs with embedding models and vector databases.)

The embedding model then compares these numeric values to vectors in a machine-readable index of an available knowledge base. When it finds a match or multiple matches, it retrieves the related data, converts it to human-readable words and passes it back to the LLM.

Finally, the LLM combines the retrieved words and its own response to the query into a final answer it presents to the user, potentially citing sources the embedding model found.

Keeping Sources Current

In the background, the embedding model continuously creates and updates machine-readable indices, sometimes called vector databases, for new and updated knowledge bases as they become available.

Many developers find LangChain, an open-source library, particularly useful for chaining together LLMs, embedding models and knowledge bases. NVIDIA uses LangChain in its reference architecture for retrieval-augmented generation.

The LangChain community provides its own description of a RAG process.

Looking forward, the future of generative AI lies in creatively chaining all sorts of LLMs and knowledge bases together to create new kinds of assistants that deliver authoritative results users can verify.

Explore generative AI sessions and experiences at NVIDIA GTC, the global conference on AI and accelerated computing, running March 18-21 in San Jose, Calif., and online.
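The embed-compare-retrieve-generate loop described above can be sketched end to end in plain Python. This is a toy illustration, not NVIDIA's reference architecture: the bag-of-words "embedding" stands in for a real embedding model, and the document store stands in for a vector database; the final prompt would normally be sent to an LLM.

```python
import math
from collections import Counter

# Toy knowledge base: in a real RAG pipeline these would be chunked
# documents indexed in a vector database.
DOCUMENTS = [
    "The GH200 Grace Hopper Superchip has 288GB of HBM3e memory.",
    "LangChain chains together LLMs, embedding models and knowledge bases.",
    "Ask Jeeves popularized question answering in the mid-1990s.",
]

def embed(text: str) -> Counter:
    """Stand-in embedding model: bag-of-words counts (real systems use dense vectors)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Compare the query vector against the indexed documents and return the top k."""
    q = embed(query)
    ranked = sorted(DOCUMENTS, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def answer(query: str) -> str:
    """Augment the prompt with retrieved context; a real pipeline sends this to an LLM."""
    context = "\n".join(retrieve(query))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

print(retrieve("How much memory does the Grace Hopper Superchip have?"))
```

Hot-swapping sources, as described above, amounts to replacing `DOCUMENTS` (or re-indexing the vector database) without touching the model.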
  • BLOGS.NVIDIA.COM
    First Star Wars Outlaws Story Pack Hits GeForce NOW
Get ready to dive deeper into the criminal underworld of a galaxy far, far away as GeForce NOW brings the first major story pack for Star Wars Outlaws to the cloud this week.

The season of giving continues: GeForce NOW members can access a new free reward, a special in-game Star Wars Outlaws enhancement.

It's all part of an exciting GFN Thursday, topped with five new games joining the more than 2,000 titles supported in the GeForce NOW library, including the launch of S.T.A.L.K.E.R. 2: Heart of Chornobyl and Xbox Game Studios fan favorites Fallout 3: Game of the Year Edition and The Elder Scrolls IV: Oblivion.

And make sure not to pass this opportunity up: gamers who want to take the Performance and Ultimate memberships for a spin can do so with 25% off Day Passes, now through Friday, Nov. 22. Day Passes give access to 24 continuous hours of powerful cloud gaming.

A New Saga Begins

The galaxy's most electrifying escapade gets even more exciting with the new Wild Card story pack for Star Wars Outlaws.

This thrilling story pack invites scoundrels to join forces with the galaxy's smoothest operator, Lando Calrissian, for a high-stakes Sabacc tournament that'll keep players on the edge of their seats. As Kay Vess, gamers bluff, charm and blast their way through new challenges, exploring uncharted corners of the Star Wars galaxy. Meanwhile, a free update will scatter fresh Contract missions across the stars, offering members ample opportunities to build their reputations and line their pockets with credits.

To kick off this thrilling underworld adventure, GeForce NOW members are in for a special reward with the Forest Commando Character Pack.

(Image caption: Time to get wild.)

The pack gives Kay and Nix, her loyal companion, a complete set of gear that's perfect for missions in lush forest worlds. Get equipped with tactical trousers, a Bantha leather belt loaded with attachments, a covert poncho to shield against jungle rain and a hood for Nix that's great for concealment in thick forests.

Members of the GeForce NOW rewards program can check their email for instructions on how to claim the reward. Ultimate and Performance members can start redeeming style packages today. Don't miss out: this offer is available through Saturday, Dec. 21, on a first-come, first-served basis.

Welcome to the Zone

(Image caption: Welcome to the zone.)

S.T.A.L.K.E.R. 2: Heart of Chornobyl, the highly anticipated sequel in the cult-classic S.T.A.L.K.E.R. series, is a first-person-shooter survival-horror game set in the Chornobyl Exclusion Zone.

In the game, which blends postapocalyptic fiction with Ukrainian folklore and the eerie reality of the Chornobyl disaster, players can explore a vast open world filled with mutated creatures, anomalies and other stalkers while uncovering the zone's secrets and battling for survival.

The title features advanced graphics and physics powered by Unreal Engine 5 for stunningly realistic and detailed environments. Players' choices impact the game world and narrative, which comprises a nonlinear storyline with multiple possible endings.

Players will take on challenging survival mechanics to test their skills and decision-making abilities. Members can make their own epic story with a Performance membership for enhanced GeForce RTX-powered streaming at 1440p, or an Ultimate membership for streaming at up to 4K and 120 frames per second, offering the crispest visuals and smoothest gameplay.

Adventures Await

(Image caption: Vault 101 has opened.)

Members can emerge from Vault 101 into the irradiated ruins of Washington, D.C., in Fallout 3: Game of the Year Edition, which includes all five downloadable content packs released for Fallout 3. Experience the game that redefined the postapocalyptic genre with its morally ambiguous choices, memorable characters and the innovative V.A.T.S. combat system.

Whether revisiting the Capital Wasteland, exploring the Mojave Desert or delving into the realm of Cyrodiil, these iconic titles have never looked or played better, thanks to the power of GeForce NOW's cloud streaming technology.

Members can look for the following games available to stream in the cloud this week:

• Towers of Aghasba (New release on Steam, Nov. 19)
• S.T.A.L.K.E.R. 2: Heart of Chornobyl (New release on Steam and Xbox, available on PC Game Pass, Nov. 20)
• Star Wars Outlaws (New release on Steam, Nov. 21)
• The Elder Scrolls IV: Oblivion Game of the Year Edition (Epic Games Store, Steam and Xbox, available on PC Game Pass)
• Fallout 3: Game of the Year Edition (Epic Games Store, Steam and Xbox, available on PC Game Pass)

What are you planning to play this weekend? Let us know on X or in the comments below.

(Embedded post: "which sci-fi series or movie would make a great game?" NVIDIA GeForce NOW (@NVIDIAGFN), November 20, 2024)
  • BLOGS.NVIDIA.COM
    What Is Robotics Simulation?
Robots are moving goods in warehouses, packaging foods and helping assemble vehicles, bringing enhanced automation to use cases across industries.

There are two keys to their success: physical AI and robotics simulation.

Physical AI describes AI models that can understand and interact with the physical world. Physical AI embodies the next wave of autonomous machines and robots, such as self-driving cars, industrial manipulators, mobile robots, humanoids and even robot-run infrastructure like factories and warehouses.

With virtual commissioning of robots in digital worlds, robots are first trained using robotics simulation software before they are deployed for real-world use cases.

Robotics Simulation Summarized

An advanced robotics simulator facilitates robot learning and testing of virtual robots without requiring the physical robot. By applying physics principles and replicating real-world conditions, these simulators generate synthetic datasets to train machine learning models for deployment on physical robots.

Simulations are used for initial AI model training and then to validate the entire software stack, minimizing the need for physical robots during testing. NVIDIA Isaac Sim, a reference application built on the NVIDIA Omniverse platform, provides accurate visualizations and supports Universal Scene Description (OpenUSD)-based workflows for advanced robot simulation and validation.

NVIDIA's Three-Computer Framework Facilitates Robot Simulation

Three computers are needed to train and deploy robot technology:

• A supercomputer to train and fine-tune powerful foundation and generative AI models.
• A development platform for robotics simulation and testing.
• An onboard runtime computer to deploy trained models to physical robots.

Only after adequate training in simulated environments can physical robots be commissioned.

The NVIDIA DGX platform can serve as the first computing system to train models. NVIDIA Omniverse running on NVIDIA OVX servers functions as the second computer system, providing the development platform and simulation environment for testing, optimizing and debugging physical AI. NVIDIA Jetson Thor robotics computers, designed for onboard computing, serve as the third, runtime computer.

Who Uses Robotics Simulation?

Today, robot technology and robot simulations boost operations massively across use cases.

Global leader in power and thermal technologies Delta Electronics uses simulation to test its optical inspection algorithms for detecting product defects on production lines.

Deep tech startup Wandelbots is building a custom simulator by integrating Isaac Sim into its application, making it easy for end users to program robotic work cells in simulation and seamlessly transfer models to a real robot.

Boston Dynamics is activating researchers and developers through its reinforcement learning researcher kit.

Robotics company Fourier is simulating real-world conditions to train humanoid robots with the precision and agility needed for close robot-human collaboration.

Using NVIDIA Isaac Sim, robotics company Galbot built DexGraspNet, a comprehensive simulated dataset for dexterous robotic grasps containing over 1 million ShadowHand grasps on 5,300+ objects. The dataset can be applied to any dexterous robotic hand to accomplish complex tasks that require fine motor skills.

Using Robotics Simulation for Planning and Control Outcomes

In complex and dynamic industrial settings, robotics simulation is evolving to integrate digital twins, enhancing planning, control and learning outcomes.

Developers import computer-aided design models into a robotics simulator to build virtual scenes and employ algorithms to create the robot operating system and enable task and motion planning. While traditional methods involve prescribing control signals, the shift toward machine learning allows robots to learn behaviors through methods like imitation and reinforcement learning, using simulated sensor signals.

This evolution continues with digital twins in complex facilities like manufacturing assembly lines, where developers can test and refine real-time AI entirely in simulation. This approach saves software development time and costs, and reduces downtime by anticipating issues. For instance, using NVIDIA Omniverse, Metropolis and cuOpt, developers can use digital twins to develop, test and refine physical AI in simulation before deploying it in industrial infrastructure.

High-Fidelity, Physics-Based Simulation Breakthroughs

High-fidelity, physics-based simulations have supercharged industrial robotics through real-world experimentation in virtual environments.

NVIDIA PhysX, integrated into Omniverse and Isaac Sim, empowers roboticists to develop fine and gross motor skills for robot manipulators, rigid- and soft-body dynamics, vehicle dynamics and other critical features that ensure the robot obeys the laws of physics. This includes precise control over actuators and modeling of kinematics, which are essential for accurate robot movements.

To close the sim-to-real gap, Isaac Lab offers a high-fidelity, open-source framework for reinforcement learning and imitation learning that facilitates seamless policy transfer from simulated environments to physical robots. With GPU parallelization, Isaac Lab accelerates training and improves performance, making complex tasks more achievable and safe for industrial robots.

To learn more about creating a locomotion reinforcement learning policy with Isaac Sim and Isaac Lab, read this developer blog.

Teaching Collision-Free Motion for Autonomy

Industrial robot training often occurs in specific settings like factories or fulfillment centers, where simulations help address challenges related to various robot types and chaotic environments. A critical aspect of these simulations is generating collision-free motion in unknown, cluttered environments.

Traditional motion planning approaches that attempt to address these challenges can come up short in unknown or dynamic environments. SLAM, or simultaneous localization and mapping, can be used to generate 3D maps of environments from camera images captured at multiple viewpoints. However, these maps require revisions when objects move and environments change.

The NVIDIA Robotics research team and the University of Washington introduced Motion Policy Networks (MπNets), an end-to-end neural policy that generates real-time, collision-free motion using a single fixed camera's data stream. Trained on over 3 million motion planning problems and 700 million simulated point clouds, MπNets navigates unknown real-world environments effectively.

While the MπNets model applies direct learning for trajectories, the team also developed a point-cloud-based collision model called CabiNet, trained on over 650,000 procedurally generated simulated scenes. With the CabiNet model, developers can deploy general-purpose pick-and-place policies for unknown objects beyond a flat tabletop setup. Training with a large synthetic dataset allowed the model to generalize to out-of-distribution scenes in a real kitchen environment without needing any real data.

How Developers Can Get Started Building Robotic Simulators

Get started with technical resources, reference applications and other solutions for developing physically accurate simulation pipelines by visiting the NVIDIA Robotics simulation use case page.

Robot developers can tap into NVIDIA Isaac Sim, which supports multiple robot training techniques:

• Synthetic data generation for training perception AI models
• Software-in-the-loop testing for the entire robot stack
• Robot policy training with Isaac Lab

Developers can also pair ROS 2 with Isaac Sim to train, simulate and validate their robot systems. The Isaac Sim to ROS 2 workflow is similar to workflows executed with other robot simulators such as Gazebo. It starts with bringing a robot model into a prebuilt Isaac Sim environment, adding sensors to the robot, then connecting the relevant components to the ROS 2 action graph and simulating the robot by controlling it through ROS 2 packages.

Stay up to date by subscribing to our newsletter and follow NVIDIA Robotics on LinkedIn, Instagram, X and Facebook.
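Collision checking against a point cloud, the basic primitive behind learned approaches like Motion Policy Networks and CabiNet, can be illustrated with a toy sketch. This is not those models' method, just the underlying idea: treat the robot's path as a set of waypoints and reject any trajectory whose waypoints come closer to an observed point than a safety margin. All waypoints, obstacle points and radii below are fabricated for illustration.

```python
import math

SAFETY_RADIUS = 0.10  # meters: minimum allowed distance from robot to any obstacle point

def min_distance(waypoint, cloud):
    """Distance from a single robot waypoint to the nearest observed obstacle point."""
    return min(math.dist(waypoint, p) for p in cloud)

def trajectory_is_collision_free(trajectory, cloud):
    """Accept a trajectory only if every waypoint clears the safety margin."""
    return all(min_distance(w, cloud) > SAFETY_RADIUS for w in trajectory)

# Toy point cloud, as if observed from a single fixed camera (fabricated coordinates).
cloud = [(0.5, 0.0, 0.2), (0.5, 0.1, 0.2), (0.6, 0.0, 0.3)]

safe_path  = [(0.0, 0.0, 0.5), (0.2, 0.0, 0.5), (0.4, 0.0, 0.5)]
risky_path = [(0.0, 0.0, 0.5), (0.3, 0.0, 0.3), (0.5, 0.0, 0.22)]

print(trajectory_is_collision_free(safe_path, cloud))   # the high path stays clear
print(trajectory_is_collision_free(risky_path, cloud))  # the low path grazes the cloud
```

A learned policy replaces this brute-force check with a network that maps the raw point cloud directly to a safe next motion, which is what makes real-time performance in cluttered scenes feasible.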
  • BLOGS.NVIDIA.COM
    Into the Omniverse: How Generative AI Fuels Personalized, Brand-Accurate Visuals With OpenUSD
Editor's note: This post is part of Into the Omniverse, a blog series focused on how developers, 3D artists and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

3D product configurators are changing the way industries like retail and automotive engage with customers by offering interactive, customizable 3D visualizations of products. Using physically accurate product digital twins, even non-3D artists can streamline content creation and generate stunning marketing visuals.

With the new NVIDIA Omniverse Blueprint for 3D conditioning for precise visual generative AI, developers can start using the NVIDIA Omniverse platform and Universal Scene Description (OpenUSD) to easily build personalized, on-brand and product-accurate marketing content at scale.

By integrating generative AI into product configurators, developers can optimize operations and reduce production costs. With repetitive tasks automated, teams can focus on the creative aspects of their jobs.

Developing Controllable Generative AI for Content Production

The new Omniverse Blueprint introduces a robust framework for integrating generative AI into 3D workflows to enable precise and controlled asset creation.

(Image caption: Example images created using the NVIDIA Omniverse Blueprint for 3D conditioning for precise visual generative AI.)

Key highlights of the blueprint include:

• Model conditioning to ensure that the AI-generated visuals adhere to specific brand requirements like colors and logos.
• A multimodal approach that combines 3D and 2D techniques to offer developers complete control over final visual outputs while ensuring the product's digital twin remains accurate.
• Key components such as an on-brand hero asset, a simple and untextured 3D scene, and a customizable application built with the Omniverse Kit App Template.
• OpenUSD integration to enhance development of 3D visuals with precise visual generative AI.
• Integration of NVIDIA NIM microservices, such as the Edify 360 NIM, Edify 3D NIM, USD Code NIM and USD Search NIM microservices, which allows the blueprint to be extensible and customizable. The microservices are available to preview on build.nvidia.com.

How Developers Are Building AI-Enabled Content Pipelines

Katana Studio developed a content creation tool with OpenUSD called COATcreate that empowers marketing teams to rapidly produce 3D content for automotive advertising. By using 3D data prepared by creative experts and vetted by product specialists in OpenUSD, even users with limited artistic experience can quickly create customized, high-fidelity, on-brand content for any region or use case without adding to production costs.

Global marketing leader WPP has built a generative AI content engine for brand advertising with OpenUSD. The Omniverse Blueprint for precise visual generative AI helped facilitate the integration of controllable generative AI in its content creation tools. Leading global brands like The Coca-Cola Company are already beginning to adopt tools from WPP to accelerate iteration on their creative campaigns at scale.

Watch the replay of a recent livestream with WPP for more on its generative AI- and OpenUSD-enabled workflow.

The NVIDIA creative team developed a reference workflow called CineBuilder on Omniverse that allows companies to use text prompts to generate ads personalized to consumers based on region, weather, time of day, lifestyle and aesthetic preferences.

Developers at independent software vendors and production services agencies are building content creation solutions infused with controllable generative AI and built on OpenUSD. Accenture Song, Collective World, Grip, Monks and WPP are among those adopting Omniverse Blueprints to accelerate development.

Read the tech blog on developing product configurators with OpenUSD and get started developing solutions using the DENZA N7 3D configurator and CineBuilder reference workflow.

Get Plugged Into the World of OpenUSD

Various resources are available to help developers get started building AI-enabled product configuration solutions:

• Omniverse Blueprint: 3D Conditioning for Precise Visual Generative AI
• Reference Architecture: 3D Conditioning for Precise Visual Generative AI
• Reference Architecture: Generative AI Workflow for Content Creation
• Reference Architecture: Product Configurator
• End-to-End Configurator Example Guide
• DLI Course: Building a 3D Product Configurator With OpenUSD
• Livestream: OpenUSD for Marketing and Advertising

For more on optimizing OpenUSD workflows, explore the new self-paced Learn OpenUSD training curriculum, which includes free Deep Learning Institute courses for 3D practitioners and developers. For more resources on OpenUSD, attend our instructor-led Learn OpenUSD courses at SIGGRAPH Asia on December 3, explore the Alliance for OpenUSD forum and visit the AOUSD website.

Don't miss the CES keynote delivered by NVIDIA founder and CEO Jensen Huang live in Las Vegas on Monday, Jan. 6, at 6:30 p.m. PT for more on the future of AI and graphics.

Stay up to date by subscribing to NVIDIA news, joining the community and following NVIDIA Omniverse on Instagram, LinkedIn, Medium and X.
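The model-conditioning idea above, keeping brand constraints fixed while everything else in the generated visual varies, can be sketched as simple prompt assembly. The brand palette, locked hero-asset description and scene text below are invented for illustration; the actual blueprint conditions the generative model on a real 3D scene and hero asset, not just on text.

```python
# Hypothetical brand constraints that must appear unchanged in every generated visual.
BRAND = {
    "hero_asset": "the product digital twin, rendered untouched",  # stays pixel-accurate
    "palette": ["#76B900", "#000000", "#FFFFFF"],                  # approved brand colors
    "logo": "brand logo in the lower-right corner",
}

def build_conditioned_prompt(scene: str, brand: dict = BRAND) -> str:
    """Combine a free-form scene description with non-negotiable brand constraints."""
    constraints = [
        f"Keep {brand['hero_asset']}.",
        f"Restrict background colors to {', '.join(brand['palette'])}.",
        f"Place the {brand['logo']}.",
    ]
    return scene.strip() + " " + " ".join(constraints)

prompt = build_conditioned_prompt("A coastal road at golden hour, light haze.")
print(prompt)
```

The design point is the split itself: the scene argument is free creative input, while the constraint list is fixed upstream by brand owners, so marketing teams can vary one without risking the other.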
  • BLOGS.NVIDIA.COM
    Efficiency Meets Personalization: How AI Agents Improve Customer Service
Editor's note: This post is the first in the AI On blog series, which explores the latest techniques and real-world applications of agentic AI, chatbots and copilots. The series will also highlight the NVIDIA software and hardware powering advanced AI agents, which form the foundation of AI query engines that gather insights and perform tasks to transform everyday experiences and reshape industries.

Whether it's getting a complex service claim resolved or having a simple purchase inquiry answered, customers expect timely, accurate responses to their requests.

AI agents can help organizations meet this need. And they can grow in scope and scale as businesses grow, helping keep customers from taking their business elsewhere.

AI agents can be used as virtual assistants, which use artificial intelligence and natural language processing to handle high volumes of customer service requests. By automating routine tasks, AI agents ease the workload on human agents, allowing them to focus on tasks requiring a more personal touch.

AI-powered customer service tools like chatbots have become table stakes across every industry looking to increase efficiency and keep buyers happy. According to a recent IDC study on conversational AI, 41% of organizations use AI-powered copilots for customer service and 60% have implemented them for IT help desks.

Now, many of those same industries are looking to adopt agentic AI: semi-autonomous tools that have the ability to perceive, reason and act on more complex problems.

How AI Agents Enhance Customer Service

A primary value of AI-powered systems is the time they free up by automating routine tasks. AI agents can perform specific tasks, or agentic operations, essentially becoming part of an organization's workforce, working alongside humans who can focus on more complex customer issues.

AI agents can handle predictive tasks and problem-solve, can be trained to understand industry-specific terms and can pull relevant information from an organization's knowledge bases, wherever that data resides.

With AI agents, companies can:

• Boost efficiency: AI agents handle common questions and repetitive tasks, allowing support teams to prioritize more complicated cases. This is especially useful during high-demand periods.
• Increase customer satisfaction: Faster, more personalized interactions result in happier and more loyal customers. Consistent and accurate support improves customer sentiment and experience.
• Scale easily: Equipped to handle high volumes of customer support requests, AI agents scale effortlessly with growing businesses, reducing customer wait times and resolving issues faster.

AI Agents for Customer Service Across Industries

AI agents are transforming customer service across sectors, helping companies enhance customer conversations, achieve high resolution rates and improve human representative productivity.

For instance, ServiceNow recently introduced IT and customer service management AI agents to boost productivity by autonomously solving many employee and customer issues. Its agents can understand context, create step-by-step resolutions and get live agent approvals when needed.

To improve patient care and reduce preprocedure anxiety, The Ottawa Hospital is using AI agents that have consistent, accurate and continuous access to information. The agent has the potential to improve patient care and reduce administrative tasks for doctors and nurses.

The city of Amarillo, Texas, uses a multilingual digital assistant named Emma to provide its residents with 24/7 support. Emma brings more effective and efficient disbursement of important information to all residents, including the one-quarter who don't speak English.

AI agents meet current customer service demands while preparing organizations for the future.

Key Steps for Designing AI Virtual Assistants for Customer Support

AI agents for customer service come in a wide range of designs, from simple text-based virtual assistants that resolve customer issues to animated avatars that can provide a more human-like experience.

Digital human interfaces can add warmth and personality to the customer experience. These agents respond with spoken language and even animated avatars, enhancing service interactions with a touch of real-world flair. A digital human interface lets companies customize the assistant's appearance and tone, aligning it with the brand's identity.

There are three key building blocks to creating an effective AI agent for customer service:

• Collect and organize customer data: AI agents need a solid base of customer data (such as profiles, past interactions and transaction histories) to provide accurate, context-aware responses.
• Use memory functions for personalization: Advanced AI systems remember past interactions, allowing agents to deliver personalized support that feels human.
• Build an operations pipeline: Customer service teams should regularly review feedback and update the AI agent's responses to ensure it's always improving and aligned with business goals.

Powering AI Agents With NVIDIA NIM Microservices

NVIDIA NIM microservices power AI agents by enabling natural language processing, contextual retrieval and multilingual communication. This allows AI agents to deliver fast, personalized and accurate support tailored to diverse customer needs.

Key NVIDIA NIM microservices for customer service agents include:

• NVIDIA NIM for Large Language Models: Microservices that bring advanced language models to applications and enable complex reasoning, so AI agents can understand complicated customer queries.
• NVIDIA NeMo Retriever NIM: Embedding and reranking microservices that support retrieval-augmented generation pipelines, allowing virtual assistants to quickly access enterprise knowledge bases and boost retrieval performance by ranking relevant knowledge-base articles and improving context accuracy.
• NVIDIA NIM for Digital Humans: Microservices that enable intelligent, interactive avatars to understand speech and respond in a natural way. NVIDIA Riva NIM microservices for text-to-speech, automatic speech recognition (ASR) and translation enable AI agents to communicate naturally across languages. The recently released Riva NIM microservices for ASR enable additional multilingual enhancements. To build realistic avatars, Audio2Face NIM converts streamed audio to facial movements for real-time lip syncing, with 2D and 3D Audio2Face NIM microservices supporting varying use cases.

Getting Started With AI Agents for Customer Service

NVIDIA AI Blueprints make it easy to start building and setting up virtual assistants by offering ready-made workflows and tools to accelerate deployment. Whether for a simple AI-powered chatbot or a fully animated digital human interface, the blueprints offer resources to create AI assistants that are scalable, aligned with an organization's brand, and deliver a responsive, efficient customer support experience.

Editor's note: IDC figures are sourced to IDC, Market Analysis Perspective: Worldwide Conversational AI Tools and Technologies, 2024 (US51619524, September 2024).
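The two-stage retrieval flow described above (embed the query, fetch a candidate shortlist, then rerank it before the LLM sees it) can be sketched in plain Python. The scoring functions below are toy stand-ins chosen for illustration, not the NeMo Retriever NIM APIs:

```python
import math
from collections import Counter

def tokens(text):
    # Crude tokenizer: lowercase, strip basic punctuation, split on whitespace.
    return text.lower().replace(",", " ").replace(".", " ").replace("?", " ").split()

def embed(text):
    # Toy "embedding": bag-of-words counts (stand-in for an embedding model).
    return Counter(tokens(text))

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=3):
    # Stage 1 (embedding): rank every document by similarity to the query.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def rerank(query, candidates):
    # Stage 2 (reranking): re-score only the shortlist with a finer scorer.
    q = set(tokens(query))
    return sorted(candidates, key=lambda d: len(q & set(tokens(d))), reverse=True)

kb = [
    "To reset your password, open account settings and choose reset.",
    "Shipping typically takes three to five business days.",
    "Refunds are issued to the original payment method within ten days.",
    "Our support team is available around the clock.",
]

query = "How do I reset my password?"
context = rerank(query, retrieve(query, kb, k=2))
print(context[0])  # the password-reset article ranks first
```

The point of the split is efficiency: the cheap embedding pass narrows millions of knowledge-base articles to a handful, and the more expensive reranker only has to score that handful.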
  • BLOGS.NVIDIA.COM
The Need for Speed: NVIDIA Accelerates Majority of World's Supercomputers to Drive Advancements in Science and Technology
Starting with the release of CUDA in 2006, NVIDIA has driven advancements in AI and accelerated computing, and the most recent TOP500 list of the world's most powerful supercomputers highlights the culmination of the company's achievements in the field.

This year, 384 systems on the TOP500 list are powered by NVIDIA technologies. Among the 53 systems new to the list, 87% (46 systems) are accelerated. Of those accelerated systems, 85% use NVIDIA Hopper GPUs, driving advancements in areas like climate forecasting, drug discovery and quantum simulation.

Accelerated computing is much more than floating-point operations per second (FLOPS); it requires full-stack, application-specific optimization. At SC24 this week, NVIDIA announced the release of cuPyNumeric, an NVIDIA CUDA-X library that enables over 5 million developers to seamlessly scale to powerful computing clusters without modifying their Python code.

NVIDIA also revealed significant updates to the NVIDIA CUDA-Q development platform, which empowers quantum researchers to simulate quantum devices at a scale previously thought computationally impossible.

And NVIDIA received nearly a dozen HPCwire Readers' and Editors' Choice awards across a variety of categories, marking its 20th consecutive year of recognition.

A New Era of Scientific Discovery With Mixed Precision and AI

Mixed-precision floating-point operations and AI have become the tools of choice for researchers grappling with the complexities of modern science. They offer greater speed, efficiency and adaptability than traditional methods, without compromising accuracy.

This shift isn't just theoretical; it's already happening. At SC24, two Gordon Bell finalist projects revealed how using AI and mixed precision helped advance genomics and protein design.

In his paper titled "Using Mixed Precision for Genomics," David Keyes, a professor at King Abdullah University of Science and Technology, used 0.8 exaflops of mixed precision to explore relationships between genomes and their generalized genotypes, and then to the prevalence of diseases to which they are subject.

Similarly, Arvind Ramanathan, a computational biologist from Argonne National Laboratory, harnessed 3 exaflops of AI performance on the NVIDIA Grace Hopper-powered Alps system to speed up protein design.

To further advance AI-driven drug discovery and the development of lifesaving therapies, researchers can use NVIDIA BioNeMo, powerful tools designed specifically for pharmaceutical applications. Now open source, the BioNeMo Framework can accelerate AI model creation, customization and deployment for drug discovery and molecular design.

Across the TOP500, the widespread use of AI and mixed-precision floating-point operations reflects a global shift in computing priorities. A total of 249 exaflops of AI performance are now available to TOP500 systems, supercharging innovations and discoveries across industries.

[Figure: TOP500 total AI, FP32 and FP64 FLOPS by year.]

NVIDIA-accelerated TOP500 systems excel across key metrics like AI and mixed-precision system performance. With over 190 exaflops of AI performance and 17 exaflops of single-precision (FP32) performance, NVIDIA's accelerated computing platform is the new engine of scientific computing. NVIDIA also delivers 4 exaflops of double-precision (FP64) performance for the scientific calculations that still require it.

Accelerated Computing Is Sustainable Computing

As the demand for computational capacity grows, so does the need for sustainability.

In the Green500 list of the world's most energy-efficient supercomputers, systems with NVIDIA accelerated computing occupy eight of the top 10 spots. The JEDI system at EuroHPC/FZJ, for example, achieves a staggering 72.7 gigaflops per watt, setting a benchmark for what's possible when performance and sustainability align.

For climate forecasting, NVIDIA announced at SC24 two new NVIDIA NIM microservices for NVIDIA Earth-2, a digital twin platform for simulating and visualizing weather and climate conditions. The CorrDiff NIM and FourCastNet NIM microservices can accelerate climate change modeling and simulation results by up to 500x.

In a world increasingly conscious of its environmental footprint, NVIDIA's innovations in accelerated computing balance high performance with energy efficiency to help realize a brighter, more sustainable future.

Supercomputing Community Embraces NVIDIA

The 11 HPCwire Readers' Choice and Editors' Choice awards NVIDIA received represent the work of the entire scientific community of engineers, developers, researchers, partners, customers and more.

The awards include:

• Readers' Choice: Best AI Product or Technology: NVIDIA GH200 Grace Hopper Superchip
• Readers' Choice: Best HPC Interconnect Product or Technology: NVIDIA Quantum-X800
• Readers' Choice: Best HPC Server Product or Technology: NVIDIA Grace CPU Superchip
• Readers' Choice: Top 5 New Products or Technologies to Watch: NVIDIA Quantum-X800
• Readers' Choice: Top 5 New Products or Technologies to Watch: NVIDIA Spectrum-X
• Readers' and Editors' Choice: Top 5 New Products or Technologies to Watch: NVIDIA Blackwell GPU
• Editors' Choice: Top 5 New Products or Technologies to Watch: NVIDIA CUDA-Q
• Readers' Choice: Top 5 Vendors to Watch: NVIDIA
• Readers' Choice: Best HPC Response to Societal Plight: NVIDIA Earth-2
• Editors' Choice: Best Use of HPC in Energy (one of two named contributors): Real-time simulation of CO2 plume migration in carbon capture and storage
• Readers' Choice: Best HPC Collaboration (one of 11 named contributors): National Artificial Intelligence Research Resource Pilot

Watch the replay of NVIDIA's special address at SC24 and learn more about the company's news in the SC24 online press kit.

See notice regarding software product information.
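The mixed-precision idea this piece leans on can be shown in miniature: do the bulk of the arithmetic in a cheap low-precision format, but keep a high-precision accumulator where round-off would otherwise pile up. This NumPy toy is an illustration of the principle only, not the Gordon Bell applications:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(100_000).astype(np.float16)  # inputs stored in half precision

# Naive: accumulate in float16 as well; round-off grows as the sum gets large,
# because float16 spacing near 50,000 is a whole 32 units.
naive = np.float16(0.0)
for chunk in np.split(x, 1000):
    naive = np.float16(naive + chunk.sum(dtype=np.float16))

# Mixed precision: keep the cheap float16 storage but accumulate in float64.
mixed = x.sum(dtype=np.float64)

reference = x.astype(np.float64).sum()
print(abs(float(naive) - reference))  # large error from low-precision accumulation
print(abs(mixed - reference))         # essentially exact
```

Real mixed-precision HPC codes apply the same pattern inside matrix kernels (e.g., FP16 multiplies with FP32 or FP64 accumulation), which is where the speed gains cited above come from.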
  • BLOGS.NVIDIA.COM
    From Algorithms to Atoms: NVIDIA ALCHEMI NIM Catalyzes Sustainable Materials Research for EV Batteries, Solar Panels and More
More than 96% of all manufactured goods, ranging from everyday products like laundry detergent and food packaging to advanced industrial components such as semiconductors, batteries and solar panels, rely on chemicals that cannot be replaced with alternative materials.

With AI and the latest technological advancements, researchers and developers are studying ways to create novel materials that could address the world's toughest challenges, such as energy storage and environmental remediation.

Announced today at the Supercomputing 2024 conference in Atlanta, the NVIDIA ALCHEMI NIM microservice accelerates such research by optimizing AI inference for chemical simulations that could lead to more efficient and sustainable materials to support the renewable energy transition.

It's one of the many ways NVIDIA is supporting researchers, developers and enterprises in boosting energy and resource efficiency in their workflows, including to meet requirements aligned with the global Net Zero Initiative.

NVIDIA ALCHEMI for Material and Chemical Simulations

Exploring the universe of potential materials, using the nearly infinite combinations of chemicals, each with unique characteristics, can be extremely complex and time-consuming. Novel materials are typically discovered through laborious trial-and-error synthesis and testing in a traditional lab. Many of today's plastics, for example, are still based on material discoveries made in the mid-1900s.

More recently, AI has emerged as a promising accelerant for chemicals and materials innovation.

With the new ALCHEMI NIM microservice, researchers can test chemical compounds and material stability in simulation, in a virtual AI lab, which reduces costs, energy consumption and time to discovery.

For example, running MACE-MP-0, a pretrained foundation model for materials chemistry, on an NVIDIA H100 Tensor Core GPU, the new NIM microservice speeds evaluations of a potential composition's simulated long-term stability by 100x. The figure below shows a 25x speedup from using the NVIDIA Warp Python framework for high-performance simulation, followed by a 4x speedup with in-flight batching. All in all, evaluating 16 million structures would have taken months; with the NIM microservice, it can be done in just hours.

By letting scientists examine more structures in less time, the NIM microservice can boost research on materials for use with solar and electric batteries, for example, to bolster the renewable energy transition.

NVIDIA also plans to release NIM microservices that can be used to simulate the manufacturability of novel materials, to determine how they might be brought from test tubes into the real world in the form of batteries, solar panels, fertilizers, pesticides and other essential products that can contribute to a healthier, greener planet.

SES AI, a leading developer of lithium-metal batteries, is using the NVIDIA ALCHEMI NIM microservice with the AIMNet2 model to accelerate the identification of electrolyte materials used for electric vehicles.

"SES AI is dedicated to advancing lithium battery technology through AI-accelerated material discovery, using our Molecular Universe Project to explore and identify promising candidates for lithium metal electrolyte discovery," said Qichao Hu, CEO of SES AI. "Using the ALCHEMI NIM microservice with AIMNet2 could drastically improve our ability to map molecular properties, reducing time and costs significantly and accelerating innovation."

SES AI recently mapped 100,000 molecules in half a day, with the potential to achieve this in under an hour using ALCHEMI. This signals how the microservice is poised to have a transformative impact on material screening efficiency. Looking ahead, SES AI aims to map the properties of up to 10 billion molecules within the next couple of years, pushing the boundaries of AI-driven, high-throughput discovery.

The new microservice will soon be available for researchers to test for free through the NVIDIA NGC catalog, where they can be notified of ALCHEMI's launch. It will also be downloadable from build.nvidia.com, and the production-grade NIM microservice will be offered through the NVIDIA AI Enterprise software platform.

Learn more about the NVIDIA ALCHEMI NIM microservice, and hear the latest on how AI and supercomputing are supercharging researchers' and developers' workflows by joining NVIDIA at SC24, running through Friday, Nov. 22.

See notice regarding software product information.
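The headline 100x above is simply the two reported stages compounded (25x from Warp, then 4x from in-flight batching). A quick back-of-the-envelope check of the "months to hours" claim, using an assumed baseline throughput that is purely illustrative:

```python
# Compounding the two reported speedups recovers the headline figure.
warp_speedup = 25        # Warp-based simulation vs. the baseline
batching_speedup = 4     # in-flight batching on top of Warp
total_speedup = warp_speedup * batching_speedup   # 100x, as quoted

# Hypothetical baseline throughput (an assumption for illustration only):
# suppose the unaccelerated pipeline evaluates 2 structures per second.
structures = 16_000_000
baseline_rate = 2                                  # structures/second (assumed)
baseline_days = structures / baseline_rate / 86_400
accelerated_hours = structures / (baseline_rate * total_speedup) / 3_600

print(total_speedup)                                    # 100
print(round(baseline_days), round(accelerated_hours))   # ~93 days vs. ~22 hours
```

With that assumed baseline, roughly three months of screening collapses to under a day, consistent with the article's framing.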
  • BLOGS.NVIDIA.COM
    Foxconn Expands Blackwell Testing and Production With New Factories in U.S., Mexico and Taiwan
To meet demand for Blackwell, now in full production, Foxconn, the world's largest electronics manufacturer, is using NVIDIA Omniverse. The platform for developing industrial AI simulation applications is helping bring facilities in the U.S., Mexico and Taiwan online faster than ever.

Foxconn uses NVIDIA Omniverse to virtually integrate its facility and equipment layouts, NVIDIA Isaac Sim for autonomous robot testing and simulation, and NVIDIA Metropolis for vision AI.

Omniverse enables industrial developers to maximize efficiency by testing and optimizing in a digital twin before deploying costly change orders to the physical world. Foxconn expects its Mexico facility alone to deliver significant cost savings and a reduction in kilowatt-hour usage of more than 30% annually.

World's Largest Electronics Maker Plans With Omniverse and AI

To meet demand, Foxconn's factory planners are building physical-AI-powered robotic factories with Omniverse and NVIDIA AI.

The company has built digital twins with Omniverse that allow its teams to virtually integrate facility and equipment information from leading industry applications, such as Siemens Teamcenter X and Autodesk Revit. Floor-plan layouts are optimized first in the digital twin, and planners can locate optimal camera positions that help measure and identify ways to streamline operations with Metropolis visual AI agents.

In the construction process, Foxconn teams use the Omniverse digital twin as the source of truth to communicate and validate the accurate layout and placement of equipment. Virtual integration on Omniverse offers significant advantages, potentially saving factory planners millions by reducing costly change orders in real-world operations.

Delivering Robotics for Manufacturing With Omniverse Digital Twins

Once the digital twin of the factory is built, it becomes a virtual gym for Foxconn's fleets of autonomous robots, including industrial manipulators and autonomous mobile robots. Foxconn's robot developers can simulate, test and validate their AI robot models in NVIDIA Isaac Sim before deploying them to real-world robots.

Using Omniverse, Foxconn can simulate robot AIs before deploying them to NVIDIA Jetson-driven autonomous mobile robots. On assembly lines, teams can simulate with Isaac Manipulator libraries and AI models for automated optical inspection, object identification, defect detection and trajectory planning.

Omniverse also enables facility planners to test and optimize intelligent camera placement before installation in the physical world, ensuring complete coverage of the factory floor to support worker safety and provide the foundation for visual AI agent frameworks.

Creating Efficiencies While Building Resilient Supply Chains

Using NVIDIA Omniverse and AI, Foxconn plans to replicate its precision production lines across the world. This will enable it to quickly deploy high-quality production facilities that meet unified standards, increasing the company's competitive edge and adaptability in the market.

This ability to rapidly replicate will accelerate Foxconn's global deployments and enhance its resilience in the face of supply chain disruptions, as it can quickly adjust production strategies and reallocate resources to ensure continuity and stability to meet changing demands.

Foxconn's Mexico facility will begin production early next year, and the Taiwan location will begin production in December.

Learn more about Blackwell and Omniverse.
  • BLOGS.NVIDIA.COM
    Microsoft and NVIDIA Supercharge AI Development on RTX AI PCs
Generative AI-powered laptops and PCs are unlocking advancements in gaming, content creation, productivity and development. Today, over 600 Windows apps and games are already running AI locally on more than 100 million GeForce RTX AI PCs worldwide, delivering fast, reliable and low-latency performance.

At Microsoft Ignite, NVIDIA and Microsoft announced tools to help Windows developers quickly build and optimize AI-powered apps on RTX AI PCs, making local AI more accessible. These new tools enable application and game developers to harness powerful RTX GPUs to accelerate complex AI workflows for applications such as AI agents, app assistants and digital humans.

RTX AI PCs Power Digital Humans With Multimodal Small Language Models

Meet James, an interactive digital human knowledgeable about NVIDIA and its products. James uses a collection of NVIDIA NIM microservices, NVIDIA ACE and ElevenLabs digital human technologies to provide natural and immersive responses.

NVIDIA ACE is a suite of digital human technologies that brings life to agents, assistants and avatars. To achieve a higher level of understanding so that they can respond with greater context awareness, digital humans must be able to visually perceive the world like humans do. Enhancing digital human interactions with greater realism demands technology that enables perception and understanding of their surroundings with greater nuance. To achieve this, NVIDIA developed multimodal small language models that can process both text and imagery, excel in role-playing and are optimized for rapid response times.

The NVIDIA Nemovision-4B-Instruct model, soon to be available, uses the latest NVIDIA VILA and NVIDIA NeMo framework for distilling, pruning and quantizing to become small enough to perform on RTX GPUs with the accuracy developers need. The model enables digital humans to understand visual imagery in the real world and on the screen to deliver relevant responses. Multimodality serves as the foundation for agentic workflows and offers a sneak peek into a future where digital humans can reason and take action with minimal assistance from a user.

NVIDIA is also introducing the Mistral NeMo Minitron 128k Instruct family, a suite of large-context small language models designed for optimized, efficient digital human interactions, coming soon. Available in 8B-, 4B- and 2B-parameter versions, these models offer flexible options for balancing speed, memory usage and accuracy on RTX AI PCs. They can handle large datasets in a single pass, eliminating the need for data segmentation and reassembly. Built in the GGUF format, these models enhance efficiency on low-power devices and support compatibility with multiple programming languages.

Turbocharge Gen AI With NVIDIA TensorRT Model Optimizer for Windows

When bringing models to PC environments, developers face the challenge of limited memory and compute resources for running AI locally. And they want to make models available to as many people as possible, with minimal accuracy loss.

Today, NVIDIA announced updates to NVIDIA TensorRT Model Optimizer (ModelOpt) to offer Windows developers an improved way to optimize models for ONNX Runtime deployment. With the latest updates, TensorRT ModelOpt enables models to be optimized into an ONNX checkpoint for deployment within ONNX Runtime environments using GPU execution providers such as CUDA, TensorRT and DirectML.

TensorRT ModelOpt includes advanced quantization algorithms, such as INT4 activation-aware weight quantization (AWQ). Compared to other tools such as Olive, the new method reduces the memory footprint of the model and improves throughput performance on RTX GPUs. During deployment, models can have up to a 2.6x reduced memory footprint compared to FP16 models. This results in faster throughput, with minimal accuracy degradation, allowing them to run on a wider range of PCs.

Learn more about how developers on Microsoft systems, from Windows RTX AI PCs to NVIDIA Blackwell-powered Azure servers, are transforming how users interact with AI on a daily basis.
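To see where the memory savings come from, here is a minimal sketch of generic group-wise symmetric 4-bit weight quantization: weights are stored as 4-bit integers plus one scale per group, so storage drops from 16 bits per weight toward roughly 4.5. This is an illustration of the basic idea only, not the AWQ algorithm (which additionally weights quantization error by activation statistics) or the ModelOpt API; the function names and group size are illustrative, and the exact savings for a full model depend on which layers are quantized:

```python
import numpy as np

def quantize_int4(w, group_size=32):
    """Symmetric per-group 4-bit quantization of a float weight vector.

    Assumes len(w) is divisible by group_size and no group is all zeros.
    """
    w = w.reshape(-1, group_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7  # map each group to -7..7
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return (q * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float16)

q, scale = quantize_int4(w.astype(np.float32))
w_hat = dequantize(q, scale)

# Storage: FP16 = 16 bits/weight. INT4 + one FP16 scale per 32 weights
# = 4 + 16/32 = 4.5 bits/weight, about a 3.6x reduction for these tensors.
fp16_bits = 16 * w.size
int4_bits = 4 * w.size + 16 * scale.size
print(fp16_bits / int4_bits)                          # ~3.56
print(float(np.abs(w.astype(np.float32) - w_hat).max()))  # small reconstruction error
```

At inference time the 4-bit weights are dequantized on the fly (or consumed directly by INT4 kernels), which is why the footprint shrinks while accuracy degrades only slightly.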
  • BLOGS.NVIDIA.COM
    NVIDIA and Microsoft Showcase Blackwell Preview, Omniverse Industrial AI and RTX AI PCs at Microsoft Ignite
NVIDIA and Microsoft today unveiled product integrations designed to advance full-stack NVIDIA AI development on Microsoft platforms and applications.

At Microsoft Ignite, Microsoft announced the launch of the first cloud private preview of the Azure ND GB200 V6 VM series, based on the NVIDIA Blackwell platform. The Azure ND GB200 V6 will be a new AI-optimized virtual machine (VM) series that combines the NVIDIA GB200 NVL72 rack design with NVIDIA Quantum InfiniBand networking.

In addition, Microsoft revealed that Azure Container Apps now supports NVIDIA GPUs, enabling simplified and scalable AI deployment. Plus, the NVIDIA AI platform on Azure includes new reference workflows for industrial AI and an NVIDIA Omniverse Blueprint for creating immersive, AI-powered visuals.

At Ignite, NVIDIA also announced multimodal small language models (SLMs) for RTX AI PCs and workstations, enhancing digital human interactions and virtual assistants with greater realism.

NVIDIA Blackwell Powers Next-Gen AI on Microsoft Azure

Microsoft's new Azure ND GB200 V6 VM series will harness the powerful performance of NVIDIA GB200 Grace Blackwell Superchips, coupled with advanced NVIDIA Quantum InfiniBand networking. The offering is optimized for large-scale deep learning workloads to accelerate breakthroughs in natural language processing, computer vision and more.

The Blackwell-based VM series complements previously announced Azure AI clusters with ND H200 V5 VMs, which provide increased high-bandwidth memory for improved AI inferencing. The ND H200 V5 VMs are already being used by OpenAI to enhance ChatGPT.

Azure Container Apps Enables Serverless AI Inference With NVIDIA Accelerated Computing

Serverless computing gives AI application developers increased agility to rapidly deploy, scale and iterate on applications without worrying about underlying infrastructure. This lets them focus on optimizing models and improving functionality while minimizing operational overhead.

The Azure Container Apps serverless containers platform simplifies deploying and managing microservices-based applications by abstracting away the underlying infrastructure. It now supports NVIDIA-accelerated workloads with serverless GPUs, allowing developers to use the power of accelerated computing for real-time AI inference applications in a flexible, consumption-based, serverless environment. This capability simplifies AI deployments at scale while improving resource efficiency and application performance without the burden of infrastructure management.

Serverless GPUs allow development teams to focus more on innovation and less on infrastructure management. With per-second billing and scale-to-zero capabilities, customers pay only for the compute they use, helping ensure resource utilization is both economical and efficient. NVIDIA is also working with Microsoft to bring NVIDIA NIM microservices to serverless NVIDIA GPUs in Azure to optimize AI model performance.

NVIDIA Unveils Omniverse Reference Workflows for Advanced 3D Applications

NVIDIA announced reference workflows that help developers build 3D simulation and digital twin applications on NVIDIA Omniverse and Universal Scene Description (OpenUSD), accelerating industrial AI and advancing AI-driven creativity.

A reference workflow for 3D remote monitoring of industrial operations is coming soon, enabling developers to connect physically accurate 3D models of industrial systems to real-time data from Azure IoT Operations and Power BI. These two Microsoft services integrate with applications built on NVIDIA Omniverse and OpenUSD to provide solutions for industrial IoT use cases, helping remote operations teams accelerate decision-making and optimize processes in production facilities.

The Omniverse Blueprint for precise visual generative AI enables developers to create applications that let nontechnical teams generate AI-enhanced visuals while preserving brand assets. The blueprint supports models like SDXL and Shutterstock Generative 3D to streamline the creation of on-brand, AI-generated images. Leading creative groups, including Accenture Song, Collective, GRIP, Monks and WPP, have adopted this NVIDIA Omniverse Blueprint to personalize and customize imagery across markets.

Accelerating Gen AI for Windows With RTX AI PCs

NVIDIA's collaboration with Microsoft extends to bringing AI capabilities to personal computing devices.

At Ignite, NVIDIA announced its new multimodal SLM, NVIDIA Nemovision-4B Instruct, for understanding visual imagery in the real world and on screen. It's coming soon to RTX AI PCs and workstations and will pave the way for more sophisticated and lifelike digital human interactions.

Plus, updates to NVIDIA TensorRT Model Optimizer (ModelOpt) offer Windows developers a path to optimize models for ONNX Runtime deployment. TensorRT ModelOpt enables developers to create AI models for PCs that are faster and more accurate when accelerated by RTX GPUs. This enables large models to fit within the constraints of PC environments, while making it easy for developers to deploy across the PC ecosystem with ONNX runtimes.

RTX AI-enabled PCs and workstations offer enhanced productivity tools, creative applications and immersive experiences powered by local AI processing.

Full-Stack Collaboration for AI Development

NVIDIA's extensive ecosystem of partners and developers brings a wealth of AI and high-performance computing options to the Azure platform.

SoftServe, a global IT consulting and digital services provider, today announced the availability of SoftServe Gen AI Industrial Assistant, based on the NVIDIA AI Blueprint for multimodal PDF data extraction, on the Azure marketplace. The assistant addresses critical challenges in manufacturing by using AI to enhance equipment maintenance and improve worker productivity.

At Ignite, AT&T will showcase how it's using NVIDIA AI and Azure to enhance operational efficiency, boost employee productivity and drive business growth through retrieval-augmented generation and autonomous assistants and agents.

Learn more about NVIDIA and Microsoft's collaboration and sessions at Ignite.

See notice regarding software product information.
  • BLOGS.NVIDIA.COM
How the Department of Energy's AI Initiatives Are Transforming Science, Industry and Government
The U.S. Department of Energy oversees national energy policy and production. As big a job as that is, the DOE also does so much more, enough to have earned the nickname the "Department of Everything."

In this episode of the NVIDIA AI Podcast, Helena Fu, director of the DOE's Office of Critical and Emerging Technologies (CET) and the DOE's chief AI officer, talks about the department's latest AI efforts. With initiatives touching national security, infrastructure and utilities, and oversight of 17 national labs and 34 scientific user facilities dedicated to scientific discovery and industry innovation, the DOE and CET are central to AI-related research and development throughout the country.

The AI Podcast: How the Department of Energy Is Tapping AI to Transform Science, Industry and Government (Ep. 236)

Hear more from Helena Fu by watching the on-demand session "AI for Science, Energy and Security" from AI Summit DC, and learn more about software-defined infrastructure for power and utilities.

Time Stamps

2:20: Four areas of focus for the CET, including AI, microelectronics, quantum information science and biotechnology.
10:55: Introducing AI-related initiatives within the DOE, including FASST, or Frontiers in AI for Science, Security and Technology.
16:30: Discussing future applications of AI, large language models and more.
19:35: The opportunity of AI applied to materials discovery and applications across science, energy and national security.

You Might Also Like

NVIDIA's Josh Parker on How AI and Accelerated Computing Drive Sustainability (Ep. 234)
AI isn't just about building smarter machines. It's about building a greener world. AI and accelerated computing are helping industries tackle some of the world's toughest environmental challenges. Joshua Parker, senior director of corporate sustainability at NVIDIA, explains how these technologies are powering a new era of energy efficiency.

Currents of Change: ITIF's Daniel Castro on Energy-Efficient AI and Climate Change
AI is everywhere. So, too, are concerns about advanced technology's environmental impact. Daniel Castro, vice president of the Information Technology and Innovation Foundation and director of its Center for Data Innovation, discusses his AI energy use report that addresses misconceptions about AI's energy consumption. He also talks about the need for continued development of energy-efficient technology.

How the Ohio Supercomputer Center Drives the Future of Computing (Ep. 213)
The Ohio Supercomputer Center's Open OnDemand program empowers the state's educational institutions and industries with computational services, training and educational programs. They've even helped NASCAR simulate race car designs. Alan Chalker, the director of strategic programs at the OSC, talks about all things supercomputing.

Anima Anandkumar on Using Generative AI to Tackle Global Challenges (Ep. 204)
Anima Anandkumar, Bren Professor at Caltech and former senior director of AI research at NVIDIA, speaks to generative AI's potential to make splashes in the scientific community, from accelerating drug and vaccine research to predicting extreme weather events like hurricanes or heat waves.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Play, Amazon Music, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, SoundCloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.
  • BLOGS.NVIDIA.COM
    AI at COP29: Balancing Innovation and Sustainability
    As COP29 attendees gather in Baku, Azerbaijan, to tackle climate change, the role AI plays in environmental sustainability is front and center.

A panel hosted by Deloitte brought together industry leaders to explore ways to reduce AI's environmental footprint and align its growth with climate goals. Experts from Crusoe Energy Systems, EON, the International Energy Agency (IEA) and NVIDIA sat down for a conversation about the energy efficiency of AI.

The Environmental Impact of AI

Deloitte's recent report, "Powering Artificial Intelligence: A study of AI's environmental footprint," shows AI's potential to drive a climate-neutral economy. The study looks at how organizations can achieve "Green AI" in the coming decades and addresses AI's energy use.

Deloitte's analysis predicts that AI adoption will fuel data center power demand, likely reaching 1,000 terawatt-hours (TWh) by 2030 and potentially climbing to 2,000 TWh by 2050. This would account for 3% of global electricity consumption, indicating faster growth than in other uses like electric cars and green hydrogen production.

While data centers currently consume around 2% of total electricity, and AI is a small fraction of that, the discussion at COP29 emphasized the need to meet rising energy demands with clean energy sources to support global climate goals.

Energy Efficiency From the Ground Up

NVIDIA is prioritizing energy-efficient data center operations with innovations like liquid-cooled GPUs. Direct-to-chip liquid cooling allows data centers to cool systems more effectively than traditional air conditioning, consuming less power and water.

"We see a very rapid trend toward direct-to-chip liquid cooling, which means water demands in data centers are dropping dramatically right now," said Josh Parker, senior director of legal corporate sustainability at NVIDIA.

As AI continues to scale, the future of data centers will hinge on designing for energy efficiency from the outset. By prioritizing energy efficiency from the ground up, data centers can meet the growing demands of AI while contributing to a more sustainable future.

Parker emphasized that existing data center infrastructure is becoming dated and less efficient. "The data shows that it's 10x more efficient to run workloads on accelerated computing platforms than on traditional data center platforms," he said. "There's a huge opportunity for us to reduce the energy consumed in existing infrastructures."

The Path to Green Computing

AI has the potential to play a large role in moving toward climate-neutral economies, according to Deloitte's study. This approach, often called Green AI, involves reducing the environmental impact of AI throughout the value chain with practices like purchasing renewable energy and improving hardware design.

Until now, Green AI has mostly been led by industry leaders. Take accelerated computing, for instance, which is all about doing more with less. It uses specialized hardware like GPUs to perform tasks faster and with less energy than general-purpose servers built around CPUs, which handle one task at a time.

That's why accelerated computing is sustainable computing.

"Accelerated computing is actually the most energy-efficient platform that we've seen for AI, but also for a lot of other computing applications," said Parker.

"The trend in energy efficiency for accelerated computing over the last several years shows a 100,000x reduction in energy consumption. And just in the past two years, we've become 25x more efficient for AI inference. That's a 96% reduction in energy for the same computational workload," he said.

Reducing Energy Consumption Across Sectors

Innovations like the NVIDIA Blackwell and Hopper architectures significantly improve energy efficiency with each new generation. NVIDIA Blackwell is 25x more energy-efficient for large language models, and the NVIDIA H100 Tensor Core GPU is 20x more efficient than CPUs for complex workloads.

"AI has the potential to make other sectors much more energy efficient," said Parker. Murex, a financial services firm, achieved a 4x reduction in energy use and 7x faster performance with the NVIDIA Grace Hopper Superchip.

"In manufacturing, we're seeing around 30% reductions in energy requirements if you use AI to help optimize the manufacturing process through digital twins," he said.

For example, manufacturing company Wistron improved energy efficiency using digital twins and NVIDIA Omniverse, a platform for developing OpenUSD applications for industrial digitalization and physical AI simulation. The company reduced its electricity consumption by 120,000 kWh and carbon emissions by 60,000 kg annually.

A Tool for Energy Management

Deloitte reports that AI can help optimize resource use and reduce emissions, playing a crucial role in energy management. This means it has the potential to lower the environmental impact of industries beyond its own carbon footprint.

Combined with digital twins, AI is transforming energy management systems by improving the reliability of renewable sources like solar and wind farms. It's also being used to optimize facility layouts, monitor equipment, stabilize power grids and predict climate patterns, aiding global efforts to reduce carbon emissions.

COP29 discussions emphasized the importance of powering AI infrastructure with renewables and setting ethical guidelines. By innovating with the environment in mind, industries can use AI to build a more sustainable world.

Watch a replay of the on-demand COP29 panel discussion.
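The inference figures quoted above are internally consistent: a 25x efficiency gain means the same workload draws 1/25 of the energy, which is exactly the 96% reduction cited. A short script (ours, not from the article) makes the arithmetic explicit:

```python
# A 25x efficiency gain means the same workload uses 1/25 of the energy.
baseline_energy = 1.0                   # normalized energy before the gain
improved_energy = baseline_energy / 25  # 25x more efficient
reduction = 1 - improved_energy / baseline_energy

print(f"Energy reduction: {reduction:.0%}")  # Energy reduction: 96%
```

The same relation generalizes: an Nx efficiency gain is a (1 - 1/N) reduction in energy for a fixed workload.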
    Hopper Scales New Heights, Accelerating AI and HPC Applications for Mainstream Enterprise Servers
    Since its introduction, the NVIDIA Hopper architecture has transformed the AI and high-performance computing (HPC) landscape, helping enterprises, researchers and developers tackle the world's most complex challenges with higher performance and greater energy efficiency.

During the Supercomputing 2024 conference, NVIDIA announced the availability of the NVIDIA H200 NVL PCIe GPU, the latest addition to the Hopper family. H200 NVL is ideal for organizations with data centers looking for lower-power, air-cooled enterprise rack designs with flexible configurations to deliver acceleration for every AI and HPC workload, regardless of size.

According to a recent survey, roughly 70% of enterprise racks are 20kW and below and use air cooling. This makes PCIe GPUs essential, as they provide granularity of node deployment, whether using one, two, four or eight GPUs, enabling data centers to pack more computing power into smaller spaces. Companies can then use their existing racks and select the number of GPUs that best suits their needs.

Enterprises can use H200 NVL to accelerate AI and HPC applications while also improving energy efficiency through reduced power consumption. With a 1.5x memory increase and 1.2x bandwidth increase over the NVIDIA H100 NVL, companies can use H200 NVL to fine-tune LLMs within a few hours and deliver up to 1.7x faster inference performance. For HPC workloads, performance is boosted up to 1.3x over H100 NVL and 2.5x over the NVIDIA Ampere architecture generation.

Complementing the raw power of the H200 NVL is NVIDIA NVLink technology. The latest generation of NVLink provides GPU-to-GPU communication 7x faster than fifth-generation PCIe, delivering higher performance to meet the needs of HPC, large language model inference and fine-tuning.

The NVIDIA H200 NVL is paired with powerful software tools that enable enterprises to accelerate applications from AI to HPC. It comes with a five-year subscription for NVIDIA AI Enterprise, a cloud-native software platform for the development and deployment of production AI. NVIDIA AI Enterprise includes NVIDIA NIM microservices for the secure, reliable deployment of high-performance AI model inference.

Companies Tapping Into Power of H200 NVL

With H200 NVL, NVIDIA provides enterprises with a full-stack platform to develop and deploy their AI and HPC workloads.

Customers are seeing significant impact for multiple AI and HPC use cases across industries, such as visual AI agents and chatbots for customer service, trading algorithms for finance, medical imaging for improved anomaly detection in healthcare, pattern recognition for manufacturing, and seismic imaging for federal science organizations.

Dropbox is harnessing NVIDIA accelerated computing for its services and infrastructure.

"Dropbox handles large amounts of content, requiring advanced AI and machine learning capabilities," said Ali Zafar, VP of Infrastructure at Dropbox. "We're exploring H200 NVL to continually improve our services and bring more value to our customers."

The University of New Mexico has been using NVIDIA accelerated computing in various research and academic applications.

"As a public research university, our commitment to AI enables the university to be on the forefront of scientific and technological advancements," said Prof. Patrick Bridges, director of the UNM Center for Advanced Research Computing. "As we shift to H200 NVL, we'll be able to accelerate a variety of applications, including data science initiatives, bioinformatics and genomics research, physics and astronomy simulations, climate modeling and more."

H200 NVL Available Across Ecosystem

Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro are expected to deliver a wide range of configurations supporting H200 NVL.

Additionally, H200 NVL will be available in platforms from Aivres, ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, MSI, Pegatron, QCT, Wistron and Wiwynn.

Some systems are based on the NVIDIA MGX modular architecture, which enables computer makers to quickly and cost-effectively build a vast array of data center infrastructure designs.

Platforms with H200 NVL will be available from NVIDIA's global systems partners beginning in December. To complement availability from leading global partners, NVIDIA is also developing an Enterprise Reference Architecture for H200 NVL systems.

The reference architecture will incorporate NVIDIA's expertise and design principles, so partners and customers can design and deploy high-performance AI infrastructure based on H200 NVL at scale. This includes full-stack hardware and software recommendations, with detailed guidance on optimal server, cluster and network configurations. Networking is optimized for the highest performance with the NVIDIA Spectrum-X Ethernet platform.

NVIDIA technologies will be showcased on the showroom floor at SC24, taking place at the Georgia World Congress Center through Nov. 22. To learn more, watch NVIDIA's special address.

See notice regarding software product information.
    NVIDIA Releases cuPyNumeric, Enabling Scientists to Harness GPU Acceleration at Cluster Scale
    Whether they're looking at nanoscale electron behaviors or starry galaxies colliding millions of light years away, many scientists share a common challenge: they must comb through petabytes of data to extract insights that can advance their fields.

With the NVIDIA cuPyNumeric accelerated computing library, researchers can now take their data-crunching Python code and effortlessly run it on CPU-based laptops and GPU-accelerated workstations, cloud servers or massive supercomputers. The faster they can work through their data, the quicker they can make decisions about promising data points, trends worth investigating and adjustments to their experiments.

To make the leap to accelerated computing, researchers don't need expertise in computer science. They can simply write code using the familiar NumPy interface or apply cuPyNumeric to existing code, following best practices for performance and scalability. Once cuPyNumeric is applied, they can run their code on one or thousands of GPUs with zero code changes.

The latest version of cuPyNumeric, now available on Conda and GitHub, offers support for the NVIDIA GH200 Grace Hopper Superchip, automatic resource configuration at run time and improved memory scaling. It also supports HDF5, a popular file format in the scientific community that helps efficiently manage large, complex data.

Researchers at the SLAC National Accelerator Laboratory, Los Alamos National Laboratory, Australia National University, UMass Boston, the Center for Turbulence Research at Stanford University and the National Payments Corporation of India are among those who have integrated cuPyNumeric to achieve significant improvements in their data analysis workflows.

Less Is More: Limitless GPU Scalability Without Code Changes

Python is the most common programming language for data science, machine learning and numerical computing, used by millions of researchers in scientific fields including astronomy, drug discovery, materials science and nuclear physics. Tens of thousands of packages on GitHub depend on the NumPy math and matrix library, which had over 300 million downloads last month. All of these applications could benefit from accelerated computing with cuPyNumeric.

Many of these scientists build programs that use NumPy and run on a single CPU-only node, limiting the throughput of their algorithms as they crunch through increasingly large datasets collected by instruments like electron microscopes, particle colliders and radio telescopes.

cuPyNumeric helps researchers keep pace with the growing size and complexity of their datasets by providing a drop-in replacement for NumPy that can scale to thousands of GPUs. cuPyNumeric doesn't require code changes when scaling from a single GPU to a whole supercomputer. This makes it easy for researchers to run their analyses on accelerated computing systems of any size.

Solving the Big Data Problem, Accelerating Scientific Discovery

Researchers at SLAC National Accelerator Laboratory, a U.S. Department of Energy lab operated by Stanford University, have found that cuPyNumeric helps them speed up X-ray experiments conducted at the Linac Coherent Light Source.

A SLAC team focused on materials science discovery for semiconductors found that cuPyNumeric accelerated its data analysis application by 6x, decreasing run time from minutes to seconds. This speedup allows the team to run important analyses in parallel when conducting experiments at this highly specialized facility. By using experiment hours more efficiently, the team anticipates it will be able to discover new material properties, share results and publish work more quickly.

Other institutions using cuPyNumeric include:

  • Australia National University, where researchers used cuPyNumeric to scale the Levenberg-Marquardt optimization algorithm to run on multi-GPU systems at the country's National Computational Infrastructure. While the algorithm can be used for many applications, the researchers' initial target is large-scale climate and weather models.
  • Los Alamos National Laboratory, where researchers are applying cuPyNumeric to accelerate data science, computational science and machine learning algorithms. cuPyNumeric will provide them with additional tools to effectively use the recently launched Venado supercomputer, which features over 2,500 NVIDIA GH200 Grace Hopper Superchips.
  • Stanford University's Center for Turbulence Research, where researchers are developing Python-based computational fluid dynamics solvers that can run at scale on large accelerated computing clusters using cuPyNumeric. These solvers can seamlessly integrate large collections of fluid simulations with popular machine learning libraries like PyTorch, enabling complex applications including online training and reinforcement learning.
  • UMass Boston, where a research team is accelerating linear algebra calculations to analyze microscopy videos and determine the energy dissipated by active materials. The team used cuPyNumeric to decompose a matrix of 16 million rows and 4,000 columns.
  • National Payments Corporation of India, the organization behind a real-time digital payment system used by around 250 million Indians daily and expanding globally. NPCI uses complex matrix calculations to track transaction paths between payers and payees. With current methods, it takes about five hours to process data for a one-week transaction window on CPU systems. A trial showed that applying cuPyNumeric to accelerate the calculations on multi-node NVIDIA DGX systems could speed up matrix multiplication by 50x, enabling NPCI to process larger transaction windows in less than an hour and detect suspected money laundering in near real time.

To learn more about cuPyNumeric, see a live demo in the NVIDIA booth at the Supercomputing 2024 conference in Atlanta, join the theater talk in the expo hall and participate in the cuPyNumeric workshop. Watch the NVIDIA special address at SC24.
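To illustrate the drop-in idea, here is a minimal sketch. The guarded import is our addition for machines without cuPyNumeric installed; the array code itself is unchanged whichever backend is picked up:

```python
# Sketch of cuPyNumeric as a drop-in NumPy replacement: the only change
# from plain NumPy code is the import line. The try/except is a fallback
# for environments where cuPyNumeric isn't installed.
try:
    import cupynumeric as np  # scales from one GPU to thousands
except ImportError:
    import numpy as np        # same API, CPU-only

# Identical array code runs under either backend.
signal = np.linspace(0.0, 1.0, 1_000_000)
filtered = np.sqrt(signal) * 2.0
print(float(filtered.mean()))  # close to 4/3
```

Because the interfaces match, existing NumPy scripts can be moved to accelerated systems without rewriting the numerical logic.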
    Faster Forecasts: NVIDIA Launches Earth-2 NIM Microservices for 500x Speedup in Delivering Higher-Resolution Simulations
    NVIDIA today at SC24 announced two new NVIDIA NIM microservices that can accelerate climate change modeling simulation results by 500x in NVIDIA Earth-2.

Earth-2 is a digital twin platform for simulating and visualizing weather and climate conditions. The new NIM microservices offer climate technology application providers advanced generative AI-driven capabilities to assist in forecasting extreme weather events. NVIDIA NIM microservices help accelerate the deployment of foundation models while keeping data secure.

Extreme weather incidents are increasing in frequency, raising concerns over disaster safety and preparedness, and possible financial impacts. Natural disasters were responsible for roughly $62 billion of insured losses during the first half of this year. That's about 70% more than the 10-year average, according to a report in Bloomberg.

NVIDIA is releasing the CorrDiff NIM and FourCastNet NIM microservices to help weather technology companies more quickly develop higher-resolution and more accurate predictions. The NIM microservices also deliver leading energy efficiency compared with traditional systems.

New CorrDiff NIM Microservice for Higher-Resolution Modeling

NVIDIA CorrDiff is a generative AI model for kilometer-scale super resolution. Its capability to super-resolve typhoons over Taiwan was recently shown at GTC 2024. CorrDiff was trained on numerical simulations from the Weather Research and Forecasting (WRF) model to generate weather patterns at 12x higher resolution.

High-resolution forecasts that resolve weather at the scale of a few kilometers are essential to meteorologists and industries. The insurance and reinsurance industries, for instance, rely on detailed weather data for assessing risk profiles. But achieving this level of detail using traditional numerical weather prediction models like WRF or High-Resolution Rapid Refresh is often too costly and time-consuming to be practical.

The CorrDiff NIM microservice is 500x faster and 10,000x more energy-efficient than traditional high-resolution numerical weather prediction using CPUs. CorrDiff is also now operating at 300x larger scale: it is super-resolving, or increasing the resolution of, lower-resolution data for the entire United States, and predicting precipitation events, including snow, ice and hail, at kilometer-scale resolution.

Enabling Large Sets of Forecasts With New FourCastNet NIM Microservice

Not every use case requires high-resolution forecasts. Some applications benefit more from larger sets of forecasts at coarser resolution.

State-of-the-art numerical models like IFS and GFS are limited to 50 and 20 sets of forecasts, respectively, due to computational constraints.

The FourCastNet NIM microservice, available today, offers global, medium-range coarse forecasts. By using the initial assimilated state from operational weather centers such as the European Centre for Medium-Range Weather Forecasts or the National Oceanic and Atmospheric Administration, providers can generate forecasts for the next two weeks 5,000x faster than traditional numerical weather models.

This opens new opportunities for climate tech providers to estimate risks related to extreme weather at a different scale, enabling them to predict the likelihood of low-probability events that current computational pipelines overlook.

Learn more about CorrDiff and FourCastNet NIM microservices on ai.nvidia.com.
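NIM microservices are deployed as containers that expose an HTTP inference endpoint. As a rough sketch of what invoking a locally hosted forecast service could look like, the snippet below builds such a request with the standard library; the route (`/v1/infer`) and payload fields are hypothetical placeholders, not the documented Earth-2 API, so consult ai.nvidia.com for the real schema:

```python
# Hypothetical sketch: composing an HTTP request to a locally hosted NIM
# microservice. Endpoint path and payload fields are illustrative only.
import json
import urllib.request


def build_forecast_request(base_url, initial_state_id, lead_time_hours):
    """Build (but do not send) a POST request for a forecast run."""
    body = json.dumps({
        "initial_condition": initial_state_id,  # e.g. an assimilated ECMWF state
        "lead_time_hours": lead_time_hours,     # two weeks = 336 hours
    }).encode()
    return urllib.request.Request(
        f"{base_url}/v1/infer",  # hypothetical route
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_forecast_request("http://localhost:8000", "ecmwf-2024-11-18T00Z", 336)
# urllib.request.urlopen(req) would submit the job to a running service.
```

The request is constructed but not sent, so the sketch runs without a live service; swapping in the service's actual route and schema is all that real usage would require.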
    AI Will Drive Scientific Breakthroughs, NVIDIA CEO Says at SC24
    NVIDIA kicked off SC24 in Atlanta with a wave of AI and supercomputing tools set to revolutionize industries like biopharma and climate science.

The announcements, delivered by NVIDIA founder and CEO Jensen Huang and Vice President of Accelerated Computing Ian Buck, are rooted in the company's deep history in transforming computing.

"Supercomputers are among humanity's most vital instruments, driving scientific breakthroughs and expanding the frontiers of knowledge," Huang said. "Twenty-five years after creating the first GPU, we have reinvented computing and sparked a new industrial revolution."

NVIDIA's journey in accelerated computing began with CUDA in 2006 and the first GPU for scientific computing, Huang said. Milestones like Tokyo Tech's Tsubame supercomputer in 2008, Oak Ridge National Laboratory's Titan supercomputer in 2012 and the AI-focused NVIDIA DGX-1 delivered to OpenAI in 2016 highlight NVIDIA's transformative role in the field.

"Since CUDA's inception, we've driven down the cost of computing by a millionfold," Huang said. "For some, NVIDIA is a computational microscope, allowing them to see the impossibly small. For others, it's a telescope exploring the unimaginably distant. And for many, it's a time machine, letting them do their life's work within their lifetime."

At SC24, NVIDIA's announcements spanned tools for next-generation drug discovery, real-time climate forecasting and quantum simulations. Central to the company's advancements are CUDA-X libraries, described by Huang as the engines of accelerated computing, which power everything from AI-driven healthcare breakthroughs to quantum circuit simulations.

Huang and Buck highlighted examples of real-world impact, including Nobel Prize-winning breakthroughs in neural networks and protein prediction, powered by NVIDIA technology. "AI will accelerate scientific discovery, transforming industries and revolutionizing every one of the world's $100 trillion markets," Huang said.

CUDA-X Libraries Power New Frontiers

At SC24, NVIDIA announced the new cuPyNumeric library, a GPU-accelerated implementation of NumPy, designed to supercharge applications in data science, machine learning and numerical computing. With over 400 CUDA-X libraries, including cuDNN for deep learning and cuQuantum for quantum circuit simulations, NVIDIA continues to lead in enhancing computing capabilities across various industries.

Real-Time Digital Twins With Omniverse Blueprint

NVIDIA unveiled the NVIDIA Omniverse Blueprint for real-time computer-aided engineering digital twins, a reference workflow designed to help developers create interactive digital twins for industries like aerospace, automotive, energy and manufacturing.

Built on NVIDIA acceleration libraries, physics-AI frameworks and interactive, physically based rendering, the blueprint accelerates simulations by up to 1,200x, setting a new standard for real-time interactivity. Early adopters, including Siemens, Altair, Ansys and Cadence, are already using the blueprint to optimize workflows, cut costs and bring products to market faster.

Quantum Leap With CUDA-Q

NVIDIA's focus on real-time, interactive technologies extends across fields, from engineering to quantum simulations. In partnership with Google, NVIDIA's CUDA-Q now powers detailed dynamical simulations of quantum processors, reducing weeks-long calculations to minutes. Buck explained that with CUDA-Q, developers of all quantum processors can perform larger simulations and explore more scalable qubit designs.

AI Breakthroughs in Drug Discovery and Chemistry

With the open-source release of the BioNeMo Framework, NVIDIA is advancing AI-driven drug discovery as researchers gain powerful tools tailored specifically for pharmaceutical applications. BioNeMo accelerates training by 2x compared with other AI software, enabling faster development of lifesaving therapies.

NVIDIA also unveiled DiffDock 2.0, a breakthrough tool for predicting how drugs bind to target proteins, a step critical for drug discovery. Powered by the new cuEquivariance library, DiffDock 2.0 is 6x faster than before, enabling researchers to screen millions of molecules with unprecedented speed and accuracy.

And with the NVIDIA ALCHEMI NIM microservice, NVIDIA is bringing generative AI to chemistry, allowing researchers to design and evaluate novel materials with incredible speed. Scientists start by defining the properties they want, like strength, conductivity, low toxicity or even color, Buck explained. A generative model suggests thousands of potential candidates with the desired properties. Then the ALCHEMI NIM sorts candidate compounds for stability by solving for their lowest energy states using NVIDIA Warp. This microservice is a game-changer for materials discovery, helping developers tackle challenges in renewable energy and beyond.

These innovations demonstrate how NVIDIA is harnessing AI to drive breakthroughs in science, transforming industries and enabling faster solutions to global challenges.

Earth-2 NIM Microservices: Redefining Climate Forecasts in Real Time

Buck also announced two new microservices, CorrDiff NIM and FourCastNet NIM, to accelerate climate change modeling and simulation results by up to 500x in the NVIDIA Earth-2 platform.

Earth-2, a digital twin for simulating and visualizing weather and climate conditions, is designed to empower weather technology companies with advanced generative AI-driven capabilities. These tools deliver higher-resolution and more accurate predictions, enabling the forecasting of extreme weather events with unprecedented speed and energy efficiency.

With natural disasters causing $62 billion in insured losses in the first half of this year, 70% higher than the 10-year average, NVIDIA's innovations address a growing need for precise, real-time climate forecasting. These tools highlight NVIDIA's commitment to leveraging AI for societal resilience and climate preparedness.

Expanding Production With Foxconn Collaboration

As demand for AI systems like the Blackwell supercomputer grows, NVIDIA is scaling production through new Foxconn facilities in the U.S., Mexico and Taiwan. Foxconn is building the production and testing facilities using NVIDIA Omniverse to bring up the factories as fast as possible.

Scaling New Heights With Hopper

NVIDIA also announced the general availability of the NVIDIA H200 NVL, a PCIe GPU based on the NVIDIA Hopper architecture optimized for low-power, air-cooled data centers. The H200 NVL offers up to 1.7x faster large language model inference and 1.3x more performance on HPC applications, making it ideal for flexible data center configurations. It supports a variety of AI and HPC workloads, enhancing performance while optimizing existing infrastructure.

And the GB200 Grace Blackwell NVL4 Superchip integrates four NVIDIA NVLink-connected Blackwell GPUs unified with two Grace CPUs over NVLink-C2C, Buck said. It provides up to 2x performance for scientific computing, training and inference applications over the prior generation. The GB200 NVL4 superchip will be available in the second half of 2025.

The talk wrapped up with an invitation to attendees to visit NVIDIA's booth at SC24 to interact with various demos, including James, NVIDIA's digital human; the world's first real-time interactive wind tunnel; and the Earth-2 NIM microservices for climate modeling.

Learn more about how NVIDIA's innovations are shaping the future of science at SC24.
    Japan Tech Leaders Supercharge Sovereign AI With NVIDIA AI Enterprise and Omniverse
    From call centers to factories to hospitals, AI is sweeping Japan. Undergirding it all: the exceptional resources of the island nation's world-class universities and global technology leaders such as Fujitsu, The Institute of Science Tokyo, NEC and NTT.

NVIDIA software, including NVIDIA AI Enterprise for building and deploying AI agents and NVIDIA Omniverse for bringing AI into the physical world, is playing a crucial role in supporting Japan's transformation into a global hub for AI development.

The bigger picture: Japan's journey to AI sovereignty is well underway, positioning the nation to build, develop and share AI innovations at home and across the world.

Japanese AI Pioneers to Power Homegrown Innovation

Putting Japan in a position to become a global AI leader begins with AI-driven language models. Japanese tech leaders are developing advanced AI models that can better interpret Japanese cultural and linguistic nuances. These models enable developers to build AI applications for industries requiring high-precision outcomes, such as healthcare, finance and manufacturing.

As Japan's tech giants support AI adoption across the country, they're using NVIDIA AI Enterprise software.

Fujitsu's Takane model is specifically built for high-stakes sectors like finance and security. The model is designed to prioritize security and accuracy with Japanese data, which is crucial for sensitive fields, and it excels in both domestic and international Japanese LLM benchmarks for natural Japanese expression and accuracy. The companies plan to use NVIDIA NeMo for additional fine-tuning, and Fujitsu has tapped NVIDIA to support making Takane available as an NVIDIA NIM to broaden accessibility for the developer community.

NEC's cotomi model uses NeMo's parallel processing techniques for efficient model training. It's already integrated with NEC's solutions in finance, manufacturing, healthcare and local governments.

NTT Group is moving forward with NTT Communications' launch of NTT's large language model tsuzumi, which is accelerated with NVIDIA TensorRT-LLM for AI agent customer experiences and use cases such as document summarization.

Meanwhile, startups such as Kotoba Technologies, a Tokyo-based software developer, will unveil the Kotoba-Whisper model, built using NVIDIA NeMo for AI model building. The transcription application built on Kotoba-Whisper performed live transcription during this week's conversation between SoftBank Chairman and CEO Masayoshi Son and NVIDIA founder and CEO Jensen Huang at NVIDIA AI Summit Japan.

Kotoba Technologies reports that using NeMo's automatic speech recognition for data preprocessing delivers superior transcription performance. Kotoba-Whisper is already used in healthcare to create medical records from patient conversations, in customer call centers and for automatic meeting-minutes creation across various industries.

These models are used by developers and researchers, especially those focusing on Japanese-language AI applications.

Academic Contributions to Japan's Sovereign AI Vision

Japanese universities, meanwhile, are powering the ongoing transformation with a wave of AI innovations.

Nagoya University's Ruri-Large, built using NVIDIA's Nemotron-4 340B, which is also available as a NIM microservice, is a Japanese embedding model. It achieves high document retrieval performance with high-quality synthetic data generated by Nemotron-4 340B, and it enables the enhancement of language model capabilities through retrieval-augmented generation using external, authoritative knowledge bases.

The National Institute of Informatics will introduce LLM.jp-3-13B-Instruct, a sovereign AI model developed from scratch. Supported by several Japanese government-backed programs, this model underscores the nation's commitment to self-sufficiency in AI. It's expected to be available as a NIM microservice soon.

The Institute of Science Tokyo and Japan's National Institute of Advanced Industrial Science and Technology, better known as AIST, will present the Llama 3.1 Swallow model. Optimized for Japanese tasks, it's now a NIM microservice that can integrate into generative AI workflows for uses ranging from cultural research to business applications.

The University of Tokyo's Human Genome Center uses NVIDIA AI Enterprise and NVIDIA Parabricks software for rapid genomic analysis, advancing life sciences and precision medicine.

Japan's Tech Providers Helping Organizations Adopt AI

In addition, technology providers are working to bring NVIDIA AI technologies of all kinds to organizations across Japan.

Accenture will deploy AI agent solutions based on the Accenture AI Refinery across all industries in Japan, customizing with NVIDIA NeMo and deploying with NVIDIA NIM for a Japanese-specific solution.

Dell Technologies is deploying the Dell AI Factory with NVIDIA globally, with a key focus on the Japanese market, and will support NVIDIA NIM microservices for Japanese enterprises across various industries.

Deloitte will integrate NIM microservices that support the leading Japanese language models, including LLM.jp, Kotoba, Ruri-large, Swallow and more, into its multi-agent solution.

HPE has launched the HPE Private Cloud AI platform, supporting NVIDIA AI Enterprise in a private environment. This solution can be tailored for organizations looking to tap into Japan's sovereign AI NIM microservices, meeting the needs of companies that prioritize data sovereignty while using advanced AI capabilities.

Bringing Physical AI to Industries With NVIDIA Omniverse

The proliferation of language models across academia, startups and enterprises, however, is just the start of Japan's AI revolution. A leading maker of industrial robots, a top automaker and a retail giant are all embracing NVIDIA Omniverse and AI, as physics-based simulation drives the next wave of automation.

Industrial automation provider Yaskawa, which has shipped 600,000 robots, is developing adaptive robots for increased autonomy. Yaskawa is now adopting NVIDIA Isaac libraries and AI models to create adaptive robot applications for factory automation and other industries such as food, logistics, medical, agriculture and more.

It's using NVIDIA Isaac Manipulator, a reference workflow of NVIDIA-accelerated libraries and AI models, to help its developers build AI-enabled manipulators, or robot arms. It's also using NVIDIA FoundationPose for precise 6D pose estimation and tracking. More broadly, NVIDIA and Yaskawa teams use AI-powered simulations and digital twin technology powered by Omniverse to accelerate the development and deployment of Yaskawa's robotic solutions, saving time and resources.

Meanwhile, Toyota is looking into how to build robotic factory lines in Omniverse to improve robot motion in metal-forging processes. And another iconic Japanese company, Seven & i Holdings, is using Omniverse to gather insights from video cameras in research to optimize retail and enhance safety.

To learn more, check out our blog on these use cases. See notice regarding software product information.
  • BLOGS.NVIDIA.COM
    Japan's Startups Drive AI Innovation With NVIDIA Accelerated Computing
    Lifelike digital humans engage with audiences in real time. Autonomous systems streamline complex logistics. And AI-driven language tools break down communication barriers on the fly.

This isn't sci-fi. This is Tokyo's startup scene. Supercharged by AI, and by world-class academic and industrial might, the region has become a global innovation hub. And the NVIDIA Inception program is right in the middle of it. With over 370 AI-driven startups in the program and a 250,000-person-strong NVIDIA developer community, Japan's AI startup ecosystem is as bold as it is fast-moving.

This week's NVIDIA AI Summit Japan puts these achievements in the spotlight, capturing the region's relentless innovation momentum. NVIDIA founder and CEO Jensen Huang and SoftBank Group Chairman and CEO Masayoshi Son opened the summit with a fireside chat to discuss AI's transformative role, with Huang diving into Japan's growing AI ecosystem and its push toward sovereign AI. Sessions followed with leaders from METI (Japan's Ministry of Economy, Trade and Industry), the University of Tokyo and other key players.

Their success is no accident. Tokyo's academic powerhouses, global technology and industrial giants, and technology-savvy population of 14 million provide the underpinnings of a global AI hub that stretches from the bustling startup scene in Shibuya to new hotbeds of tech development in Chiyoda and beyond.

Supercharging Japan's Creative Class

Iconic works from anime to manga have not only redefined entertainment in Japan, they've etched themselves into global culture, inspiring fans across continents, languages and generations. Now, Japan's vibrant visual pop culture is spilling into AI, finding fresh ways to surprise and connect with audiences.

Take startup AiHUB's digital celebrity Sali. Sali isn't just a character in the traditional sense. She's a digital being with presence: responsive and lifelike. She blinks, she smiles, she reacts. Here, AI is doing something quietly revolutionary, slipping under the radar to redefine how people interact with media.

At AI Summit Japan, AiHUB revealed that it will adopt the NVIDIA Avatar Cloud Engine, or ACE, in the lip-sync module of its digital human framework, giving Sali nuanced expressions and human-like emotional depth. ACE doesn't just make Sali relatable, it puts her in a league of characters who transcend screens and pages. This integration reduced development and future management costs by approximately 50% while improving the expressiveness of the avatars, according to AiHUB.

SDK Adoption: From Hesitation to High Velocity

In the global tech race, success doesn't always hinge on the heroes you'd expect. The unsung stars here are software development kits: those bundles of tools, libraries and documentation that cut the guesswork out of innovation. And in Japan's fast-evolving AI ecosystem, these once-overlooked SDKs are driving an improbable revolution.

For years, Japan's tech companies treated SDKs with caution. Now, however, with AI advancing at lightspeed and NVIDIA GPUs powering the engine, SDKs have moved from a quiet corner to center stage.

Take NVIDIA NeMo, a platform for building large language models, or LLMs. It's swiftly becoming the backbone of Japan's latest wave of real-time, AI-driven communication technologies. One company at the forefront is Kotoba Technologies, which has cracked the code on real-time speech recognition thanks to NeMo's powerful tools. Under a key Japanese government grant, Kotoba's language tools don't just capture sound, they translate it live. It's a blend of computational heft and human ingenuity, redefining how multilingual communication happens in non-English-speaking countries like Japan. Kotoba's tools are used in customer call centers and for automatic meeting minutes creation across various industries. They were also used to perform live transcription during the AI Summit Japan fireside chat between Huang and Son.

And if LLMs are the engines driving Japan's AI, then companies like APTO supply the fuel. Using NVIDIA NeMo Curator, APTO is changing the game in data annotation, handling the intensive prep work that makes LLMs effective. By refining data quality for big clients like RIKEN, Ricoh and ORIX, APTO has mastered the fine art of sifting valuable signals from noise. Through tools like WordCountFilter, an ingenious mechanism that prunes short or unnatural sentences, it's supercharging performance. APTO's data quality control boosted model accuracy scores and slashed training time.

Across Japan, developers are looking to move on AI fast, and they're embracing SDKs to go further, faster.

The Power of Cross-Sector Synergy

The gears of Japan's AI ecosystem increasingly turn in sync thanks to NVIDIA-powered infrastructure that enables startups to build on each other's breakthroughs.

As Japan's population ages, solutions like these address security needs as well as an intensifying labor shortage. Here, ugo and Asilla have taken on the challenge, using autonomous security systems to manage facilities across the country. Asilla's cutting-edge anomaly detection was developed with security in mind but is now finding applications in healthcare and retail. Built on the NVIDIA DeepStream and Triton Inference Server SDKs, Asilla's tech doesn't just identify risks, it responds to them. In high-stakes environments, ugo and Asilla's systems, powered by the NVIDIA Jetson platform, are already in action, identifying potential security threats and triggering real-time responses.

NVIDIA's infrastructure is also at the heart of Kotoba Technologies' language tools, as well as AiHUB's lifelike digital avatars. Running on an AI backbone, these various tools seamlessly bridge media, communication and human interaction.

The Story Behind the Story: Tokyo IPC and Osaka Innovation Hub

All of these startups are part of a larger ecosystem that's accelerating Japan's rise as an AI powerhouse. Leading the charge is UTokyo IPC, the wholly owned venture capital arm of the University of Tokyo, operating through its flagship accelerator program, 1stRound. Cohosted by 18 universities and four national research institutions, this program serves as the nexus where academia and industry converge, providing hands-on guidance, resources and strategic support. By championing the real-world deployment of seed-stage deep-tech innovations, UTokyo IPC is igniting Japan's academic innovation landscape and setting the standard for others to follow.

Meanwhile, Osaka's own Innovation Hub, OIH, expands this momentum beyond Tokyo, providing startups with coworking spaces and networking events. Its Startup Acceleration Program brings early-stage projects to market faster. Fast-moving hubs like these are core to Japan's AI ecosystem, giving startups the mentorship, funding and resources they need to go from prototype to fully commercialized product. And through NVIDIA's accelerated computing technologies and the Inception program, Japan's fast-moving startups are united with AI innovators across the globe.

Image credit: ugo.
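The length-based data filtering described above, pruning sentences that are too short to be useful training data, can be sketched in a few lines of plain Python. This standalone filter is an illustration of the idea only, not NeMo Curator's actual API.

```python
# Illustrative sketch of the kind of length-based filtering a tool such as
# WordCountFilter performs: drop documents whose word count falls outside
# a configured range. Thresholds and corpus are toy values.
def word_count_filter(documents, min_words=5, max_words=100_000):
    """Keep only documents whose word count is in [min_words, max_words]."""
    kept = []
    for doc in documents:
        n_words = len(doc.split())
        if min_words <= n_words <= max_words:
            kept.append(doc)
    return kept

corpus = [
    "OK",  # a one-word fragment: likely noise, below the threshold
    "Tokyo's startup ecosystem is growing quickly across many industries.",
]
print(word_count_filter(corpus))  # only the full sentence survives
```

In a real curation pipeline this step would run alongside deduplication, language identification and quality scoring before any text reaches model training.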
  • BLOGS.NVIDIA.COM
    NVIDIA and Global Consulting Leaders Speed AI Adoption Across Japan's Industries
    Consulting giants including Accenture, Deloitte, EY Strategy and Consulting Co., Ltd. (EY Japan), FPT, Kyndryl and Tata Consultancy Services Japan (TCS Japan) are working with NVIDIA to establish innovation centers in Japan to accelerate the nation's goal of embracing enterprise AI and physical AI across its industrial landscape.

The centers will use NVIDIA AI Enterprise software, local language models and NVIDIA NIM microservices to help clients in Japan advance the development and deployment of AI agents tailored to their industries' respective needs, boosting productivity with a digital workforce. Using the NVIDIA Omniverse platform, Japanese firms can develop digital twins and simulate complex physical AI systems, driving innovation in manufacturing, robotics and other sectors.

Like many nations, Japan is navigating complex social and demographic challenges, which are leading to a smaller workforce as older generations retire. Leaning into its manufacturing and robotics leadership, the country is seeking opportunities to solve these challenges using AI. The Japanese government in April published a paper on its aim to become the world's most AI-friendly country. AI adoption is strong and growing: IDC reports that the Japanese AI systems market reached approximately $5.9 billion this year, with a year-on-year growth rate of 31.2%.1

The consulting giants' initiatives and activities include:

Accenture has established the Accenture NVIDIA Business Group and will provide solutions and services incorporating a Japanese large language model (LLM), which uses NVIDIA NIM and NVIDIA NeMo, as a Japan-specific offering. In addition, Accenture will deploy agentic AI solutions based on Accenture AI Refinery to all industries in Japan, accelerating total enterprise reinvention for its clients. In the future, Accenture plans to build new services using NVIDIA AI Enterprise and Omniverse at Accenture Innovation Hub Tokyo.

Deloitte is establishing its AI Experience Center in Tokyo, which will serve as an executive briefing center to showcase generative AI solutions built on NVIDIA technology. This facility builds on the Deloitte Japan NVIDIA Practice announced in June and will allow clients to experience firsthand how AI can revolutionize their operations. The center will also offer NVIDIA AI and Omniverse Blueprints to help enterprises in Japan adopt agentic AI effectively.

EY Strategy and Consulting Co., Ltd. (EY Japan) is developing a multitude of digital transformation (DX) solutions in Japan across diverse industries including finance, retail, media and manufacturing. The new EY Japan DX offerings will be built with NVIDIA AI Enterprise to serve the country's growing demand for digital twins, 3D applications, multimodal AI and generative AI.

FPT is launching FPT AI Factory in Japan with NVIDIA Hopper GPUs and NVIDIA AI Enterprise software to support the country's AI transformation by using business data in a secure, sovereign environment. FPT is integrating the NVIDIA NeMo framework with FPT AI Studio for building, pretraining and fine-tuning generative AI models, including FPT's multi-language LLM, named Saola. In addition, to provide end-to-end AI integration services, FPT plans to train over 1,000 software engineers and consultants domestically in Japan, and over 7,000 globally, by 2026.

IT infrastructure services provider Kyndryl has launched a dedicated AI private cloud in Japan. Built in collaboration with Dell Technologies using the Dell AI Factory with NVIDIA, this new AI private cloud will provide a controlled, secure and sovereign location for customers to develop, test and plan implementation of AI on the end-to-end NVIDIA AI platform, including NVIDIA accelerated computing and networking, as well as NVIDIA AI Enterprise software.

TCS Japan will begin offering its TCS global AI offerings, built on the full NVIDIA AI stack, in the automotive and manufacturing industries. These solutions will be hosted in its showcase centers at TCS Japan's Azabudai office in Tokyo.

Located in the Tokyo and Kansai metropolitan areas, these new consulting centers offer hands-on experience with NVIDIA's latest technologies and expert guidance, helping accelerate AI transformation, solve complex social challenges and support the nation's economic growth. To learn more, watch the NVIDIA AI Summit Japan fireside chat with NVIDIA founder and CEO Jensen Huang.

Editor's note: IDC figures are sourced to IDC, 2024 Domestic AI System Market Forecast Announced, April 2024. The IDC forecast amount was converted to USD by NVIDIA, while the CAGR (31.2%) was calculated based on JPY.
  • BLOGS.NVIDIA.COM
    Lab Confidential: Japan Research Keeps Healthcare Data Secure
    Established 77 years ago, Mitsui & Co. stays vibrant by building businesses and ecosystems with new technologies like generative AI and confidential computing.

Digital transformation takes many forms at the Tokyo-based conglomerate with 16 divisions. In one case, it's an autonomous trucking service; in another, it's a geospatial analysis platform. Mitsui even collaborates with a partner at the leading edge of quantum computing. One new subsidiary, Xeureka, aims to accelerate R&D in healthcare, where it can take more than a billion dollars spent over a decade to bring a new drug to market.

"We create businesses using new digital technology like AI and confidential computing," said Katsuya Ito, a project manager in Mitsui's digital transformation group. "Most of our work is done in collaboration with tech companies, in this case NVIDIA and Fortanix," a San Francisco-based security software company.

In Pursuit of Big Data

Though only three years old, Xeureka has already completed a proof of concept addressing one of drug discovery's biggest problems: getting enough data. Speeding drug discovery requires powerful AI models built with datasets larger than most pharmaceutical companies have on hand. Until recently, sharing across companies has been unthinkable because data often contains private patient information as well as chemical formulas proprietary to the drug company.

Enter confidential computing, a way of processing data in a protected part of a GPU or CPU that acts like a black box for an organization's most important secrets. To ensure their data is kept confidential at all times, banks, government agencies and even advertisers are using the technology, which is backed by a consortium of some of the world's largest companies.

A Proof of Concept for Privacy

To validate that confidential computing would allow its customers to safely share data, Xeureka created two imaginary companies, each with a thousand drug candidates. Each company's dataset was used separately to train an AI model to predict the chemicals' toxicity levels. Then the data was combined to train a similar, but larger, AI model.

Xeureka ran its test on NVIDIA H100 Tensor Core GPUs using security management software from Fortanix, one of the first startups to support confidential computing. The H100 GPUs support a trusted execution environment with hardware-based engines that ensure and validate that confidential workloads are protected while in use on the GPU, without compromising performance. The Fortanix software manages data sharing, encryption keys and the overall workflow.

Up to 74% Higher Accuracy

The results were impressive. The larger model's predictions were 65-74% more accurate, thanks to the use of the combined datasets. "The models created by a single company's data showed instability and bias issues that were not present with the larger model," Ito said.

"Confidential computing from NVIDIA and Fortanix essentially alleviates the privacy and security concerns while also improving model accuracy, which will prove to be a win-win situation for the entire industry," said Xeureka's CTO, Hiroki Makiguchi, in a Fortanix press release.

An AI Supercomputing Ecosystem

Now, Xeureka is exploring broad applications of this technology in drug discovery research, in collaboration with the community behind Tokyo-1, its GPU-accelerated AI supercomputer. Announced in February, Tokyo-1 aims to enhance the efficiency of pharmaceutical companies in Japan and beyond. Initial projects may include collaborations to predict protein structures, screen ligand-base pairs and accelerate molecular dynamics simulations with trusted services. Tokyo-1 users can harness large language models for chemistry, protein, DNA and RNA data formats through the NVIDIA BioNeMo drug discovery microservices and framework.

It's part of Mitsui's broader strategic growth plan to develop software and services for healthcare, such as powering Japan's $100 billion pharma industry, the world's third largest following the U.S. and China. Xeureka's services will include using AI to quickly screen billions of drug candidates, to predict how useful molecules will bind with proteins and to simulate detailed chemical behaviors.

To learn more, read about NVIDIA Confidential Computing and NVIDIA BioNeMo, an AI platform for drug discovery.
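The pooling experiment described above can be caricatured in a few lines: each party encrypts its dataset, and only a trusted boundary ever sees the combined plaintext. This is a conceptual sketch only; the "enclave" here is an ordinary function standing in for the GPU's trusted execution environment, and the toy XOR cipher stands in for real encryption and key management.

```python
# Conceptual sketch of confidential data pooling: two parties' datasets are
# combined only inside a trusted boundary. Illustrative only; not an actual
# Fortanix or NVIDIA API.
from statistics import mean

KEY = 0x5A  # shared toy key; real systems negotiate keys via attestation

def encrypt(values):
    """Toy XOR 'encryption' of integer records."""
    return [v ^ KEY for v in values]

def enclave_train(encrypted_a, encrypted_b):
    """Decrypt and combine both datasets only inside the trusted boundary."""
    combined = [v ^ KEY for v in encrypted_a + encrypted_b]
    return mean(combined)  # stand-in for training a model on pooled data

company_a = [3, 5, 7]  # toy toxicity labels
company_b = [4, 6, 8]
pooled_estimate = enclave_train(encrypt(company_a), encrypt(company_b))
print(pooled_estimate)  # 5.5: computed over both datasets, yet neither
                        # party ever saw the other's plaintext
```

The design point this illustrates is that the data owners exchange only ciphertext; decryption keys are released solely to an attested execution environment, so the pooled model benefits from both datasets without either party disclosing raw records.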
  • BLOGS.NVIDIA.COM
    NVIDIA Ranks No. 1 as Forbes Debuts List of America's Best Companies 2025
    NVIDIA ranked No. 1 on Forbes magazine's new list, America's Best Companies, based on more than 60 measures in nearly a dozen categories that cover financial performance, customer and employee satisfaction, sustainability, remote work policies and more.

Forbes stated that the company thrived in numerous areas, particularly employee satisfaction, earning high ratings in career opportunities, company benefits and culture, as well as financial strength. About 2,000 of the largest public companies in the U.S. were eligible, with 300 making the list.

Beau Davidson, vice president of employee experience at NVIDIA, told Forbes that the company has created systemic opportunities to listen to its staff (such as quarterly surveys, CEO Q&As and a virtual suggestion box) and then takes action on concerns ranging from benefits to cafe snacks. NVIDIA has also championed Free Days, two days each quarter when the entire company closes. "It allows us to take a break as a company," Davidson told Forbes. NVIDIA provides counselors onsite and a careers week that offers programs and training for workers to pursue internal job opportunities.

NVIDIA enjoys a low rate of employee turnover, widely viewed as a sign of employee happiness, according to People Data Labs, Forbes' data provider on workforce stability.

For the full list of rankings, view Forbes' America's Best Companies 2025 list. Check out the NVIDIA Careers page and learn more about NVIDIA Life.
  • BLOGS.NVIDIA.COM
    Keeping an AI on Diabetes Risk: Gen AI Model Predicts Blood Sugar Levels Four Years Out
    Diabetics, or others monitoring their sugar intake, may look at a cookie and wonder, "How will eating this affect my glucose levels?" A generative AI model can now predict the answer.

Researchers from the Weizmann Institute of Science, Tel Aviv-based startup Pheno.AI and NVIDIA led the development of GluFormer, an AI model that can predict an individual's future glucose levels and other health metrics based on past glucose monitoring data.

Data from continuous glucose monitoring could help more quickly diagnose patients with prediabetes or diabetes, according to Harvard Health Publishing and NYU Langone Health. GluFormer's AI capabilities can further enhance the value of this data, helping clinicians and patients spot anomalies, predict clinical trial outcomes and forecast health outcomes up to four years in advance. The researchers showed that, after adding dietary intake data into the model, GluFormer can also predict how a person's glucose levels will respond to specific foods and dietary changes, enabling precision nutrition.

Accurate predictions of glucose levels for those at high risk of developing diabetes could enable doctors and patients to adopt preventative care strategies sooner, improving patient outcomes and reducing the economic impact of diabetes, which could reach $2.5 trillion globally by 2030. AI tools like GluFormer have the potential to help the hundreds of millions of adults with diabetes. The condition currently affects around 10% of the world's adults, a figure that could potentially double by 2050 to impact over 1.3 billion people. It's one of the 10 leading causes of death globally, with side effects including kidney damage, vision loss and heart problems.

GluFormer is a transformer model, a kind of neural network architecture that tracks relationships in sequential data. It's the same architecture as OpenAI's GPT models, in this case generating glucose levels instead of text.

"Medical data, and continuous glucose monitoring in particular, can be viewed as sequences of diagnostic tests that trace biological processes throughout life," said Gal Chechik, senior director of AI research at NVIDIA. "We found that the transformer architecture, developed for long text sequences, can take a sequence of medical tests and predict the results of the next test. In doing so, it learns something about how the diagnostic measurements develop over time."

The model was trained on 14 days of glucose monitoring data from over 10,000 non-diabetic study participants, with data collected every 15 minutes through a wearable monitoring device. The data was collected as part of the Human Phenotype Project, an initiative by Pheno.AI, a startup that aims to improve human health through data collection and analysis.

"Two important factors converged at the same time to enable this research: the maturing of generative AI technology powered by NVIDIA and the collection of large-scale health data by the Weizmann Institute," said the paper's lead author, Guy Lutsker, an NVIDIA researcher and Ph.D. student at the Weizmann Institute of Science. "It put us in the unique position to extract interesting medical insights from the data."

The research team validated GluFormer across 15 other datasets and found it generalizes well to predict health outcomes for other groups, including those with prediabetes, type 1 and type 2 diabetes, gestational diabetes and obesity. They used a cluster of NVIDIA Tensor Core GPUs to accelerate model training and inference.

Beyond glucose levels, GluFormer can predict medical values including visceral adipose tissue, a measure of the amount of body fat around organs like the liver and pancreas; systolic blood pressure, which is associated with diabetes risk; and the apnea-hypopnea index, a measurement for sleep apnea, which is linked to type 2 diabetes.

Read the GluFormer research paper on arXiv.
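The training setup the article describes, a sequence of past readings predicting the next one, can be sketched as a simple windowing step over a glucose series. The window size and the readings below are illustrative, not values from the GluFormer paper.

```python
# Minimal sketch of turning a continuous glucose series (one reading every
# 15 minutes) into autoregressive training pairs: given a window of past
# readings, predict the next one. This is the generic setup a sequence model
# like a transformer trains on; window size and data are toy values.
def make_training_pairs(readings, window=4):
    """Slide a fixed window over the series; each pair is (context, next)."""
    pairs = []
    for i in range(len(readings) - window):
        context = readings[i : i + window]
        target = readings[i + window]
        pairs.append((context, target))
    return pairs

# Toy glucose readings in mg/dL, 15 minutes apart.
glucose = [95, 102, 110, 121, 118, 109, 101]
pairs = make_training_pairs(glucose, window=4)
print(pairs[0])  # ([95, 102, 110, 121], 118)
```

With 14 days of readings at 15-minute intervals per participant, this kind of windowing yields on the order of a thousand context/target pairs per person, which is what makes autoregressive training on wearable data feasible.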
  • BLOGS.NVIDIA.COM
    From Seed to Stream: Farming Simulator 25 Sprouts on GeForce NOW
    Grab a pitchfork and fire up the tractor: the fields of GeForce NOW are about to get a whole lot greener with Farming Simulator 25.

Whether looking for a time-traveling adventure, cozy games or epic action, GeForce NOW has something for everyone with over 2,000 games in its cloud library. Nine titles arrive this week, including the new 4X historical grand strategy game Ara: History Untold from Oxide Games and Xbox Game Studios.

And in this season of giving, GeForce NOW will offer members new rewards and more this month. This week, GeForce NOW is spreading cheer with a new reward for members that's sure to delight Throne and Liberty fans. Get ready to add a dash of mischief and a sprinkle of wealth to the epic adventures in the sprawling world of this massively multiplayer online role-playing game.

Plus, the NVIDIA app is officially released for download this week. GeForce users can use it to access GeForce NOW to play their games with RTX performance when they're away from their gaming rigs or don't want to wait around for their games to update and patch.

A Cloud Gaming Bounty

Get ready to plow the fields and tend to crops anywhere with GeForce NOW. Farming Simulator 25 from Giants Software launched in the cloud for members to stream, bringing a host of new features and improvements, including the introduction of rice as a crop type, complete with specialized machinery and techniques for planting, flooding fields and harvesting. This expansion into rice farming is accompanied by a new Asian-themed map that offers players a lush landscape filled with picturesque rice paddies to cultivate. The game also includes two other diverse environments, a spacious North American setting and a scenic Central European location, allowing farmers to build their agricultural empires in varied terrains. Don't forget about the addition of water buffaloes and goats, as well as the introduction of animal offspring, adding a new layer of depth to farm management.

Be the cream of the crop streaming with a Performance or Ultimate membership. Performance members get up to 1440p at 60 frames per second, and Ultimate streams at up to 4K and 120 fps for the most incredible levels of realism and variety. Whether tackling agriculture, forestry and animal husbandry single-handedly or together with friends in cooperative multiplayer mode, experience farming life like never before with GeForce NOW.

Mischief Managed

Whether new to the game or a seasoned adventurer, GeForce NOW members can claim a special PC-exclusive reward to use in Amazon Games' hit title Throne and Liberty. The reward includes 200 Ornate Coins and a PC-exclusive mischievous youngster named Gneiss Amitoi that will enhance the Throne and Liberty journey as members forge alliances, wage epic battles and uncover hidden treasures.

Ornate Coins allow players to acquire morphs for animal shapeshifting, autonomous pets named Amitois, exclusive cosmetic items, experience boosters and inventory expansions. Gneiss Youngster Amitoi is a toddler-aged prankster that randomly targets players and non-playable characters with its tricks. While some of its mischief can be mean-spirited, it just wants attention, and will pout and roll back to its adventurer's side if ignored, adding an entertaining dynamic to the journey through the world of Throne and Liberty.

Members who've opted in to GeForce NOW's Rewards program can check their email for instructions on how to redeem the reward. Ultimate and Performance members can start redeeming the reward today, while free members will be able to claim it starting tomorrow, Nov. 15. It's available through Tuesday, Dec. 10, first come, first served.

Rewriting History

Explore, build, lead and conquer a nation in Ara: History Untold, where every choice will shape the world and define a player's legacy. It's now available for GeForce NOW members to stream.

Ara: History Untold offers a fresh take on 4X historical grand strategy games. Players will prove their worth by guiding their citizens through history to the pinnacles of human achievement. Explore new lands, develop arts and culture, and engage in diplomacy or combat with other nations, before ultimately claiming the mantle of the greatest nation of all time. Members can craft their own unique story of triumph and achievement by streaming the game across devices from the cloud. GeForce NOW Performance and Ultimate members can enjoy longer gaming sessions and faster access to servers than free users, perfect for crafting sprawling empires and engaging in complex diplomacy without worrying about local hardware limitations.

New Games Are Knocking

GeForce NOW brings the new Wuthering Waves update, When the Night Knocks, to members this week. Version 1.4 brings a wealth of new content, including two new Resonators, Camellya and Lumi, along with powerful new weapons, including the five-star Red Spring and the four-star event weapon Somnoire Anchor. Dive into the Somnoire Adventure Event, Somnium Labyrinth, and enjoy a variety of log-in rewards, combat challenges and exploration activities. The update also includes Camellya's companion story, a new Phantom Echo and the exciting Weapon Projection feature.

Members can look for the following games available to stream in the cloud this week:

  • Farming Simulator 25 (New release on Steam, Nov. 12)
  • Sea Power: Naval Combat in the Missile Age (New release on Steam, Nov. 12)
  • Industry Giant 4.0 (New release on Steam, Nov. 15)
  • Ara: History Untold (Steam and Xbox, available on PC Game Pass)
  • Call of Duty: Black Ops Cold War (Steam and Battle.net)
  • Call of Duty: Vanguard (Steam and Battle.net)
  • Magicraft (Steam)
  • Crash Bandicoot N. Sane Trilogy (Steam and Xbox, available on PC Game Pass)
  • Spyro Reignited Trilogy (Steam and Xbox, available on PC Game Pass)

What are you planning to play this weekend? Let us know on X or in the comments below.

"the last thing your left hand touched is your video game weapon what was it?" NVIDIA GeForce NOW (@NVIDIAGFN), November 13, 2024
  • BLOGS.NVIDIA.COM
    Open for Development: NVIDIA Works With Cloud-Native Community to Advance AI and ML
    Cloud-native technologies have become crucial for developers to create and implement scalable applications in dynamic cloud environments.

This week at KubeCon + CloudNativeCon North America 2024, one of the most-attended conferences focused on open-source technologies, Chris Lamb, vice president of computing software platforms at NVIDIA, delivered a keynote outlining the benefits of open source for developers and enterprises alike, and NVIDIA offered nearly 20 interactive sessions with engineers and experts.

The Cloud Native Computing Foundation (CNCF), part of the Linux Foundation and host of KubeCon, is at the forefront of championing a robust ecosystem to foster collaboration among industry leaders, developers and end users. As a member of CNCF since 2018, NVIDIA is working across the developer community to contribute to and sustain cloud-native open-source projects. Our open-source software and more than 750 NVIDIA-led open-source projects help democratize access to tools that accelerate AI development and innovation.

Empowering Cloud-Native Ecosystems

NVIDIA has benefited from the many open-source projects under CNCF and has made contributions to dozens of them over the past decade. These actions help developers as they build applications and microservice architectures aligned with managing AI and machine learning workloads.

Kubernetes, the cornerstone of cloud-native computing, is undergoing a transformation to meet the challenges of AI and machine learning workloads. As organizations increasingly adopt large language models and other AI technologies, robust infrastructure becomes paramount. NVIDIA has been working closely with the Kubernetes community to address these challenges. This includes:

  • Work on dynamic resource allocation (DRA), which allows for more flexible and nuanced resource management. This is crucial for AI workloads, which often require specialized hardware. NVIDIA engineers played a key role in designing and implementing this feature.
  • Leading efforts in KubeVirt, an open-source project extending Kubernetes to manage virtual machines alongside containers. This provides a unified, cloud-native approach to managing hybrid infrastructure.
  • Development of the NVIDIA GPU Operator, which automates the lifecycle management of NVIDIA GPUs in Kubernetes clusters. This software simplifies the deployment and configuration of GPU drivers, runtimes and monitoring tools, allowing organizations to focus on building AI applications rather than managing infrastructure.

The company's open-source efforts extend beyond Kubernetes to other CNCF projects:

  • NVIDIA is a key contributor to Kubeflow, a comprehensive toolkit that makes it easier for data scientists and engineers to build and manage ML systems on Kubernetes. Kubeflow reduces the complexity of infrastructure management and allows users to focus on developing and improving ML models.
  • NVIDIA has contributed to the development of CNAO, which manages the lifecycle of host networks in Kubernetes clusters.
  • NVIDIA has also added to Node Health Check, which provides virtual machine high availability.

And NVIDIA has assisted with projects that address observability, performance and other critical areas of cloud-native computing, such as:

  • Prometheus: enhancing monitoring and alerting capabilities
  • Envoy: improving distributed proxy performance
  • OpenTelemetry: advancing observability in complex, distributed systems
  • Argo: facilitating Kubernetes-native workflows and application management

Community Engagement

NVIDIA engages the cloud-native ecosystem by participating in CNCF events and activities, including:

  • Collaboration with cloud service providers to help them onboard new workloads.
  • Participation in CNCF's special interest groups and working groups on AI discussions.
  • Participation in industry events such as KubeCon + CloudNativeCon, where it shares insights on GPU acceleration for AI workloads.
  • Work with CNCF-adjacent projects in the Linux Foundation, as well as many partners.

This translates into extended benefits for developers, such as improved efficiency in managing AI and ML workloads; enhanced scalability and performance of cloud-native applications; better resource utilization, which can lead to cost savings; and simplified deployment and management of complex AI infrastructures.

As AI and machine learning continue to transform industries, NVIDIA is helping advance cloud-native technologies to support compute-intensive workloads. This includes facilitating the migration of legacy applications and supporting the development of new ones. These contributions to the open-source community help developers harness the full potential of AI technologies and strengthen Kubernetes and other CNCF projects as the tools of choice for AI compute workloads.

Check out NVIDIA's keynote at KubeCon + CloudNativeCon North America 2024, delivered by Chris Lamb, where he discusses the importance of CNCF projects in building and delivering AI in the cloud and NVIDIA's contributions to the community to push the AI revolution forward.
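As a concrete example of what the GPU Operator enables, a pod can request a GPU through the `nvidia.com/gpu` extended resource once the operator has installed the drivers and device plugin. The sketch below builds the equivalent of such a pod manifest as a Python dict; the pod name and container image tag are illustrative.

```python
# Equivalent of a Kubernetes pod manifest requesting one NVIDIA GPU via the
# "nvidia.com/gpu" extended resource, expressed as a Python dict. The name
# and image are illustrative placeholders.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "cuda-test"},
    "spec": {
        "containers": [
            {
                "name": "cuda-container",
                "image": "nvcr.io/nvidia/cuda:12.4.0-base-ubuntu22.04",
                "resources": {
                    # The device plugin advertises GPUs as a countable
                    # resource; the scheduler places this pod on a GPU node.
                    "limits": {"nvidia.com/gpu": 1}
                },
            }
        ]
    },
}
print(pod_spec["spec"]["containers"][0]["resources"]["limits"])
```

Serialized to YAML, this dict is what you would `kubectl apply`; the point is that after the GPU Operator runs, GPUs are requested like any other Kubernetes resource, with no manual driver setup on nodes.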
  • BLOGS.NVIDIA.COM
    Japan Develops Next-Generation Drug Design, Healthcare Robotics and Digital Health Platforms
To provide high-quality medical care to its population, around 30% of whom are 65 or older, Japan is pursuing sovereign AI initiatives supporting nearly every aspect of healthcare.

AI tools trained on country-specific data and local compute infrastructure are supercharging the abilities of Japan's clinicians and researchers so they can care for patients, amid an expected shortage of nearly 500,000 healthcare workers by next year.

Breakthrough technology deployments by the country's healthcare leaders, including in AI-accelerated drug discovery, genomic medicine, healthcare imaging and robotics, are highlighted at the NVIDIA AI Summit Japan, taking place in Tokyo through Nov. 13.

Powered by NVIDIA AI computing platforms like the Tokyo-1 NVIDIA DGX supercomputer, these applications were developed using domain-specific platforms such as NVIDIA BioNeMo for drug discovery, NVIDIA MONAI for medical imaging, NVIDIA Parabricks for genomics and NVIDIA Holoscan for healthcare robotics.

Drug Discovery AI Factories Deepen Understanding, Accuracy and Speed

NVIDIA is supporting Japan's pharmaceutical market, one of the three largest in the world, with NVIDIA BioNeMo, an end-to-end platform that enables drug discovery researchers to develop and deploy AI models for generating biological intelligence from biomolecular data.

BioNeMo includes a customizable, modular programming framework and NVIDIA NIM microservices for optimized AI inference. New models include AlphaFold2, which predicts the 3D structure of a protein from its amino acid sequence; DiffDock, which predicts the 3D structure of a molecule interacting with a protein; and RFdiffusion, which designs novel protein structures likely to bind with a target molecule.

The platform also features BioNeMo Blueprints, a catalog of customizable reference AI workflows to help developers scale biomolecular AI models to enterprise-grade applications.

The NIM microservice for AlphaFold2 now integrates MMseqs2-GPU, an evolutionary information retrieval tool that accelerates the traditional AlphaFold2 pipeline by 5x. Led by researchers at Seoul National University, Johannes Gutenberg University Mainz and NVIDIA, this integration enables protein structure prediction in eight minutes instead of 40.

At AI Summit Japan, TetraScience, a company that engineers AI-native scientific datasets, announced a collaboration with NVIDIA to industrialize the production of scientific AI use cases, accelerating and improving workflows across the life sciences value chain.

For example, choosing an optimal cell line to produce biologic therapies such as vaccines and monoclonal antibodies is a critical but time-consuming step. TetraScience's new Lead Clone Assistant uses BioNeMo tools, including the NVIDIA VISTA-2D foundation model for cell segmentation and the Geneformer model for gene expression analysis, to reduce lead clone selection to hours instead of weeks.

Tokyo-based Astellas Pharma uses BioNeMo biomolecular AI models such as ESM-1nv, ESM-2nv and DNABERT to accelerate biologics research. Its AI models are used to generate novel molecular structures, predict how those molecules will bind to target proteins and optimize them to bind those targets more effectively.

Using the BioNeMo framework, Astellas has accelerated chemical molecule generation by more than 30x. The company plans to use BioNeMo NIM microservices to further advance its work.

Japan's Pharma Companies and Research Institutions Advance Drug Research and Development

Astellas, Daiichi Sankyo and Ono Pharmaceutical are leading Japanese pharma companies harnessing the Tokyo-1 system, an NVIDIA DGX AI supercomputer built in collaboration with Xeureka, a subsidiary of the Japanese business conglomerate Mitsui & Co., to build AI models for drug discovery. Xeureka is using Tokyo-1 to accelerate AI model development and molecular simulations.

Xeureka is also using NVIDIA H100 Tensor Core GPUs to explore the application of confidential computing, which enhances the ability of pharmaceutical companies to collaborate on large AI model training while protecting proprietary datasets.

To further support disease and precision medicine research, genomics researchers across Japan have adopted the NVIDIA Parabricks software suite to accelerate secondary analysis of DNA and RNA data.

Among them is the University of Tokyo Human Genome Center, the main academic institution working on a government-led whole-genome project focused on cancer research. The initiative will help researchers identify gene variants unique to Japan's population and support the development of precision therapeutics.

The genome center is also exploring the use of Giraffe, a tool now available via Parabricks v4.4 that enables researchers to map genome sequences to a pangenome, a reference genome that represents diverse populations.

AI Scanners and Scopes Give Radiologists and Surgeons Real-Time Superpowers

Japan's healthcare innovators are building AI-augmented systems to support radiologists and surgeons.

Fujifilm has developed an AI application in collaboration with NVIDIA to help surgeons perform surgery more efficiently. The application uses an AI model developed on NVIDIA DGX systems to convert CT images into 3D simulations that support surgery.

Olympus recently collaborated with NVIDIA and telecommunications company NTT to demonstrate how cloud-connected endoscopes can efficiently run image processing and AI applications in real time. The endoscopes featured NVIDIA Jetson Orin modules for edge computing and connected to a cloud server using the NTT communication platform's IOWN All-Photonics Network, which introduces photonics-based technology across the network to enable lower power consumption, greater capacity and lower latency.

NVIDIA is also supporting real-time, AI-powered robotic systems for radiology and surgery in Japan with Holoscan, a sensor processing platform that streamlines AI model and application development for real-time insights. Holoscan includes a catalog of AI reference workflows for applications including endoscopy and ultrasound analysis.

A neurosurgeon at Showa University, a medical school with multiple campuses across Japan, has adopted Holoscan and the NVIDIA IGX platform for industrial-grade edge AI to develop a surgical microscopy application that takes video footage from surgical scopes and converts it into 3D imagery in real time using AI. With access to 3D reconstructions, surgeons can more easily locate tumors and key structures in the brain, improving the efficiency of procedures.

Japanese surgical AI companies including AI Medical Service (AIM), Anaut, iMed Technologies and Jmees are investigating the use of Holoscan to power applications that provide diagnostic support for endoscopists and surgeons. These applications could detect anatomical structures like organs in real time, with the potential to reduce injury risks, identify conditions such as gastrointestinal cancers and brain hemorrhages, and provide immediate insights to help doctors prepare for and conduct surgeries.

Scaling Healthcare With Digital Health Agents

Older adults have higher rates of chronic conditions and use healthcare services the most, so to keep up with its aging population, Japan-based companies are at the forefront of developing digital health systems to augment patient care.

Fujifilm has launched NURA, a group of health screening centers with AI-augmented medical examinations designed to help doctors test for cancer and chronic diseases with faster examinations and lower radiation doses for CT scans. Developed using NVIDIA DGX systems, the tool incorporates large language models that create text summaries of medical images. The AI models run on NVIDIA RTX GPUs for inference. Fujifilm is also evaluating the use of MONAI, NeMo and NIM microservices.

To learn more about NVIDIA's collaborations with Japan's healthcare ecosystem, watch the NVIDIA AI Summit on-demand session by Kimberly Powell, the company's vice president of healthcare.
  • BLOGS.NVIDIA.COM
    Japan's Market Innovators Bring Physical AI to Industries With NVIDIA AI and Omniverse
Robots transporting heavy metal at a Toyota plant. Yaskawa's robots working alongside human coworkers in factories. To advance efforts like these virtually, Rikei Corporation develops digital twin tooling to assist planning. And if that weren't enough, diversified retail holdings company Seven & i Holdings is running digital twin simulations to enhance customer experiences.

Physical AI and industrial AI, powered by NVIDIA Omniverse, Isaac and Metropolis, are propelling Japan's industrial giants into the future. Such pioneering moves in robotic manipulation, industrial inspection and digital twins for human assistance are on full display at NVIDIA AI Summit Japan this week.

The arrival of generative AI-driven leaps in robotics couldn't come at a better time. With its population in decline, Japan has a critical need for advanced robotics. A report in the Japan Times said the nation is expected to face a shortage of 11 million workers by 2040.

Industrial and physical AI-based systems are today accelerated by a three-computer solution that enables robot AI model training, testing and simulation, and deployment.

Looking Into the Future With Toyota Robotics

Toyota is tapping into NVIDIA Omniverse for physics simulation of robot motion and gripping to improve its metal-forging capabilities. That's helping to reduce the time it takes to teach robots to transport forging materials.

Image courtesy of Toyota.

Toyota is verifying that it can reproduce its robotic work handling and robot motion with the accuracy of NVIDIA PhysX in Omniverse. Omniverse enables modeling digital twins of factories and other environments that accurately duplicate the physical characteristics of objects and systems in the real world, which is foundational to building physical AI for driving next-generation autonomous systems.

Omniverse enables Toyota to model properties like mass, gravity and friction, and to compare the results with physical tests. This can help work in manipulation and robot motion. It also allows Toyota to replicate the expertise of its senior robotics employees for issues requiring a high degree of skill. And it increases safety and throughput, since factory personnel are not required to work in the high temperatures and harsh environments associated with metal-forging production lines.

Driving Automation, Yaskawa Harnesses NVIDIA Isaac

Yaskawa is a leading global robotics manufacturer that has shipped more than 600,000 robots and offers nearly 200 robot models, including industrial robots for the automotive industry, collaborative robots and dual-arm robots.

Image courtesy of Yaskawa.

The Japanese robotics leader is expanding into new markets with its MOTOMAN NEXT adaptive robot, which is moving into task adaptation, versatility and flexibility. Driven by advanced robotics enabled by the NVIDIA Isaac and Omniverse platforms, Yaskawa's adaptive robots are focused on delivering automation for the food, logistics, medical and agriculture industries.

Using NVIDIA Isaac Manipulator, a reference workflow of NVIDIA-accelerated libraries and AI models, Yaskawa is integrating AI into its industrial arm robots, giving them the ability to complete a wide range of industrial automation tasks. Yaskawa is using FoundationPose for precise 6D pose estimation and tracking. These AI models enhance the adaptability and efficiency of Yaskawa's robotic arms, and the motion control enables sim-to-real transition, making them versatile and effective at performing complex tasks across a wide range of industries.

Additionally, Yaskawa is embracing digital twin and robotics simulations powered by NVIDIA Isaac Sim, built on Omniverse, to accelerate the development and deployment of its robotic solutions, saving time and resources.

Creating Customer Experiences at Seven & i Holdings With Omniverse, Metropolis

Seven & i Holdings, one of the largest Japanese diversified retail holdings companies, is running a proof of concept that uses digital simulation to understand customer behaviors at its retail outlets.

The company is advancing its research by tapping into NVIDIA Omniverse and NVIDIA Metropolis to better understand operations across its retail stores. Using NVIDIA Metropolis, a set of developer tools for building vision AI applications, store operations are analyzed with computer vision models, helping improve efficiency and safety. A digital twin of this environment is developed in an Omniverse-based application, along with assets from Blender and animations from SideFX Houdini.

Image courtesy of Seven & i Holdings Co.

Combining digital twins with price recognition, object tracking and other AI-based computation enables the company to generate useful behavioral insights about retail environments and customer interactions. Such information offers opportunities to dynamically generate and show personalized ads on digital signage displays targeted to customers.

The retailer plans to use Metropolis and the NVIDIA Merlin recommendation engine framework to create suggestions tailored to individual shoppers, responding to customer interests based on data like never before.

Virtually Revolutionizing, Rikei Corporation Launches Asset Library for Digital Twins

Rikei Corporation, a systems solutions provider, specializes in spatial computing and extended reality technology for the manufacturing sector.

The technology company has developed JAPAN USD Factory, a digital twin asset library built specifically for the Japanese manufacturing industry. Developed on NVIDIA Omniverse, JAPAN USD Factory digitally reproduces materials and equipment commonly used in manufacturing sites across Japan so that Japanese manufacturers can more easily build digital twins of their factories and warehouses.

Image courtesy of Rikei.

Rikei Corporation aims to streamline various stages of design, simulation and operations for the manufacturing process with these digital assets, enhancing productivity with digital twins. Developed with OpenUSD, a universal 3D asset interchange format, JAPAN USD Factory lets developers access its asset libraries for items like pallets and racks, offering seamless integration across tools and workflows.

To learn more, watch the NVIDIA AI Summit Japan fireside chat with NVIDIA founder and CEO Jensen Huang.
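The 6D poses that FoundationPose estimates, as described above, combine three degrees of freedom for rotation with three for translation. As a minimal, library-free illustration of that representation (not the FoundationPose model itself), a pose can be applied to a point as a rotation followed by a translation:

```python
import math

# Minimal illustration of a 6D pose (rotation + translation), not the
# FoundationPose model itself. For simplicity the rotation here is a
# single yaw angle about the z-axis rather than a full 3D rotation.

def apply_pose(point, yaw, translation):
    """Rotate `point` about the z-axis by `yaw` radians, then translate."""
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    rotated = (c * x - s * y, s * x + c * y, z)
    return tuple(r + t for r, t in zip(rotated, translation))

# Rotate the point (1, 0, 0) by 90 degrees about z, then shift up by 1.
pose_result = apply_pose((1.0, 0.0, 0.0), math.pi / 2, (0.0, 0.0, 1.0))
print(pose_result)  # approximately (0.0, 1.0, 1.0)
```

A pose estimator such as FoundationPose outputs exactly this kind of rotation-plus-translation transform for an object in a camera frame, which a manipulator can then use to plan a grasp.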
  • BLOGS.NVIDIA.COM
    Every Industry, Every Company, Every Country Must Produce a New Industrial Revolution, Says NVIDIA CEO
The next technology revolution is here, and Japan is poised to be a major part of it.

At NVIDIA's AI Summit Japan on Wednesday, NVIDIA founder and CEO Jensen Huang and SoftBank Chairman and CEO Masayoshi Son shared a sweeping vision for Japan's role in the AI revolution. Speaking in Tokyo, Huang underscored that AI infrastructure is essential to drive global transformation.

In his talk, he emphasized two types of AI: digital and physical. Digital AI is represented by AI agents, while physical AI is represented by robotics. He said Japan is poised to create both types, leveraging its unique language, culture and data.

"Every industry, every company, every country must produce a new industrial revolution," Huang said, pointing to AI as the catalyst for this shift.

Huang emphasized Japan's unique position to lead in this AI-driven economy, praising the country's history of innovation and engineering excellence as well as its technological and cultural panache. "I can't imagine a better country to lead the robotics AI revolution than Japan," Huang said. "You have created some of the world's best robots. These are the robots we grew up with, the robots we've loved our whole lives."

Huang highlighted the potential of agentic AI, advanced digital agents capable of understanding, reasoning, planning and taking action, to transform productivity across industries. He noted that these agents can tackle complex, multi-step tasks, effectively doing 50% of the work for 100% of the people, turbocharging human productivity. By turning data into actionable insights, agentic AI offers companies powerful tools to enhance operations without replacing human roles.

SoftBank and NVIDIA to Build Japan's Largest AI Supercomputer

Among the summit's major announcements was NVIDIA's collaboration with SoftBank to build Japan's most powerful AI supercomputer.

NVIDIA CEO Jensen Huang showcases Blackwell, the company's advanced AI supercomputing platform, at the AI Summit Japan in Tokyo.

Using the NVIDIA Blackwell platform, SoftBank's DGX SuperPOD will deliver extensive computing power to drive sovereign AI initiatives, including large language models (LLMs) specifically designed for Japan.

"With your support, we are creating the largest AI data center here in Japan," said Son, a visionary who, as Huang noted, has been a part of every major technology revolution of the past half-century. "We should provide this platform to many of those researchers, the students, the startups, so that we can encourage [them], so that they have better access [to] much more compute."

Huang noted that the AI supercomputer project is just one part of the collaboration. SoftBank also successfully piloted the world's first combined AI and 5G network, known as AI-RAN (radio access network). The network enables AI and 5G workloads to run simultaneously, opening new revenue possibilities for telecom providers.

"Now with this intelligence network that we densely connect each other, [it will] become one big neural brain for the infrastructure intelligence to Japan," Son said. "That will be amazing."

Accelerated Computing and Japan's AI Infrastructure

Huang emphasized the profound synergy between AI and robotics, highlighting how advancements in artificial intelligence have created new possibilities for robotics across industries. He noted that as AI enables machines to learn, adapt and perform complex tasks autonomously, robotics is evolving beyond traditional programming.

Huang spoke to developers, researchers and AI industry leaders at this week's NVIDIA AI Summit Japan.

"I hope that Japan will take advantage of the latest breakthroughs in artificial intelligence and combine that with your world-class expertise in mechatronics," Huang said. "No country in the world has greater skills in mechatronics than Japan, and this is an extraordinary opportunity to seize."

NVIDIA aims to develop a national AI infrastructure network through partnerships with Japanese cloud leaders such as GMO Internet Group and SAKURA internet. Supported by the Japan Ministry of Economy, Trade and Industry, this infrastructure will support sectors like healthcare, automotive and robotics by providing advanced AI resources to companies and research institutions across Japan.

"This is the beginning of a new era. We can't miss this time," Huang added.

Read more about all of today's announcements in the NVIDIA AI Summit Japan online press kit.
  • BLOGS.NVIDIA.COM
    Indonesia Tech Leaders Team With NVIDIA and Partners to Launch Nation's AI
Working with NVIDIA and its partners, Indonesia's technology leaders have launched an initiative to bring sovereign AI to the nation's more than 277 million Indonesian speakers.

The collaboration is grounded in a broad public-private partnership that reflects the nation's concept of gotong royong, a term describing a spirit of mutual assistance and community collaboration.

NVIDIA founder and CEO Jensen Huang joined Indonesia Minister for State-Owned Enterprises Erick Thohir, Indosat Ooredoo Hutchison (IOH) President Director and CEO Vikram Sinha, GoTo CEO Patrick Walujo and other leaders in Jakarta to celebrate the launch of Sahabat-AI.

Sahabat-AI is a collection of open-source Indonesian large language models (LLMs) that local industries, government agencies, universities and research centers can use to create generative AI applications. Built with NVIDIA NeMo and NVIDIA NIM microservices, the models were launched today at Indonesia AI Day, a conference focused on enabling AI sovereignty and driving AI-driven digital independence in the country.

Built by Indonesians, for Indonesians, Sahabat-AI models understand local contexts and enable people to build generative AI services and applications in Bahasa Indonesia and various local languages. The models form the foundation of a collaborative effort to empower Indonesia through a locally developed, open-source LLM ecosystem.

"Artificial intelligence will democratize technology. It is the great equalizer," said Huang. "The technology is complicated, but the benefit is not."

"Sahabat-AI is not just a technological achievement; it embodies Indonesia's vision for a future where digital sovereignty and inclusivity go hand in hand," Sinha said. "By creating an AI model that speaks our language and reflects our culture, we're empowering every Indonesian to harness advanced technology's potential. This initiative is a crucial step toward democratizing AI as a tool for growth, innovation and empowerment across our diverse society."

To accelerate this initiative, IOH, one of Indonesia's largest telecom and internet companies, earlier this year launched GPU Merdeka by Lintasarta, an NVIDIA-accelerated sovereign AI cloud. The GPU Merdeka cloud service operates at a BDx Indonesia AI data center powered by renewable energy.

Bolstered by the NVIDIA Cloud Partner program, IOH subsidiary Lintasarta built the high-performance AI cloud in less than three months, a feat that would have taken much longer without NVIDIA's technology infrastructure. The AI cloud is now driving transformation across energy, financial services, healthcare and other industries.

The NVIDIA Cloud Partner (NCP) program provides Lintasarta with access to NVIDIA reference architectures: blueprints for building high-performance, scalable and secure data centers. The program also offers technological and go-to-market support, access to the latest NVIDIA AI software and accelerated computing platforms, and opportunities to collaborate with NVIDIA's extensive ecosystem of industry partners. These partners include global systems integrators like Accenture and Tech Mahindra and software companies like GoTo and Hippocratic AI, each of which is working alongside IOH to boost the telco's sovereign AI initiatives.

Developing Industry-Specific Applications With Accenture

Partnering with leading professional services company Accenture, IOH is developing applications for industry-specific use cases based on its new AI cloud, Sahabat-AI and the NVIDIA AI Enterprise software platform.

NVIDIA CEO Huang joined Accenture CEO Julie Sweet in a fireside chat during Indonesia AI Day to discuss how the companies are supporting enterprise and industrial AI in Indonesia.

The collaboration taps the Accenture AI Refinery platform to help Indonesian enterprises build AI solutions tailored for financial services, energy and other industries, while delivering sovereign data governance. Initially focused on financial services, IOH's work with Accenture and NVIDIA technologies is delivering pre-built enterprise solutions that can help Indonesian banks more quickly harness AI. With a modular architecture, these solutions can meet clients' needs wherever they are in their AI journeys, helping increase profitability, operational efficiency and sustainable growth.

Building the Bahasa LLM and Chatbot Services With Tech Mahindra

Built with India-based global systems integrator Tech Mahindra, the Sahabat-AI LLMs power various AI services in Indonesia. For example, Sahabat-AI enables IOH's AI chatbot to answer queries in the Indonesian language for various citizen and resident services. A person could ask about processes for updating their national identification card, as well as about tax rates, payment procedures, deductions and more. The chatbot integrates with a broader citizen services platform that Tech Mahindra and IOH are developing as part of the Indonesian government's sovereign AI initiative.

Indosat developed Sahabat-AI using the NVIDIA NeMo platform for developing customized LLMs. The team fine-tuned a version of the Llama 3 8B model, customizing it for Bahasa Indonesia using a diverse dataset tailored for effective communication with users.

To further optimize performance, Sahabat-AI uses NVIDIA NIM microservices, which have demonstrated up to 2.5x greater throughput compared with standard implementations. This improvement in processing efficiency allows for faster responses and more satisfying user experiences.

In addition, NVIDIA NeMo Guardrails open-source software orchestrates dialog management and helps ensure the accuracy, appropriateness and security of the LLM-based chatbot.

Many other service capabilities tapping Sahabat-AI are also planned for development, including AI-powered healthcare services and other local applications.

Improving Indonesian Healthcare With Hippocratic AI

Among the first to tap into Sahabat-AI is healthcare AI company Hippocratic AI, which is using the models, the NVIDIA AI platform and IOH's sovereign AI cloud to develop digital agents that can have humanlike conversations, exhibit empathic qualities, and build rapport and trust with patients across Indonesia.

Hippocratic AI employs a novel trillion-parameter constellation architecture that brings together specialized healthcare LLM agents to deliver safe, accurate digital agent implementations. Digital AI agents can significantly increase staff productivity by offloading time-consuming tasks, allowing human nurses and medical professionals to focus on critical duties, increasing healthcare accessibility and quality of service.

IOH's sovereign AI cloud lets Hippocratic AI keep patient data local and secure, and enables extremely low-latency AI inference for its LLMs.

Enhancing Simplicity, Accessibility for On-Demand and Financial Services With GoTo

GoTo offers technology infrastructure and solutions that help users thrive in the digital economy, including applications spanning on-demand services for transport, food, grocery and logistics delivery, financial services and e-commerce.

The company, which operates one of Indonesia's leading on-demand transport services as well as a leading payment application in the country, is adopting and enhancing the new Sahabat-AI models to integrate with its AI voice assistant, called Dira. Dira is a speech and generative AI-powered digital assistant that helps customers book rides, order food deliveries, transfer money, pay bills and more. Tapping into Sahabat-AI, Dira is poised to deliver more localized and culturally relevant interactions with application users.

Advancing Sustainability Within Lintasarta as IOH's AI Factory

Fundamentally, Lintasarta's AI cloud is an AI factory: a next-generation data center that hosts advanced, full-stack accelerated computing platforms for the most computationally intensive tasks. It'll enable regional governments, businesses and startups to build, customize and deploy generative AI applications aligned with local language and customs.

Looking forward, Lintasarta plans to expand its AI factory with the most advanced NVIDIA technologies. The infrastructure already boasts a green design, powered by renewable energy and sustainable technologies. Lintasarta is committed to adding value to Indonesia's digital ecosystem with integrated, secure and sustainable technology, in line with the Golden Indonesia 2045 vision.

Beyond Indonesia, NVIDIA NIM microservices are bolstering sovereign AI models that support local languages in India, Japan, Taiwan and many other countries and regions.

NVIDIA NIM microservices, NeMo and NeMo Guardrails are available as part of the NVIDIA AI Enterprise software platform. Learn more about NVIDIA-powered sovereign AI factories for telecommunications. See notice regarding software product information.
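NeMo Guardrails, mentioned above, defines its rails in configuration files rather than inline code; as a loose, library-free sketch of the underlying idea (screening user input against policy before it ever reaches the model), consider the following. The topic list, function names and stand-in model are all hypothetical.

```python
# Illustrative sketch of the input-rail idea, NOT the NeMo Guardrails API:
# screen a user message against simple policy rules before it reaches the
# LLM, and return a canned refusal when a rule matches.

BLOCKED_TOPICS = ("password", "credit card number")  # hypothetical policy


def guarded_reply(user_message, llm):
    """Route a message to `llm` only if it passes the input rail."""
    lowered = user_message.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that topic."
    return llm(user_message)


def fake_llm(prompt):
    """Stand-in for a real model endpoint."""
    return f"Model answer to: {prompt}"


print(guarded_reply("How do I renew my ID card?", fake_llm))
print(guarded_reply("Tell me someone's credit card number", fake_llm))
```

A production system layers further rails on top of this pattern: output checks for accuracy and appropriateness, and dialog-flow rules that keep the chatbot on supported citizen-service topics.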
  • BLOGS.NVIDIA.COM
    Peak Training: Blackwell Delivers Next-Level MLPerf Training Performance
Generative AI applications that use text, computer code, protein chains, summaries, video and even 3D graphics require data-center-scale accelerated computing to efficiently train the large language models (LLMs) that power them.

In MLPerf Training 4.1 industry benchmarks, the NVIDIA Blackwell platform delivered impressive results on workloads across all tests, and up to 2.2x more performance per GPU on LLM benchmarks, including Llama 2 70B fine-tuning and GPT-3 175B pretraining.

In addition, NVIDIA's submissions on the NVIDIA Hopper platform continued to hold at-scale records on all benchmarks, including a submission with 11,616 Hopper GPUs on the GPT-3 175B benchmark.

Leaps and Bounds With Blackwell

The first Blackwell training submission to the MLCommons Consortium, which creates standardized, unbiased and rigorously peer-reviewed testing for industry participants, highlights how the architecture is advancing generative AI training performance.

For instance, the architecture includes new kernels that make more efficient use of Tensor Cores. Kernels are optimized, purpose-built math operations, like matrix multiplies, that are at the heart of many deep learning algorithms.

Blackwell's higher per-GPU compute throughput and significantly larger, faster high-bandwidth memory allow it to run the GPT-3 175B benchmark on fewer GPUs while achieving excellent per-GPU performance. Taking advantage of larger, higher-bandwidth HBM3e memory, just 64 Blackwell GPUs were able to run the GPT-3 LLM benchmark without compromising per-GPU performance. The same benchmark run using Hopper needed 256 GPUs.

The Blackwell training results follow an earlier submission to MLPerf Inference 4.1, where Blackwell delivered up to 4x more LLM inference performance than the Hopper generation. Taking advantage of the Blackwell architecture's FP4 precision, along with the NVIDIA QUASAR Quantization System, the submission demonstrated powerful performance while meeting the benchmark's accuracy requirements.

Relentless Optimization

NVIDIA platforms undergo continuous software development, racking up performance and feature improvements in training and inference for a wide variety of frameworks, models and applications.

In this round of MLPerf training submissions, Hopper delivered a 1.3x improvement in GPT-3 175B per-GPU training performance since the introduction of the benchmark.

NVIDIA also submitted large-scale results on the GPT-3 175B benchmark using 11,616 Hopper GPUs connected with NVIDIA NVLink and NVSwitch high-bandwidth GPU-to-GPU communication and NVIDIA Quantum-2 InfiniBand networking. NVIDIA Hopper GPUs have more than tripled scale and performance on the GPT-3 175B benchmark since last year. In addition, on the Llama 2 70B LoRA fine-tuning benchmark, NVIDIA increased performance by 26% using the same number of Hopper GPUs, reflecting continued software enhancements.

NVIDIA's ongoing work on optimizing its accelerated computing platforms enables continued improvements in MLPerf test results: driving performance up in containerized software, bringing more powerful computing to partners and customers on existing platforms, and delivering more return on their platform investment.

Partnering Up

NVIDIA partners, including system makers and cloud service providers like ASUSTek, Azure, Cisco, Dell, Fujitsu, Giga Computing, Lambda Labs, Lenovo, Oracle Cloud, Quanta Cloud Technology and Supermicro, also submitted impressive results to MLPerf in this latest round.

A founding member of MLCommons, NVIDIA sees industry-standard benchmarks and benchmarking best practices as vital to AI computing. With access to peer-reviewed, streamlined comparisons of AI and HPC platforms, companies can keep pace with the latest AI computing innovations and access crucial data that can help guide important platform investment decisions.

Learn more about the latest MLPerf results on the NVIDIA Technical Blog.
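The GPU-count comparison above can be made concrete with some back-of-the-envelope arithmetic. Under the simplifying (and hypothetical) assumption that both submissions finish the benchmark in the same wall-clock time, running the same workload on 64 GPUs instead of 256 implies each GPU handles 4x the work:

```python
# Back-of-the-envelope check of the GPT-3 175B scaling comparison above.
# Hypothetical simplification: if two submissions finish the same benchmark
# in the same wall-clock time, per-GPU throughput scales inversely with the
# number of GPUs used.

hopper_gpus = 256    # GPUs Hopper needed for the GPT-3 LLM benchmark
blackwell_gpus = 64  # GPUs Blackwell needed for the same benchmark

per_gpu_ratio = hopper_gpus / blackwell_gpus
print(f"Each Blackwell GPU handles {per_gpu_ratio:.0f}x the work of a Hopper GPU")
```

The measured MLPerf result (up to 2.2x more performance per GPU) comes from actual benchmark timings rather than this idealized equal-time assumption, but the arithmetic shows why running on a quarter of the GPUs without compromising per-GPU performance is significant.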
  • BLOGS.NVIDIA.COM
    2025 Predictions: AI Finds a Reason to Tap Industry Data Lakes
Since the advent of the computer age, industries have been so awash in stored data that most of it never gets put to use. This data is estimated to be in the neighborhood of 120 zettabytes: the equivalent of trillions of terabytes, or more than 120 times the number of grains of sand on all the world's beaches.

Now, the world's industries are putting that untamed data to work by building and customizing large language models (LLMs). As 2025 approaches, industries such as healthcare, telecommunications, entertainment, energy, robotics, automotive and retail are using those models, combining them with their proprietary data and gearing up to create AI that can reason.

The NVIDIA experts below focus on some of the industries that deliver $88 trillion worth of goods and services globally each year. They predict that AI that can harness data at the edge and deliver near-instantaneous insights is coming to hospitals, factories, customer service centers, cars and mobile devices near you.

But first, let's hear AI's predictions for AI. When asked, "What will be the top trends in AI in 2025 for industries?" both Perplexity and ChatGPT 4.0 responded that agentic AI sits atop the list, alongside edge AI, AI cybersecurity and AI-driven robots.

Agentic AI is a new category of generative AI that operates virtually autonomously. It can make complex decisions and take actions based on continuous learning and analysis of vast datasets.
Agentic AI is adaptable, has defined goals, can correct itself, and can chat with other AI agents or reach out to a human for help.

Now, hear from NVIDIA experts on what to expect in the year ahead:

Kimberly Powell, Vice President of Healthcare

Human-robotic interaction: Robots will assist human clinicians in a variety of ways, from understanding and responding to human commands to performing and assisting in complex surgeries. This is being made possible by digital twins, simulation and AI that train and test robotic systems in virtual environments, reducing the risks associated with real-world trials. The same approach can train robots to react in virtually any scenario, enhancing their adaptability and performance across different clinical situations.

New virtual worlds for training robots to perform complex tasks will make autonomous surgical robots a reality. These robots will perform complex surgical tasks with precision, reducing patient recovery times and decreasing the cognitive workload for surgeons.

Digital health agents: The dawn of agentic AI and multi-agent systems will address the existential challenges of workforce shortages and the rising cost of care. Administrative health services will become digital humans, taking notes for you or making your next appointment, introducing an era of services delivered by software and birthing a service-as-a-software industry.

Patient experience will be transformed with always-on, personalized care services, while healthcare staff will collaborate with agents that help them reduce clerical work, retrieve and summarize patient histories, and recommend clinical trials and state-of-the-art treatments for their patients.

Drug discovery and design AI factories: Just as ChatGPT can generate an email or a poem without putting pen to paper for trial and error, generative AI models in drug discovery can liberate scientific thinking and exploration. Techbio and biopharma companies have begun combining models that generate, predict and optimize molecules to explore the near-infinite space of possible target-drug combinations before going into time-consuming and expensive wet lab experiments.

The drug discovery and design AI factories will consume all wet lab data, refine AI models and redeploy those models, improving each experiment by learning from the previous one. These AI factories will shift the industry from a discovery process to a design and engineering one.

Rev Lebaredian, Vice President of Omniverse and Simulation Technology

Let's get physical (AI, that is): Getting ready for AI models that can perceive, understand and interact with the physical world is one challenge enterprises will race to tackle. While LLMs learn largely through reinforcement learning from human feedback, physical AI needs to learn in a world model that mimics the laws of physics. Large-scale, physically based simulations are allowing the world to realize the value of physical AI through robots, accelerating the training of physical AI models and enabling continuous training of robotic systems across every industry.

Cheaper by the dozen: In addition to their smarts (or lack thereof), one big factor that has slowed adoption of humanoid robots has been affordability. As agentic AI brings new intelligence to robots, though, volume will pick up and costs will come down sharply. The average cost of an industrial robot is expected to drop to $10,800 in 2025, down from $46,000 in 2010 and $27,000 in 2017. As these devices become significantly cheaper, they'll become as commonplace across industries as mobile devices are.

Deepu Talla, Vice President of Robotics and Edge Computing

Redefining robots: When people think of robots today, they usually picture autonomous mobile robots (AMRs), manipulator arms or humanoids.
But tomorrow's robots are set to be autonomous systems that perceive, reason, plan and act, then learn. Soon we'll be thinking of robots as embodied everywhere, from surgical rooms and data centers to warehouses and factories. Even traffic control systems and entire cities will be transformed from static, manually operated systems into autonomous, interactive systems embodied by physical AI.

The rise of small language models: To improve the functionality of robots operating at the edge, expect to see the rise of small language models that are energy-efficient and avoid the latency of sending data to data centers. The shift to small language models in edge computing will improve inference in a range of industries, including automotive, retail and advanced robotics.

Kevin Levitt, Global Director of Financial Services

AI agents boost firm operations: AI-powered agents will be deeply integrated into the financial services ecosystem, improving customer experiences, driving productivity and reducing operational costs. AI agents will take every form based on each financial services firm's needs. Human-like 3D avatars will take requests and interact directly with clients, while text-based chatbots will summarize thousands of pages of data and documents in seconds to deliver accurate, tailored insights to employees across all business functions.

AI factories become table stakes: AI use cases in the industry are exploding. They include improving identity verification for anti-money laundering and know-your-customer regulations, reducing false positives in transaction fraud detection and generating new trading strategies to improve market returns.
AI also is automating document management and reducing funding cycles to help consumers and businesses on their financial journeys. To capitalize on opportunities like these, financial institutions will build AI factories that use full-stack accelerated computing to maximize performance and utilization and to build AI-enabled applications that serve hundreds, if not thousands, of use cases, helping set themselves apart from the competition.

AI-assisted data governance: Due to the sensitive nature of financial data and stringent regulatory requirements, governance will be a priority for firms as they use data to create reliable and legal AI applications, including for fraud detection, predictions and forecasting, real-time calculations and customer service. Firms will use AI models to assist in the structure, control, orchestration, processing and utilization of financial data, making compliance with regulations and safeguarding of customer privacy smoother and less labor-intensive. AI will be the key to making sense of, and deriving actionable insights from, the industry's stockpile of underutilized, unstructured data.

Richard Kerris, Vice President of Media and Entertainment

Let AI entertain you: AI will continue to revolutionize entertainment with hyperpersonalized content on every screen, from TV shows to live sports. Using generative AI and advanced vision-language models, platforms will offer immersive experiences tailored to individual tastes, interests and moods. Imagine teaser images and sizzle reels crafted to capture the essence of a new show or live event and create an instant personal connection.

In live sports, AI will enhance accessibility and cultural relevance, providing language dubbing, tailored commentary and local adaptations. AI will also elevate binge-watching by adjusting pacing, quality and engagement options in real time to keep fans captivated.
This new level of interaction will transform streaming from a passive experience into an engaging journey that brings people closer to the action and to each other. AI-driven platforms will also foster meaningful connections with audiences by tailoring recommendations, trailers and content to individual preferences. AI's hyperpersonalization will allow viewers to discover hidden gems, reconnect with old favorites and feel seen. For the industry, AI will drive growth and innovation, introducing new business models and enabling global content strategies that celebrate unique viewer preferences, making entertainment feel boundless, engaging and personally crafted.

Ronnie Vasishta, Senior Vice President of Telecoms

The AI connection: Telecommunications providers will begin to deliver generative AI applications and 5G connectivity over the same network. AI radio access networks (AI-RAN) will enable telecom operators to transform traditional single-purpose base stations from cost centers into revenue-producing assets capable of providing AI inference services to devices, while more efficiently delivering the best network performance.

AI agents to the rescue: The telecommunications industry will be among the first to dial into agentic AI to perform key business functions. Telco operators will use AI agents for a wide variety of tasks, from suggesting money-saving plans to customers and troubleshooting network connectivity to answering billing questions and processing payments.

More efficient, higher-performing networks: AI also will be used at the wireless network layer to enhance efficiency, deliver site-specific learning and reduce power consumption.
Using AI as an intelligent performance-improvement tool, operators will be able to continuously observe network traffic, predict congestion patterns and make adjustments before failures happen, allowing for optimal network performance.

Answering the call on sovereign AI: Nations will increasingly turn to telcos, which have proven experience managing complex, distributed technology networks, to achieve their sovereign AI objectives. The trend will spread quickly across Europe and Asia, where telcos in Switzerland, Japan, Indonesia and Norway are already partnering with national leaders to build AI factories that can use proprietary, local data to help researchers, startups, businesses and government agencies create AI applications and services.

Xinzhou Wu, Vice President of Automotive

Pedal to the generative AI metal: Autonomous vehicles will become more capable as developers tap into advancements in generative AI. For example, harnessing foundation models, such as vision language models, provides an opportunity to use internet-scale knowledge to solve one of the hardest problems in the autonomous vehicle (AV) field: efficiently and safely reasoning through rare corner cases.

Simulation unlocks success: More broadly, new AI-based tools will enable breakthroughs in how AV development is carried out. For example, advances in generative simulation will enable the scalable creation of complex scenarios aimed at stress-testing vehicles for safety purposes. Beyond allowing testing of unusual or dangerous conditions, simulation is also essential for generating synthetic data to enable end-to-end model training.

Three-computer approach: New advances in AI will catalyze AV software development across the three key computers underpinning it: one for training the AI-based stack in the data center, another for simulation and validation, and a third in-vehicle computer that processes real-time sensor data for safe driving.
Together, these systems will enable continuous improvement of AV software for enhanced safety and performance of cars, trucks, robotaxis and beyond.

Marc Spieler, Senior Managing Director of Global Energy Industry

Welcoming the smart grid: Do you know when your home's electricity use peaks each day? You will soon, as utilities around the world embrace smart meters that use AI to manage their grid networks, from big power plants and substations and, now, into the home. As the smart grid takes shape, smart meters (once deemed too expensive to install in millions of homes) that combine software, sensors and accelerated computing will alert utilities when trees in a backyard brush up against power lines, or when to offer big rebates to buy back excess power stored through rooftop solar installations.

Powering up: Delivering the optimal power stack has always been mission-critical for the energy industry. In the era of generative AI, utilities will address this issue in ways that reduce environmental impact. Expect in 2025 to see a broader embrace of nuclear power as one clean-energy path the industry will take. Demand for natural gas also will grow as it replaces coal and other forms of energy. These resurgent forms of energy are being helped by the increased use of accelerated computing, simulation technology, AI and 3D visualization, which helps optimize design, pipeline flows and storage. We'll see the same happening at oil and gas companies, which are looking to reduce the impact of energy exploration and production.

Azita Martin, Vice President of Retail, Consumer Packaged Goods and Quick-Service Restaurants

Software-defined retail: Supercenters and grocery stores will become software-defined, each running computer vision and sophisticated AI algorithms at the edge.
The transition will accelerate checkout, optimize merchandising and reduce shrink, the industry term for product loss or theft. Each store will be connected to a headquarters AI network, using collective data to become a perpetual learning machine. Software-defined stores that continually learn from their own data will transform the shopping experience.

Intelligent supply chain: Intelligent supply chains created using digital twins, generative AI, machine learning and AI-based solvers will drive billions of dollars in labor productivity and operational efficiencies. Digital twin simulations of stores and distribution centers will optimize layouts to increase in-store sales and accelerate throughput in distribution centers. Agentic robots working alongside associates will load and unload trucks, stock shelves and pack customer orders. And last-mile delivery will be enhanced with AI-based routing-optimization solvers, allowing products to reach customers faster while reducing vehicle fuel costs.
  • BLOGS.NVIDIA.COM
GPU's Companion: NVIDIA App Supercharges RTX GPUs With AI-Powered Tools and Features
The NVIDIA app, officially releasing today, is a companion platform for content creators, GeForce gamers and AI enthusiasts using GeForce RTX GPUs.

Featuring a GPU control center, the NVIDIA app lets users access all their GPU settings in one place. From the app, users can do everything from updating to the latest drivers and configuring NVIDIA G-SYNC monitor settings, to tapping AI video enhancements through RTX Video and discovering exclusive AI-powered NVIDIA apps. In addition, NVIDIA RTX Remix has a new update that improves performance and streamlines workflows. For a deeper dive on gaming-exclusive benefits, check out the GeForce article.

The GPU's PC Companion

The NVIDIA app turbocharges GeForce RTX GPUs with a bevy of applications, features and tools.

Keep NVIDIA Studio Drivers up to date: The NVIDIA app automatically notifies users when the latest Studio Driver is available. These graphics drivers, fine-tuned in collaboration with developers, enhance performance in top creative applications and are tested extensively to deliver maximum stability. They're released once a month.

Discover AI creator apps: Millions have used the NVIDIA Broadcast app to turn offices and dorm rooms into home studios using AI-powered features that improve audio and video quality without the need for expensive, specialized equipment. It's user-friendly, works in virtually any app and includes AI features like Noise and Acoustic Echo Removal, Virtual Backgrounds, Eye Contact, Auto Frame, Vignettes and Video Noise Removal.

NVIDIA RTX Remix is a modding platform built on NVIDIA Omniverse that allows users to capture game assets, automatically enhance materials with generative AI tools and create stunning RTX remasters with full ray tracing, including DLSS 3.5 support featuring Ray Reconstruction. NVIDIA Canvas uses AI to turn simple brushstrokes into realistic landscape images.
Artists can create backgrounds quickly or speed up concept exploration, enabling them to visualize more ideas.

Enhance video streams with AI: The NVIDIA app includes a System tab as a one-stop destination for display, video and GPU options. It also includes an AI feature called RTX Video that enhances all videos streamed in browsers. RTX Video Super Resolution uses AI to enhance video streaming on GeForce RTX GPUs by removing compression artifacts and sharpening edges when upscaling. RTX Video HDR converts any standard-dynamic-range video into vibrant high dynamic range (HDR) when played in Google Chrome, Microsoft Edge, Mozilla Firefox or the VLC media player. HDR enables more vivid, dynamic colors to enhance gaming and content creation. A compatible HDR10 monitor is required.

Give game streams or video on demand a unique look with AI filters: Content creators looking to elevate their streamed or recorded gaming sessions can access the NVIDIA app's redesigned Overlay feature with AI-powered game filters. Freestyle RTX filters let livestreamers and content creators apply fun post-processing filters, changing the look and mood of content with tweaks to color and saturation.

Joining these Freestyle RTX game filters is RTX Dynamic Vibrance, which enhances visual clarity on a per-app basis. Colors pop more on screen, and color crushing is minimized to preserve image quality and immersion. The filter is accelerated by Tensor Cores on GeForce RTX GPUs, making it easier for viewers to enjoy all the action.

Freestyle RTX filters empower gamers to personalize the visual aesthetics of their favorite games through real-time post-processing filters.
This feature boasts compatibility with a vast library of more than 1,200 games. Download the NVIDIA app today.

RTX Remix 0.6 Release

The new RTX Remix update offers modders significantly improved mod performance, as well as quality-of-life improvements that streamline the mod-making process. RTX Remix now supports the ability to test experimental features under active development. It includes a new Stage Manager that makes it easier to see and change every mesh, texture, light or element in scenes in real time. To learn more about the RTX Remix 0.6 release, check out the release notes.

With RTX Remix in the NVIDIA app launcher, modders have direct access to Remix's powerful features. Through the NVIDIA app, RTX Remix modders can benefit from faster start-up times, lower CPU usage and direct control over updates with an optimized user interface.

To the 3D Victor Go the Spoils

NVIDIA Studio in June kicked off a 3D character contest for artists in collaboration with Reallusion, a company that develops 2D and 3D character creation and animation software. Today, we're celebrating the winners of that contest. In the category of Best Realistic Character Animation, Robert Lundqvist won for the piece Lisa and Fia. In the category of Best Stylized Character Animation, Loic Bramoulle won for the piece HellGal. Both winners will receive an NVIDIA Studio-validated laptop to help further their creative efforts. View over 250 imaginative and impressive entries here.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what's new and what's next by subscribing to the AI Decoded newsletter.
  • BLOGS.NVIDIA.COM
    Welcome to GeForce NOW Performance: Priority Members Get Instant Upgrade
This GFN Thursday, the GeForce NOW Priority membership is getting enhancements and a fresh name to go with them. The new Performance membership offers more GeForce-powered premium gaming at no change in the monthly membership cost.

Gamers having a hard time deciding between the Performance and Ultimate memberships can take both for a spin with a Day Pass, now 25% off for a limited time. Day Passes give access to 24 continuous hours of powerful cloud gaming. In addition, seven new games are available this week, joining the over 2,000 games in the GeForce NOW library.

Time for a Glow Up

The Performance membership keeps all the same great gaming benefits and now provides members with an enhanced streaming experience at no additional cost. Performance members can stream at up to 1440p, an increase from the previous 1080p resolution, and experience games in immersive, ultrawide resolutions. They can also save their in-game graphics settings across streaming sessions, including for NVIDIA RTX features in supported titles. All current Priority members are automatically upgraded to Performance and can take advantage of the upgraded streaming experience today.

Performance members will connect to GeForce RTX-powered gaming rigs for up to 1440p resolution. Ultimate members continue to receive the top streaming experience: connecting to GeForce RTX 4080-powered gaming rigs with up to 4K resolution and 120 frames per second, or 1080p and 240 fps in Competitive mode for games that support NVIDIA Reflex technology. Gamers playing on the free tier will now see they're streaming from basic rigs, with varying specs that offer entry-level cloud gaming and are optimized for capacity.

At the start of next year, GeForce NOW will roll out a 100-hour monthly playtime allowance to continue providing exceptional quality and speed, as well as shorter queue times, for Performance and Ultimate members.
This ample limit comfortably accommodates 94% of members, who typically use the service well within that allowance. Members can check how much time they've spent in the cloud through their account portal. Up to 15 hours of unused playtime will automatically roll over to the next month, and additional hours can be purchased at $2.99 for 15 Performance hours or $5.99 for 15 Ultimate hours.

Loyal Member Benefit

To thank the GFN community for joining the cloud gaming revolution, GeForce NOW is offering active paid members as of Dec. 31, 2024, the ability to continue with unlimited playtime for a full year, until January 2026. New members can lock in this benefit by signing up for GeForce NOW before Dec. 31, 2024. As long as a member's account remains uninterrupted and in good standing, they'll continue to receive unlimited playtime for all of 2025.

Don't Pass This Up

For those looking to try out the new premium benefits and all that the Performance and Ultimate memberships have to offer, Day Passes are 25% off for a limited time. Whether with the newly named Performance Day Pass at $2.99 or the Ultimate Day Pass at $5.99, members can unlock 24 hours of uninterrupted access to powerful NVIDIA GeForce RTX-powered cloud gaming servers. Another new GeForce NOW feature lets users apply the value of their most recently purchased Day Pass toward any monthly membership if they sign up within 48 hours of the Day Pass's completion.

Dive into a vast library of over 2,000 games with enhanced graphics, including NVIDIA RTX features like ray tracing and DLSS. With the Ultimate Day Pass, get a taste of GeForce NOW's highest-performing membership tier and enjoy up to 4K resolution at 120 fps, or 1080p at 240 fps, on nearly any device.
It's an ideal way to experience elevated GeForce gaming in the cloud.

Thrilling New Games

Members can look for the following games available to stream in the cloud this week:

Planet Coaster 2 (New release on Steam, Nov. 6)
Teenage Mutant Ninja Turtles: Splintered Fate (New release on Steam, Nov. 6)
Empire of the Ants (New release on Steam, Nov. 7)
Unrailed 2: Back on Track (New release on Steam, Nov. 7)
TCG Card Shop Simulator (Steam)
StarCraft II (Xbox, available on PC Game Pass, Nov. 5. Members need to enable access.)
StarCraft Remastered (Xbox, available on PC Game Pass, Nov. 5. Members need to enable access.)

What are you planning to play this weekend? Let us know on X or in the comments below.
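The playtime policy described above (a 100-hour monthly allowance with up to 15 unused hours rolling over) is simple enough to model. The function below is an illustrative sketch of the stated rules, not GeForce NOW's actual billing logic:

```python
# Illustrative model of the stated playtime rules: a 100-hour monthly
# allowance, with up to 15 unused hours rolling over to the next month.
# This is a sketch based on the article, not GeForce NOW's billing code.

MONTHLY_ALLOWANCE = 100
MAX_ROLLOVER = 15

def next_month_hours(hours_played: int, prior_allowance: int = MONTHLY_ALLOWANCE) -> int:
    """Hours available next month, given hours played this month."""
    unused = max(prior_allowance - hours_played, 0)
    rollover = min(unused, MAX_ROLLOVER)
    return MONTHLY_ALLOWANCE + rollover

print(next_month_hours(90))   # 10 unused hours roll over -> 110
print(next_month_hours(60))   # rollover capped at 15 -> 115
```

A member who plays 90 hours one month would thus start the next month with 110 hours available, which is how the rollover keeps occasional heavy months from feeling like a hard cap.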
  • BLOGS.NVIDIA.COM
Jensen Huang to Discuss AI's Future With Masayoshi Son at AI Summit Japan
NVIDIA founder and CEO Jensen Huang will join SoftBank Group Chairman and CEO Masayoshi Son in a fireside chat at NVIDIA AI Summit Japan to discuss the transformative role of AI and more.

Taking place Nov. 12-13, the invite-only event at The Prince Park Tower in Tokyo's Minato district will gather industry leaders to explore advancements in generative AI, robotics and industrial digitalization. Tickets for the event are sold out, but you can tune in via livestream or watch on-demand sessions.

Over 50 sessions and live demos will showcase innovations from NVIDIA and its partners, covering everything from large language models (LLMs) to AI-powered robotics and digital twins. Huang and Son will discuss AI's transformative role and the efforts driving the field forward. Son has invested in companies around the world that show potential for AI-driven growth through SoftBank Vision Funds. Huang has steered NVIDIA's rise to a global leader in AI and accelerated computing.

One major topic: Japan's AI infrastructure initiative, supported by NVIDIA and local firms. This investment is central to the country's AI ambitions. Leaders from METI and experts like Shunsuke Aoki of Turing Inc. will dig into how sovereign AI fosters innovation and strengthens Japan's technological independence.

On Wednesday, Nov. 13, two key sessions will offer deeper insights into Japan's AI journey:

The Present and Future of Generative AI in Japan: Professor Yutaka Matsuo of the University of Tokyo will explore the advances of generative AI and its impact on policy and business strategy. Expect discussions on the opportunities and challenges Japan faces as it pushes forward with AI innovation.

Sovereign AI and Its Role in Japan's Future: A panel of four experts will dive into the concept of sovereign AI.
Speakers including Takuya Watanabe of METI and Hironobu Tamba of SoftBank will discuss how sovereign AI can accelerate business strategies and strengthen Japan's technological independence. These sessions highlight how Japan is positioning itself at the forefront of AI development, with practical insights into the next wave of AI innovation and policy on the agenda.

Experts from Sakana AI, Sony, Tokyo Science University and Yaskawa Electric will be among those presenting breakthroughs across sectors like healthcare, robotics and data centers. The summit will also feature hands-on workshops, including a full-day session on Tuesday, Nov. 12, titled Building RAG Agents With LLM. Led by NVIDIA experts, this workshop will offer practical experience in developing retrieval-augmented generation (RAG) agents using large language models.

With its mix of forward-looking discussions and real-world applications, NVIDIA AI Summit Japan will highlight the country's ongoing advancements in AI and its contributions to the global AI landscape. Tune in to the fireside chat between Son and Huang via livestream or watch on-demand sessions.
  • BLOGS.NVIDIA.COM
    Get Plugged In: How to Use Generative AI Tools in Obsidian
Editor's note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.

As generative AI evolves and accelerates across industries, a community of AI enthusiasts is experimenting with ways to integrate the powerful technology into common productivity workflows. Applications that support community plug-ins give users the power to explore how large language models (LLMs) can enhance a variety of workflows. By using local inference servers powered by the NVIDIA RTX-accelerated llama.cpp software library, users on RTX AI PCs can integrate local LLMs with ease.

Previously, we looked at how users can take advantage of Leo AI in the Brave web browser to optimize the web browsing experience. Today, we look at Obsidian, a popular writing and note-taking application based on the Markdown markup language that's useful for keeping complex, linked records across multiple projects. The app supports community-developed plug-ins that add functionality, including several that let users connect Obsidian to a local inferencing server like Ollama or LM Studio.

Connecting Obsidian to LM Studio requires only enabling the local server functionality in LM Studio: click the Developer icon on the left panel, load any downloaded model, enable the CORS toggle and click Start. Take note of the chat completion URL from the Developer log console (http://localhost:1234/v1/chat/completions by default), as the plug-ins need this information to connect.

Next, launch Obsidian and open the Settings panel. Click Community plug-ins and then Browse.
There are several community plug-ins related to LLMs, but two popular options are Text Generator and Smart Connections. Text Generator is helpful for generating content in an Obsidian vault, like notes and summaries on a research topic. Smart Connections is useful for asking questions about the contents of an Obsidian vault, such as the answer to an obscure trivia question saved years ago.

Each plug-in has its own way of entering the LM Studio server URL. For Text Generator, open the settings, select Custom for Provider profile and paste the whole URL into the Endpoint field. For Smart Connections, configure the settings after starting the plug-in: in the settings panel on the right side of the interface, select Custom Local (OpenAI Format) for the model platform, then enter the URL and the model name (e.g., gemma-2-27b-instruct) into their respective fields as they appear in LM Studio.

Once the fields are filled in, the plug-ins will work. The LM Studio user interface will also show logged activity if users are curious about what's happening on the local server side.

Transforming Workflows With Obsidian AI Plug-Ins

Both the Text Generator and Smart Connections plug-ins use generative AI in compelling ways. For example, imagine a user wants to plan a vacation to the fictitious destination of Lunar City and brainstorm ideas for what to do there. The user would start a new note titled "What to Do in Lunar City." Since Lunar City is not a real place, the query sent to the LLM will need a few extra instructions to guide the responses. Click the Text Generator plug-in icon, and the model will generate a list of activities to do during the trip.

Obsidian, via the Text Generator plug-in, will ask LM Studio to generate a response, and LM Studio will in turn run the Gemma 2 27B model.
With RTX GPU acceleration in the user's computer, the model can quickly generate a list of things to do.

The Text Generator community plug-in in Obsidian enables users to connect to an LLM in LM Studio and generate notes for an imaginary vacation.

Or, suppose many years later the user's friend is going to Lunar City and wants to know where to eat. The user may not remember the names of the places where they ate, but they can check the notes in their vault (Obsidian's term for a collection of notes) in case they'd written something down.

Rather than looking through all of the notes manually, the user can use the Smart Connections plug-in to ask questions about their vault of notes and other content. The plug-in uses the same LM Studio server to respond to the request, providing relevant information it finds in the user's notes to assist the process. It does this using a technique called retrieval-augmented generation.

The Smart Connections community plug-in in Obsidian uses retrieval-augmented generation and a connection to LM Studio to enable users to query their notes.

These are fun examples, but after spending some time with these capabilities, users can see the real benefits and improvements for everyday productivity. Obsidian plug-ins are just two ways in which community developers and AI enthusiasts are embracing AI to supercharge their PC experiences.

NVIDIA GeForce RTX technology for Windows PCs can run thousands of open-source models for developers to integrate into their Windows apps.

Learn more about the power of LLMs, Text Generator and Smart Connections by integrating Obsidian into your workflow, and play with the accelerated experience available on RTX AI PCs. Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds.
Make sense of what's new and what's next by subscribing to the AI Decoded newsletter.
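The retrieval-augmented generation step that Smart Connections performs can be sketched in a few lines. This toy version scores notes by word overlap with the question and stuffs the best matches into the prompt; real plug-ins use dense vector embeddings instead, but the shape of the pipeline is the same.

```python
# Minimal RAG sketch: retrieve relevant notes, then augment the prompt.
# Word-overlap scoring stands in for the embedding search a real
# plug-in like Smart Connections would use.

def score(query, note):
    """Crude relevance score: number of query words appearing in the note."""
    return len(set(query.lower().split()) & set(note.lower().split()))

def retrieve(query, notes, k=2):
    """Return the k most relevant notes for the query."""
    return sorted(notes, key=lambda n: score(query, n), reverse=True)[:k]

def build_prompt(query, notes):
    """Augment the user question with retrieved note text for the LLM."""
    context = "\n".join(retrieve(query, notes))
    return f"Answer using these notes:\n{context}\n\nQuestion: {query}"

vault = [
    "Lunar City trip: ate at the Crater Grill, loved the dumplings.",
    "Meeting notes: discuss llama.cpp acceleration on RTX GPUs.",
    "Lunar City day 2: visited the observatory and night market.",
]
prompt = build_prompt("Where did we eat in Lunar City", vault)
```

The assembled `prompt` carries the two Lunar City notes but not the unrelated meeting note, so the LLM can answer from the user's own records.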
  • BLOGS.NVIDIA.COM
    Hugging Face and NVIDIA to Accelerate Open-Source AI Robotics Research and Development
At the Conference on Robot Learning (CoRL) in Munich, Germany, Hugging Face and NVIDIA announced a collaboration to accelerate robotics research and development by bringing together their open-source robotics communities.

Hugging Face's LeRobot open AI platform, combined with NVIDIA AI, Omniverse and Isaac robotics technology, will enable researchers and developers to drive advances across a wide range of industries, including manufacturing, healthcare and logistics.

Open-Source Robotics for the Era of Physical AI

The era of physical AI, in which robots understand the physical properties of their environments, is here, and it's rapidly transforming the world's industries.

To drive and sustain this rapid innovation, robotics researchers and developers need access to open-source, extensible frameworks that span the development process of robot training, simulation and inference. With models, datasets and workflows released under shared frameworks, the latest advances are readily available for use without the need to recreate code.

Hugging Face's leading open AI platform serves more than 5 million machine learning researchers and developers, offering tools and resources to streamline AI development. Hugging Face users can access and fine-tune the latest pretrained models and build AI pipelines on common APIs, with over 1.5 million models, datasets and applications freely accessible on the Hugging Face Hub.

LeRobot, developed by Hugging Face, extends the successful paradigms from its Transformers and Diffusers libraries into the robotics domain. LeRobot offers a comprehensive suite of tools for sharing data collection, model training and simulation environments, along with designs for low-cost manipulator kits.

NVIDIA's AI technology, simulation and modular open-source robot learning frameworks such as NVIDIA Isaac Lab can accelerate LeRobot's data collection, training and verification workflow.
Researchers and developers can share their models and datasets built with LeRobot and Isaac Lab, creating a data flywheel for the robotics community.

Scaling Robot Development With Simulation

Developing physical AI is challenging. Unlike language models that use extensive internet text data, physics-based robotics relies on physical interaction data along with vision sensors, which is harder to gather at scale. Collecting real-world robot data for dexterous manipulation across a large number of tasks and environments is time-consuming and labor-intensive.

Making this easier, Isaac Lab, built on NVIDIA Isaac Sim, enables robot training by demonstration or trial and error in simulation, using high-fidelity rendering and physics simulation to create realistic synthetic environments and data. By combining GPU-accelerated physics simulations and parallel environment execution, Isaac Lab can generate vast amounts of training data, equivalent to thousands of real-world experiences, from a single demonstration.

Generated motion data is then used to train a policy with imitation learning. After successful training and validation in simulation, the policies are deployed on a real robot, where they are further tested and tuned to achieve optimal performance.

This iterative process leverages the accuracy of real-world data and the scalability of simulated synthetic data, ensuring robust and reliable robotic systems. By sharing these datasets, policies and models on Hugging Face, a robot data flywheel is created that enables developers and researchers to build upon each other's work, accelerating progress in the field.

"The robotics community thrives when we build together," said Animesh Garg, assistant professor at Georgia Tech.
"By embracing open-source frameworks such as Hugging Face's LeRobot and NVIDIA Isaac Lab, we accelerate the pace of research and innovation in AI-powered robotics."

Fostering Collaboration and Community Engagement

The planned collaborative workflow involves collecting data through teleoperation and simulation in Isaac Lab and storing it in the standard LeRobotDataset format. Data generated using GR00T-Mimic will then be used to train a robot policy with imitation learning, which is subsequently evaluated in simulation. Finally, the validated policy is deployed on real-world robots with NVIDIA Jetson for real-time inference.

The initial steps in this collaboration have already been taken: the teams have shown a physical picking setup with LeRobot software running on NVIDIA Jetson Orin Nano, providing a powerful, compact compute platform for deployment.

"Combining Hugging Face's open-source community with NVIDIA's hardware and Isaac Lab simulation has the potential to accelerate innovation in AI for robotics," said Remi Cadene, principal research scientist at LeRobot.

This work builds on NVIDIA's community contributions in generative AI at the edge, supporting the latest open models and libraries such as Hugging Face Transformers, and optimizing inference for large language models (LLMs), small language models (SLMs) and multimodal vision-language models (VLMs), along with action-based variants known as vision-language-action models (VLAs), diffusion policies and speech models, all with strong, community-driven support.

Together, Hugging Face and NVIDIA aim to accelerate the work of the global ecosystem of robotics researchers and developers transforming industries ranging from transportation to manufacturing and logistics.

Learn about NVIDIA's robotics research papers at CoRL, including VLM integration for better environmental understanding, temporal navigation and long-horizon planning. Check out workshops at CoRL with NVIDIA researchers.
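The imitation-learning step in the workflow above, fitting a policy to demonstration data and then running it, can be illustrated without any robotics framework. This is only a conceptual sketch: Isaac Lab and LeRobot train neural policies on rich sensor data, whereas here a one-dimensional linear policy is fit to toy demonstrations by ordinary least squares so the idea stays visible.

```python
# Behavior cloning in miniature: fit a policy to expert demonstrations,
# then evaluate it on an unseen state. A 1D linear policy stands in for
# the neural policies real frameworks train.

def fit_linear_policy(states, actions):
    """Least-squares fit of action = w * state + b to demonstrations."""
    n = len(states)
    mean_s = sum(states) / n
    mean_a = sum(actions) / n
    cov = sum((s - mean_s) * (a - mean_a) for s, a in zip(states, actions))
    var = sum((s - mean_s) ** 2 for s in states)
    w = cov / var
    b = mean_a - w * mean_s
    return w, b

# Toy demonstrations: an expert pushes a joint toward position 0,
# so the demonstrated action is -0.5 * position.
demo_states = [-2.0, -1.0, 0.0, 1.0, 2.0]
demo_actions = [1.0, 0.5, 0.0, -0.5, -1.0]

w, b = fit_linear_policy(demo_states, demo_actions)
policy = lambda s: w * s + b

# The learned policy generalizes to a state not in the demonstrations.
print(round(policy(4.0), 3))  # → -2.0
```

After training in simulation, such a policy would then be validated and deployed on hardware, which is the loop the article describes at larger scale.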
  • BLOGS.NVIDIA.COM
    NVIDIA Advances Robot Learning and Humanoid Development With New AI and Simulation Tools
Robotics developers can greatly accelerate their work on AI-enabled robots, including humanoids, using new AI and simulation tools and workflows that NVIDIA revealed this week at the Conference on Robot Learning (CoRL) in Munich, Germany.

The lineup includes the general availability of the NVIDIA Isaac Lab robot learning framework; six new humanoid robot learning workflows for Project GR00T, an initiative to accelerate humanoid robot development; and new world-model development tools for video data curation and processing, including the NVIDIA Cosmos tokenizer and NVIDIA NeMo Curator for video processing.

The open-source Cosmos tokenizer provides robotics developers superior visual tokenization by breaking down images and videos into high-quality tokens with exceptionally high compression rates. It runs up to 12x faster than current tokenizers, while NeMo Curator provides video processing curation up to 7x faster than unoptimized pipelines.

Also timed with CoRL, NVIDIA presented 23 papers and nine workshops related to robot learning and released training and workflow guides for developers. Further, Hugging Face and NVIDIA announced they're collaborating to accelerate open-source robotics research with LeRobot, NVIDIA Isaac Lab and NVIDIA Jetson for the developer community.

Accelerating Robot Development With Isaac Lab

NVIDIA Isaac Lab is an open-source robot learning framework built on NVIDIA Omniverse, a platform for developing OpenUSD applications for industrial digitalization and physical AI simulation. Developers can use Isaac Lab to train robot policies at scale.
This open-source unified robot learning framework applies to any embodiment, from humanoids to quadrupeds to collaborative robots, to handle increasingly complex movements and interactions.

Leading commercial robot makers, robotics application developers and robotics research entities around the world are adopting Isaac Lab, including 1X, Agility Robotics, The AI Institute, Berkeley Humanoid, Boston Dynamics, Field AI, Fourier, Galbot, Mentee Robotics, Skild AI, Swiss-Mile, Unitree Robotics and XPENG Robotics.

Project GR00T: Foundations for General-Purpose Humanoid Robots

Building advanced humanoids is extremely difficult, demanding multilayer technological and interdisciplinary approaches to make the robots perceive, move and learn skills effectively for human-robot and robot-environment interactions.

Project GR00T is an initiative to develop accelerated libraries, foundation models and data pipelines to accelerate the global humanoid robot developer ecosystem.

Six new Project GR00T workflows provide humanoid developers with blueprints to realize the most challenging humanoid robot capabilities. They include:

GR00T-Gen for building generative AI-powered, OpenUSD-based 3D environments
GR00T-Mimic for robot motion and trajectory generation
GR00T-Dexterity for robot dexterous manipulation
GR00T-Control for whole-body control
GR00T-Mobility for robot locomotion and navigation
GR00T-Perception for multimodal sensing

"Humanoid robots are the next wave of embodied AI," said Jim Fan, senior research manager of embodied AI at NVIDIA. "NVIDIA research and engineering teams are collaborating across the company and our developer ecosystem to build Project GR00T to help advance the progress and development of global humanoid robot developers."

New Development Tools for World Model Builders

Today, robot developers are building world models, AI representations of the world that can predict how objects and environments respond to a robot's actions.
Building these world models is incredibly compute- and data-intensive, with models requiring thousands of hours of real-world, curated image or video data.

NVIDIA Cosmos tokenizers provide efficient, high-quality encoding and decoding to simplify the development of these world models. They set a new standard of minimal distortion and temporal instability, enabling high-quality video and image reconstructions.

Providing high-quality compression and up to 12x faster visual reconstruction, the Cosmos tokenizer paves the path for scalable, robust and efficient development of generative applications across a broad spectrum of visual domains.

1X, a humanoid robot company, has updated the 1X World Model Challenge dataset to use the Cosmos tokenizer.

"NVIDIA Cosmos tokenizer achieves really high temporal and spatial compression of our data while still retaining visual fidelity," said Eric Jang, vice president of AI at 1X Technologies. "This allows us to train world models with long-horizon video generation in an even more compute-efficient manner."

Other humanoid and general-purpose robot developers, including XPENG Robotics and Hillbot, are developing with the NVIDIA Cosmos tokenizer to manage high-resolution images and videos.

NeMo Curator now includes a video processing pipeline. This enables robot developers to improve their world-model accuracy by processing large-scale text, image and video data.

Curating video data poses challenges due to its massive size, requiring scalable pipelines and efficient orchestration for load balancing across GPUs. Additionally, models for filtering, captioning and embedding need optimization to maximize throughput.

NeMo Curator overcomes these challenges by streamlining data curation with automatic pipeline orchestration, significantly reducing processing time. It supports linear scaling across multi-node, multi-GPU systems, efficiently handling over 100 petabytes of data.
This simplifies AI development, reduces costs and accelerates time to market.

Advancing the Robot Learning Community at CoRL

The nearly two dozen research papers the NVIDIA robotics team released at CoRL cover breakthroughs in integrating vision language models for improved environmental understanding and task execution, temporal robot navigation, developing long-horizon planning strategies for complex multistep tasks and using human demonstrations for skill acquisition.

Groundbreaking papers for humanoid robot control and synthetic data generation include SkillGen, a system based on synthetic data generation for training robots with minimal human demonstrations, and HOVER, a robot foundation model for controlling humanoid robot locomotion and manipulation.

NVIDIA researchers will also be participating in nine workshops at the conference. Learn more about the full schedule of events.

Availability

NVIDIA Isaac Lab 1.2 is available now and is open source on GitHub. NVIDIA Cosmos tokenizer is available now on GitHub and Hugging Face. NeMo Curator for video processing will be available at the end of the month.

The new NVIDIA Project GR00T workflows are coming soon to help robot companies build humanoid robot capabilities with greater ease. Read more about the workflows on the NVIDIA Technical Blog.

Researchers and developers learning to use Isaac Lab can now access developer guides and tutorials, including an Isaac Gym to Isaac Lab migration guide.

Discover the latest in robot learning and simulation in an upcoming OpenUSD insider livestream on robot simulation and learning on Nov. 13, and attend the NVIDIA Isaac Lab office hours for hands-on support and insights.

Developers can apply to join the NVIDIA Humanoid Robot Developer Program.
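What a visual tokenizer's "compression rate" means can be made concrete with simple arithmetic: it is the ratio of raw pixel values going in to discrete tokens coming out. The patch sizes below are purely illustrative, not Cosmos's actual configuration.

```python
# Back-of-envelope compression rate of a spatiotemporal video tokenizer:
# a clip is split into pt x ph x pw patches, one token per patch.
# The numbers here are hypothetical, for illustration only.

def token_count(frames, height, width, pt, ph, pw):
    """Tokens produced for a video split into pt x ph x pw patches."""
    return (frames // pt) * (height // ph) * (width // pw)

def compression_rate(frames, height, width, pt, ph, pw):
    """Raw pixel count per token (channels ignored for simplicity)."""
    pixels = frames * height * width
    return pixels / token_count(frames, height, width, pt, ph, pw)

# A 32-frame 512x512 clip with hypothetical 8x16x16 patches:
rate = compression_rate(32, 512, 512, 8, 16, 16)
print(rate)  # → 2048.0
```

Higher compression rates mean a world model sees far shorter token sequences per clip, which is why tokenizer efficiency matters so much for training cost.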
  • BLOGS.NVIDIA.COM
    Austin Calling: As Texas Absorbs Influx of Residents, Rekor Taps NVIDIA Technology for Roadway Safety, Traffic Relief
Austin is drawing people to jobs, music venues, comedy clubs, barbecue and more. But with this boom has come a big-city blues: traffic jams.

Rekor, which offers traffic management and public safety analytics, has a front-row seat to the increasing traffic from an influx of new residents migrating to Austin. Rekor works with the Texas Department of Transportation, which has a $7 billion project addressing this, to help mitigate the roadway concerns.

"Texas has been trying to meet that growth and demand on the roadways by investing a lot in infrastructure, and they're focusing a lot on digital infrastructure," said Shervin Esfahani, vice president of global marketing and communications at Rekor. "It's super complex, and they realized their traditional systems were unable to really manage and understand it in real time."

Rekor, based in Columbia, Maryland, has been harnessing NVIDIA Metropolis for real-time video understanding and NVIDIA Jetson Xavier NX modules for edge AI in Texas, Florida, Philadelphia, Georgia, Nevada, Oklahoma and many more U.S. destinations, as well as in Israel and other places internationally.

Metropolis is an application framework for smart infrastructure development with vision AI. It provides developer tools, including the NVIDIA DeepStream SDK, NVIDIA TAO Toolkit, pretrained models on the NVIDIA NGC catalog and NVIDIA TensorRT. NVIDIA Jetson is a compact, powerful and energy-efficient accelerated computing platform used for embedded and robotics applications.

Rekor's efforts in Texas and Philadelphia to help better manage roads with AI are the latest development in an ongoing story for traffic safety and traffic management.

Reducing Rubbernecking, Pileups, Fatalities and Jams

Rekor offers two main products: Rekor Command and Rekor Discover. Command is an AI-driven platform for traffic management centers, providing rapid identification of traffic events and zones of concern.
It gives departments of transportation real-time situational awareness and alerts that allow them to keep city roadways safer and less congested.

Discover taps into Rekor's edge system to fully automate the capture of comprehensive traffic and vehicle data, and provides robust traffic analytics that turn roadway data into measurable, reliable traffic knowledge. With Rekor Discover, departments of transportation can see a full picture of how vehicles move on roadways and the impact they make, allowing them to better organize and execute their future city-building initiatives.

The company has deployed Command across Austin to help detect issues, analyze incidents and respond to roadway activity with a real-time view.

"For every minute an incident happens and stays on the road, it creates four minutes of traffic, which puts a strain on the road, and the likelihood of a secondary incident, like an accident from rubbernecking, massively goes up," said Paul-Mathew Zamsky, vice president of strategic growth and partnerships at Rekor. "Austin deployed Rekor Command and saw a 159% increase in incident detections, and they were able to respond eight and a half minutes faster to those incidents."

Rekor Command takes in many feeds of data, like traffic camera footage, weather, connected car info and construction updates, and taps into any other data infrastructure, as well as third-party data. It then uses AI to make connections and surface anomalies, like a roadside incident. That information is presented in workflows to traffic management centers for review, confirmation and response.

"They look at it and respond to it, and they are doing it faster than ever before," said Esfahani.
"It helps save lives on the road, and it also helps people's quality of life, helps them get home faster and stay out of traffic, and it reduces the strain on the system in the city of Austin."

In addition to adopting NVIDIA's full-stack accelerated computing for roadway intelligence, Rekor is going all in on NVIDIA AI and NVIDIA AI Blueprints, which are reference workflows for generative AI use cases built with NVIDIA NIM microservices as part of the NVIDIA AI Enterprise software platform. NVIDIA NIM is a set of easy-to-use inference microservices for accelerating deployments of foundation models on any cloud or data center while keeping data secure.

Rekor has multiple large language models and vision language models running on NVIDIA Triton Inference Server in production, according to Shai Maron, senior vice president of global software and data engineering at Rekor.

"Internally, we'll use it for data annotation, and it will help us optimize different aspects of our day to day," he said. "LLMs externally will help us calibrate our cameras in a much more efficient way and configure them."

Rekor is using the NVIDIA AI Blueprint for video search and summarization to build AI agents for city services, particularly in areas such as traffic management, public safety and optimization of city infrastructure. NVIDIA recently announced a new AI Blueprint for video search and summarization enabling a range of interactive visual AI agents that extract complex activities from massive volumes of live or archived video.

Philadelphia Monitors Roads, EV Charger Needs, Pollution

The Philadelphia Navy Yard is a tourism hub run by the Philadelphia Industrial Development Corporation (PIDC), which has some challenges in road management and gathering data on new developments for the popular area.
The Navy Yard location, occupying 1,200 acres, has more than 150 companies and 15,000 employees, but a $6 billion redevelopment plan there promises to bring in 12,000-plus new jobs and thousands more residents to the area.

PIDC sought greater visibility into the effects of road closures and construction projects on mobility, and how to improve mobility during significant projects and events. PIDC also looked to strengthen the Navy Yard's ability to understand the volume and traffic flow of car carriers and other large vehicles, and to quantify the impact of speed-mitigating devices deployed across hazardous stretches of roadway.

Discover provided PIDC insights into additional infrastructure projects that need to be deployed to manage any changes in traffic.

By pulling insights from Rekor's edge systems, built with NVIDIA Jetson Xavier NX modules for powerful edge processing and AI, Rekor Discover lets the Navy Yard understand the number of electric vehicles (EVs) and where they're entering and leaving, allowing PIDC to better plan potential sites for EV charging station deployment in the future.

Rekor Discover enabled PIDC planners to create a hotspot map of EV traffic by looking at data provided by the AI platform. The solution relies on real-time traffic analysis using NVIDIA's DeepStream data pipeline and Jetson. Additionally, it uses NVIDIA Triton Inference Server to enhance LLM capabilities.

PIDC also wanted to address public safety issues related to speeding and collisions, as well as decrease property damage.
Using speed insights, it's deploying traffic-calming measures where average speeds exceed what's ideal on certain segments of roadway.

NVIDIA Jetson Xavier NX to Monitor Pollution in Real Time

Traditionally, urban planners can look at satellite imagery to try to understand pollution locations, but Rekor's vehicle recognition models, running on NVIDIA Jetson Xavier NX modules, were able to track it to the sources, taking it a step further toward mitigation.

"It's about air quality," said Shobhit Jain, senior vice president of product management at Rekor. "We've built models to be really good at that. They can know how much pollution each vehicle is putting out."

Looking ahead, Rekor is examining how NVIDIA Omniverse might be used for digital twin development to simulate traffic mitigation with different strategies. Omniverse is a platform for developing OpenUSD applications for industrial digitalization and generative physical AI.

Developing digital twins with Omniverse for municipalities has enormous implications for reducing traffic, pollution and road fatalities, all areas Rekor sees as hugely beneficial to its customers.

"Our data models are granular, and we're definitely exploring Omniverse," said Jain. "We'd like to see how we can support those digital use cases."

Learn about the NVIDIA AI Blueprint for building AI agents for video search and summarization.
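The "hotspot map" idea mentioned above, aggregating vehicle detections into areas of high activity, reduces to binning coordinates into grid cells and counting. The coordinates and cell size below are made up for illustration; a real deployment would aggregate detections streamed from edge devices.

```python
# Sketch of hotspot mapping: snap detections to a coarse grid and count
# events per cell. Coordinates and cell size are illustrative only.
from collections import Counter

def to_cell(lat, lon, cell_deg=0.01):
    """Snap a coordinate to its grid cell (integer cell indices)."""
    return (int(lat // cell_deg), int(lon // cell_deg))

def hotspot_map(detections, cell_deg=0.01):
    """Count detections per grid cell."""
    return Counter(to_cell(lat, lon, cell_deg) for lat, lon in detections)

# Toy EV detections: three near one gate, one elsewhere.
ev_detections = [
    (39.8890, -75.1710),
    (39.8890, -75.1720),
    (39.8895, -75.1712),
    (39.9010, -75.1800),
]
cells = hotspot_map(ev_detections)
hotspot, count = cells.most_common(1)[0]  # the busiest cell and its count
```

Ranking cells by count is enough to suggest where activity concentrates, which is the kind of signal a planner would use when siting EV chargers.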
  • BLOGS.NVIDIA.COM
    Give AI a Look: Any Industry Can Now Search and Summarize Vast Volumes of Visual Data
Enterprises and public sector organizations around the world are developing AI agents to boost the capabilities of workforces that rely on visual information from a growing number of devices, including cameras, IoT sensors and vehicles.

To support their work, a new NVIDIA AI Blueprint for video search and summarization will enable developers in virtually any industry to build visual AI agents that analyze video and image content. These agents can answer user questions, generate summaries and enable alerts for specific scenarios.

Part of NVIDIA Metropolis, a set of developer tools for building vision AI applications, the blueprint is a customizable workflow that combines NVIDIA computer vision and generative AI technologies.

Global systems integrators and technology solutions providers including Accenture, Dell Technologies and Lenovo are bringing the NVIDIA AI Blueprint for video search and summarization to businesses and cities worldwide, jump-starting the next wave of AI applications that can be deployed to boost productivity and safety in factories, warehouses, shops, airports, traffic intersections and more.

Announced ahead of the Smart City Expo World Congress, the NVIDIA AI Blueprint gives visual computing developers a full suite of optimized software for building and deploying generative AI-powered agents that can ingest and understand massive volumes of live video streams or data archives.

Users can customize these visual AI agents with natural language prompts instead of rigid software code, lowering the barrier to deploying virtual assistants across industries and smart city applications.

NVIDIA AI Blueprint Harnesses Vision Language Models

Visual AI agents are powered by vision language models (VLMs), a class of generative AI models that combine computer vision and language understanding to interpret the physical world and perform reasoning tasks.

The NVIDIA AI Blueprint for video search and summarization can be configured with NVIDIA NIM microservices for
VLMs like NVIDIA VILA, LLMs like Meta's Llama 3.1 405B, and AI models for GPU-accelerated question answering and context-aware retrieval-augmented generation. Developers can easily swap in other VLMs, LLMs and graph databases, and fine-tune them using the NVIDIA NeMo platform for their unique environments and use cases.

Adopting the NVIDIA AI Blueprint could save developers months of effort investigating and optimizing generative AI models for smart city applications. Deployed on NVIDIA GPUs at the edge, on premises or in the cloud, it can vastly accelerate the process of combing through video archives to identify key moments.

In a warehouse environment, an AI agent built with this workflow could alert workers if safety protocols are breached. At busy intersections, an AI agent could identify traffic collisions and generate reports to aid emergency response efforts. And in the field of public infrastructure, maintenance workers could ask AI agents to review aerial footage and identify degrading roads, train tracks or bridges to support proactive maintenance.

Beyond smart spaces, visual AI agents could also be used to summarize videos for people with impaired vision, automatically generate recaps of sporting events and help label massive visual datasets to train other AI models.

The video search and summarization workflow joins a collection of NVIDIA AI Blueprints that make it easy to create AI-powered digital avatars, build virtual assistants for personalized customer service and extract enterprise insights from PDF data.

NVIDIA AI Blueprints are free for developers to experience and download, and can be deployed in production across accelerated data centers and clouds with NVIDIA AI Enterprise, an end-to-end software platform that accelerates data science pipelines and streamlines generative AI development and deployment.

AI Agents to Deliver Insights From Warehouses to World Capitals

Enterprise and public sector customers can also harness the full collection of NVIDIA
AI Blueprints with the help of NVIDIA's partner ecosystem.

Global professional services company Accenture has integrated NVIDIA AI Blueprints into its Accenture AI Refinery, which is built on NVIDIA AI Foundry and enables customers to develop custom AI models trained on enterprise data.

Global systems integrators in Southeast Asia, including ITMAX in Malaysia and FPT in Vietnam, are building AI agents based on the video search and summarization NVIDIA AI Blueprint for smart city and intelligent transportation applications.

Developers can also build and deploy NVIDIA AI Blueprints on NVIDIA AI platforms with compute, networking and software provided by global server manufacturers.

Dell will use VLM and agent approaches with Dell's NativeEdge platform to enhance existing edge AI applications and create new edge AI-enabled capabilities. Dell Reference Designs for the Dell AI Factory with NVIDIA and the NVIDIA AI Blueprint for video search and summarization will support VLM capabilities in dedicated AI workflows for data center, edge and on-premises multimodal enterprise use cases.

NVIDIA AI Blueprints are also incorporated in Lenovo Hybrid AI solutions powered by NVIDIA.

Companies like K2K, a smart city application provider in the NVIDIA Metropolis ecosystem, will use the new NVIDIA AI Blueprint to build AI agents that analyze live traffic cameras in real time. This will enable city officials to ask questions about street activity and receive recommendations on ways to improve operations. The company is also working with city traffic managers in Palermo, Italy, to deploy visual AI agents using NIM microservices and NVIDIA AI Blueprints.

Discover more about the NVIDIA AI Blueprint for video search and summarization by visiting the NVIDIA booth at the Smart City Expo World Congress, taking place in Barcelona through Nov. 7.

Learn how to build a visual AI agent and get started with the blueprint.
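The summarization half of such a visual agent follows a simple shape: sample frames from a stream, caption each with a VLM, then hand the captions to an LLM for a summary. The sketch below stubs out the model calls so the pipeline itself is visible; in the actual blueprint these steps would be NIM microservice requests, and the function names here are illustrative, not the blueprint's API.

```python
# Pipeline sketch of video summarization: sample, caption, summarize.
# caption() and summarize() are stubs standing in for VLM/LLM calls.

def sample_frames(num_frames, every_n=30):
    """Pick every n-th frame index from a stream of num_frames frames."""
    return list(range(0, num_frames, every_n))

def caption(frame_idx):
    """Stub VLM call: a real agent would send the frame to a VLM service."""
    return f"frame {frame_idx}: forklift moving through aisle 3"

def summarize(captions):
    """Stub LLM call: a real agent would prompt an LLM with the captions."""
    return f"Summary of {len(captions)} sampled frames: " + captions[0]

frames = sample_frames(num_frames=300, every_n=60)
summary = summarize([caption(i) for i in frames])
```

Search works the same way in reverse: the per-frame captions (or embeddings) are indexed, and a user question retrieves the matching moments before the LLM answers.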
  • BLOGS.NVIDIA.COM
    Scale New Heights With Dragon Age: The Veilguard in the Cloud on GeForce NOW
Even post-spooky season, GFN Thursday has some treats for GeForce NOW members: a new batch of 17 games joining the cloud in November.

Catch the five games available to stream this week, including Dragon Age: The Veilguard, the highly anticipated next installment in BioWare's beloved fantasy role-playing game series. Players who purchased the GeForce NOW Ultimate bundle can stream the game at launch for free starting today.

Unite the Veilguard

What's your dragon age?

In Dragon Age: The Veilguard, take on the role of Rook and stop a pair of corrupt ancient gods who've broken free from centuries of darkness, hellbent on destroying the world. Set in the rich world of Thedas, the game includes an epic story with meaningful choices, deep character relationships, and a mix of familiar and new companions to go on adventures with.

Select from three classes, each with distinct weapon types, and harness the classes' unique, powerful abilities while coordinating with a team of seven companions, who have their own rich lives and deep backstories. An expansive skill-tree system allows for diverse character builds across the Warrior, Rogue and Mage classes.

Experience the adventure in the vibrant world of Thedas with enhanced visual fidelity and performance by tapping into a GeForce NOW membership. Priority members can enjoy the game at up to 1080p resolution and 60 frames per second (fps). Ultimate members can take advantage of 4K resolution, up to 120 fps and advanced features like NVIDIA DLSS 3, low-latency gameplay with NVIDIA Reflex, and enhanced image quality and immersion with ray-traced ambient occlusion and reflections, even on low-powered devices.

Resident Evil 4 in the Cloud

Stream it from the cloud to survive.

Capcom's Resident Evil 4 is now available on GeForce NOW, bringing the horror to cloud gaming. Survival is just the beginning. Six years have passed since the biological disaster in Raccoon City. Agent Leon S.
Kennedy, one of the incident's survivors, has been sent to rescue the president's kidnapped daughter. The agent tracks her to a secluded European village, where there's something terribly wrong with the locals. The curtain rises on this story of daring rescue and grueling horror, where life and death, terror and catharsis intersect.

Featuring modernized gameplay, a reimagined storyline and vividly detailed graphics, Resident Evil 4 marks the rebirth of an industry juggernaut. Relive the nightmare that revolutionized survival horror, with stunning high-dynamic-range visuals and immersive ray-tracing technology for Priority and Ultimate members.

Life Is Great With New Games

Time to gear up, agents.

A new season for Year 6 in Tom Clancy's The Division 2 from Ubisoft is now available for members to stream. In Shades of Red, rogue ex-Division agent Aaron Keener has given himself up and is now in custody at the White House. The Division must learn what he knows to secure the other members of his team. New Seasonal Modifiers change gameplay and gear usage for players. A new revamped progression system is also available. The Seasonal Journey comprises a series of missions, each containing a challenge-style objective for players to complete.

Look for the following games available to stream in the cloud this week:

Life Is Strange: Double Exposure (New release on Steam and Xbox, available in the Microsoft Store, Oct. 29)
Dragon Age: The Veilguard (New release on Steam and EA App, Oct. 31)
Resident Evil 4 (Steam)
Resident Evil 4 Chainsaw Demo (Steam)
VRChat (Steam)

Here's what members can expect for the rest of November:

Metal Slug Tactics (New release on Steam, Nov. 5)
Planet Coaster 2 (New release on Steam, Nov. 6)
Teenage Mutant Ninja Turtles: Splintered Fate (New release on Steam, Nov. 6)
Empire of the Ants (New release on Steam, Nov. 7)
Unrailed 2: Back on Track (New release on Steam, Nov. 7)
Farming Simulator 25 (New release on Steam, Nov.
12)Sea Power: Naval Combat in the Missile Age (New release on Steam, Nov. 12)Industry Giant 4.0 (New release Steam, Nov. 15)Towers of Aghasba (New release on Steam, Nov. 19)S.T.A.L.K.E.R. 2: Heart of Chornobyl (New release on Steam and Xbox, available on PC Game Pass, Nov .20)Star Wars Outlaws (New release on Steam, Nov. 21)Dungeons & Degenerate Gamblers (Steam)Headquarters: World War II (Steam)PANICORE (Steam)Slime Rancher (Steam)Sumerian Six (Steam)TCG Card Shop Simulator (Steam)Outstanding OctoberIn addition to the 22 games announced last month, eight more joined the GeForce NOW library:Empyrion Galactic Survival (New release on Epic Games Store, Oct. 10)Assassins Creed Mirage (New release on Steam, Oct. 17)Windblown (New release on Steam, Oct. 24)Call of Duty HQ, including Call of Duty: Modern Warfare III and Call of Duty: Warzone (Xbox, available on PC Game Pass)Dungeon Tycoon (Steam)Off the Grid (Epic Games Store)South Park: The Fractured but Whole (Available on PC Game Pass, Oct 16. Members need to activate access.)Star Trucker (Steam and Xbox, available on PC Game Pass)What are you planning to play this weekend? Let us know on X or in the comments below.What's video game character you've dressed up as for Halloween? NVIDIA GeForce NOW (@NVIDIAGFN) October 29, 2024
  • BLOGS.NVIDIA.COM
    Startup Helps Surgeons Target Breast Cancers With AI-Powered 3D Visualizations
A new AI-powered, imaging-based technology that creates accurate three-dimensional models of tumors, veins and other soft tissue offers a promising new method to help surgeons operate on, and better treat, breast cancers.

The technology, from Illinois-based startup SimBioSys, converts routine black-and-white MRI images into spatially accurate, volumetric images of a patient's breasts. It then illuminates different parts of the breast with distinct colors: the vascular system, or veins, may be red; tumors are shown in blue; surrounding tissue is gray.

Surgeons can then easily manipulate the 3D visualization on a computer screen, gaining important insight to help guide surgeries and influence treatment plans. The technology, called TumorSight, calculates key surgery-related measurements, including a tumor's volume and how far tumors are from the chest wall and nipple.

It also provides key data about a tumor's volume in relation to a breast's overall volume, which can help determine, before a procedure begins, whether surgeons should try to preserve a breast or choose a mastectomy, a procedure that often carries painful and cosmetic side effects. Last year, TumorSight received FDA clearance.

Across the world, nearly 2.3 million women are diagnosed with breast cancer each year, according to the World Health Organization, and breast cancer is responsible for the deaths of more than 500,000 women annually. Around 100,000 women in the U.S. undergo some form of mastectomy each year, according to Brigham and Women's Hospital.

According to Jyoti Palaniappan, chief commercial officer at SimBioSys, the company's visualization technology offers a step-change improvement over the kind of data surgeons typically see before they begin surgery.

"Typically, surgeons will get a radiology report, which tells them, 'Here's the size and location of the tumor,' and they'll get one or two pictures of the patient's tumor," said Palaniappan. "If the surgeon wants to get more information, they'll need to find the radiologist and have a conversation with them, which doesn't always happen, and go through the case with them."

Dr. Barry Rosen, the company's chief medical officer, said one of the technology's primary goals is to uplevel and standardize presurgical imaging, which he believes can have broad positive impacts on outcomes.

"We're trying to move the surgical process from an art to a science by harnessing the power of AI to improve surgical planning," Dr. Rosen said.

SimBioSys uses NVIDIA A100 Tensor Core GPUs in the cloud for pretraining its models. It also uses NVIDIA MONAI for training and validation data, and NVIDIA CUDA-X libraries, including cuBLAS and MONAI Deploy, to run its imaging technology. SimBioSys is part of the NVIDIA Inception program for startups.

SimBioSys is already working on additional AI use cases it hopes can improve breast cancer survival rates.

It has developed a novel technique to reconcile MRI images of a patient's breasts, taken when the patient is lying face down, and convert those images into virtual, realistic 3D visualizations that show how the tumor and surrounding tissue will appear during surgery, when the patient is lying face up. This visualization is especially relevant for surgeons, who can see in advance what a breast and any tumors will look like once surgery begins. To create the imagery, the technology calculates gravity's impact on different kinds of breast tissue and accounts for how differences in skin elasticity affect a breast's shape when a patient is lying on the operating table.

The startup is also working on a new strategy that relies on AI to quickly provide insights that can help avoid cancer recurrence. Currently, hospital labs run pathology tests on tumors that surgeons have removed. The biopsies are then sent to a different outside lab, which conducts a more comprehensive molecular analysis. This process routinely takes up to six weeks. Without knowing how aggressive the cancer in the removed tumor is, or how that type of cancer might respond to different treatments, patients and doctors are unable to quickly chart out treatment plans to avoid recurrence.

SimBioSys's new technology uses an AI model to analyze the 3D volumetric features of the just-removed tumor, the hospital's initial tumor pathology report and the patient's demographic data. From that information, SimBioSys generates, in a matter of hours, a risk analysis for that patient's cancer, which helps doctors quickly determine the best treatment to avoid recurrence.

According to SimBioSys's Palaniappan, the startup's new method matches or exceeds the recurrence-risk scoring ability of more traditional methodologies, based on its internal studies, while taking a fraction of the time and costing far less.
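To make the idea of such a risk score concrete, here is a minimal, purely illustrative sketch of how volumetric, pathology and demographic inputs might be combined into a single number. The function name, feature choices, weights and logistic form are all assumptions for illustration; this is not SimBioSys's model and has no clinical validity.

```python
import math

def toy_recurrence_risk(tumor_volume_cc: float, tumor_grade: int,
                        patient_age: float) -> float:
    """Toy logistic score in (0, 1) combining a volumetric feature,
    a pathology grade and a demographic feature.
    All weights below are hypothetical, hand-set for illustration."""
    z = (
        0.04 * tumor_volume_cc         # larger tumors raise the score
        + 0.8 * (tumor_grade - 1)      # higher pathology grade raises it
        + 0.01 * (patient_age - 50.0)  # mild age effect
        - 2.0                          # baseline offset
    )
    return 1.0 / (1.0 + math.exp(-z))
```

In this sketch a small, low-grade tumor yields a low score and a large, high-grade tumor a higher one; a real model would instead be fit to outcome data and validated against known recurrence rates.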