NVIDIA
Recent Updates
  • BLOGS.NVIDIA.COM
    GPU's Companion: NVIDIA App Supercharges RTX GPUs With AI-Powered Tools and Features
    The NVIDIA app, officially releasing today, is a companion platform for content creators, GeForce gamers and AI enthusiasts using GeForce RTX GPUs. Featuring a GPU control center, the NVIDIA app allows users to access all their GPU settings in one place. From the app, users can do everything from updating to the latest drivers and configuring NVIDIA G-SYNC monitor settings to tapping AI video enhancements through RTX Video and discovering exclusive AI-powered NVIDIA apps.

    In addition, NVIDIA RTX Remix has a new update that improves performance and streamlines workflows. For a deeper dive on gaming-exclusive benefits, check out the GeForce article.

    The GPU's PC Companion

    The NVIDIA app turbocharges GeForce RTX GPUs with a bevy of applications, features and tools.

    Keep NVIDIA Studio Drivers up to date: The NVIDIA app automatically notifies users when the latest Studio Driver is available. These graphics drivers, fine-tuned in collaboration with developers, enhance performance in top creative applications and are tested extensively to deliver maximum stability. They're released once a month.

    Discover AI creator apps: Millions have used the NVIDIA Broadcast app to turn offices and dorm rooms into home studios, using AI-powered features that improve audio and video quality without the need for expensive, specialized equipment. It's user-friendly, works in virtually any app and includes AI features like Noise and Acoustic Echo Removal, Virtual Backgrounds, Eye Contact, Auto Frame, Vignettes and Video Noise Removal.

    NVIDIA RTX Remix is a modding platform built on NVIDIA Omniverse that allows users to capture game assets, automatically enhance materials with generative AI tools and create stunning RTX remasters with full ray tracing, including DLSS 3.5 support featuring Ray Reconstruction.

    NVIDIA Canvas uses AI to turn simple brushstrokes into realistic landscape images. Artists can create backgrounds quickly or speed up concept exploration, enabling them to visualize more ideas.

    Enhance video streams with AI: The NVIDIA app includes a System tab as a one-stop destination for display, video and GPU options. It also includes an AI feature called RTX Video that enhances all videos streamed on browsers. RTX Video Super Resolution uses AI to enhance video streaming on GeForce RTX GPUs by removing compression artifacts and sharpening edges when upscaling. RTX Video HDR converts any standard dynamic range video into vibrant high dynamic range (HDR) when played in Google Chrome, Microsoft Edge, Mozilla Firefox or the VLC media player. HDR enables more vivid, dynamic colors to enhance gaming and content creation. A compatible HDR10 monitor is required.

    Give game streams or video on demand a unique look with AI filters: Content creators looking to elevate their streamed or recorded gaming sessions can access the NVIDIA app's redesigned Overlay feature with AI-powered game filters. Freestyle RTX filters allow livestreamers and content creators to apply fun post-processing filters, changing the look and mood of content with tweaks to color and saturation. Joining these Freestyle RTX game filters is RTX Dynamic Vibrance, which enhances visual clarity on a per-app basis: colors pop more on screen, and color crushing is minimized to preserve image quality and immersion. The filter is accelerated by Tensor Cores on GeForce RTX GPUs, making it easier for viewers to enjoy all the action.

    Enhanced visual clarity with RTX Dynamic Vibrance.

    Freestyle RTX filters empower gamers to personalize the visual aesthetics of their favorite games through real-time post-processing filters. This feature is compatible with a vast library of more than 1,200 games.

    Download the NVIDIA app today.

    RTX Remix 0.6 Release

    The new RTX Remix update offers modders significantly improved mod performance, as well as quality-of-life improvements that help streamline the mod-making process. RTX Remix now supports the ability to test experimental features under active development, and it includes a new Stage Manager that makes it easier to see and change every mesh, texture, light or element in scenes in real time. To learn more about the RTX Remix 0.6 release, check out the release notes.

    With RTX Remix in the NVIDIA app launcher, modders have direct access to Remix's powerful features. Through the NVIDIA app, RTX Remix modders can benefit from faster start-up times, lower CPU usage and direct control over updates with an optimized user interface.

    To the 3D Victor Go the Spoils

    In June, NVIDIA Studio kicked off a 3D character contest for artists in collaboration with Reallusion, a company that develops 2D and 3D character creation and animation software. Today, we're celebrating the winners of that contest. In the category of Best Realistic Character Animation, Robert Lundqvist won for the piece "Lisa and Fia." In the category of Best Stylized Character Animation, Loic Bramoulle won for the piece "HellGal." Both winners will receive an NVIDIA Studio-validated laptop to help further their creative efforts. View over 250 imaginative and impressive entries here.

    Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

    Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what's new and what's next by subscribing to the AI Decoded newsletter.
  • BLOGS.NVIDIA.COM
    Welcome to GeForce NOW Performance: Priority Members Get Instant Upgrade
    This GFN Thursday, the GeForce NOW Priority membership is getting enhancements and a fresh name to go along with them. The new Performance membership offers more GeForce-powered premium gaming at no change in the monthly membership cost.

    Gamers having a hard time deciding between the Performance and Ultimate memberships can take them both for a spin with a Day Pass, now 25% off for a limited time. Day Passes give access to 24 continuous hours of powerful cloud gaming.

    In addition, seven new games are available this week, joining the over 2,000 games in the GeForce NOW library.

    Time for a Glow Up

    The Performance membership keeps all the same great gaming benefits and now provides members with an enhanced streaming experience at no additional cost. Performance members can stream at up to 1440p, an increase from the previous 1080p resolution, and experience games in immersive, ultrawide resolutions. They can also save their in-game graphics settings across streaming sessions, including for NVIDIA RTX features in supported titles. All current Priority members are automatically upgraded to Performance and can take advantage of the upgraded streaming experience today.

    Performance members will connect to GeForce RTX-powered gaming rigs for up to 1440p resolution. Ultimate members continue to receive the top streaming experience: connecting to GeForce RTX 4080-powered gaming rigs with up to 4K resolution and 120 frames per second, or 1080p and 240 fps in Competitive mode for games with support for NVIDIA Reflex technology. Gamers playing on the free tier will now see they're streaming from basic rigs, with varying specs that offer entry-level cloud gaming and are optimized for capacity.

    At the start of next year, GeForce NOW will roll out a 100-hour monthly playtime allowance to continue providing exceptional quality and speed, as well as shorter queue times, for Performance and Ultimate members. This ample limit comfortably accommodates 94% of members, who typically enjoy the service well within this timeframe. Members can check how much time they've spent in the cloud through their account portal. Up to 15 hours of unused playtime will automatically roll over to the next month, and additional hours can be purchased at $2.99 for 15 additional Performance hours or $5.99 for 15 additional Ultimate hours.

    Loyal Member Benefit

    To thank the GFN community for joining the cloud gaming revolution, GeForce NOW is offering active paid members as of Dec. 31, 2024, the ability to continue with unlimited playtime for a full year, until January 2026. New members can lock in this benefit by signing up for GeForce NOW before Dec. 31, 2024. As long as a member's account remains uninterrupted and in good standing, they'll continue to receive unlimited playtime for all of 2025.

    Don't Pass This Up

    For those looking to try out the new premium benefits and all that the Performance and Ultimate memberships have to offer, Day Passes are 25% off for a limited time. Whether with the newly named Performance Day Pass at $2.99 or the Ultimate Day Pass at $5.99, members can unlock 24 hours of uninterrupted access to powerful NVIDIA GeForce RTX-powered cloud gaming servers. Another new GeForce NOW feature lets users apply the value of their most recently purchased Day Pass toward any monthly membership if they sign up within 48 hours of the completion of their Day Pass.

    Dive into a vast library of over 2,000 games with enhanced graphics, including NVIDIA RTX features like ray tracing and DLSS. With the Ultimate Day Pass, get a taste of GeForce NOW's highest-performing membership tier and enjoy up to 4K resolution at 120 fps, or 1080p at 240 fps, across nearly any device. It's an ideal way to experience elevated GeForce gaming in the cloud.

    Thrilling New Games

    Members can look for the following games available to stream in the cloud this week:

    • Planet Coaster 2 (New release on Steam, Nov. 6)
    • Teenage Mutant Ninja Turtles: Splintered Fate (New release on Steam, Nov. 6)
    • Empire of the Ants (New release on Steam, Nov. 7)
    • Unrailed 2: Back on Track (New release on Steam, Nov. 7)
    • TCG Card Shop Simulator (Steam)
    • StarCraft II (Xbox, available on PC Game Pass, Nov. 5. Members need to enable access.)
    • StarCraft Remastered (Xbox, available on PC Game Pass, Nov. 5. Members need to enable access.)

    What are you planning to play this weekend? Let us know on X or in the comments below.
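    The playtime rules above (a 100-hour monthly allowance, up to 15 unused hours rolling over, and extra time sold in 15-hour blocks) amount to simple arithmetic. Here is a minimal sketch of that arithmetic; the function name and structure are illustrative only and are not part of any GeForce NOW interface:

```python
def next_month_allowance(hours_used: float, purchased_blocks: int = 0,
                         base_hours: int = 100, max_rollover: int = 15,
                         block_size: int = 15) -> float:
    """Illustrative sketch of the playtime rules described in the post.

    - Members get `base_hours` of playtime per month.
    - Up to `max_rollover` unused hours carry into the next month.
    - Extra time is sold in `block_size`-hour blocks ($2.99 per
      Performance block, $5.99 per Ultimate block).
    """
    unused = max(base_hours - hours_used, 0)   # hours left this month
    rollover = min(unused, max_rollover)       # capped carry-over
    return base_hours + rollover + purchased_blocks * block_size
```

    For example, a member who played 80 of their 100 hours would start the next month with 115 hours (100 plus the 15-hour rollover cap), while a member who used the full allowance starts fresh at 100.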
  • BLOGS.NVIDIA.COM
    Jensen Huang to Discuss AIs Future With Masayoshi Son at AI Summit Japan
    NVIDIA founder and CEO Jensen Huang will join SoftBank Group Chairman and CEO Masayoshi Son in a fireside chat at NVIDIA AI Summit Japan to discuss the transformative role of AI and more.

    Taking place Nov. 12-13, the invite-only event at The Prince Park Tower in Tokyo's Minato district will gather industry leaders to explore advancements in generative AI, robotics and industrial digitalization. Tickets for the event are sold out, but you can tune in via livestream or watch on-demand sessions.

    Over 50 sessions and live demos will showcase innovations from NVIDIA and its partners, covering everything from large language models (LLMs) to AI-powered robotics and digital twins.

    Huang and Son will discuss AI's transformative role and the efforts driving the field forward. Son has invested through SoftBank Vision Funds in companies around the world that show potential for AI-driven growth. Huang has steered NVIDIA's rise to a global leader in AI and accelerated computing.

    One major topic: Japan's AI infrastructure initiative, supported by NVIDIA and local firms. This investment is central to the country's AI ambitions. Leaders from METI and experts like Shunsuke Aoki from Turing Inc. will dig into how sovereign AI fosters innovation and strengthens Japan's technological independence.

    On Wednesday, Nov. 13, two key sessions will offer deeper insights into Japan's AI journey:

    • The Present and Future of Generative AI in Japan: Professor Yutaka Matsuo of the University of Tokyo will explore the advances of generative AI and its impact on policy and business strategy. Expect discussions on the opportunities and challenges Japan faces as it pushes forward with AI innovations.

    • Sovereign AI and Its Role in Japan's Future: A panel of four experts will dive into the concept of sovereign AI. Speakers like Takuya Watanabe of METI and Hironobu Tamba of SoftBank will discuss how sovereign AI can accelerate business strategies and strengthen Japan's technological independence.

    These sessions highlight how Japan is positioning itself at the forefront of AI development, with practical insights into the next wave of AI innovation and policy on the agenda. Experts from Sakana AI, Sony, Tokyo Science University and Yaskawa Electric will be among those presenting breakthroughs across sectors like healthcare, robotics and data centers.

    The summit will also feature hands-on workshops, including a full-day session on Tuesday, Nov. 12, titled Building RAG Agents With LLMs. Led by NVIDIA experts, this workshop will offer practical experience in developing retrieval-augmented generation (RAG) agents using large language models.

    With its mix of forward-looking discussions and real-world applications, the summit will highlight Japan's ongoing advancements in AI and its contributions to the global AI landscape. Tune in to the fireside chat between Son and Huang via livestream or watch on-demand sessions.
  • BLOGS.NVIDIA.COM
    Get Plugged In: How to Use Generative AI Tools in Obsidian
    Editor's note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.

    As generative AI evolves and accelerates across industries, a community of AI enthusiasts is experimenting with ways to integrate the powerful technology into common productivity workflows. Applications that support community plug-ins give users the power to explore how large language models (LLMs) can enhance a variety of workflows. By using local inference servers powered by the NVIDIA RTX-accelerated llama.cpp software library, users on RTX AI PCs can integrate local LLMs with ease.

    Previously, we looked at how users can take advantage of Leo AI in the Brave web browser to optimize the web browsing experience. Today, we look at Obsidian, a popular writing and note-taking application based on the Markdown markup language that's useful for keeping complex and linked records for multiple projects. The app supports community-developed plug-ins that bring additional functionality, including several that enable users to connect Obsidian to a local inferencing server like Ollama or LM Studio.

    Using Obsidian and LM Studio to generate notes with a 27B-parameter LLM accelerated by RTX.

    Connecting Obsidian to LM Studio only requires enabling the local server functionality in LM Studio: click the Developer icon on the left panel, load any downloaded model, enable the CORS toggle and click Start. Take note of the chat completion URL from the Developer log console (http://localhost:1234/v1/chat/completions by default), as the plug-ins will need this information to connect.

    Next, launch Obsidian and open the Settings panel. Click Community plug-ins and then Browse. There are several community plug-ins related to LLMs, but two popular options are Text Generator and Smart Connections. Text Generator is helpful for generating content in an Obsidian vault, like notes and summaries on a research topic. Smart Connections is useful for asking questions about the contents of an Obsidian vault, such as the answer to an obscure trivia question previously saved years ago.

    Each plug-in has its own way of entering the LM Studio server URL. For Text Generator, open the settings, select Custom for Provider profile and paste the whole URL into the Endpoint field. For Smart Connections, configure the settings after starting the plug-in: in the settings panel on the right side of the interface, select Custom Local (OpenAI Format) for the model platform, then enter the URL and the model name (e.g., gemma-2-27b-instruct) into their respective fields as they appear in LM Studio.

    Once the fields are filled in, the plug-ins will function. The LM Studio user interface will also show logged activity if users are curious about what's happening on the local server side.

    Transforming Workflows With Obsidian AI Plug-Ins

    Both the Text Generator and Smart Connections plug-ins use generative AI in compelling ways. For example, imagine a user wants to plan a vacation to the fictitious destination of Lunar City and brainstorm ideas for what to do there. The user would start a new note titled "What to Do in Lunar City." Since Lunar City is not a real place, the query sent to the LLM will need to include a few extra instructions to guide the responses. Click the Text Generator plug-in icon, and the model will generate a list of activities to do during the trip.

    Obsidian, via the Text Generator plug-in, will request that LM Studio generate a response, and in turn LM Studio will run the Gemma 2 27B model.
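    Under the hood, both plug-ins talk to LM Studio's OpenAI-compatible chat completions endpoint noted above. As a rough sketch of what such a request body looks like (the helper function is illustrative, not part of either plug-in; the model name must match one loaded in LM Studio):

```python
import json

# LM Studio's local server default endpoint, as noted above.
LM_STUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "gemma-2-27b-instruct") -> str:
    """Build the OpenAI-format JSON body that plug-ins like Text Generator
    send to the local server."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return json.dumps(body)

# Sending it is a plain HTTP POST (requires LM Studio's server to be running):
#   urllib.request.urlopen(urllib.request.Request(
#       LM_STUDIO_URL, data=build_chat_request("Hi").encode(),
#       headers={"Content-Type": "application/json"}))
```

    The server replies with an OpenAI-format JSON response whose generated text the plug-in inserts into the note.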
    With RTX GPU acceleration in the user's computer, the model can quickly generate a list of things to do.

    The Text Generator community plug-in in Obsidian enables users to connect to an LLM in LM Studio and generate notes for an imaginary vacation.

    Or, suppose that many years later the user's friend is going to Lunar City and wants to know where to eat. The user may not remember the names of the places where they ate, but they can check the notes in their vault (Obsidian's term for a collection of notes) in case they'd written something down. Rather than looking through all of the notes manually, the user can use the Smart Connections plug-in to ask questions about their vault of notes and other content. The plug-in uses the same LM Studio server to respond to the request, and it provides relevant information it finds in the user's notes to assist the process, using a technique called retrieval-augmented generation (RAG).

    The Smart Connections community plug-in in Obsidian uses retrieval-augmented generation and a connection to LM Studio to enable users to query their notes.

    These are fun examples, but after spending some time with these capabilities, users can see the real benefits and improvements for everyday productivity. Obsidian plug-ins are just two ways in which community developers and AI enthusiasts are embracing AI to supercharge their PC experiences. NVIDIA GeForce RTX technology for Windows PCs can run thousands of open-source models for developers to integrate into their Windows apps.

    Learn more about the power of LLMs, Text Generator and Smart Connections by integrating Obsidian into your workflow, and play with the accelerated experience available on RTX AI PCs.

    Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what's new and what's next by subscribing to the AI Decoded newsletter.
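    The retrieval-augmented generation technique Smart Connections relies on can be sketched in a few lines: rank the notes by relevance to the question, then prepend the best matches to the prompt. This toy version uses word overlap to rank notes, whereas the real plug-in uses vector embeddings; the note contents are invented for illustration:

```python
from collections import Counter

def score(query: str, doc: str) -> int:
    """Crude lexical-overlap relevance score (real plug-ins use embeddings)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum(min(q[w], d[w]) for w in q)

def build_rag_prompt(question: str, notes: dict, k: int = 1) -> str:
    """Retrieve the k most relevant notes and prepend them to the question,
    mirroring what a RAG plug-in does before calling the local LLM server."""
    ranked = sorted(notes, key=lambda title: score(question, notes[title]),
                    reverse=True)
    context = "\n".join(f"{t}: {notes[t]}" for t in ranked[:k])
    return f"Answer using these notes:\n{context}\n\nQuestion: {question}"

# Hypothetical vault contents for illustration.
notes = {
    "Lunar City eats": "We ate at the Crater Cafe near the old docks.",
    "Packing list": "Bring a warm jacket and spare batteries.",
}
prompt = build_rag_prompt("Where did we eat in Lunar City?", notes)
```

    The assembled prompt, retrieved notes plus question, is what would then be sent to the LM Studio server for an answer grounded in the user's own vault.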
  • BLOGS.NVIDIA.COM
    Hugging Face and NVIDIA to Accelerate Open-Source AI Robotics Research and Development
    At the Conference on Robot Learning (CoRL) in Munich, Germany, Hugging Face and NVIDIA announced a collaboration to accelerate robotics research and development by bringing together their open-source robotics communities. Hugging Face's LeRobot open AI platform, combined with NVIDIA AI, Omniverse and Isaac robotics technology, will enable researchers and developers to drive advances across a wide range of industries, including manufacturing, healthcare and logistics.

    Open-Source Robotics for the Era of Physical AI

    The era of physical AI, in which robots understand the physical properties of their environments, is here, and it's rapidly transforming the world's industries. To drive and sustain this rapid innovation, robotics researchers and developers need access to open-source, extensible frameworks that span the development process of robot training, simulation and inference. With models, datasets and workflows released under shared frameworks, the latest advances are readily available for use without the need to recreate code.

    Hugging Face's leading open AI platform serves more than 5 million machine learning researchers and developers, offering tools and resources to streamline AI development. Hugging Face users can access and fine-tune the latest pretrained models and build AI pipelines on common APIs, with over 1.5 million models, datasets and applications freely accessible on the Hugging Face Hub.

    LeRobot, developed by Hugging Face, extends the successful paradigms of its Transformers and Diffusers libraries into the robotics domain. LeRobot offers a comprehensive suite of tools for sharing data collection, model training and simulation environments, along with designs for low-cost manipulator kits.

    NVIDIA's AI and simulation technologies, including the open-source modular robot learning framework NVIDIA Isaac Lab, can accelerate LeRobot's data collection, training and verification workflow. Researchers and developers can share their models and datasets built with LeRobot and Isaac Lab, creating a data flywheel for the robotics community.

    Scaling Robot Development With Simulation

    Developing physical AI is challenging. Unlike language models, which use extensive internet text data, physics-based robotics relies on physical interaction data along with vision sensors, which is harder to gather at scale. Collecting real-world robot data for dexterous manipulation across a large number of tasks and environments is time-consuming and labor-intensive.

    Making this easier, Isaac Lab, built on NVIDIA Isaac Sim, enables robot training by demonstration or trial and error in simulation, using high-fidelity rendering and physics simulation to create realistic synthetic environments and data. By combining GPU-accelerated physics simulations and parallel environment execution, Isaac Lab can generate vast amounts of training data, equivalent to thousands of real-world experiences, from a single demonstration.

    Generated motion data is then used to train a policy with imitation learning. After successful training and validation in simulation, the policies are deployed on a real robot, where they are further tested and tuned to achieve optimal performance. This iterative process leverages the accuracy of real-world data and the scalability of simulated synthetic data, ensuring robust and reliable robotic systems. By sharing these datasets, policies and models on Hugging Face, a robot data flywheel is created that enables developers and researchers to build upon each other's work, accelerating progress in the field.

    "The robotics community thrives when we build together," said Animesh Garg, assistant professor at Georgia Tech. "By embracing open-source frameworks such as Hugging Face's LeRobot and NVIDIA Isaac Lab, we accelerate the pace of research and innovation in AI-powered robotics."

    Fostering Collaboration and Community Engagement

    The planned collaborative workflow involves collecting data through teleoperation and simulation in Isaac Lab and storing it in the standard LeRobotDataset format. Data generated using GR00T-Mimic will then be used to train a robot policy with imitation learning, which is subsequently evaluated in simulation. Finally, the validated policy is deployed on real-world robots with NVIDIA Jetson for real-time inference.

    The initial steps in this collaboration have already been taken: the companies have shown a physical picking setup with LeRobot software running on NVIDIA Jetson Orin Nano, providing a powerful, compact compute platform for deployment.

    "Combining the Hugging Face open-source community with NVIDIA's hardware and Isaac Lab simulation has the potential to accelerate innovation in AI for robotics," said Remi Cadene, principal research scientist at LeRobot.

    This work builds on NVIDIA's community contributions in generative AI at the edge: supporting the latest open models and libraries such as Hugging Face Transformers; optimizing inference for large language models (LLMs), small language models (SLMs) and multimodal vision-language models (VLMs), along with their action-based variants, vision language action models (VLAs); and supporting diffusion policies and speech models, all with strong, community-driven support.

    Together, Hugging Face and NVIDIA aim to accelerate the work of the global ecosystem of robotics researchers and developers transforming industries ranging from transportation to manufacturing and logistics.

    Learn about NVIDIA's robotics research papers at CoRL, including VLM integration for better environmental understanding, temporal navigation and long-horizon planning. Check out workshops at CoRL with NVIDIA researchers.
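    The imitation-learning step in this workflow, fitting a policy to demonstration data, can be illustrated with a deliberately tiny behavior-cloning sketch. This is generic NumPy, not the Isaac Lab or LeRobot API; the synthetic demonstrations and the linear policy stand in for the teleoperated datasets and neural policies used in practice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "demonstrations": observations and the expert's actions.
# These stand in for teleoperated data stored in a LeRobotDataset.
obs = rng.normal(size=(256, 4))                          # 256 steps, 4-dim observations
expert = obs @ np.array([[0.5], [-0.2], [0.1], [0.3]])   # 1-dim expert action

# Behavior cloning: fit a policy by minimizing squared error between
# the policy's actions and the expert's, via gradient descent.
W = np.zeros((4, 1))
for _ in range(500):
    pred = obs @ W                          # policy's predicted actions
    grad = obs.T @ (pred - expert) / len(obs)
    W -= 0.1 * grad                         # gradient step on the loss

mse = float(np.mean((obs @ W - expert) ** 2))  # near zero after training
```

    After training, the cloned policy reproduces the expert's actions on the demonstration data; in the real pipeline, this is the point at which the policy would be validated in simulation before deployment on hardware.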
  • BLOGS.NVIDIA.COM
    NVIDIA Advances Robot Learning and Humanoid Development With New AI and Simulation Tools
    Robotics developers can greatly accelerate their work on AI-enabled robots, including humanoids, using new AI and simulation tools and workflows that NVIDIA revealed this week at the Conference on Robot Learning (CoRL) in Munich, Germany.

    The lineup includes the general availability of the NVIDIA Isaac Lab robot learning framework; six new humanoid robot learning workflows for Project GR00T, an initiative to accelerate humanoid robot development; and new world-model development tools for video data curation and processing, including the NVIDIA Cosmos tokenizer and NVIDIA NeMo Curator for video processing.

    The open-source Cosmos tokenizer provides robotics developers superior visual tokenization by breaking down images and videos into high-quality tokens with exceptionally high compression rates. It runs up to 12x faster than current tokenizers, while NeMo Curator provides video processing curation up to 7x faster than unoptimized pipelines.

    Also timed with CoRL, NVIDIA presented 23 papers and nine workshops related to robot learning and released training and workflow guides for developers. Further, Hugging Face and NVIDIA announced they're collaborating to accelerate open-source robotics research with LeRobot, NVIDIA Isaac Lab and NVIDIA Jetson for the developer community.

    Accelerating Robot Development With Isaac Lab

    NVIDIA Isaac Lab is an open-source robot learning framework built on NVIDIA Omniverse, a platform for developing OpenUSD applications for industrial digitalization and physical AI simulation. Developers can use Isaac Lab to train robot policies at scale. This open-source, unified robot learning framework applies to any embodiment, from humanoids to quadrupeds to collaborative robots, to handle increasingly complex movements and interactions.

    Leading commercial robot makers, robotics application developers and robotics research entities around the world are adopting Isaac Lab, including 1X, Agility Robotics, The AI Institute, Berkeley Humanoid, Boston Dynamics, Field AI, Fourier, Galbot, Mentee Robotics, Skild AI, Swiss-Mile, Unitree Robotics and XPENG Robotics.

    Project GR00T: Foundations for General-Purpose Humanoid Robots

    Building advanced humanoids is extremely difficult, demanding multilayer technological and interdisciplinary approaches to make robots perceive, move and learn skills effectively for human-robot and robot-environment interactions. Project GR00T is an initiative to develop accelerated libraries, foundation models and data pipelines to accelerate the global humanoid robot developer ecosystem.

    Six new Project GR00T workflows provide humanoid developers with blueprints to realize the most challenging humanoid robot capabilities. They include:

    • GR00T-Gen for building generative AI-powered, OpenUSD-based 3D environments
    • GR00T-Mimic for robot motion and trajectory generation
    • GR00T-Dexterity for robot dexterous manipulation
    • GR00T-Control for whole-body control
    • GR00T-Mobility for robot locomotion and navigation
    • GR00T-Perception for multimodal sensing

    "Humanoid robots are the next wave of embodied AI," said Jim Fan, senior research manager of embodied AI at NVIDIA. "NVIDIA research and engineering teams are collaborating across the company and our developer ecosystem to build Project GR00T to help advance the progress and development of global humanoid robot developers."

    New Development Tools for World Model Builders

    Today, robot developers are building world models, AI representations of the world that can predict how objects and environments respond to a robot's actions. Building these world models is incredibly compute- and data-intensive, with models requiring thousands of hours of real-world, curated image or video data.

    NVIDIA Cosmos tokenizers provide efficient, high-quality encoding and decoding to simplify the development of these world models. They set a new standard for minimal distortion and temporal instability, enabling high-quality video and image reconstructions. Providing high-quality compression and up to 12x faster visual reconstruction, the Cosmos tokenizer paves the path for scalable, robust and efficient development of generative applications across a broad spectrum of visual domains.

    1X, a humanoid robot company, has updated the 1X World Model Challenge dataset to use the Cosmos tokenizer. "NVIDIA Cosmos tokenizer achieves really high temporal and spatial compression of our data while still retaining visual fidelity," said Eric Jang, vice president of AI at 1X Technologies. "This allows us to train world models with long horizon video generation in an even more compute-efficient manner."

    Other humanoid and general-purpose robot developers, including XPENG Robotics and Hillbot, are developing with the NVIDIA Cosmos tokenizer to manage high-resolution images and videos.

    NeMo Curator now includes a video processing pipeline, which enables robot developers to improve their world-model accuracy by processing large-scale text, image and video data. Curating video data poses challenges due to its massive size, requiring scalable pipelines and efficient orchestration for load balancing across GPUs. Additionally, models for filtering, captioning and embedding need optimization to maximize throughput. NeMo Curator overcomes these challenges by streamlining data curation with automatic pipeline orchestration, significantly reducing processing time. It supports linear scaling across multi-node, multi-GPU systems, efficiently handling over 100 petabytes of data. This simplifies AI development, reduces costs and accelerates time to market.

    Advancing the Robot Learning Community at CoRL

    The nearly two dozen research papers the NVIDIA robotics team released at CoRL cover breakthroughs in integrating vision language models for improved environmental understanding and task execution, temporal robot navigation, developing long-horizon planning strategies for complex multistep tasks and using human demonstrations for skill acquisition. Groundbreaking papers for humanoid robot control and synthetic data generation include SkillGen, a system based on synthetic data generation for training robots with minimal human demonstrations, and HOVER, a robot foundation model for controlling humanoid robot locomotion and manipulation. NVIDIA researchers will also be participating in nine workshops at the conference. Learn more about the full schedule of events.

    Availability

    NVIDIA Isaac Lab 1.2 is available now and is open source on GitHub. The NVIDIA Cosmos tokenizer is available now on GitHub and Hugging Face. NeMo Curator for video processing will be available at the end of the month. The new NVIDIA Project GR00T workflows are coming soon to help robot companies build humanoid robot capabilities with greater ease. Read more about the workflows on the NVIDIA Technical Blog.

    Researchers and developers learning to use Isaac Lab can now access developer guides and tutorials, including an Isaac Gym to Isaac Lab migration guide. Discover the latest in robot learning and simulation in an upcoming OpenUSD insider livestream on robot simulation and learning on Nov. 13, and attend the NVIDIA Isaac Lab office hours for hands-on support and insights. Developers can apply to join the NVIDIA Humanoid Robot Developer Program.
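    To make "exceptionally high compression rates" concrete: a patch-based visual tokenizer turns fixed-size blocks of pixels into single tokens, so the compression rate follows directly from the patch dimensions. The patch sizes below are hypothetical back-of-envelope numbers, not the Cosmos tokenizer's actual configuration:

```python
def compression_ratio(height: int, width: int, frames: int,
                      spatial_patch: int, temporal_patch: int) -> float:
    """Pixels per token for a patch-based video tokenizer: each
    (temporal_patch x spatial_patch x spatial_patch) block of pixels
    becomes one token. Patch sizes here are hypothetical."""
    n_pixels = height * width * frames
    n_tokens = ((height // spatial_patch) * (width // spatial_patch)
                * (frames // temporal_patch))
    return n_pixels / n_tokens

# e.g. a 1024x1024, 32-frame clip with 16x16 spatial patches and 8-frame
# temporal patches yields 64 * 64 * 4 = 16,384 tokens from ~33.6M pixels.
ratio = compression_ratio(1024, 1024, 32, 16, 8)
```

    Under these assumed patch sizes, each token stands in for 2,048 pixels; fewer tokens per clip is what makes downstream world-model training more compute-efficient.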
  • BLOGS.NVIDIA.COM
    Austin Calling: As Texas Absorbs Influx of Residents, Rekor Taps NVIDIA Technology for Roadway Safety, Traffic Relief
Austin is drawing people to jobs, music venues, comedy clubs, barbecue and more. But with this boom has come a big-city blues: traffic jams.

Rekor, which offers traffic management and public safety analytics, has a front-row seat to the increasing traffic from an influx of new residents migrating to Austin. Rekor works with the Texas Department of Transportation, which has a $7 billion project addressing this, to help mitigate the roadway concerns.

"Texas has been trying to meet that growth and demand on the roadways by investing a lot in infrastructure, and they're focusing a lot on digital infrastructure," said Shervin Esfahani, vice president of global marketing and communications at Rekor. "It's super complex, and they realized their traditional systems were unable to really manage and understand it in real time."

Rekor, based in Columbia, Maryland, has been harnessing NVIDIA Metropolis for real-time video understanding and NVIDIA Jetson Xavier NX modules for edge AI in Texas, Florida, Philadelphia, Georgia, Nevada, Oklahoma and many more U.S. destinations, as well as in Israel and other places internationally.

Metropolis is an application framework for smart infrastructure development with vision AI. It provides developer tools, including the NVIDIA DeepStream SDK, NVIDIA TAO Toolkit, pretrained models on the NVIDIA NGC catalog and NVIDIA TensorRT. NVIDIA Jetson is a compact, powerful and energy-efficient accelerated computing platform used for embedded and robotics applications.

Rekor's efforts in Texas and Philadelphia to help better manage roads with AI are the latest development in an ongoing story for traffic safety and traffic management.

Reducing Rubbernecking, Pileups, Fatalities and Jams

Rekor offers two main products: Rekor Command and Rekor Discover. Command is an AI-driven platform for traffic management centers, providing rapid identification of traffic events and zones of concern.
It provides departments of transportation with real-time situational awareness and alerts that allow them to keep city roadways safer and less congested.

Discover taps into Rekor's edge system to fully automate the capture of comprehensive traffic and vehicle data, providing robust traffic analytics that turn roadway data into measurable, reliable traffic knowledge. With Rekor Discover, departments of transportation can see a full picture of how vehicles move on roadways and the impact they make, allowing them to better organize and execute their future city-building initiatives.

The company has deployed Command across Austin to help detect issues, analyze incidents and respond to roadway activity with a real-time view.

"For every minute an incident happens and stays on the road, it creates four minutes of traffic, which puts a strain on the road, and the likelihood of a secondary incident like an accident from rubbernecking massively goes up," said Paul-Mathew Zamsky, vice president of strategic growth and partnerships at Rekor. "Austin deployed Rekor Command and saw a 159% increase in incident detections, and they were able to respond eight and a half minutes faster to those incidents."

Rekor Command takes in many feeds of data, like traffic camera footage, weather, connected car info and construction updates, and taps into any other data infrastructure, as well as third-party data. It then uses AI to make connections and surface anomalies, like a roadside incident. That information is presented in workflows to traffic management centers for review, confirmation and response.

"They look at it and respond to it, and they are doing it faster than ever before," said Esfahani.
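The fuse-then-escalate pattern behind a platform like Rekor Command can be sketched in a few lines: merge events from several feeds, score each against simple risk signals and surface only the ones worth a traffic manager's attention. The feed fields, weights and threshold below are invented for illustration and are not Rekor's actual scoring model.

```python
# Hypothetical sketch of multi-feed anomaly surfacing for a traffic
# management center. All field names and thresholds are assumptions.

def surface_anomalies(events):
    """Return events worth escalating, most severe first."""
    alerts = []
    for e in events:
        score = 0
        if e.get("stopped_vehicle"):
            score += 3               # stalled car or crash on camera
        if e.get("speed_drop_pct", 0) >= 40:
            score += 2               # sudden slowdown vs. baseline flow
        if e.get("weather") == "rain":
            score += 1               # higher secondary-incident risk
        if score >= 3:               # escalation threshold (illustrative)
            alerts.append({**e, "score": score})
    return sorted(alerts, key=lambda a: a["score"], reverse=True)
```

A real system would feed such alerts into a review workflow rather than acting on them automatically, matching the review-confirm-respond loop described above.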
"It helps save lives on the road, and it also helps people's quality of life, helps them get home faster and stay out of traffic, and it reduces the strain on the system in the city of Austin."

In addition to adopting NVIDIA's full-stack accelerated computing for roadway intelligence, Rekor is going all in on NVIDIA AI and NVIDIA AI Blueprints, which are reference workflows for generative AI use cases, built with NVIDIA NIM microservices as part of the NVIDIA AI Enterprise software platform. NVIDIA NIM is a set of easy-to-use inference microservices for accelerating deployments of foundation models on any cloud or data center while keeping data secure.

Rekor has multiple large language models and vision language models running on NVIDIA Triton Inference Server in production, according to Shai Maron, senior vice president of global software and data engineering at Rekor.

"Internally, we'll use it for data annotation, and it will help us optimize different aspects of our day to day," he said. "LLMs externally will help us calibrate our cameras in a much more efficient way and configure them."

Rekor is using the NVIDIA AI Blueprint for video search and summarization to build AI agents for city services, particularly in areas such as traffic management, public safety and optimization of city infrastructure. NVIDIA recently announced the new AI Blueprint for video search and summarization, enabling a range of interactive visual AI agents that extract complex activities from massive volumes of live or archived video.

Philadelphia Monitors Roads, EV Charger Needs, Pollution

The Philadelphia Navy Yard is a tourism hub run by the Philadelphia Industrial Development Corporation (PIDC), which faces challenges in road management and in gathering data on new developments for the popular area.
The Navy Yard location, occupying 1,200 acres, has more than 150 companies and 15,000 employees, but a $6 billion redevelopment plan there promises to bring in 12,000-plus new jobs and thousands more residents to the area.

PIDC sought greater visibility into the effects of road closures and construction projects on mobility, and how to improve mobility during significant projects and events. PIDC also looked to strengthen the Navy Yard's ability to understand the volume and traffic flow of car carriers and other large vehicles, and to quantify the impact of speed-mitigating devices deployed across hazardous stretches of roadway.

Discover provided PIDC insights into additional infrastructure projects that need to be deployed to manage any changes in traffic.

By pulling insights from Rekor's edge systems, built with NVIDIA Jetson Xavier NX modules for powerful edge processing and AI, Rekor Discover lets the Navy Yard understand the number of electric vehicles (EVs) and where they're entering and leaving, allowing PIDC to better plan potential sites for EV charging station deployment in the future.

Rekor Discover enabled PIDC planners to create a hotspot map of EV traffic by looking at data provided by the AI platform. The solution relies on real-time traffic analysis using NVIDIA's DeepStream data pipeline and Jetson. Additionally, it uses NVIDIA Triton Inference Server to enhance LLM capabilities.

PIDC also wanted to address public safety issues related to speeding and collisions, as well as decrease property damage.
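The hotspot-map idea reduces to a simple aggregation: count EV detections per entry point and rank the busiest gates as candidate charger sites. The gate names and record fields below are invented for illustration; this is not Rekor Discover's actual data model.

```python
from collections import Counter

# Hypothetical sketch of turning per-vehicle edge detections into a
# charger-siting shortlist. Field names are assumptions.

def ev_hotspots(detections, top=3):
    """Rank entry points by EV traffic volume."""
    counts = Counter(d["gate"] for d in detections if d.get("is_ev"))
    return counts.most_common(top)
```

Usage: feeding a day's detections into `ev_hotspots` yields a ranked list such as `[("north", 3), ("south", 1)]`, which a planner could overlay on a map of the campus.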
Using speed insights, it's deploying traffic-calming measures where average speeds exceed what's ideal on certain segments of roadway.

NVIDIA Jetson Xavier NX to Monitor Pollution in Real Time

Traditionally, urban planners can look at satellite imagery to try to understand pollution locations, but Rekor's vehicle recognition models, running on NVIDIA Jetson Xavier NX modules, were able to track pollution to its sources, taking it a step further toward mitigation.

"It's about air quality," said Shobhit Jain, senior vice president of product management at Rekor. "We've built models to be really good at that. They can know how much pollution each vehicle is putting out."

Looking ahead, Rekor is examining how NVIDIA Omniverse might be used for digital twin development in order to simulate traffic mitigation with different strategies. Omniverse is a platform for developing OpenUSD applications for industrial digitalization and generative physical AI.

Developing digital twins with Omniverse for municipalities has enormous implications for reducing traffic, pollution and road fatalities, all areas Rekor sees as hugely beneficial to its customers.

"Our data models are granular, and we're definitely exploring Omniverse," said Jain. "We'd like to see how we can support those digital use cases."

Learn about the NVIDIA AI Blueprint for building AI agents for video search and summarization.
  • BLOGS.NVIDIA.COM
    Give AI a Look: Any Industry Can Now Search and Summarize Vast Volumes of Visual Data
Enterprises and public sector organizations around the world are developing AI agents to boost the capabilities of workforces that rely on visual information from a growing number of devices, including cameras, IoT sensors and vehicles.

To support their work, a new NVIDIA AI Blueprint for video search and summarization will enable developers in virtually any industry to build visual AI agents that analyze video and image content. These agents can answer user questions, generate summaries and enable alerts for specific scenarios.

Part of NVIDIA Metropolis, a set of developer tools for building vision AI applications, the blueprint is a customizable workflow that combines NVIDIA computer vision and generative AI technologies.

Global systems integrators and technology solutions providers including Accenture, Dell Technologies and Lenovo are bringing the NVIDIA AI Blueprint for video search and summarization to businesses and cities worldwide, jump-starting the next wave of AI applications that can be deployed to boost productivity and safety in factories, warehouses, shops, airports, traffic intersections and more.

Announced ahead of the Smart City Expo World Congress, the NVIDIA AI Blueprint gives visual computing developers a full suite of optimized software for building and deploying generative AI-powered agents that can ingest and understand massive volumes of live video streams or data archives.

Users can customize these visual AI agents with natural language prompts instead of rigid software code, lowering the barrier to deploying virtual assistants across industries and smart city applications.

NVIDIA AI Blueprint Harnesses Vision Language Models

Visual AI agents are powered by vision language models (VLMs), a class of generative AI models that combine computer vision and language understanding to interpret the physical world and perform reasoning tasks.

The NVIDIA AI Blueprint for video search and summarization can be configured with NVIDIA NIM microservices for VLMs like NVIDIA VILA, LLMs like Meta's Llama 3.1 405B and AI models for GPU-accelerated question answering and context-aware retrieval-augmented generation. Developers can easily swap in other VLMs, LLMs and graph databases, and fine-tune them using the NVIDIA NeMo platform for their unique environments and use cases.

Adopting the NVIDIA AI Blueprint could save developers months of effort on investigating and optimizing generative AI models for smart city applications. Deployed on NVIDIA GPUs at the edge, on premises or in the cloud, it can vastly accelerate the process of combing through video archives to identify key moments.

In a warehouse environment, an AI agent built with this workflow could alert workers if safety protocols are breached. At busy intersections, an AI agent could identify traffic collisions and generate reports to aid emergency response efforts. And in the field of public infrastructure, maintenance workers could ask AI agents to review aerial footage and identify degrading roads, train tracks or bridges to support proactive maintenance.

Beyond smart spaces, visual AI agents could also be used to summarize videos for people with impaired vision, automatically generate recaps of sporting events and help label massive visual datasets to train other AI models.

The video search and summarization workflow joins a collection of NVIDIA AI Blueprints that make it easy to create AI-powered digital avatars, build virtual assistants for personalized customer service and extract enterprise insights from PDF data.

NVIDIA AI Blueprints are free for developers to experience and download, and can be deployed in production across accelerated data centers and clouds with NVIDIA AI Enterprise, an end-to-end software platform that accelerates data science pipelines and streamlines generative AI development and deployment.

AI Agents to Deliver Insights From Warehouses to World Capitals

Enterprise and public sector customers can also harness the full collection of NVIDIA AI Blueprints with the help of NVIDIA's partner ecosystem.

Global professional services company Accenture has integrated NVIDIA AI Blueprints into its Accenture AI Refinery, which is built on NVIDIA AI Foundry and enables customers to develop custom AI models trained on enterprise data.

Global systems integrators in Southeast Asia, including ITMAX in Malaysia and FPT in Vietnam, are building AI agents based on the video search and summarization NVIDIA AI Blueprint for smart city and intelligent transportation applications.

Developers can also build and deploy NVIDIA AI Blueprints on NVIDIA AI platforms with compute, networking and software provided by global server manufacturers.

Dell will use VLM and agent approaches with Dell's NativeEdge platform to enhance existing edge AI applications and create new edge AI-enabled capabilities. Dell Reference Designs for the Dell AI Factory with NVIDIA and the NVIDIA AI Blueprint for video search and summarization will support VLM capabilities in dedicated AI workflows for data center, edge and on-premises multimodal enterprise use cases.

NVIDIA AI Blueprints are also incorporated in Lenovo Hybrid AI solutions powered by NVIDIA.

Companies like K2K, a smart city application provider in the NVIDIA Metropolis ecosystem, will use the new NVIDIA AI Blueprint to build AI agents that analyze live traffic cameras in real time. This will enable city officials to ask questions about street activity and receive recommendations on ways to improve operations. The company is also working with city traffic managers in Palermo, Italy, to deploy visual AI agents using NIM microservices and NVIDIA AI Blueprints.

Discover more about the NVIDIA AI Blueprint for video search and summarization by visiting the NVIDIA booth at the Smart City Expo World Congress, taking place in Barcelona through Nov. 7.

Learn how to build a visual AI agent and get started with the blueprint.
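A common pattern behind this kind of video search and summarization agent is caption-then-condense: a VLM captions short clips, and an LLM merges the captions into one summary. The sketch below stubs out both models with plain functions so the control flow is visible; in a real deployment each stub would be a call to an inference microservice, and the clip fields shown are invented for illustration.

```python
# Minimal sketch of the caption-then-condense pattern used by video
# summarization agents. Both "models" are stubs; clip fields are
# assumptions for the sketch.

def caption_clip(clip):
    """Stub VLM: a real system would send sampled frames to a vision
    language model and get back a natural-language caption."""
    return f"clip {clip['id']}: {clip['activity']}"

def summarize(captions):
    """Stub LLM: a real system would prompt a language model to
    condense the captions into a coherent summary."""
    return " | ".join(captions)

def summarize_video(clips):
    """Caption each clip, then condense the captions."""
    return summarize([caption_clip(c) for c in clips])
```

Chunking the video first is what lets the approach scale to massive archives: captions are small enough to index for search and cheap enough to re-summarize on demand.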
  • BLOGS.NVIDIA.COM
    Scale New Heights With Dragon Age: The Veilguard in the Cloud on GeForce NOW
Even post-spooky season, GFN Thursday has some treats for GeForce NOW members: a new batch of 17 games joining the cloud in November.

Catch the five games available to stream this week, including Dragon Age: The Veilguard, the highly anticipated next installment in BioWare's beloved fantasy role-playing game series. Players who purchased the GeForce NOW Ultimate bundle can stream the game at launch for free starting today.

Unite the Veilguard

What's your dragon age?

In Dragon Age: The Veilguard, take on the role of Rook and stop a pair of corrupt ancient gods who've broken free from centuries of darkness, hellbent on destroying the world. Set in the rich world of Thedas, the game includes an epic story with meaningful choices, deep character relationships and a mix of familiar and new companions to go on adventures with.

Select from three classes, each with distinct weapon types, and harness the classes' unique, powerful abilities while coordinating with a team of seven companions, who have their own rich lives and deep backstories. An expansive skill-tree system allows for diverse character builds across the Warrior, Rogue and Mage classes.

Experience the adventure in the vibrant world of Thedas with enhanced visual fidelity and performance by tapping into a GeForce NOW membership. Priority members can enjoy the game at up to 1080p resolution and 60 frames per second (fps). Ultimate members can take advantage of 4K resolution, up to 120 fps and advanced features like NVIDIA DLSS 3, low-latency gameplay with NVIDIA Reflex, and enhanced image quality and immersion with ray-traced ambient occlusion and reflections, even on low-powered devices.

Resident Evil 4 in the Cloud

Stream it from the cloud to survive.

Capcom's Resident Evil 4 is now available on GeForce NOW, bringing the horror to cloud gaming.

Survival is just the beginning. Six years have passed since the biological disaster in Raccoon City. Agent Leon S. Kennedy, one of the incident's survivors, has been sent to rescue the president's kidnapped daughter. The agent tracks her to a secluded European village, where there's something terribly wrong with the locals. The curtain rises on this story of daring rescue and grueling horror, where life and death, terror and catharsis intersect.

Featuring modernized gameplay, a reimagined storyline and vividly detailed graphics, Resident Evil 4 marks the rebirth of an industry juggernaut. Relive the nightmare that revolutionized survival horror, with stunning high-dynamic-range visuals and immersive ray-tracing technology for Priority and Ultimate members.

Life Is Great With New Games

Time to gear up, agents.

A new season for Year 6 in Tom Clancy's The Division 2 from Ubisoft is now available for members to stream. In Shades of Red, rogue ex-Division agent Aaron Keener has given himself up and is now in custody at the White House. The Division must learn what he knows to secure the other members of his team. New Seasonal Modifiers change gameplay and gear usage for players. A revamped progression system is also available. The Seasonal Journey comprises a series of missions, each containing a challenge-style objective for players to complete.

Look for the following games available to stream in the cloud this week:

Life Is Strange: Double Exposure (New release on Steam and Xbox, available in the Microsoft Store, Oct. 29)
Dragon Age: The Veilguard (New release on Steam and EA App, Oct. 31)
Resident Evil 4 (Steam)
Resident Evil 4 Chainsaw Demo (Steam)
VRChat (Steam)

Here's what members can expect for the rest of November:

Metal Slug Tactics (New release on Steam, Nov. 5)
Planet Coaster 2 (New release on Steam, Nov. 6)
Teenage Mutant Ninja Turtles: Splintered Fate (New release on Steam, Nov. 6)
Empire of the Ants (New release on Steam, Nov. 7)
Unrailed 2: Back on Track (New release on Steam, Nov. 7)
Farming Simulator 25 (New release on Steam, Nov. 12)
Sea Power: Naval Combat in the Missile Age (New release on Steam, Nov. 12)
Industry Giant 4.0 (New release on Steam, Nov. 15)
Towers of Aghasba (New release on Steam, Nov. 19)
S.T.A.L.K.E.R. 2: Heart of Chornobyl (New release on Steam and Xbox, available on PC Game Pass, Nov. 20)
Star Wars Outlaws (New release on Steam, Nov. 21)
Dungeons & Degenerate Gamblers (Steam)
Headquarters: World War II (Steam)
PANICORE (Steam)
Slime Rancher (Steam)
Sumerian Six (Steam)
TCG Card Shop Simulator (Steam)

Outstanding October

In addition to the 22 games announced last month, eight more joined the GeForce NOW library:

Empyrion Galactic Survival (New release on Epic Games Store, Oct. 10)
Assassin's Creed Mirage (New release on Steam, Oct. 17)
Windblown (New release on Steam, Oct. 24)
Call of Duty HQ, including Call of Duty: Modern Warfare III and Call of Duty: Warzone (Xbox, available on PC Game Pass)
Dungeon Tycoon (Steam)
Off the Grid (Epic Games Store)
South Park: The Fractured but Whole (Available on PC Game Pass, Oct. 16. Members need to activate access.)
Star Trucker (Steam and Xbox, available on PC Game Pass)

What are you planning to play this weekend? Let us know on X or in the comments below.
  • BLOGS.NVIDIA.COM
    Startup Helps Surgeons Target Breast Cancers With AI-Powered 3D Visualizations
A new AI-powered, imaging-based technology that creates accurate three-dimensional models of tumors, veins and other soft tissue offers a promising new method to help surgeons operate on, and better treat, breast cancers.

The technology, from Illinois-based startup SimBioSys, converts routine black-and-white MRI images into spatially accurate, volumetric images of a patient's breasts. It then illuminates different parts of the breast with distinct colors: the vascular system, or veins, may be red; tumors are shown in blue; surrounding tissue is gray.

Surgeons can then easily manipulate the 3D visualization on a computer screen, gaining important insight to help guide surgeries and influence treatment plans. The technology, called TumorSight, calculates key surgery-related measurements, including a tumor's volume and how far tumors are from the chest wall and nipple.

It also provides key data about a tumor's volume in relation to a breast's overall volume, which can help determine, before a procedure begins, whether surgeons should try to preserve a breast or choose a mastectomy, which often presents cosmetic and painful side effects. Last year, TumorSight received FDA clearance.

Across the world, nearly 2.3 million women are diagnosed with breast cancer each year, according to the World Health Organization. Every year, breast cancer is responsible for the deaths of more than 500,000 women. Around 100,000 women in the U.S. annually undergo some form of mastectomy, according to Brigham and Women's Hospital.

According to Jyoti Palaniappan, chief commercial officer at SimBioSys, the company's visualization technology offers a step-change improvement over the kind of data surgeons typically see before they begin surgery.

"Typically, surgeons will get a radiology report, which tells them, 'Here's the size and location of the tumor,' and they'll get one or two pictures of the patient's tumor," said Palaniappan.
"If the surgeon wants to get more information, they'll need to find the radiologist and have a conversation with them, which doesn't always happen, and go through the case with them."

Dr. Barry Rosen, the company's chief medical officer, said one of the technology's primary goals is to uplevel and standardize presurgical imaging, which he believes can have broad positive impacts on outcomes.

"We're trying to move the surgical process from an art to a science by harnessing the power of AI to improve surgical planning," Dr. Rosen said.

SimBioSys uses NVIDIA A100 Tensor Core GPUs in the cloud for pretraining its models. It also uses NVIDIA MONAI for training and validation data, and NVIDIA CUDA-X libraries, including cuBLAS and MONAI Deploy, to run its imaging technology. SimBioSys is part of the NVIDIA Inception program for startups.

SimBioSys is already working on additional AI use cases it hopes can improve breast cancer survival rates.

It has developed a novel technique to reconcile MRI images of a patient's breasts, taken when the patient is lying face down, and convert those images into virtual, realistic 3D visualizations that show how the tumor and surrounding tissue will appear during surgery, when the patient is lying face up.

This 3D visualization is especially relevant for surgeons so they can visualize what a breast and any tumors will look like once surgery begins.

To create this imagery, the technology calculates gravity's impact on different kinds of breast tissue and accounts for how different kinds of skin elasticity affect a breast's shape when a patient is lying on the operating table.

The startup is also working on a new strategy that relies on AI to quickly provide insights that can help avoid cancer recurrence.

Currently, hospital labs run pathology tests on tumors that surgeons have removed. The biopsies are then sent to a different, outside lab, which conducts a more comprehensive molecular analysis.

This process routinely takes up to six weeks.
Without knowing how aggressive a cancer in the removed tumor is, or how that type of cancer might respond to different treatments, patients and doctors are unable to quickly chart out treatment plans to avoid recurrence.

SimBioSys' new technology uses an AI model to analyze the 3D volumetric features of the just-removed tumor, the hospital's initial tumor pathology report and a patient's demographic data. From that information, SimBioSys generates, in a matter of hours, a risk analysis for that patient's cancer, which helps doctors quickly determine the best treatment to avoid recurrence.

According to SimBioSys' Palaniappan, the startup's new method matches or exceeds the recurrence-risk scoring ability of more traditional methodologies, based on its internal studies. It also takes a fraction of the time of those other methods while costing far less.
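One of the measurements the article describes, tumor volume relative to total breast volume, can be sketched with simple geometry. The spherical approximation below is a teaching simplification, and any decision threshold on the ratio would be a clinical judgment; neither is SimBioSys' actual method.

```python
import math

# Illustrative sketch only: approximate a roughly spherical tumor's
# volume and express it as a fraction of breast volume. Real systems
# compute volumes from segmented 3D imaging, not a sphere formula.

def sphere_volume_ml(diameter_mm):
    """Volume of a sphere of the given diameter, in milliliters."""
    r = diameter_mm / 2
    return (4 / 3) * math.pi * r**3 / 1000  # 1 ml = 1000 mm^3

def tumor_to_breast_ratio(tumor_ml, breast_ml):
    """Fraction of breast volume occupied by the tumor."""
    return tumor_ml / breast_ml
```

For example, a 30 mm tumor is roughly 14 ml; in a 600 ml breast that is about 2.4% of total volume, the kind of figure that feeds into the breast-conservation versus mastectomy discussion described above.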
  • BLOGS.NVIDIA.COM
    Spooks Await at the Haunted Sanctuary, Built With RTX and AI
Among the artists using AI to enhance and accelerate their creative endeavors is Sabour Amirazodi, a creator and tech marketing and workflow specialist at NVIDIA.

Using his over 20 years of multi-platform experience in location-based entertainment and media production, he decorates his home every year with an incredible Halloween installation dubbed the Haunted Sanctuary.

The project is a massive undertaking requiring projection mapping, the creation and assembly of 3D scenes, compositing and editing in Adobe After Effects and Premiere Pro, and more. The creation process was accelerated using the NVIDIA Studio content creation platform and Amirazodi's NVIDIA RTX 6000 GPU.

This year, Amirazodi deployed new AI workflows in ComfyUI, Adobe Firefly and Photoshop to create digital portraits inspired by his family as part of the installation.

Give 'em Pumpkin to Talk About

ComfyUI is a node-based interface that generates images and videos from text. It's designed to be highly customizable, allowing users to design workflows, adjust settings and see results immediately.
It can combine various AI models and third-party extensions to achieve a higher degree of control.

For example, the workflow below requires entering a prompt, with the details and characteristics of the desired image, and a negative prompt to help omit any undesired visual effects.

Since Amirazodi wanted his digital creations to closely resemble his family, he started by applying IP Adapters, which use reference images to inform generated content.

ComfyUI nodes and reference material in the viewer.

From there, he tinkered with the settings to achieve the desired look and feel of each character.

The Amirazodis, digitized for the Haunted Sanctuary installation.

ComfyUI has NVIDIA TensorRT acceleration, so RTX users can generate images from prompts up to 60% faster.

Get started with ComfyUI.

In Darkness, Let There Be Light

Adobe Firefly is a family of creative generative AI models that offer new ways to ideate and create while assisting creative workflows. They're designed to be safe for commercial use and were trained, using NVIDIA GPUs, on licensed content like Adobe Stock images and public domain content where copyright has expired.

To make the digital portraits fit as desired, Amirazodi needed to expand the background.

Adobe Photoshop features a Generative Fill tool called Generative Expand that allows artists to extend the border of their image with the Crop tool and automatically fill the space with content that matches the existing image.

Photoshop also features Neural Filters that allow artists to explore creative ideas and make complex adjustments to images in just seconds, saving them hours of tedious, manual work.

With Smart Portrait Neural Filters, artists can easily experiment with facial characteristics such as gaze direction and lighting angles simply by dragging a slider.
Amirazodi used the feature to apply the final touches to his portraits, adjusting colors, textures, depth blur and facial expressions.

NVIDIA RTX GPUs help power AI-based tasks, accelerating the Neural Filters in Photoshop.

Learn more about the latest Adobe features and tools in this blog.

AI is already helping accelerate and automate tasks across content creation, gaming and everyday life, and the speedups are only multiplied with an NVIDIA RTX or GeForce RTX GPU-equipped system.

Check out and share Halloween- and fall-themed art as part of the NVIDIA Studio #HarvestofCreativity challenge on Instagram, X, Facebook and Threads for a chance to be featured on the social media channels.
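The node-based workflow idea described in this article, where each node performs one operation and wires its inputs to other nodes' outputs, can be sketched as a tiny graph evaluator. The node names and operations below are invented stand-ins, not real ComfyUI nodes or its API.

```python
# Toy sketch of a node graph in the spirit of node-based tools like
# ComfyUI. Each node has an "op" and optional "inputs" naming other
# nodes; evaluation walks the graph and caches upstream results.

def run_graph(graph, node_id, cache=None):
    """Recursively evaluate a node, reusing already-computed results."""
    cache = {} if cache is None else cache
    if node_id in cache:
        return cache[node_id]
    node = graph[node_id]
    args = [run_graph(graph, dep, cache) for dep in node.get("inputs", [])]
    cache[node_id] = node["op"](*args)
    return cache[node_id]

# A three-node "workflow": two text nodes feed a combiner node,
# mirroring the prompt / negative-prompt pattern.
graph = {
    "prompt":   {"op": lambda: "a haunted mansion"},
    "negative": {"op": lambda: "blurry"},
    "combine":  {"op": lambda p, n: f"+{p} -{n}", "inputs": ["prompt", "negative"]},
}
```

Caching is what makes node editors responsive: tweaking one node only recomputes the nodes downstream of it, which is why artists can adjust settings and see results immediately.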
  • BLOGS.NVIDIA.COM
    A New ERA of AI Factories: NVIDIA Unveils Enterprise Reference Architectures
As the world transitions from general-purpose to accelerated computing, finding a path to building data center infrastructure at scale is becoming more important than ever. Enterprises must navigate uncharted waters when designing and deploying infrastructure to support these new AI workloads.

Constant developments in model capabilities and software frameworks, along with the novelty of these workloads, mean best practices and standardized approaches are still in their infancy. This state of flux can make it difficult for enterprises to establish long-term strategies and invest in infrastructure with confidence.

To address these challenges, NVIDIA is unveiling Enterprise Reference Architectures (Enterprise RAs). These comprehensive blueprints help NVIDIA systems partners and joint customers build their own AI factories: high-performance, scalable and secure data centers for manufacturing intelligence.

Building AI Factories to Unlock Enterprise Growth

NVIDIA Enterprise RAs help organizations avoid pitfalls when designing AI factories by providing full-stack hardware and software recommendations, and detailed guidance on optimal server, cluster and network configurations for modern AI workloads.

Enterprise RAs can reduce the time and cost of deploying AI infrastructure solutions by providing a streamlined approach for building flexible and cost-effective accelerated infrastructure, while ensuring compatibility and interoperability.

Each Enterprise RA includes recommendations for:

Accelerated infrastructure based on an optimized NVIDIA-Certified server configuration, featuring the latest NVIDIA GPUs, CPUs and networking technologies, that's been tested and validated to deliver performance at scale.

AI-optimized networking with the NVIDIA Spectrum-X AI Ethernet platform and NVIDIA BlueField-3 DPUs to deliver peak network performance, and guidance on optimal network configurations at multiple design points to address varying workload and scale requirements.

The NVIDIA AI Enterprise software platform for production AI, which includes NVIDIA NeMo and NVIDIA NIM microservices for easily building and deploying AI applications, and NVIDIA Base Command Manager Essentials for infrastructure provisioning, workload management and resource monitoring.

Businesses that deploy AI workloads on partner solutions based upon Enterprise RAs, which are informed by NVIDIA's years of expertise in designing and building large-scale computing systems, will benefit from:

Accelerated time to market: By using NVIDIA's structured approach and recommended designs, enterprises can deploy AI solutions faster, reducing the time to achieve business value.

Performance: Build upon tested and validated technologies with the confidence that AI workloads will run at peak performance.

Scalability and manageability: Develop AI infrastructure while incorporating design best practices that enable flexibility and scale, and help ensure optimal network performance.

Security: Run workloads securely on AI infrastructure that's engineered with zero trust in mind, supports confidential computing and is optimized for the latest cybersecurity AI innovations.

Reduced complexity: Accelerate deployment timelines, while avoiding design and planning pitfalls, through optimal server, cluster and network configurations for AI workloads.

Availability

Solutions based upon NVIDIA Enterprise RAs are available from NVIDIA's global partners, including Dell Technologies, Hewlett Packard Enterprise, Lenovo and Supermicro.

Learn more about NVIDIA-Certified Systems and NVIDIA Enterprise Reference Architectures.
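To ground the kind of sizing question a reference architecture answers, here is a back-of-the-envelope sketch: how many servers and leaf-switch ports a GPU cluster of a given size needs. Every number below (GPUs per node, NICs per node, ports per switch) is an illustrative assumption, not a figure from NVIDIA's Enterprise RAs.

```python
# Hypothetical capacity-planning sketch for a GPU cluster. All
# defaults are assumptions for illustration only.

def cluster_plan(target_gpus, gpus_per_node=8, nics_per_node=4,
                 ports_per_switch=64):
    """Estimate node, link and leaf-switch counts for a GPU target."""
    nodes = -(-target_gpus // gpus_per_node)      # ceiling division
    links = nodes * nics_per_node                 # east-west fabric links
    leaf_switches = -(-links // ports_per_switch)
    return {"nodes": nodes, "rail_links": links, "leaf_switches": leaf_switches}
```

Working through such arithmetic at several design points (say, 32, 256 and 1,024 GPUs) is precisely what a published reference architecture spares an enterprise from doing, and validating, on its own.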
  • BLOGS.NVIDIA.COM
    Bring Receipts: New NVIDIA AI Workflow Detects Fraudulent Credit Card Transactions
Financial losses from worldwide credit card transaction fraud are expected to reach $43 billion by 2026.

A new NVIDIA AI workflow for fraud detection running on Amazon Web Services (AWS) can help combat this burgeoning epidemic, using accelerated data processing and advanced algorithms to improve AI's ability to detect and prevent credit card transaction fraud. Launched this week at the Money20/20 fintech conference, the workflow enables financial institutions to identify subtle patterns and anomalies in transaction data based on user behavior, improving accuracy and reducing false positives compared with traditional methods. Users can streamline the migration of their fraud detection workflows from traditional compute to accelerated compute using the NVIDIA AI Enterprise software platform and NVIDIA GPU instances.

Businesses embracing comprehensive machine learning tools and strategies can observe up to an estimated 40% improvement in fraud detection accuracy, boosting their ability to identify and stop fraudsters faster and mitigate harm. As such, leading financial organizations like American Express and Capital One have been using AI to build proprietary solutions that mitigate fraud and enhance customer protection.

The new NVIDIA workflow accelerates data processing, model training and inference, and demonstrates how these components can be wrapped into a single, easy-to-use software offering, powered by NVIDIA AI. Currently optimized for credit card transaction fraud, the workflow could be adapted for use cases such as new account fraud, account takeover and money laundering.

Accelerated Computing for Fraud Detection

As AI models expand in size, intricacy and diversity, it's more important than ever for organizations across industries, including financial services, to harness cost- and energy-efficient computing power. Traditional data science pipelines lack the necessary compute acceleration to handle the massive volumes of data required to effectively fight fraud amid rapidly growing losses across the industry. Leveraging the NVIDIA RAPIDS Accelerator for Apache Spark could help payment companies reduce data processing times and save on their data processing costs. To efficiently manage large-scale datasets and deliver real-time AI performance with complex AI models, financial institutions are turning to NVIDIA's AI and accelerated computing platforms.

Gradient-boosted decision trees, a type of machine learning algorithm available through libraries such as XGBoost, have long been the standard for fraud detection. The new NVIDIA AI workflow for fraud detection enhances XGBoost with the NVIDIA RAPIDS suite of AI libraries, using graph neural network (GNN) embeddings as additional features to help reduce false positives. The GNN embeddings are fed into XGBoost to create and train a model that can then be orchestrated with the NVIDIA Morpheus Runtime Core library and NVIDIA Triton Inference Server for real-time inferencing. The NVIDIA Morpheus framework securely inspects and classifies all incoming data, tagging it with patterns and flagging potentially suspicious activity. NVIDIA Triton Inference Server simplifies inference for all types of AI model deployments in production, while optimizing throughput, latency and utilization. NVIDIA Morpheus, RAPIDS and Triton Inference Server are available through NVIDIA AI Enterprise.

Leading Financial Services Organizations Adopt AI

At a time when many large North American financial institutions report that online and mobile fraud losses continue to increase, AI is helping to combat this trend. American Express, which began using AI to fight fraud in 2010, leverages fraud detection algorithms to monitor all customer transactions globally in real time, generating fraud decisions in just milliseconds. Using a combination of advanced algorithms, one of which tapped into the NVIDIA AI platform, American Express enhanced model accuracy, advancing the company's ability to better fight fraud.

European digital bank bunq uses generative AI and large language models to help detect fraud and money laundering. Its AI-powered transaction-monitoring system achieved nearly 100x faster model training speeds with NVIDIA accelerated computing. BNY announced in March that it became the first major bank to deploy an NVIDIA DGX SuperPOD with DGX H100 systems, which will help build solutions that support fraud detection and other use cases.

And now, systems integrators, software vendors and cloud service providers can integrate the new NVIDIA AI workflow for fraud detection to boost their financial services applications and help keep customers' money, identities and digital accounts safe.

Explore the fraud detection NVIDIA AI workflow and read this NVIDIA Technical Blog on supercharging fraud detection with GNNs. Learn more about AI for fraud detection by visiting the NVIDIA AI Pavilion featuring AWS at Money20/20, running this week in Las Vegas.
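The embeddings-as-features idea behind the workflow can be pictured with a minimal NumPy sketch. All shapes and values below are made up for illustration; in the actual workflow the embeddings come from a GNN trained on the transaction graph and the combined matrix is passed to an XGBoost trainer, neither of which is shown here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tabular transaction features (e.g. amount, time deltas, merchant codes).
n_transactions, n_tabular = 1000, 8
tabular = rng.normal(size=(n_transactions, n_tabular))

# Hypothetical GNN embeddings, one vector per transaction; in the real
# workflow these are produced by a graph neural network over the
# account/merchant transaction graph.
embedding_dim = 16
gnn_embeddings = rng.normal(size=(n_transactions, embedding_dim))

# Augment the tabular features with the embeddings: this combined matrix
# is what a gradient-boosted-tree model such as XGBoost would train on.
features = np.concatenate([tabular, gnn_embeddings], axis=1)
print(features.shape)  # (1000, 24)
```

The point of the augmentation is that the tree model keeps its strengths on tabular signals while the embedding columns inject relational context the tables alone cannot express.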
  • BLOGS.NVIDIA.COM
    Fintech Leaders Tap Generative AI for Safer, Faster, More Accurate Financial Services
An overwhelming 91% of financial services industry (FSI) companies are either assessing artificial intelligence or already have it in the bag as a tool that's driving innovation, improving operational efficiency and enhancing customer experiences. Generative AI powered by NVIDIA NIM microservices and accelerated computing can help organizations improve portfolio optimization, fraud detection, customer service and risk management.

Among the companies harnessing these technologies to boost financial services applications are Ntropy, Contextual AI and NayaOne, all members of the NVIDIA Inception program for cutting-edge startups. And Silicon Valley-based startup Securiti, which offers a centralized, intelligent platform for the safe use of data and generative AI, is using NVIDIA NIM to build an AI-powered copilot for financial services. At Money20/20, a leading fintech conference running this week in Las Vegas, the companies will demonstrate how their technologies can turn disparate, often complex FSI data into actionable insights and advanced innovation opportunities for banks, fintechs, payment providers and other organizations.

Ntropy Brings Order to Unstructured Financial Data

New York-based Ntropy is helping remove various states of entropy (disorder, randomness or uncertainty) from financial services workflows. "Whenever money is moved from point A to point B, text is left in bank statements, PDF receipts and other forms of transaction history," said Nar Vardanyan, cofounder and CEO of Ntropy. "Traditionally, that unstructured data has been very hard to clean up and use for financial applications."

The company's transaction enrichment application programming interface (API) standardizes financial data from across different sources and geographies, acting as a common language that can help financial services applications understand any transaction with humanlike accuracy in just milliseconds, at 10,000x lower cost than traditional methods. It's built on the Llama 3 NVIDIA NIM microservice and NVIDIA Triton Inference Server running on NVIDIA H100 Tensor Core GPUs. Using the Llama 3 NIM microservice, Ntropy achieved up to 20x better utilization and throughput for its large language models (LLMs) compared with running the native models. Airbase, a leading procure-to-pay software platform provider, boosts transaction authorization processes using LLMs and the Ntropy data enricher.

At Money20/20, Ntropy will discuss how its API can be used to clean up customers' merchant data, which boosts fraud detection by improving the accuracy of risk-detection models. This in turn reduces both false transaction declines and revenue loss. Another demo will highlight how an automated loan agent taps into the Ntropy API to analyze information on a bank's website and generate a relevant investment report, speeding loan dispersal and decision-making for users.

Contextual AI Advances Retrieval-Augmented Generation for FSI

Contextual AI, based in Mountain View, California, offers a production-grade AI platform, powered by retrieval-augmented generation (RAG) and ideal for building enterprise AI applications in knowledge-intensive FSI use cases. "RAG is the answer to delivering enterprise AI into production," said Douwe Kiela, CEO and cofounder of Contextual AI. Tapping into NVIDIA technologies and large language models, the Contextual AI RAG 2.0 platform can bring accurate, auditable AI to FSI enterprises looking to optimize operations and offer new generative AI-powered products.

The Contextual AI platform integrates the entire RAG pipeline, including extraction, retrieval, reranking and generation, into a single optimized system that can be deployed in minutes, and further tuned and specialized based on customer needs, delivering much greater accuracy in context-dependent tasks.

HSBC plans to use Contextual AI to provide research insights and process guidance support by retrieving and synthesizing relevant market outlooks, financial news and operational documents. Other financial organizations are also harnessing Contextual AI's pre-built applications, including for financial analysis, policy-compliance report generation, financial advice query resolution and more. For example, a user could ask, "What's our forecast for central bank rates by Q4 2025?" The Contextual AI platform would provide a brief explanation and an accurate answer grounded in factual documents, including citations to specific sections in the source.

Contextual AI uses NVIDIA Triton Inference Server and the open-source NVIDIA TensorRT-LLM library for accelerating and optimizing LLM inference performance.

NayaOne Provides Digital Sandbox for Financial Services Innovation

London-based NayaOne offers an AI sandbox that allows customers to securely test and validate AI applications prior to commercial deployment. Its technology platform gives financial institutions the ability to create synthetic data and provides access to a marketplace of hundreds of fintechs. Customers can use the digital sandbox to benchmark applications for fairness, transparency, accuracy and other compliance measures, and to better ensure top performance and successful integration.

"The demand for AI-driven solutions in financial services is accelerating, and our collaboration with NVIDIA allows institutions to harness the power of generative AI in a controlled, secure environment," said Karan Jain, CEO of NayaOne. "We're creating an ecosystem where financial institutions can prototype faster and more effectively, leading to real business transformation and growth initiatives."

Using NVIDIA NIM microservices, NayaOne's AI Sandbox lets customers explore and experiment with optimized AI models, and take them to deployment more easily. With NVIDIA accelerated computing, NayaOne achieves up to 10x faster processing for the large datasets used in its fraud detection models, at up to 40% lower infrastructure costs compared with running extensive CPU-based models. The digital sandbox also uses the open-source NVIDIA RAPIDS set of data science and AI libraries to accelerate fraud detection and prevention capabilities in money movement applications. The company will demonstrate its digital sandbox at the NVIDIA AI Pavilion at Money20/20.

Securiti Improves Financial Planning With AI Copilot

Powering a broad range of generative AI applications, including safe enterprise AI copilots and LLM training and tuning, Securiti's highly flexible Data+AI platform lets users build safe, end-to-end enterprise AI systems. The company is now building an NVIDIA NIM-powered financial planning assistant. The copilot chatbot accesses diverse financial data while adhering to privacy and entitlement policies to provide context-aware responses to users' finance-related questions.

"Banks struggle to provide personalized financial advice at scale while maintaining data security, privacy and compliance with regulations," said Jack Berkowitz, chief data officer at Securiti. "With robust data protection and role-based access for secure, scalable support, Securiti helps build safe AI copilots that offer personalized financial advice tailored to individual goals."

The chatbot retrieves data from a variety of sources, such as earnings transcripts, client profiles, account balances and investment research documents. Securiti's solution safely ingests this data and prepares it for use with high-performance, NVIDIA-powered LLMs, preserving controls such as access entitlements. Finally, it provides users with customized responses through a simple consumer interface. Using the Llama 3 70B-Instruct NIM microservice, Securiti optimized the performance of the LLM while ensuring the safe use of data. The company will demonstrate its generative AI solution at Money20/20.

NIM microservices and Triton Inference Server are available through the NVIDIA AI Enterprise software platform. Learn more about AI for financial services by joining NVIDIA at Money20/20, running through Wednesday, Oct. 30. Explore a new NVIDIA AI workflow for fraud detection.
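NIM language-model microservices such as the Llama 3 ones mentioned above expose an OpenAI-compatible HTTP API, so a copilot backend typically just posts a chat-completions request to the service. A minimal sketch follows; the endpoint URL, context string and question are placeholders, and the request is only constructed here, not actually sent:

```python
import json

# Placeholder endpoint; a real deployment would point at its own NIM host.
NIM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meta/llama3-70b-instruct"

def build_chat_request(question: str, context: str) -> dict:
    """Build an OpenAI-style chat-completions payload for a NIM service."""
    return {
        "model": MODEL,
        "messages": [
            # Entitlement-checked context is injected as a system message.
            {"role": "system",
             "content": "Answer using only the provided context.\n" + context},
            {"role": "user", "content": question},
        ],
        "temperature": 0.2,
        "max_tokens": 256,
    }

payload = build_chat_request(
    "What are this client's projected account balances for next quarter?",
    "Excerpt from an entitlement-checked research document...",
)
print(json.dumps(payload, indent=2))
```

Because the schema matches the OpenAI chat-completions format, the same payload works with standard OpenAI client libraries pointed at the NIM base URL.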
  • BLOGS.NVIDIA.COM
    Zooms AI-First Transformation to Boost Business Productivity, Collaboration
Zoom, a company that helped change the way people work during the COVID-19 pandemic, is continuing to reimagine the future of work by transforming itself into an AI-first communications and productivity platform. In this episode of NVIDIA's AI Podcast, Zoom CTO Xuedong (XD) Huang shares how the company is reshaping productivity with AI, including through its Zoom AI Companion 2.0, unveiled recently at the Zoomtopia conference. Designed to be a productivity partner, the AI companion is central to Zoom's federated AI strategy, which focuses on integrating multiple large language models. Huang also introduces the concept of AUI, combining conversational AI and graphical user interfaces (GUIs) to streamline collaboration and supercharge business performance.

The AI Podcast: Zoom's AI-First Transformation to Boost Business Productivity, Collaboration - Ep. 235

Time Stamps
6:49: The fundamental capabilities of generative AI
8:20: Zoom's approach to AI, including the use of small language models
11:20: Zoom's federated AI strategy, integrating multiple AI models
13:10: Introducing the concept of AUI
20:00: Huang on how AI will impact productivity and everyday tasks
29:00: How Zoom helps business leaders understand AI and the return on investment of AI projects
32:50: Huang's near-term outlook on the development of AI

You Might Also Like

How SonicJobs Uses AI Agents to Connect the Internet, Starting With Jobs - Ep. 233
Mikhil Raja, cofounder and CEO of SonicJobs, shares how the company has built AI agents that enable candidates to complete applications directly on job platforms, without redirection, boosting completion rates. Raja delves deep into SonicJobs' cutting-edge technology, which merges traditional AI with large language models to understand and interact with job application web flows.

Yotta CEO Sunil Gupta on Supercharging India's Fast-Growing AI Market - Ep. 225
Sunil Gupta, cofounder, managing director and CEO of Yotta Data Services, talks about the company's Shakti Cloud offering, which provides scalable GPU services for enterprises. Gupta also shares insights on India's potential as a major AI market and the importance of balancing data center growth with sustainability and energy efficiency.

Replit CEO Amjad Masad on Empowering the Next Billion Software Creators - Ep. 201
Amjad Masad, CEO of Replit, aims to bridge the gap between ideas and software using the latest advancements in generative AI. Masad talks about the future of AI and how it can function as a collaborator that can conduct high-level tasks and even manage resources.

Subscribe to the AI Podcast
Get the AI Podcast through iTunes, Google Play, Amazon Music, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.
  • BLOGS.NVIDIA.COM
    Call of Duty: Black Ops 6 Storms Into the Cloud With GeForce NOW
Attention, recruits! It's time to test combat skills and strategic prowess. Drop into the heart of the action this GFN Thursday with the launch of the highly anticipated first-person blockbuster Call of Duty: Black Ops 6, streaming in the cloud starting Oct. 24 at 9 p.m. PT. Plus, embark on an adventure to defend an empire in the remake of the classic role-playing game (RPG) Romancing SaGa 2: Revenge of the Seven.

That's just the tip of the iceberg. These are part of the nine titles being added to GeForce NOW's library of over 2,000 games. Members can also look forward to a new reward: a free in-game Stag-Heart Skull Sallet Hat for the award-winning RPG The Elder Scrolls Online, starting on Thursday, Oct. 31. Get ready by opting into GeForce NOW's Rewards program today.

Mission Critical

Answering the call.

Get ready, soldiers: the next assignment awaits in the cloud. Call of Duty: Black Ops 6 is set against the backdrop of the Gulf War in the early 1990s, and will bring a thrilling Campaign, action-packed Multiplayer and a Zombies experience sure to thrill fans. The new game introduces the Omnimovement system to the franchise, allowing players to sprint, slide and dive in any direction, enhancing tactical gameplay across all modes. Call of Duty: Black Ops 6 launches with 16 new Multiplayer maps and two Zombies maps: Terminus and Liberty Falls. In Multiplayer, the classic Prestige system makes a comeback, while the single-player Campaign promises a gripping espionage narrative. With its blend of historical fiction, gameplay innovations and fan-favorite features, Black Ops 6 aims to deliver an intense Call of Duty experience that pushes the boundaries of the franchise.

Prepare for adrenaline-fueled missions with an Ultimate GeForce NOW membership. These members get an advantage on the field with ultra-low-latency gaming, streaming from a GeForce RTX 4080 gaming rig in the cloud.

Lucky Number Seven

Peace was never an option.

Save the Empire with a little help from the cloud: Romancing SaGa 2: Revenge of the Seven is coming to GeForce NOW. Experience the full remake of Square Enix's groundbreaking nonlinear RPG, first released in 1993 in Japan. The adventure includes both new and classic SaGa franchise features, complete with Japanese and English voiceovers, original and rearranged compositions, and much more. It's an ideal entry point for new players and provides an amped-up experience for longtime fans.

The game features the Seven Heroes, once hailed as saviors before the ancients feared their power and banished them to another dimension. Thousands of years have passed, and the heroes have become legends. Furious that humankind has forgotten their many sacrifices, they've now returned as villains bent on revenge. Members can join forces with characters from over 30 different classes featuring a wide variety of professions and races, each with their own favored weapons, unique abilities and effective tactics. Ultimate members can stream at up to 4K resolution with extended session lengths and more.

Hircine's Hunt From the Cloud

It's not just a hat, it's a legacy.

A new game reward has arrived for GeForce NOW members. New and existing The Elder Scrolls Online players can don the rare Stag-Heart Skull Sallet Hat, a fierce antlered helm with untamed strength. It's the perfect way for gamers to forge their legends in Tamriel and stand out during the Witches Festival. Members who've opted into GeForce NOW's Rewards program can check their email for instructions on how to redeem it. Ultimate and Priority members can start redeeming the reward now, while free members will be able to claim it starting tomorrow, Oct. 25. It's available through Sunday, Nov. 24, first come, first served.

Play Today

Members can look for the following games available to stream in the cloud this week:
- Worshippers of Cthulhu (New release on Steam, Oct. 21)
- No More Room in Hell 2 (New release on Steam, Oct. 22)
- Romancing SaGa 2: Revenge of the Seven (New release on Steam, Oct. 24)
- Windblown (New release on Steam, Oct. 24)
- Call of Duty: Black Ops 6 (New release on Steam, Battle.net and Xbox, available on PC Game Pass, Oct. 25)
- Call of Duty HQ, including Call of Duty: Modern Warfare III and Call of Duty: Warzone (Xbox, available on PC Game Pass)
- DUCKSIDE (Steam)
- Off the Grid (Epic Games Store)
- Selaco (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.

"the cloud is calling, will you answer?" - NVIDIA GeForce NOW (@NVIDIAGFN), October 23, 2024
  • BLOGS.NVIDIA.COM
    India Should Manufacture Its Own AI, Declares NVIDIA CEO
Artificial intelligence will be the driving force behind India's digital transformation, fueling innovation, economic growth and global leadership, NVIDIA founder and CEO Jensen Huang said Thursday at NVIDIA's AI Summit in Mumbai. Addressing a crowd of entrepreneurs, developers, academics and business leaders, Huang positioned AI as the cornerstone of the country's future. "India has an amazing natural resource in its IT and computer science expertise," Huang said, noting the vast potential waiting to be unlocked.

To capitalize on this talent and India's immense data resources, the country's leading cloud infrastructure providers are rapidly accelerating their data center capacity. NVIDIA is playing a key role, with NVIDIA GPU deployments expected to grow nearly 10x by year's end, creating the backbone for an AI-driven economy. Together with NVIDIA, these companies are at the cutting edge of a shift Huang compared to the seismic change in computing introduced by IBM's System 360 in 1964, calling it the most profound platform shift since then.

"This industry, the computing industry, is going to become the intelligence industry," Huang said, pointing to India's unique strengths to lead it, thanks to the country's enormous amounts of data and large population. With this rapid expansion in infrastructure, AI factories will play a critical role in India's future, serving as the backbone of the nation's AI-driven growth.

NVIDIA founder and CEO Jensen Huang speaking with Reliance Industries Chairman Mukesh Ambani at NVIDIA's AI Summit in Mumbai.

"It makes complete sense that India should manufacture its own AI," Huang said. "You should not export data to import intelligence," he added, noting the importance of India building its own AI infrastructure.

Huang identified three areas where AI will transform industries: sovereign AI, where nations use their own data to drive innovation; agentic AI, which automates knowledge-based work; and physical AI, which applies AI to industrial tasks through robotics and autonomous systems. India, Huang noted, is uniquely positioned to lead in all three areas. India's startups are already harnessing NVIDIA technology to drive innovation across industries and are positioning themselves as global players, bringing the country's AI solutions to the world. Meanwhile, India's robotics ecosystem is adopting NVIDIA Isaac and Omniverse to power the next generation of physical AI, revolutionizing industries like manufacturing and logistics with advanced automation. Huang's keynote also featured a surprise appearance by actor and producer Akshay Kumar.

Following Huang's remarks, the focus shifted to a fireside chat between Huang and Reliance Industries Chairman Mukesh Ambani, where the two leaders explored how AI will shape the future of Indian industries, particularly in sectors like energy, telecommunications and manufacturing. Ambani emphasized that AI is central to this continued growth. Reliance, in partnership with NVIDIA, is building AI factories to automate industrial tasks and transform processes in sectors like energy and manufacturing. Both men discussed their companies' joint efforts to pioneer AI infrastructure in India. Ambani underscored the role of AI in public sector services, explaining how India's data, combined with AI, is already transforming governance and service delivery.

Huang added that AI promises to democratize technology. "The ability to program AI is something that everyone can do. If AI could be put into the hands of every citizen, it would elevate and put into the hands of everyone this incredible capability," he said.

Huang emphasized NVIDIA's role in preparing India's workforce for an AI-driven future. NVIDIA is partnering with India's IT giants such as Infosys, TCS, Tech Mahindra and Wipro to upskill nearly half a million developers, ensuring India leads the AI revolution with a highly trained workforce. "India's technical talent is unmatched," Huang said. Ambani echoed these sentiments, stressing that India will be one of the biggest intelligence markets, pointing to the nation's youthful, technically talented population.

A Vision for India's AI-Driven Future

As the session drew to a close, Huang and Ambani reflected on their vision for India's AI-driven future. With its vast talent pool, burgeoning tech ecosystem and immense data resources, the country, they agreed, has the potential to contribute globally in sectors such as energy, healthcare, finance and manufacturing. "This cannot be done by any one company, any one individual, but we all have to work together to bring this intelligence age safely to the world so that we can create a more equal world, a more prosperous world," Ambani said. Huang echoed the sentiment, adding: "Let's make it a promise today that we will work together so that India can take advantage of the intelligence revolution that's ahead of us."
  • BLOGS.NVIDIA.COM
    India Enterprises Serve Over a Billion Local Language Speakers Using LLMs Built With NVIDIA AI
    Namaste, vanakkam, sat sri akaal these are just three forms of greeting in India, a country with 22 constitutionally recognized languages and over 1,500 more recorded by the countrys census. Around 10% of its residents speak English, the internets most common language.As India, the worlds most populous country, forges ahead with rapid digitalization efforts, its enterprises and local startups are developing multilingual AI models that enable more Indians to interact with technology in their primary language. Its a case study in sovereign AI the development of domestic AI infrastructure that is built on local datasets and reflects a regions specific dialects, cultures and practices.These projects are building language models for Indic languages and English that can power customer service AI agents for businesses, rapidly translate content to broaden access to information, and enable services to more easily reach a diverse population of over 1.4 billion individuals.To support initiatives like these, NVIDIA has released a small language model for Hindi, Indias most prevalent language with over half a billion speakers. Now available as an NVIDIA NIM microservice, the model, dubbed Nemotron-4-Mini-Hindi-4B, can be easily deployed on any NVIDIA GPU-accelerated system for optimized performance.Tech Mahindra, an Indian IT services and consulting company, is the first to use the Nemotron Hindi NIM microservice to develop an AI model called Indus 2.0, which is focused on Hindi and dozens of its dialects. Indus 2.0 harnesses Tech Mahindras high-quality fine-tuning data to further boost model accuracy, unlocking opportunities for clients in banking, education, healthcare and other industries to deliver localized services.Tech Mahindra will showcase Indus 2.0 at the NVIDIA AI Summit, taking place Oct. 23-25 in Mumbai. 
The company also uses NVIDIA NeMo to develop its sovereign large language model (LLM) platform, TeNo.NVIDIA NIM Makes AI Adoption for Hindi as Easy as Ek, Do, TeenThe Nemotron Hindi model has 4 billion parameters and is derived from Nemotron-4 15B, a 15-billion parameter multilingual language model developed by NVIDIA. The model was pruned, distilled and trained with a combination of real-world Hindi data, synthetic Hindi data and an equal amount of English data using NVIDIA NeMo, an end-to-end, cloud-native framework and suite of microservices for developing generative AI.The dataset was created with NVIDIA NeMo Curator, which improves generative AI model accuracy by processing high-quality multimodal data at scale for training and customization. NeMo Curator uses NVIDIA RAPIDS libraries to accelerate data processing pipelines on multi-node GPU systems, lowering processing time and total cost of ownership. It also provides pre-built pipelines and building blocks for synthetic data generation, data filtering, classification and deduplication to process high-quality data.After fine-tuning with NeMo, the final model leads on multiple accuracy benchmarks for AI models with up to 8 billion parameters. 
Packaged as a NIM microservice, it can be easily harnessed to support use cases across industries such as education, retail and healthcare. It's available as part of the NVIDIA AI Enterprise software platform, which gives businesses access to additional resources, including technical support and enterprise-grade security, to streamline AI development for production environments.

Bevy of Businesses Serves Multilingual Population

Innovators, major enterprises and global systems integrators across India are building customized language models using NVIDIA NeMo.

Companies in the NVIDIA Inception program for cutting-edge startups are using NeMo to develop AI models for several Indic languages.

Sarvam AI offers enterprise customers speech-to-text, text-to-speech, translation and data-parsing models. The company developed Sarvam 1, India's first homegrown multilingual LLM, which was trained from scratch on domestic AI infrastructure powered by NVIDIA H100 Tensor Core GPUs.

Sarvam 1, developed using NVIDIA AI Enterprise software including NeMo Curator and the NeMo Framework, supports English and 10 major Indian languages, including Bengali, Marathi, Tamil and Telugu.

Sarvam AI also uses NVIDIA NIM microservices, NVIDIA Riva for conversational AI, NVIDIA TensorRT-LLM software and NVIDIA Triton Inference Server to optimize and deploy conversational AI agents with sub-second latency.

Another Inception startup, Gnani.ai, built a multilingual speech-to-speech LLM that powers AI customer service assistants handling around 10 million real-time voice interactions daily for over 150 banking, insurance and financial services companies across India and the U.S. The model supports 14 languages and was trained on over 14 million hours of conversational speech data using NVIDIA Hopper GPUs and the NeMo Framework. Gnani.ai uses TensorRT-LLM, Triton Inference Server and Riva NIM microservices to optimize its AI for virtual customer service assistants and speech analytics.

Large enterprises building LLMs with NeMo include:

Flipkart, a major Indian ecommerce company majority-owned by Walmart, is integrating NeMo Guardrails, an open-source toolkit that lets developers add programmable guardrails to LLMs, to enhance the safety of its conversational AI systems.

Krutrim, part of the Ola Group of businesses that includes one of India's top ride-booking platforms, is developing a multilingual Indic foundation model using Mistral NeMo 12B, a state-of-the-art LLM developed by Mistral AI and NVIDIA.

Zoho Corporation, a global technology company based in Chennai, will use NVIDIA TensorRT-LLM and NVIDIA Triton Inference Server to optimize and deliver language models to its more than 700,000 customers. The company will use NeMo running on NVIDIA Hopper GPUs to pretrain narrow, small, medium and large models from scratch for over 100 business applications.

India's top global systems integrators are also offering NVIDIA NeMo-accelerated solutions to their customers.

Infosys will work on specific tools and solutions using the NVIDIA AI stack. The company's center of excellence is also developing AI-powered small language models that will be offered to customers as a service.

Tata Consultancy Services has developed AI solutions based on NVIDIA NIM Agent Blueprints for the telecommunications, retail, manufacturing, automotive and financial services industries. TCS offerings include NeMo-powered, domain-specific language models that can be customized to address customer queries and answer company-specific questions for employees across enterprise functions such as IT, HR and field operations.

Wipro is using NVIDIA AI Enterprise software, including NIM Agent Blueprints and NeMo, to help businesses easily develop custom conversational AI solutions, such as digital humans, to support customer service interactions.

Wipro and TCS also use NeMo Curator's synthetic data generation pipelines to generate data in languages other than English to customize LLMs for their clients.

To learn more about NVIDIA's collaboration with businesses and developers in India, watch the replay of company founder and CEO Jensen Huang's fireside chat at the NVIDIA AI Summit.
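NeMo Guardrails, which Flipkart is integrating for conversational AI safety, defines rails declaratively in Colang files. As a rough illustration only (the flow and message names below are invented, not taken from Flipkart's deployment), a rail that keeps a shopping assistant on topic might look like:

```colang
define user ask off topic
  "What do you think about politics?"
  "Can you write my homework essay?"

define bot refuse off topic
  "I can only help with shopping-related questions."

define flow handle off topic
  user ask off topic
  bot refuse off topic
```

The toolkit matches incoming user messages against the example utterances and steers the model into the defined flow instead of letting it answer freely.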
  • BLOGS.NVIDIA.COM
    Healthcare Leaders Across India Bring NVIDIA NIM for Hindi Language to LLM Applications
Life sciences and healthcare organizations across India are using generative AI to build applications that can deliver life-saving impacts within the country and across the globe.

Among these leading organizations are research centers at the Indian Institute of Technology Madras (IIT Madras) and the Indraprastha Institute of Information Technology Delhi (IIIT-Delhi), intelligent life sciences company Innoplexus and AI-led medical diagnostics platform provider 5C Network.

Central to their work are NVIDIA NIM microservices, including the new Nemotron-4-Mini-Hindi 4B microservice for building sovereign AI applications and large language models (LLMs) in the Hindi language. The Nemotron-4 Hindi model delivers the highest accuracy across Hindi benchmarks among models in the 2-billion to 8-billion-parameter size categories.

With the Indian healthcare market projected to grow from about $180 billion last year to $320 billion by 2028, the new AI model has the potential to dramatically improve healthcare accessibility and efficiency. To gear up for this growing demand, and to help more patients faster, the Indian government is significantly investing in building foundational AI models designed and developed within the country, including for healthcare, through initiatives like the IndiaAI Mission.

Members of the Indian healthcare ecosystem are leading the charge by advancing neuroscience research, combating antibiotic resistance, accelerating drug discovery, automating diagnostic scan analysis and more, all with AI's help.

IIT Madras Advances Neuroscience Research With AI

The IIT Madras Brain Centre is advancing neuroscience research by imaging whole human brains at a cellular level across various ages and brain diseases, and using AI to analyze these vast, petabyte-sized primary datasets. The work is opening new avenues for understanding brain structure and function, as well as how they change in disease conditions, accelerating research that could lead to life-saving discoveries.

To make information about the brain more accessible to STEM students and researchers, the center is developing an AI chatbot using the Nemotron-4 Hindi NIM microservice that can answer neuroscience-related questions in Hindi.

This builds upon the center's existing NVIDIA AI-powered knowledge-exploration framework, called Neuro Voyager. Developed using visual question-answering models and LLMs, Neuro Voyager lets researchers submit queries related to brain images and provides highly accurate answers using multimodal information retrieval. IIT Madras developed Neuro Voyager using both real-world data from research publications and synthetic data.

Using NVIDIA NeMo Retriever, a collection of NIM microservices for information retrieval, the team achieved a 30% increase in accuracy through fine-tuning of the embedding model and further refinement of the framework. For the tool's answer-generation portion, the researchers tapped the Llama 3.1 70B NVIDIA NIM microservice, running on NVIDIA DGX systems, which accelerated LLM inference 4x compared with the native model.

IIIT-Delhi-Led Consortium Fights Antimicrobial Resistance Using Generative AI, NVIDIA DGX

A research group at IIIT-Delhi is using the Nemotron-4 Hindi model to collect antibiotic prescription patterns in local languages, including Hindi.

Antimicrobial resistance, among the world's greatest threats to global health, occurs when bacteria, viruses, fungi and parasites change over time and no longer respond to treatment, increasing the risk of disease spread, severe illness and death. IIIT-Delhi researchers predict that AI-guided antimicrobial stewardship will be a key component of preventing the tens of millions of deaths that could be caused by antimicrobial resistance between 2025 and 2050.

The researchers' AI-powered data integration and predictive analytics tool, AMRSense, improves accuracy and speeds time to insights on antimicrobial resistance. Powered by natural language processing built on the NVIDIA NeMo platform, AMRSense is designed for use in hospital and community settings.

This collaborative solution between IIIT-Delhi and a consortium of other research institutions placed second out of over 300 entries in the Trinity Challenge, a competition that calls for data-driven solutions to help tackle global health threats. IIIT-Delhi is also using NVIDIA DGX systems to build foundation models that can further hone its workflows.

5C Network Uses NVIDIA NIM, MONAI for AI-Powered Medical Imaging

Bengaluru- and Coimbatore-based 5C Network's Bionic suite of medical imaging tools, based on computer vision and LLMs, is helping transform radiology reporting by reading, detecting and analyzing medical scans and generating comprehensive medical notes that provide actionable insights to support clinicians in decision-making.

Used across India's largest hospital groups and several marquee hospitals, Bionic detects pathologies in scans, such as lung lesions in X-rays or brain masses in MRIs. It then provides detailed measurements of abnormalities, such as the size, volume or density of lesions, to assess disease severity and support treatment planning. Finally, Bionic compiles the data into clear, actionable reports with suggested next steps, such as further testing or specialist referrals.

Bionic was developed using the open-source MONAI framework, the NVIDIA TensorRT ecosystem of application programming interfaces for high-performance deep learning inference, and the NVIDIA NeMo platform for custom generative AI. Using the Nemotron-4 Hindi NIM microservice, 5C Network is now enhancing its client app, which allows patients to ask questions about radiology reports and receive quick, accurate responses in simplified Hindi.

Innoplexus Analyzes Protein Interactions With NVIDIA NIM

Innoplexus, a member of the NVIDIA Inception program for cutting-edge startups, has built an AI-powered life sciences platform for drug discovery powered by NVIDIA NIM, including the AlphaFold2 NIM microservice.

Protein-protein interaction (PPI) is critical to the pathogenic and physiologic mechanisms that trigger the onset and progression of diseases, which means understanding PPI can help facilitate effective diagnostic and therapeutic strategies. Innoplexus performs large-scale PPI predictions up to 500x faster than traditional methods. The company's platform can analyze 200 million protein interactions in just seconds, tapping into NVIDIA H100 Tensor Core GPU acceleration.

Using NVIDIA NIM microservices, Innoplexus generates synthetic patient data to boost its AI models and performs virtual screenings of 5.8 million small molecules in less than eight hours, 10x faster than without NIM. Plus, the microservices help Innoplexus identify the most effective, safest drugs within a given set of therapeutic agents with 90% accuracy.

Using the new Nemotron-4 Hindi model, Innoplexus is developing a tool that will let users easily access and understand information about Ayurveda, a system of traditional medicine native to India, based on Hindi content from key repositories. Another Innoplexus LLM application, built with the new Hindi model, explains details about users' prescriptions and medical reports, based on photos of them, in easy-to-understand terms.

NVIDIA NIM microservices are available as part of the NVIDIA AI Enterprise software platform. Developers can get started with them for free at ai.nvidia.com.

In addition, global system integrators including Infosys, Tata Consultancy Services (TCS), Tech Mahindra and Wipro are collaborating with NVIDIA to help life sciences and healthcare companies accelerate their generative AI adoption.

Learn more about the latest in generative AI and accelerated computing at the NVIDIA AI Summit in India, and subscribe to NVIDIA healthcare news.
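NIM microservices like the Nemotron-4 Hindi model expose an OpenAI-compatible chat-completions interface. The sketch below builds such a request in plain Python; the model identifier and endpoint path are illustrative assumptions, not confirmed by this article, so check the NIM catalog for exact names.

```python
import json

def build_chat_request(model, user_message, max_tokens=256):
    """Build an OpenAI-style chat-completions payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
        "temperature": 0.2,  # keep answers focused for Q&A-style use
    }

# Hypothetical model id; verify against the NIM catalog before use.
payload = build_chat_request(
    "nvidia/nemotron-4-mini-hindi-4b-instruct",
    "मस्तिष्क की संरचना क्या है?",  # "What is the structure of the brain?" in Hindi
)
body = json.dumps(payload, ensure_ascii=False)
```

In a deployment you would POST `body` to the microservice's `/v1/chat/completions` route; the same payload shape works whether the NIM runs on premises or in a hosted cloud.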
  • BLOGS.NVIDIA.COM
    India Manufacturers Build Factory Digital Twins With NVIDIA AI and Omniverse
Manufacturers and service providers in India are adopting NVIDIA Omniverse to tap into simulation, digital twins and generative AI to accelerate their factory planning and drive automation for more efficient operations.

India's exports have surged in recent years as the nation positions itself to be the next global industrial manufacturing powerhouse. Automotive, industrial machinery, electronics, textiles, chemicals and pharmaceuticals are among the sectors expected to help drive India's exports to $1 trillion by 2028, according to Bain & Company. As India's manufacturing industry continues to soar, manufacturers are embracing AI for the digitization of processes and robotics to scale their operations and meet growing global demand.

This wave of manufacturing automation, harnessing Omniverse to build virtual warehouses and production facilities to enable the next era of industrial and physical AI, was on full display this week at the NVIDIA AI Summit India, taking place in Mumbai through Oct. 25, from major industrial names like Ola Electric, Reliance Industries, Tech Mahindra and TCS.

Ola Accelerates Electric Scooter Production With Omniverse

Ola Electric, the largest electric scooter maker in India, announced it has developed the Ola Digital Twin platform on NVIDIA Omniverse. The company said the platform has helped it achieve 20% faster time to market, from design to commissioning, for its manufacturing operations.

Built on NVIDIA Isaac Sim, the Ola Digital Twin platform taps into core Omniverse technologies like OpenUSD for data interoperability, RTX for physically based rendering, and generative AI for accelerated world building to generate synthetic data for training autonomous mobile robots and robotic arms.

Ola is using the digital twin platform to plan and build its next-generation Future Factory, India's largest integrated and automated electric two-wheeler manufacturing plant, in only eight months. The platform helped provide insights for factory and construction planning, as well as for quality inspection systems, manufacturing processes and safety training. The company is also using the digital twin to compare real and simulated environments, assisting with predictive maintenance.

Reliance Industries Adopts Omniverse for Solar Panel Factory Planning

Reliance Industries, a leading industrial conglomerate in India with businesses in energy, petrochemicals, natural gas, retail, entertainment, telecommunications, mass media and textiles, is embracing Omniverse to plan its new solar panel factory in Jamnagar, India. Supporting the company's goal of reaching net-zero carbon status by 2035, the 5,000-acre integrated photovoltaic manufacturing plant is meant to be India's largest solar gigafactory.

Reliance is using Omniverse to develop applications for managing 3D data, virtual collaboration, simulation and optimized operations, as well as data integration for planning, design, automation, operation, sustainability and workforce training for the soon-to-be-commissioned gigafactories in Jamnagar. The company is also using Omniverse to develop OpenUSD-based, SimReady virtual factory assets, including the factory building, manufacturing equipment, robots, kinematics, and material and product models, and it's running simulations for logistics and human workers.

Leading System Integrators Help India Manufacturers Embrace Industrial AI

System integrators play a crucial role in helping India's largest manufacturers use NVIDIA technology to build the country's next generation of manufacturing plants. Consulting leaders such as Tata Consultancy Services (TCS) and Tech Mahindra are developing industrial AI applications and services on Omniverse to help manufacturers develop digital twins for accelerated factory planning, optimized processes, robotics training and large-scale automation.

TCS announced it's working on a suite of digital twin solutions built on NVIDIA Omniverse that enable manufacturers to design, simulate, operate and optimize their products and production facilities across multiple sectors. Use cases cover nearly every aspect of heavy manufacturing, from building virtual factories for real-time factory planning and monitoring, to creating digital twins of aircraft components for immersive training and predictive maintenance.

TCS also uses Omniverse to simulate autonomous vehicles, enabling automotive companies to simulate and validate complex driving scenarios without the need for physical testing. Meanwhile, its smart-farming digital twin integrates real-world physics to simulate farming scenarios that help improve equipment performance.

In addition, TCS launched TCS Manufacturing AI for Industrials, a generative AI solution built on the NVIDIA AI Enterprise platform, which includes NVIDIA NeMo for building and managing the AI application lifecycle, to turn general-purpose large language models (LLMs) into manufacturing-expert AI agents capable of providing real-time, industry-specific insights across clients' various production facilities. These AI agents can be connected to virtual factories, developed on Omniverse, to augment facility planning, design and operations.

Tech Mahindra, a global leader in technology consulting, announced it is establishing a center of excellence enabled by NVIDIA AI Enterprise and Omniverse, targeted at helping drive advances in sovereign LLM frameworks, agentic AI and physical AI. Tech Mahindra's center of excellence uses NVIDIA Omniverse to develop connected industrial AI digital twins and physical AI applications for clients across sectors, including manufacturing, automotive, telecommunications, healthcare, banking, financial services and insurance.

Other leading system integrators, such as Wipro and Infosys, are also building solutions using the NVIDIA AI stack and expanding into physical AI with Omniverse.
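The SimReady factory assets Reliance is building are authored in OpenUSD, which lets a factory scene compose reusable asset files by reference. A toy `.usda` layer (all prim and file names here are invented for illustration) shows the pattern:

```usda
#usda 1.0
(
    defaultPrim = "FactoryCell"
    metersPerUnit = 1.0
    upAxis = "Z"
)

def Xform "FactoryCell"
{
    def Xform "WeldRobot" (
        prepend references = @./assets/robot_arm.usd@
    )
    {
        double3 xformOp:translate = (2.5, 0.0, 0.0)
        uniform token[] xformOpOrder = ["xformOp:translate"]
    }
}
```

Because the robot lives in its own referenced file, the same asset can appear in many cells and many factory layers without duplication, which is what makes OpenUSD useful for data interoperability across planning tools.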
  • BLOGS.NVIDIA.COM
India's Robotics Ecosystem Adopts NVIDIA Isaac and Omniverse to Build Next Wave of Physical AI
In vast warehouses, Addverb's robots work tirelessly, picking, sorting and delivering products with precision. Across frozen Oslo, Norway, Ottonomy's Yeti robots navigate icy streets as part of a trial with Posten Norge for urban deliveries, while in sun-soaked Madrid, they autonomously cruise bustling avenues, supporting last-mile delivery services.

AI-powered robots are revolutionizing industries worldwide, and Indian innovators like Addverb, Ati Motors and Ottonomy are leading the charge, powered by NVIDIA's accelerated computing, simulation, robotics and AI platforms.

According to ABI Research, the installed base of industrial and commercial robots is projected to reach 5.4 million units by 2024, with annual shipments expected to hit 1.3 million. By 2030, these numbers are forecast to grow significantly, to over 15 million installed robots and more than 4 million annual shipments. This explosive growth represents a massive opportunity for India, a country known for its software and engineering expertise.

Companies like Ottonomy, with its cutting-edge Ottobot 2.0 featuring swerve-drive technology, are pushing the boundaries of automation. As members of the NVIDIA Inception program for startups, over 25 robotics companies, such as Xmachines, Machani Robotics, Drishti Works, ANSCER Robotics and Orangewood Labs, are driving innovations across sectors like industrial automation, healthcare and smart cities. These startups are scaling quickly, transforming industries both locally and globally.

The NVIDIA AI Summit in Mumbai will highlight how NVIDIA's platforms are enabling the next wave of robotics advancements. Here's a look at key players putting NVIDIA technologies to work across India and beyond.

Addverb: Driving Global Robotics Innovation

From its Noida headquarters, Addverb is setting new standards in industrial automation with the launch of Bot-Verse, a large-scale facility capable of producing 100,000 robots annually. At the AI Summit, Addverb will showcase how the NVIDIA Isaac Sim and Omniverse platforms are used to create digital twins of real-world environments, enabling the testing and optimization of robots with synthetic data.

The NVIDIA Jetson Orin NX and TensorRT platforms also power Addverb's robots, enhancing their ability to perform complex warehouse tasks with greater efficiency and reduced downtime. High-profile clients rely on Addverb's automation solutions to significantly boost operational efficiency.

Ottonomy: Redefining Last-Mile Delivery

Ottonomy is using the NVIDIA TensorRT deep learning inference library along with NVIDIA Jetson, paired with its Contextual AI software, to make its robots more robust in dynamic environments. Its latest creation, the Ottobot 2.0, features swerve-drive technology, enabling zero-radius turns for smooth navigation in tight spaces, indoors and out.

With headquarters in California and R&D in Noida, India, Ottonomy is making waves, particularly in the healthcare, retail, food and beverage, and e-commerce delivery markets. Its autonomous delivery systems are deployed with customers across North America, Europe and the Middle East.

Ati Motors: Pioneering Autonomous Vehicles in India

Ati Motors, based in Bengaluru, is redefining autonomous vehicle technology with its focus on industrial-grade autonomous electric vehicles, built with NVIDIA Isaac Sim and NVIDIA Jetson for edge AI. The company, which recently completed a $10.83 million Series A funding round, is driving innovation in industrial automation, particularly within the automotive and manufacturing sectors.

Ati Motors' Sherpa line of electric autonomous vehicles is designed to operate in complex environments such as factories and warehouses, using advanced AI for precision navigation and real-time decision-making. The Sherpa autonomous mobile robots can navigate challenging terrain, including outdoor and rugged industrial environments, all without requiring modifications to existing infrastructure. This allows seamless integration into diverse operational settings, from factory floors to open yards, enhancing efficiency without disrupting existing workflows.

For instance, the Sherpa Lifter benefits from NVIDIA Isaac Sim's realistic, physics-based simulations for training and testing before real-world deployment. The synthetic data generated in Isaac Sim enables comprehensive testing of the Sherpa Lifter's critical functions, enhancing its robustness and precision in varied factory environments.

NVIDIA Robotics Technologies Transforming Industries

Powered by NVIDIA's world-class AI and robotics platforms, these and other innovators across India are transforming industries from e-commerce to smart cities. India's vibrant robotics and edge AI ecosystem includes members of the NVIDIA Partner Network providing AI services, product design, manufacturing and sensor solutions to accelerate time to market for robotics developers and customers globally. As more companies embrace these cutting-edge technologies, India is rapidly becoming a global leader in automation and robotics.

Tune in to the livestream of NVIDIA founder and CEO Jensen Huang at the NVIDIA AI Summit India, or catch sessions on demand. Learn more about NVIDIA robotics and edge AI.
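The synthetic data workflows mentioned above rest on a simple idea: randomize scene parameters between renders so the trained model generalizes. Isaac Sim automates this inside the simulator; the framework-free sketch below illustrates only the parameter-sampling core, with ranges and parameter names invented for the example.

```python
import random

def sample_scene(rng):
    """Sample one randomized scene configuration for synthetic data generation."""
    return {
        # pallet pose in metres / degrees, sampled uniformly over the workspace
        "pallet_x": rng.uniform(-2.0, 2.0),
        "pallet_y": rng.uniform(-2.0, 2.0),
        "pallet_yaw_deg": rng.uniform(0.0, 360.0),
        # lighting and texture variation so the model doesn't overfit one look
        "light_intensity": rng.uniform(200.0, 1000.0),
        "floor_texture": rng.choice(["concrete", "epoxy", "checker"]),
    }

rng = random.Random(42)  # seeded so a dataset can be regenerated exactly
dataset = [sample_scene(rng) for _ in range(1000)]
```

Each sampled configuration would drive one rendered, auto-labeled frame; a thousand configurations yield a thousand varied training images without any real-world data collection.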
  • BLOGS.NVIDIA.COM
    Open for AI: India Tech Leaders Build AI Factories for Economic Transformation
India's leading cloud infrastructure providers and server manufacturers are ramping up accelerated data center capacity. By year's end, they'll have boosted NVIDIA GPU deployment in the country by nearly 10x compared with 18 months ago.

Tens of thousands of NVIDIA Hopper GPUs will be added to build AI factories, large-scale data centers for producing AI, that support India's large businesses, startups and research centers running AI workloads in the cloud and on premises. This will cumulatively provide nearly 180 exaflops of compute to power innovation in healthcare, financial services and digital content creation.

Announced today at the NVIDIA AI Summit, taking place in Mumbai through Oct. 25, this buildout of accelerated computing technology is led by data center provider Yotta Data Services, global digital ecosystem enabler Tata Communications, cloud service provider E2E Networks and original equipment manufacturer Netweb. Their systems will enable developers to harness domestic data center resources powerful enough to fuel a new wave of large language models, complex scientific visualizations and industrial digital twins that could propel India to the forefront of AI-accelerated innovation.

Yotta Brings AI Systems and Services to Shakti Cloud

Yotta Data Services is providing Indian businesses, government departments and researchers access to managed cloud services through its Shakti Cloud platform to boost generative AI adoption and AI education. Powered by thousands of NVIDIA Hopper GPUs, these computing resources are complemented by NVIDIA AI Enterprise, an end-to-end, cloud-native software platform that accelerates data science pipelines and streamlines the development and deployment of production-grade copilots and other generative AI applications.

With NVIDIA AI Enterprise, Yotta customers can access NVIDIA NIM, a collection of microservices for optimized AI inference, and NVIDIA NIM Agent Blueprints, a set of customizable reference architectures for generative AI applications. This will allow them to rapidly adopt optimized, state-of-the-art AI for applications including biomolecular generation, virtual avatar creation and language generation.

"The future of AI is about speed, flexibility and scalability, which is why Yotta's Shakti Cloud platform is designed to eliminate the common barriers that organizations across industries face in AI adoption," said Sunil Gupta, cofounder, CEO and managing director of Yotta. "Shakti Cloud brings together high-performance GPUs, optimized storage and a services layer that simplifies AI development from model training to deployment, so organizations can quickly scale their AI efforts, streamline operations and push the boundaries of what AI can accomplish."

Yotta's customers include Sarvam AI, which is building AI models that support major Indian languages; Innoplexus, which is developing an AI-powered life sciences platform for drug discovery; and Zoho Corporation, which is creating language models for enterprise customers.

Tata Supports Enterprise AI Innovation Across Industries

Tata Communications is initiating a large-scale deployment of NVIDIA Hopper architecture GPUs to power its public cloud infrastructure and support a wide range of AI applications. The company also plans to expand its offerings next year to include NVIDIA Blackwell GPUs.

In addition to providing accelerated hardware, Tata Communications will enable customers to run NVIDIA AI Enterprise, including NVIDIA NIM and NIM Agent Blueprints, and NVIDIA Omniverse, a software platform and operating system that developers use to build physical AI and robotic system simulation applications.

"By combining NVIDIA's accelerated computing infrastructure with Tata Communications' AI Studio and global network, we're creating a future-ready platform that will enable AI transformation across industries," said A.S. Lakshminarayanan, managing director and CEO of Tata Communications. Access to these resources will make AI more accessible to innovators in fields including manufacturing, healthcare, retail, banking and financial services.

E2E Expands Cloud Infrastructure for AI Innovation

E2E Networks supports enterprises in India, the Middle East, the Asia-Pacific region and the U.S. with GPU-powered cloud servers. It offers customers access to clusters featuring NVIDIA Hopper GPUs interconnected with NVIDIA Quantum-2 InfiniBand networking to help meet the demand for high-compute tasks including simulations, foundation model training and real-time AI inference.

"This infrastructure expansion helps ensure Indian businesses have access to high-performance, scalable infrastructure to develop custom AI models," said Tarun Dua, cofounder and managing director of E2E Networks. "NVIDIA Hopper GPUs will be a powerful driver of innovation in large language models and large vision models for our users."

E2E's clients include AI4Bharat, a research lab at the Indian Institute of Technology Madras developing open-source AI applications for Indian languages, as well as members of the NVIDIA Inception startup program such as disease detection company Qure.ai, text-to-video generative AI company Invideo AI and intelligent voice agent company Assisto.

Netweb Servers Advance Sovereign AI Initiatives

Netweb is expanding its range of Tyrone AI systems based on NVIDIA MGX, a modular reference architecture for accelerating enterprise data center workloads. Offered for both on-premises and off-premises cloud infrastructure, the new servers feature NVIDIA GH200 Grace Hopper Superchips, delivering the computational power to support large hyperscalers, research centers, enterprises and supercomputing centers in India and across Asia.

"Through Netweb's decade-long collaboration with NVIDIA, we've shown that world-class computing infrastructure can be developed in India," said Sanjay Lodha, chairman and managing director of Netweb. "Our next-generation systems will help the country's businesses and researchers build and deploy more complex AI applications trained on proprietary datasets."

Netweb also offers customers Tyrone Skylus cloud instances that include the company's full software stack, alongside the NVIDIA AI Enterprise and NVIDIA Omniverse software platforms, for developing large-scale agentic AI and physical AI.

NVIDIA's roadmap features new platforms set to arrive on a one-year rhythm. By harnessing these advancements in AI computing and networking, infrastructure providers and manufacturers in India and beyond will be able to further scale AI development to power larger, multimodal models, optimize inference performance and train the next generation of AI applications.

Learn more about India's AI adoption in the fireside chat between NVIDIA founder and CEO Jensen Huang and Mukesh Ambani, chairman and managing director of Reliance Industries, at the NVIDIA AI Summit.
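The "nearly 180 exaflops" figure can be sanity-checked against the "tens of thousands of GPUs" claim with back-of-envelope arithmetic. The per-GPU throughput below is an assumption for illustration (roughly the FP8 peak of an H100-class GPU with sparsity), not a number from this article.

```python
# Back-of-envelope check: exaflops target vs. assumed per-GPU throughput.
TARGET_EXAFLOPS = 180
FP8_PFLOPS_PER_GPU = 4.0  # assumed Hopper-class FP8 peak, in petaflops

gpus_needed = TARGET_EXAFLOPS * 1000 / FP8_PFLOPS_PER_GPU  # 1 exaflop = 1000 petaflops
print(f"~{gpus_needed:,.0f} GPUs")  # ~45,000 GPUs
```

An estimate on the order of 45,000 GPUs is consistent with the article's "tens of thousands of NVIDIA Hopper GPUs," provided the exaflops figure counts low-precision (FP8) throughput rather than FP64.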
  • BLOGS.NVIDIA.COM
World's Greatest Upskill: Consulting Giants Team With NVIDIA to Transform India Into Front Office for AI Era
Information technology giants including Infosys, TCS, Tech Mahindra and Wipro are teaming with NVIDIA to accelerate AI adoption. They're creating new jobs and training nearly half a million developers for the era of AI. This great upskill is introducing a new wave of opportunity.

IT is a leading Indian export, and IDC reports that India's domestic IT and business services market was valued at $14.5 billion in 2023. Now, India's technology leaders are taking on an expanded role as the industry shifts from providing IT services to consulting with global clients to meet the demand for front-office AI applications.

"In the coming years, IT service investments will be driven by interest in gen AI," said Harish Krishnakumar, senior market analyst of IT services at IDC India. "Enterprises will continue engaging with IT service providers to develop potential use cases and POCs, and also to transform and manage their complex IT infrastructure and applications."

Agents of Innovation

India's IT consulting giants are helping clients deploy AI with custom-built solutions that use the NVIDIA AI Enterprise software platform. These generative AI applications include virtual agents that can learn, reason and take action, driving new levels of productivity and fostering breakthroughs to help solve complex challenges across healthcare, climate, agriculture, manufacturing and more.

Consulting experts are creating custom models with NVIDIA NeMo and deploying AI in production with NVIDIA NIM microservices. Using NVIDIA NIM Agent Blueprints, including a new blueprint for customer service announced today at the NVIDIA AI Summit in India, consultants are helping global clients tailor AI agents to their unique needs.

NeMo Curator is playing a key role in enabling consulting experts to train highly accurate sovereign AI for India and neighboring Southeast Asian countries. With NeMo Curator, consulting companies are processing high-quality data at scale in these low-resource languages and generating synthetic data to augment their existing datasets. With NVIDIA RAPIDS, they're accelerating data analytics to create a robust foundation for AI development and retrieval-augmented generation (RAG).

As manufacturing leaders seek to use physical AI to scale production, efficiency and safety, leading consulting firms are also tapping into the NVIDIA Omniverse Enterprise development platform to create industrial AI digital twins.

Consulting Leaders Create New Job Opportunities in AI

Goldman Sachs forecasts that AI has the potential to automate up to 20% of work tasks in emerging economies, including India. NVIDIA's technology consulting partners are helping professionals and clients get ready for these AI-driven opportunities with full-stack NVIDIA AI.

Infosys uses NVIDIA AI Enterprise for Infosys Topaz, an AI-first set of offerings, to help businesses quickly adopt and integrate generative AI into their operations. It has set up an NVIDIA Center of Excellence that's spearheading the reskilling of employees, the development of solutions and the adoption of NVIDIA technology across enterprises.

Tata Consultancy Services features NVIDIA AI Enterprise software in its industry solutions for automotive, manufacturing, telecommunications, financial services, retail and many other verticals. It has trained more than 50,000 AI associates to help clients develop and implement AI strategies that are scalable, sustainable and responsible.

Tech Mahindra offers the Tech Mahindra Optimized Framework, built on NVIDIA AI Enterprise, to advance sovereign large language model frameworks and bring generative AI into mainstream enterprise applications. As part of the framework, Tech Mahindra will establish a center of excellence and introduce Project Indus 2.0, an advanced Hindi-based AI model that uses a new NIM microservice for the Nemotron-4-Hindi 4B model. It has reskilled over 45,000 employees through its AI proficiency framework.

Wipro has built its Wipro Enterprise Generative AI Studio with NVIDIA AI Enterprise to accelerate industry-specific use cases for supply chain management, marketing campaigns, contact center agents, financial services, retail and more. It has trained more than 225,000 employees, nearly its entire workforce, to be ready to serve AI client demands.

Faster Fixes: NIM Agent Blueprint for Customer Service Debuts to Speed Resolution

The new NIM Agent Blueprint for customer service can help India's consulting leaders quickly build custom AI virtual assistants for call center clients. These AI virtual assistants can recommend solutions to resolve issues and help people serve customers efficiently.

Powered by NVIDIA NIM microservices and RAG, the blueprint shows how to build a solution that supports context-aware, multi-turn conversations and can provide general and personalized Q&A responses based on structured and unstructured data. Using NVIDIA NeMo Guardrails, developers can ensure that the AI virtual agents stay on topic.

NVIDIA's consulting partners can help customers tailor the customer service agent blueprint to build unique virtual assistants using their preferred AI model, including sovereign LLMs from India-based model makers, and efficiently run it in production on the infrastructure of their choice with NVIDIA NIM.

Try the new NVIDIA NIM Agent Blueprint for free, or get notified of the upcoming release of a downloadable version of the blueprint. To learn more, watch the NVIDIA AI Summit India keynote with NVIDIA founder and CEO Jensen Huang.

Editor's note: IDC figures and statement courtesy of IDC, Worldwide Semiannual Services Tracker, April 2024, and IDC: "India's IT Services Market Grows by 6.6% in 2023 as Enterprises Focus on Critical Projects," press release from June 2024.
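The customer-service blueprint described above pairs NIM microservices with retrieval-augmented generation: fetch the most relevant knowledge-base passages, then let the LLM answer only from them. The retrieval half of that pattern, reduced to a keyword-overlap toy (real deployments use NeMo Retriever embeddings instead; the knowledge-base entries below are invented):

```python
import re

def tokens(text):
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(query, doc):
    """Count word overlap between query and document (toy relevance score)."""
    return len(tokens(query) & tokens(doc))

def retrieve(query, docs, k=2):
    """Return the top-k documents to stuff into the LLM prompt as context."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

kb = [
    "Refunds are processed within 5 business days of cancellation.",
    "Tickets can be rebooked once free of charge.",
    "Contact support for lost luggage claims.",
]
context = retrieve("how long does a refund take after cancellation", kb)
prompt = "Answer using only this context:\n" + "\n".join(context) + "\nQ: How long does a refund take?"
```

Grounding the model in retrieved passages this way is what lets the blueprint give personalized answers from structured and unstructured data while NeMo Guardrails keeps the conversation on topic.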
  • BLOGS.NVIDIA.COM
    Start Local, Go Global: India's Startups Spur Growth and Innovation With NVIDIA Technology
    India is becoming a key producer of AI for virtually every industry, powered by thousands of startups that are serving the country's multilingual, multicultural population and scaling out to global users. The country is one of the top six global economies leading generative AI adoption and has seen rapid growth in its startup and investor ecosystem, rocketing to more than 100,000 startups this year from under 500 in 2016. More than 2,000 are part of NVIDIA Inception, a free program for startups designed to accelerate innovation and growth through technical training and tools, go-to-market support and opportunities to connect with venture capitalists through the Inception VC Alliance. At the NVIDIA AI Summit, taking place in Mumbai through Oct. 25, around 50 India-based startups are sharing AI innovations delivering impact in fields such as customer service, sports media, healthcare and robotics. These Inception members will be showcasing their solutions onsite in the Startup Pavilion, in panel discussions and in a startup pitch session. Startups can also attend a reverse pitch session where venture capital firms share their vision for the next wave of innovation.
Conversational AI for Indian Railway Customers
Bengaluru-based startup CoRover.ai already has over a billion users of its LLM-based conversational AI platform, which includes text, audio and video-based agents. "The support of NVIDIA Inception is helping us advance our work to automate conversational AI use cases with domain-specific large language models," said Ankush Sabharwal, CEO of CoRover.
"NVIDIA AI technology enables us to deliver enterprise-grade virtual assistants that support 1.3 billion users in over 100 languages." CoRover's AI platform powers chatbots and customer service applications for major private and public sector customers, such as the Indian Railway Catering and Tourism Corporation, the official provider of online tickets, drinking water and food for India's railway stations and trains. Dubbed AskDISHA, after the Sanskrit word for direction, the IRCTC's multimodal chatbot handles more than 150,000 user queries daily and has facilitated over 10 billion interactions for more than 175 million passengers to date. It assists customers with tasks such as booking or canceling train tickets, changing boarding stations, requesting refunds and checking the status of their bookings in languages including English, Hindi, Gujarati and Hinglish, a mix of Hindi and English. The deployment of AskDISHA has resulted in a 70% improvement in IRCTC's customer satisfaction rate and a 70% reduction in queries through other channels like social media, phone calls and emails. CoRover's modular AI tools were developed using NVIDIA NeMo, an end-to-end, cloud-native framework and suite of microservices for developing generative AI.
They run on NVIDIA GPUs in the cloud, enabling CoRover to automatically scale up compute resources during peak usage, such as the moment train tickets are released. Watch CoRover's session live at the AI Summit or on demand, and learn more about Indian businesses building multilingual language models with NeMo.
Powering the Future of Sports Media
VideoVerse, founded in Mumbai with offices in six countries, has built a family of AI models to support content creation in the sports media industry, enabling global customers, including the Indian Premier League for cricket, the Vietnam Basketball Association and the Mountain West Conference for American college football, to generate game highlights up to 15x faster and boost viewership. "Short-form video highlights that can be easily shared on social media can also help lesser-known sports gain audience attention and grow their fanbases," said VideoVerse CEO Vinayak Shrivastav. "AI-assisted content creation makes it feasible for emerging sports like longball and kabbadi to raise awareness with a limited marketing budget." VideoVerse's enterprise solution, called Magnifi, uses AI technologies such as vision analysis, natural language processing and optical character recognition to streamline editing workflows by detecting players, identifying key moments and tracking ball movement across multiple camera angles. Magnifi also adjusts video sizes automatically for horizontal and vertical formats across laptops, tablets and phones, ensuring the primary action remains centered in the frame. VideoVerse uses NVIDIA CUDA libraries to accelerate AI models for image and video understanding, automatic speech recognition and natural language understanding.
The company runs its custom AI models on NVIDIA Tensor Core GPUs for inference. Watch VideoVerse's session live at the AI Summit or on demand.
Rewriting the Narrative of Enterprise Efficiency
Mumbai-based startup Fluid AI offers generative AI chatbots, voice calling bots and a range of application programming interfaces to boost enterprise efficiency. Its AI tools can access an organization's knowledge base to provide teams with insights, reports and ideas, or to help accurately answer questions. Fluid AI's chatbots can be applied in customer service to increase agent productivity and reduce response times, generating accurate outputs in real time. Or, organizations can choose to deploy them with sales and customer-facing teams, using them for tasks like creating slide decks in under 15 seconds. Fluid AI taps NVIDIA NIM microservices, the NVIDIA NeMo platform and the NVIDIA TensorRT inference engine to deliver a complete, scalable platform for developing custom generative AI for its customers. The company is also exploring the use of NVIDIA Riva microservices to develop a voice experience for its chatbots that will help significantly reduce latency and offer higher-fidelity experiences.
Its AI models run on NVIDIA GPUs in the cloud. "Our work with NVIDIA has been invaluable: the low latency and high fidelity that we offer on AI-powered voice calls come from the innovation that NVIDIA technology allows us to achieve," said Abhinav Aggarwal, founder of Fluid AI. Watch Fluid AI's session live at the AI Summit or on demand.
Providing Data Work to Bridge the Digital Divide
Karya, based in Bengaluru, is a smartphone-based digital work platform that enables members of low-income and marginalized communities across India to earn supplemental income by completing language-based tasks that support the development of multilingual AI models. Nearly 100,000 Karya workers are recording voice samples, transcribing audio or checking the accuracy of AI-generated sentences in their native languages, earning nearly 20x India's minimum wage for their work. Karya also provides royalties to all contributors each time its datasets are sold to AI developers. "By fairly compensating these communities for their digital work, we are able to boost their quality of life while supporting the creation of multilingual AI tools they'll be able to use in the future," said Manu Chopra, CEO of Karya. Karya's work helps enterprises accelerate the data design and collection process, enabling the creation of deployable AI solutions that cater to non-English speakers in India. The company will use NVIDIA NeMo and NVIDIA NIM to build its AI platform, which offers custom AI model training and pretrained models tailored to customers' business needs. Businesses and research centers can purchase the datasets Karya collects to train diverse, multilingual AI models. For example, Karya is working with the Bill and Melinda Gates Foundation to build the largest gender-intentional, open-source AI dataset in Indic languages yet.
Karya is employing over 30,000 low-income women participants across six language groups in India to help create the dataset, which will support the creation of diverse AI applications across agriculture, healthcare and banking. Watch Karya's session live at the AI Summit or on demand. For more from the AI Summit, watch NVIDIA founder and CEO Jensen Huang's fireside chat with Mukesh Ambani, chairman and managing director of Reliance Industries.
  • BLOGS.NVIDIA.COM
    NVIDIA, F5 Turbocharge Sovereign AI Cloud Security, Efficiency
    To improve AI efficiency and security in sovereign cloud environments, NVIDIA and F5 are integrating NVIDIA BlueField-3 DPUs with F5 BIG-IP Next for Kubernetes for application delivery and security. The collaboration, announced today at the NVIDIA AI Summit in Mumbai, India, is ideal for industries with strict data governance, privacy or compliance requirements and addresses the growing demand for scalable AI infrastructure. "We're working with NVIDIA to enable industries to deploy scalable, secure AI solutions faster, with better performance, all while ensuring data remains protected," said Ahmed Guetari, vice president and general manager, service provider at F5. The collaboration aims to help governments and industries manage sensitive data while accelerating AI application delivery. The sovereign cloud market is projected to reach $250 billion by 2027, according to IDC. Meanwhile, ABI Research projects the market for foundation models will be $30 billion by 2027. Sovereign clouds are built to meet strict data privacy and localization requirements. They're critical for industries handling sensitive data, such as telecommunications and financial services, as well as government agencies. F5 BIG-IP Next for Kubernetes deployed on NVIDIA BlueField-3 DPUs offers a secure and compliant AI networking infrastructure, allowing industries to adopt advanced AI capabilities without compromising data privacy. By offloading tasks like load balancing, routing and security to the BlueField-3 DPU, F5 BIG-IP Next for Kubernetes efficiently routes AI prompts to LLM instances and reduces energy use.
This ensures scalable AI performance while optimizing GPU resource utilization. NVIDIA NIM microservices, which accelerate the deployment of foundation models, will also benefit from the collaboration thanks to more efficient AI workload management. The combined solutions from NVIDIA and F5 promise enhanced security and efficiency, key for industries transitioning to cloud-native infrastructures. With these innovations, industries in highly regulated sectors can scale AI applications securely and confidently, meeting the highest standards for data protection. Editor's note: The figures on the global sovereign cloud market come from IDC's Worldwide Sovereign Cloud Market Forecast, 2022-2027 report, Doc # US49695922, published in November 2023. The data on the generative AI software market are courtesy of ABI Research's Gen AI Software report, MD-AISG-101, published in July 2024.
  • BLOGS.NVIDIA.COM
    The Three Computer Solution: Powering the Next Wave of AI Robotics
    ChatGPT marked the big bang moment of generative AI. Answers can be generated in response to nearly any query, helping transform digital work such as content creation, customer service, software development and business operations for knowledge workers. Physical AI, the embodiment of artificial intelligence in humanoids, factories and other devices within industrial systems, has yet to experience its breakthrough moment. This has held back industries such as transportation and mobility, manufacturing, logistics and robotics. But that's about to change thanks to three computers bringing together advanced training, simulation and inference.
The Rise of Multimodal, Physical AI
For 60 years, Software 1.0, serial code written by human programmers, ran on general-purpose computers powered by CPUs. Then, in 2012, Alex Krizhevsky, mentored by Ilya Sutskever and Geoffrey Hinton, won the ImageNet computer image recognition competition with AlexNet, a revolutionary deep learning model for image classification. This marked the industry's first contact with AI. The breakthrough of machine learning neural networks running on GPUs jump-started the era of Software 2.0. Today, software writes software. The world's computing workloads are shifting from general-purpose computing on CPUs to accelerated computing on GPUs, leaving Moore's law far behind. With generative AI, multimodal transformer and diffusion models have been trained to generate responses. Large language models are one-dimensional, able to predict the next token, in modes like letters or words. Image- and video-generation models are two-dimensional, able to predict the next pixel. None of these models can understand or interpret the three-dimensional world. And that's where physical AI comes in. Physical AI models can perceive, understand, interact with and navigate the physical world with generative AI.
With accelerated computing, multimodal physical AI breakthroughs and large-scale, physically based simulations are allowing the world to realize the value of physical AI through robots. A robot is a system that can perceive, reason, plan, act and learn. Robots are often thought of as autonomous mobile robots (AMRs), manipulator arms or humanoids, but there are many more types of robotic embodiments. In the near future, everything that moves, or that monitors things that move, will be an autonomous robotic system. These systems will be capable of sensing and responding to their environments. Everything from surgical rooms to data centers, warehouses to factories, even traffic control systems or entire smart cities, will transform from static, manually operated systems to autonomous, interactive systems embodied by physical AI.
The Next Frontier: Humanoid Robots
Humanoid robots are an ideal general-purpose robotic manifestation because they can operate efficiently in environments built for humans, while requiring minimal adjustments for deployment and operation. The global market for humanoid robots is expected to reach $38 billion by 2035, a more than sixfold increase from the roughly $6 billion forecast nearly two years ago, according to Goldman Sachs. Researchers and developers around the world are racing to build this next wave of robots.
Three Computers to Develop Physical AI
To develop humanoid robots, three accelerated computer systems are required to handle physical AI and robot training, simulation and runtime. Two computing advancements are accelerating humanoid robot development: multimodal foundation models and scalable, physically based simulations of robots and their worlds. Breakthroughs in generative AI are bringing 3D perception, control, skill planning and intelligence to robots.
Robot simulation at scale lets developers refine, test and optimize robot skills in a virtual world that mimics the laws of physics, helping reduce real-world data acquisition costs and ensuring robots can perform in safe, controlled settings. NVIDIA has built three computers and accelerated development platforms to enable developers to create physical AI. First, models are trained on a supercomputer. Developers can use NVIDIA NeMo on the NVIDIA DGX platform to train and fine-tune powerful foundation and generative AI models. They can also tap into NVIDIA Project GR00T, an initiative to develop general-purpose foundation models for humanoid robots that enable them to understand natural language and emulate movements by observing human actions. Second, NVIDIA Omniverse, running on NVIDIA OVX servers, provides the development platform and simulation environment for testing and optimizing physical AI with application programming interfaces and frameworks like NVIDIA Isaac Sim. Developers can use Isaac Sim to simulate and validate robot models, or generate massive amounts of physically based synthetic data to bootstrap robot model training. Researchers and developers can also use NVIDIA Isaac Lab, an open-source robot learning framework that powers robot reinforcement learning and imitation learning, to help accelerate robot policy training and refinement. Lastly, trained AI models are deployed to a runtime computer. NVIDIA Jetson Thor robotics computers are specifically designed for compact, on-board computing needs.
An ensemble of models consisting of control policy, vision and language models composes the robot brain and is deployed on a power-efficient, on-board edge computing system. Depending on their workflows and challenge areas, robot makers and foundation model developers can use as many of the accelerated computing platforms and systems as needed.
Building the Next Wave of Autonomous Facilities
Robotic facilities result from a culmination of all of these technologies. Manufacturers like Foxconn or logistics companies like Amazon Robotics can orchestrate teams of autonomous robots to work alongside human workers and monitor factory operations through hundreds or thousands of sensors. These autonomous warehouses, plants and factories will have digital twins, used for layout planning and optimization, operations simulation and, most importantly, robot fleet software-in-the-loop testing. Built on Omniverse, Mega is a blueprint for factory digital twins that enables industrial enterprises to test and optimize their robot fleets in simulation before deploying them to physical factories. This helps ensure seamless integration, optimal performance and minimal disruption. Mega lets developers populate their factory digital twins with virtual robots and their AI models, or the brains of the robots.
Robots in the digital twin execute tasks by perceiving their environment, reasoning, planning their next motion and, finally, completing planned actions. These actions are simulated in the digital environment by the world simulator in Omniverse, and the results are perceived by the robot brains through Omniverse sensor simulation. With sensor simulations, the robot brains decide the next action, and the loop continues, all while Mega meticulously tracks the state and position of every element within the factory digital twin. This advanced software-in-the-loop testing methodology enables industrial enterprises to simulate and validate changes within the safe confines of the Omniverse digital twin, helping them anticipate and mitigate potential issues to reduce risk and costs during real-world deployment.
Empowering the Developer Ecosystem With NVIDIA Technology
NVIDIA accelerates the work of the global ecosystem of robotics developers and robot foundation model builders with three computers. Universal Robots, a Teradyne Robotics company, used NVIDIA Isaac Manipulator, Isaac accelerated libraries and AI models, and NVIDIA Jetson Orin to build UR AI Accelerator, a ready-to-use hardware and software toolkit that enables cobot developers to build applications, accelerate development and reduce the time to market of AI products. RGo Robotics used NVIDIA Isaac Perceptor to help its wheel.me AMRs work everywhere, all the time, and make intelligent decisions by giving them human-like perception and visual-spatial information. Humanoid robot makers including 1X Technologies, Agility Robotics, Apptronik, Boston Dynamics, Fourier, Galbot, Mentee, Sanctuary AI, Unitree Robotics and XPENG Robotics are adopting NVIDIA's robotics development platform. Boston Dynamics is using Isaac Sim and Isaac Lab to build quadrupeds and humanoid robots to augment human productivity, tackle labor shortages and prioritize safety in warehouses. Fourier is tapping into Isaac Sim to train humanoid robots to
operate in fields that demand high levels of interaction and adaptability, such as scientific research, healthcare and manufacturing. Using Isaac Lab and Isaac Sim, Galbot advanced the development of DexGraspNet, a large-scale robotic dexterous grasp dataset that can be applied to different dexterous robotic hands, as well as a simulation environment for evaluating dexterous grasping models. Field AI developed risk-bounded, multitask and multipurpose foundation models for robots to safely operate in outdoor field environments, using the Isaac platform and Isaac Lab. The era of physical AI is here, and it's transforming the world's heavy industries and robotics. Get started with NVIDIA Robotics.
  • BLOGS.NVIDIA.COM
    Denmark Launches Leading Sovereign AI Supercomputer to Solve Scientific Challenges With Social Impact
    NVIDIA founder and CEO Jensen Huang joined the king of Denmark to launch the country's largest sovereign AI supercomputer, aimed at breakthroughs in quantum computing, clean energy, biotechnology and other areas serving Danish society and the world. Denmark's first AI supercomputer, named Gefion after a goddess in Danish mythology, is an NVIDIA DGX SuperPOD driven by 1,528 NVIDIA H100 Tensor Core GPUs and interconnected using NVIDIA Quantum-2 InfiniBand networking. Gefion is operated by the Danish Center for AI Innovation (DCAI), a company established with funding from the Novo Nordisk Foundation, the world's wealthiest charitable foundation, and the Export and Investment Fund of Denmark. The new AI supercomputer was symbolically turned on by King Frederik X of Denmark, Huang and Nadia Carlsten, CEO of DCAI, at an event in Copenhagen. Huang sat down with Carlsten, a quantum computing industry leader, to discuss the public-private initiative to build one of the world's fastest AI supercomputers in collaboration with NVIDIA. The Gefion AI supercomputer comes to Copenhagen to serve industry, startups and academia. "Gefion is going to be a factory of intelligence. This is a new industry that never existed before. It sits on top of the IT industry. We're inventing something fundamentally new," Huang said. The launch of Gefion is an important milestone for Denmark in establishing its own sovereign AI. Sovereign AI can be achieved when a nation has the capacity to produce artificial intelligence with its own data, workforce, infrastructure and business networks.
Having a supercomputer on national soil provides a foundation for countries to use their own infrastructure as they build AI models and applications that reflect their unique culture and language. "What country can afford not to have this infrastructure? Just as every country realizes you have communications, transportation, healthcare, fundamental infrastructures, the fundamental infrastructure of any country surely must be the manufacturer of intelligence," said Huang. "For Denmark to be one of the handful of countries in the world that has now initiated on this vision is really incredible." The new supercomputer is expected to address global challenges with insights into infectious disease, climate change and food security. Gefion is now being prepared for users, and a pilot phase will begin to bring in projects that seek to use AI to accelerate progress, including in such areas as quantum computing, drug discovery and energy efficiency. "The era of computer-aided drug discovery must be within this decade. I'm hoping that what the computer did to the technology industry, it will do for digital biology," Huang said.
Supporting the Next Generation of Breakthroughs With Gefion
The Danish Meteorological Institute (DMI) is in the pilot and aims to deliver faster and more accurate weather forecasts. It promises to reduce forecast times from hours to minutes while greatly reducing the energy footprint required for these forecasts when compared with traditional methods. Researchers from the University of Copenhagen are tapping into Gefion to implement and carry out a large-scale distributed simulation of quantum computer circuits.
Gefion enables the simulated system to increase from 36 to 40 entangled qubits, which brings it close to what's known as quantum supremacy: essentially outperforming a traditional computer while using fewer resources. The University of Copenhagen, the Technical University of Denmark, Novo Nordisk and Novonesis are working together on a multi-modal genomic foundation model for discoveries in disease mutation analysis and vaccine design. Their model will be used to improve signal detection and the functional understanding of genomes, made possible by the capability to train LLMs on Gefion. Startup Go Autonomous seeks training time on Gefion to develop an AI model that understands and uses multi-modal input from text, layout and images. Another startup, Teton, is building an AI Care Companion with large video pretraining, using Gefion.
Addressing Global Challenges With a Leading Supercomputer
The Gefion supercomputer and ongoing collaborations with NVIDIA will position Denmark, with its renowned research community, to pursue the world's leading scientific challenges with enormous social impact, as well as large-scale projects across industries. With Gefion, researchers will be able to work with industry experts at NVIDIA to co-develop solutions to complex problems, including research in pharmaceuticals and biotechnology and protein design using the NVIDIA BioNeMo platform. Scientists will also be collaborating with NVIDIA on fault-tolerant quantum computing using NVIDIA CUDA-Q, the open-source hybrid quantum computing platform.
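For context on why the jump from 36 to 40 entangled qubits is significant: a full state-vector simulation stores 2^n complex amplitudes, so memory grows exponentially with qubit count. A rough estimate, assuming 16 bytes per double-precision complex amplitude (this sizing detail is an illustrative assumption, not from the article):

```python
def state_vector_bytes(num_qubits, bytes_per_amplitude=16):
    # A full state vector for n qubits holds 2**n complex amplitudes
    return (2 ** num_qubits) * bytes_per_amplitude

# Each added qubit doubles the memory, so 36 -> 40 qubits is a 16x increase
print(state_vector_bytes(36) / 2**40)  # 1.0 (TiB at 36 qubits)
print(state_vector_bytes(40) / 2**40)  # 16.0 (TiB at 40 qubits)
```

At these sizes the state no longer fits on a single node, which is why the simulation must be distributed across the supercomputer.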
  • BLOGS.NVIDIA.COM
    How to Accelerate Larger LLMs Locally on RTX With LM Studio
    Editor's note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.
Large language models (LLMs) are reshaping productivity. They're capable of drafting documents, summarizing web pages and, having been trained on vast quantities of data, accurately answering questions about nearly any topic. LLMs are at the core of many emerging use cases in generative AI, including digital assistants, conversational avatars and customer service agents. Many of the latest LLMs can run locally on PCs or workstations. This is useful for a variety of reasons: users can keep conversations and content private on-device, use AI without the internet, or simply take advantage of the powerful NVIDIA GeForce RTX GPUs in their system. Other models, because of their size and complexity, do not fit into the local GPU's video memory (VRAM) and require hardware in large data centers. However, it is possible to accelerate part of a prompt on a data-center-class model locally on RTX-powered PCs using a technique called GPU offloading. This allows users to benefit from GPU acceleration without being as limited by GPU memory constraints.
Size and Quality vs. Performance
There's a tradeoff between model size, response quality and performance. In general, larger models deliver higher-quality responses but run more slowly; with smaller models, performance goes up while quality goes down. This tradeoff isn't always straightforward, since priorities vary by use case. Some users may prioritize accuracy for use cases like content generation, since it can run in the background.
A conversational assistant, meanwhile, needs to be fast while also providing accurate responses. The most accurate LLMs, designed to run in the data center, are tens of gigabytes in size and may not fit in a GPU's memory. This would traditionally prevent the application from taking advantage of GPU acceleration. However, GPU offloading runs part of the LLM on the GPU and part on the CPU. This allows users to take maximum advantage of GPU acceleration regardless of model size.
Optimize AI Acceleration With GPU Offloading and LM Studio
LM Studio is an application that lets users download and host LLMs on their desktop or laptop computer, with an easy-to-use interface that allows for extensive customization in how those models operate. LM Studio is built on top of llama.cpp, so it's fully optimized for use with GeForce RTX and NVIDIA RTX GPUs. LM Studio's GPU offloading takes advantage of GPU acceleration to boost the performance of a locally hosted LLM, even if the model can't be fully loaded into VRAM. With GPU offloading, LM Studio divides the model into smaller chunks, or subgraphs, which represent layers of the model architecture. Subgraphs aren't permanently fixed on the GPU, but loaded and unloaded as needed. With LM Studio's GPU offloading slider, users can decide how many of these layers are processed by the GPU. LM Studio's interface makes it easy to decide how much of an LLM should be loaded to the GPU. For example, imagine using this GPU offloading technique with a large model like Gemma 2 27B. 27B refers to the number of parameters in the model, informing an estimate as to how much memory is required to run it. With 4-bit quantization, a technique for reducing the size of an LLM without significantly reducing accuracy, each parameter takes up half a byte of memory.
This means the model should require about 13.5 billion bytes, or 13.5GB, plus some overhead, which generally ranges from 1-5GB. Accelerating this model entirely on the GPU requires 19GB of VRAM, available on the GeForce RTX 4090 desktop GPU. With GPU offloading, the model can run on a system with a lower-end GPU and still benefit from acceleration. The table above shows how to run several popular models of increasing size across a range of GeForce RTX and NVIDIA RTX GPUs. The maximum level of GPU offload is indicated for each combination. Note that even with GPU offloading, users still need enough system RAM to fit the whole model. In LM Studio, it's possible to assess the performance impact of different levels of GPU offloading, compared with CPU only. The table below shows the results of running the same query across different offloading levels on a GeForce RTX 4090 desktop GPU. Depending on the percentage of the model offloaded to GPU, users see increasing throughput performance compared with running on CPUs alone. For the Gemma 2 27B model, performance goes from an anemic 2.1 tokens per second to increasingly usable speeds the more the GPU is used. This enables users to benefit from the performance of larger models that they otherwise would've been unable to run. On this particular model, even users with an 8GB GPU can enjoy a meaningful speedup versus running only on CPUs. Of course, an 8GB GPU can always run a smaller model that fits entirely in GPU memory and get full GPU acceleration.
Achieving Optimal Balance
LM Studio's GPU offloading feature is a powerful tool for unlocking the full potential of LLMs designed for the data center, like Gemma 2 27B, locally on RTX AI PCs.
It makes larger, more complex models accessible across the entire lineup of PCs powered by GeForce RTX and NVIDIA RTX GPUs. Download LM Studio to try GPU offloading on larger models, or experiment with a variety of RTX-accelerated LLMs running locally on RTX AI PCs and workstations. Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what's new and what's next by subscribing to the AI Decoded newsletter.
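The quantization arithmetic in this article can be captured in a short sketch. The function below is illustrative, not an LM Studio API; the overhead figure is a rough assumption in the 1-5GB range the article cites:

```python
def estimated_vram_gb(num_params, bits_per_param, overhead_gb=1.0):
    """Rough memory needed to hold a quantized model's weights, plus overhead."""
    weight_gb = num_params * bits_per_param / 8 / 1e9
    return weight_gb + overhead_gb

# Gemma 2 27B at 4-bit quantization: ~13.5GB of weights; with ~5GB of overhead
# that comes to ~18.5GB, in line with the article's 19GB figure for full-GPU runs
print(round(estimated_vram_gb(27e9, 4, overhead_gb=5.0), 1))  # 18.5
```

If the result exceeds a GPU's VRAM, that is the cue to move the offload slider down and let the remaining layers run on the CPU.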
  • BLOGS.NVIDIA.COM
    What Is Agentic AI?
    AI chatbots use generative AI to provide responses based on a single interaction. A person makes a query and the chatbot uses natural language processing to reply. The next frontier of artificial intelligence is agentic AI, which uses sophisticated reasoning and iterative planning to autonomously solve complex, multi-step problems. And it's set to enhance productivity and operations across industries. Agentic AI systems ingest vast amounts of data from multiple sources to independently analyze challenges, develop strategies and execute tasks like supply chain optimization, cybersecurity vulnerability analysis and helping doctors with time-consuming tasks.
How Does Agentic AI Work?
Agentic AI uses a four-step process for problem-solving:
Perceive: AI agents gather and process data from various sources, such as sensors, databases and digital interfaces. This involves extracting meaningful features, recognizing objects or identifying relevant entities in the environment.
Reason: A large language model acts as the orchestrator, or reasoning engine, that understands tasks, generates solutions and coordinates specialized models for specific functions like content creation, vision processing or recommendation systems. This step uses techniques like retrieval-augmented generation (RAG) to access proprietary data sources and deliver accurate, relevant outputs.
Act: By integrating with external tools and software via application programming interfaces, agentic AI can quickly execute tasks based on the plans it has formulated. Guardrails can be built into AI agents to help ensure they execute tasks correctly. For example, a customer service AI agent may be able to process claims up to a certain amount, while claims above that amount would have to be approved by a human.
Learn: Agentic AI continuously improves through a feedback loop, or data flywheel, where the data generated from its interactions is fed into the system to enhance models.
This ability to adapt and become more effective over time offers businesses a powerful tool for driving better decision-making and operational efficiency.

Fueling Agentic AI With Enterprise Data

Across industries and job functions, generative AI is transforming organizations by turning vast amounts of data into actionable knowledge, helping employees work more efficiently.

AI agents build on this potential by accessing diverse data through accelerated AI query engines, which process, store and retrieve information to enhance generative AI models. A key technique for achieving this is RAG, which allows AI to tap into a broader range of data sources.

Over time, AI agents learn and improve by creating a data flywheel, where data generated through interactions is fed back into the system, refining models and increasing their effectiveness.

The end-to-end NVIDIA AI platform, including NVIDIA NeMo microservices, provides the ability to manage and access data efficiently, which is crucial for building responsive agentic AI applications.

Agentic AI in Action

The potential applications of agentic AI are vast, limited only by creativity and expertise. From simple tasks like generating and distributing content to more complex use cases such as orchestrating enterprise software, AI agents are transforming industries.

Customer Service: AI agents are improving customer support by enhancing self-service capabilities and automating routine communications. Over half of service professionals report significant improvements in customer interactions, reducing response times and boosting satisfaction. There's also growing interest in digital humans: AI-powered agents that embody a company's brand and offer lifelike, real-time interactions to help sales representatives answer customer queries or solve issues directly when call volumes are high.

Content Creation: Agentic AI can help quickly create high-quality, personalized marketing content.
Generative AI agents can save marketers an average of three hours per content piece, allowing them to focus on strategy and innovation. By streamlining content creation, businesses can stay competitive while improving customer engagement.

Software Engineering: AI agents are boosting developer productivity by automating repetitive coding tasks. It's projected that by 2030, AI could automate up to 30% of work hours, freeing developers to focus on more complex challenges and drive innovation.

Healthcare: For doctors analyzing vast amounts of medical and patient data, AI agents can distill critical information to help them make better-informed care decisions. Automating administrative tasks and capturing clinical notes during patient appointments reduces the burden of time-consuming work, allowing doctors to focus on the doctor-patient connection. AI agents can also provide 24/7 support, offering information on prescribed medication usage, appointment scheduling, reminders and more to help patients adhere to treatment plans.

How to Get Started

With its ability to plan and interact with a wide variety of tools and software, agentic AI marks the next chapter of artificial intelligence, offering the potential to enhance productivity and revolutionize the way organizations operate.

To accelerate the adoption of generative AI-powered applications and agents, NVIDIA NIM Agent Blueprints provide sample applications, reference code, sample data, tools and comprehensive documentation.

NVIDIA partners including Accenture and Salesforce are helping enterprises use agentic AI with solutions built with NIM Agent Blueprints.

Visit ai.nvidia.com to learn more about the tools and software NVIDIA offers to help enterprises build their own AI agents.
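The RAG technique this post leans on reduces to a retrieve-then-prompt step: find the most relevant enterprise documents, then ground the model's answer in them. The toy version below uses word-overlap scoring in place of a real embedding-based retriever, and the documents and helper names are made up for illustration.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve the most
# relevant document by word overlap, then build a grounded prompt for an LLM.
# A production system would use an embedding-based vector search instead.

DOCS = [
    "Warranty claims are handled within 14 business days.",
    "Shipping to Europe takes 5 to 7 days.",
    "Returns require the original receipt.",
]

def retrieve(query, docs):
    # Score each document by how many query words it shares (toy retriever).
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

def build_prompt(query, context):
    # Ground the model's answer in the retrieved context.
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

query = "How long do warranty claims take?"
context = retrieve(query, DOCS)
prompt = build_prompt(query, context)
print(context)
```

Swapping the toy retriever for a vector database and sending the assembled prompt to a hosted model is the step that platforms like NVIDIA NeMo are designed to handle at enterprise scale.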
  • BLOGS.NVIDIA.COM
    NVIDIA CEO Jensen Huang to Spotlight Innovation at India's AI Summit
    The NVIDIA AI Summit India, taking place October 23-25 at the Jio World Convention Centre in Mumbai, will bring together the brightest minds to explore how India is tackling the world's grand challenges.

A major highlight: a fireside chat with NVIDIA founder and CEO Jensen Huang on October 24. He'll share his insights on AI's pivotal role in reshaping industries and how India is emerging as a global AI leader, and be joined by the chairman and managing director of Reliance Industries, Mukesh Ambani.

Passes for the event are sold out. But don't worry: audiences can tune in via livestream or watch on-demand sessions at NVIDIA AI Summit.

With over 50 sessions, live demos and hands-on workshops, the event will showcase AI's transformative impact across industries like robotics, supercomputing and industrial digitalization. It will explore opportunities both globally and locally in India. Over 70% of the use cases discussed will focus on how AI can address India's most pressing challenges.

India's AI Journey

India's rise to become a global AI leader is powered by its focus on building AI infrastructure and foundational models. NVIDIA's accelerated computing platform, now 100,000x more energy-efficient for processing large language models than a decade ago, is driving this progress. If car efficiency had improved at the same rate, vehicles today would get 280,000 miles per gallon, enough to drive to the moon on a single gallon of gas.

As India solidifies its place in AI leadership, the summit will tackle key topics. These include building AI infrastructure with NVIDIA's advanced GPUs, harnessing foundational models for Indian languages, fueling innovation in India's startup ecosystem and upskilling developers to take the country's workforce to the AI front office.
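The fuel-economy analogy is simple arithmetic worth checking: a 100,000x improvement reaching 280,000 mpg implies a baseline of 2.8 miles per gallon, and 280,000 miles comfortably exceeds the roughly 239,000-mile average distance to the moon. The baseline figure is inferred from the article's numbers, not stated by NVIDIA.

```python
# Sanity-check the fuel-economy analogy from the post.
improvement = 100_000          # stated efficiency gain over a decade
baseline_mpg = 2.8             # baseline implied by the article's numbers (inferred)
moon_distance_miles = 238_855  # average Earth-moon distance

mpg_today = baseline_mpg * improvement
print(mpg_today, mpg_today > moon_distance_miles)
```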
The momentum is undeniable.

India's AI Summit: Driving Innovation, Solving Grand Challenges

NVIDIA is at the heart of India's rise as an AI powerhouse. With six locations across the country hosting over 4,000 employees, NVIDIA plays a central role in the country's rapid progress in AI. The company works with enterprises, cloud providers and startups to build AI infrastructure powered by NVIDIA's accelerated computing stack, comprising tens of thousands of its most advanced GPUs, high-performance networking, and AI software platforms and tools.

The summit will feature sessions on how this infrastructure empowers sectors like healthcare, agriculture, education and manufacturing.

Jensen Huang's Fireside Chat

The fireside chat with Huang on October 24 is a must-watch. He'll discuss how AI is revolutionizing industries worldwide and India's increasingly important role as a global AI leader. To hear his thoughts firsthand, tune in to the livestream or catch the session on demand for insights from one of the most influential figures in AI.

Key Sessions and Speakers

Top industry experts like Niki Parmar (Essential AI), Deepu Talla (NVIDIA) and Abhinav Aggarwal (Fluid AI) will dive into a range of game-changing topics, including:

Generative AI and large language models (LLMs): Discover innovations in video synthesis and high-quality data models for large-scale inference.

Robotics and industrial efficiency: See how AI-powered robotics tackle automation challenges in manufacturing and warehouse operations.

AI in healthcare: Learn how AI transforms diagnostics and treatments, improving outcomes across India's healthcare system.

These sessions will also introduce cutting-edge NVIDIA AI networking technologies, essential for building next-gen AI data centers.

Workshops and Startup Innovation

India's vibrant startup ecosystem will be in the spotlight at the summit. Nearly 2,000 companies in India are part of NVIDIA Inception, a program that supports startups driving innovation in AI and other fields.

Onsite workshops at the AI Summit will offer hands-on experiences with NVIDIA's advanced AI tools, giving developers and startups practical skills to push the boundaries of innovation. Meanwhile, Reverse VC Pitches will provide startups with unique insights as venture capital firms pitch their visions for the future, sparking fresh ideas and collaborations.

Industrial AI and Manufacturing Innovation

NVIDIA is also backing India's industrial expansion by deploying AI technologies like Omniverse and Isaac. These tools are enhancing everything from factory planning to manufacturing and construction, helping build greenfield factories that are more efficient and sustainable. These technologies integrate advanced AI capabilities into factory operations, cutting costs while boosting sustainability.

Through hands-on workshops and deep industry insights, participants will see how India is positioning itself to lead the world in AI innovation. Join the livestream or explore sessions on demand at NVIDIA AI Summit.
  • BLOGS.NVIDIA.COM
    NVIDIA Brings Generative AI Tools, Simulation and Perception Workflows to ROS Developer Ecosystem
    At ROSCon in Odense, one of Denmark's oldest cities and a hub of automation, NVIDIA and its robotics ecosystem partners announced generative AI tools, simulation and perception workflows for Robot Operating System (ROS) developers.

Among the reveals were new generative AI nodes and workflows for ROS developers deploying to the NVIDIA Jetson platform for edge AI and robotics. Generative AI enables robots to perceive and understand the context of their surroundings, communicate naturally with humans and make adaptive decisions autonomously.

Generative AI Comes to ROS Community

ReMEmbR, built on ROS 2, uses generative AI to enhance robotic reasoning and action. It combines large language models (LLMs), vision language models (VLMs) and retrieval-augmented generation to allow robots to build and query long-term semantic memories, improving their ability to navigate and interact with their environments.

The speech recognition capability is powered by the WhisperTRT ROS 2 node. This node uses NVIDIA TensorRT to optimize OpenAI's Whisper model for low-latency inference on NVIDIA Jetson, resulting in responsive human-robot interaction.

The ROS 2 robots with voice control project uses the NVIDIA Riva ASR-TTS service to make robots understand and respond to spoken commands. The NASA Jet Propulsion Laboratory independently demonstrated ROSA, an AI-powered agent for ROS, operating on its Nebula-SPOT robot and the NVIDIA Nova Carter robot in NVIDIA Isaac Sim.

At ROSCon, Canonical is demonstrating NanoOWL, a zero-shot object detection model running on the NVIDIA Jetson Orin Nano system-on-module. It allows robots to identify a broad range of objects in real time, without relying on predefined categories.

Developers can get started today with ROS 2 Nodes for Generative AI, which brings NVIDIA Jetson-optimized LLMs and VLMs to enhance robot capabilities.

Enhancing ROS Workflows With a Sim-First Approach

Simulation is critical to safely test and validate AI-enabled robots before deployment. NVIDIA Isaac Sim, a robotics simulation platform built on OpenUSD, provides ROS developers a virtual environment to test robots by easily connecting them to their ROS packages. A new Beginner's Guide to ROS 2 Workflows With Isaac Sim, which illustrates the end-to-end workflow for robot simulation and testing, is now available.

Foxglove, a member of the NVIDIA Inception program for startups, demonstrated an integration that helps developers visualize and debug simulation data in real time using Foxglove's custom extension, built on Isaac Sim.

New Capabilities for Isaac ROS 3.2

NVIDIA Isaac ROS, built on the open-source ROS 2 software framework, is a suite of accelerated computing packages and AI models for robotics development. The upcoming 3.2 release enhances robot perception, manipulation and environment mapping.

Key improvements to NVIDIA Isaac Manipulator include new reference workflows that integrate FoundationPose and cuMotion to accelerate development of pick-and-place and object-following pipelines in robotics. NVIDIA Isaac Perceptor gains a new visual SLAM reference workflow, plus enhanced multi-camera detection and 3D reconstruction, to improve an autonomous mobile robot's (AMR) environmental awareness and performance in dynamic settings like warehouses.

Partners Adopting NVIDIA Isaac

Robotics companies are integrating NVIDIA Isaac accelerated libraries and AI models into their platforms:

Universal Robots, a Teradyne Robotics company, launched a new AI Accelerator toolkit to enable the development of AI-powered cobot applications.

Miso Robotics is using Isaac ROS to speed up its AI-powered, french-fry-making Flippy Fry Station and drive advances in efficiency and accuracy in food service automation.

Wheel.me is partnering with RGo Robotics and NVIDIA to create a production-ready AMR using Isaac Perceptor.

Main Street Autonomy is using Isaac Perceptor to streamline sensor calibration.

Orbbec announced its Perceptor Developer Kit, an out-of-the-box AMR solution for Isaac Perceptor.

LIPS Corporation has introduced a multi-camera perception devkit for improved AMR navigation.

Canonical highlighted a fully certified Ubuntu environment for ROS developers, offering long-term support out of the box.

Connecting With Partners at ROSCon

ROS community members and partners, including Canonical, Ekumen, Foxglove, Intrinsic, Open Navigation, Siemens and Teradyne Robotics, will be in Denmark presenting workshops, talks, booth demos and sessions. Highlights include:

Nav2 User Meetup Birds of a Feather session with Steve Macenski from Open Navigation LLC

ROS in Large-Scale Factory Automation with Michael Gentner from BMW AG and Carsten Braunroth from Siemens AG

Integrating AI in Robot Manipulation Workflows Birds of a Feather session with Kalyan Vadrevu from NVIDIA

Accelerating Robot Learning at Scale in Simulation Birds of a Feather session with Markus Wuensch from NVIDIA

On Use of Nav2 Docking with Open Navigation's Macenski

Additionally, Teradyne Robotics and NVIDIA are co-hosting a lunch and evening reception on Tuesday, Oct. 22, in Odense, Denmark.

The Open Source Robotics Foundation (OSRF) organizes ROSCon. NVIDIA is a supporter of Open Robotics, the umbrella organization for OSRF and all its initiatives. For the latest updates, visit the ROSCon page.
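The long-term semantic memory idea behind ReMEmbR, described earlier in this post, comes down to storing timestamped, captioned observations and querying them later in natural language. The toy in-memory store below illustrates the shape of that workflow; the real system embeds observations with VLMs and retrieves them with a vector database and an LLM, so every name and the keyword-matching retrieval here are simplified stand-ins.

```python
# Toy illustration of a robot building and querying a semantic memory,
# in the spirit of ReMEmbR. Real systems embed observations with a VLM and
# retrieve with vector search; this stand-in uses keyword matching.
from dataclasses import dataclass

@dataclass
class Memory:
    t: float          # timestamp in seconds
    place: str        # where the robot was
    caption: str      # what a vision-language model said it saw

memories = [
    Memory(10.0, "hallway", "a red fire extinguisher mounted on the wall"),
    Memory(42.5, "kitchen", "a person loading a dishwasher"),
    Memory(77.0, "loading dock", "stacked cardboard boxes near the door"),
]

def query(question, memories):
    # Return the memory whose caption best overlaps the question's words.
    q = set(question.lower().split())
    return max(memories, key=lambda m: len(q & set(m.caption.lower().split())))

hit = query("where did you see the fire extinguisher", memories)
print(hit.place)
```

Because each memory carries a place and a timestamp, answers to "where" and "when" questions fall out of retrieval directly, which is what lets a robot navigate back to something it saw earlier.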
  • BLOGS.NVIDIA.COM
    NVIDIA and Microsoft Give AI Startups a Double Dose of Acceleration
    NVIDIA is expanding its collaboration with Microsoft to support global AI startups across industries, with an initial focus on healthcare and life sciences companies.

Announced today at the HLTH healthcare innovation conference, the initiative connects the startup ecosystem by bringing together the NVIDIA Inception global program for cutting-edge startups and Microsoft for Startups, broadening innovators' access to accelerated computing by providing cloud credits, software for AI development and the support of technical and business experts.

The first phase will focus on high-potential digital health and life sciences companies that are part of both programs. Future phases will focus on startups in other industries.

Microsoft for Startups will provide each company with $150,000 of Microsoft Azure credits to access leading AI models, up to $200,000 worth of Microsoft business tools, and priority access to its Pegasus Program for go-to-market support.

NVIDIA Inception will provide 10,000 ai.nvidia.com inference credits to run GPU-optimized AI models through NVIDIA-managed serverless APIs; preferred pricing on NVIDIA AI Enterprise, which includes the full suite of NVIDIA Clara healthcare and life sciences computing platforms, software and services; early access to new NVIDIA healthcare offerings; and opportunities to connect with investors through the Inception VC Alliance and with industry partners through the Inception Alliance for Healthcare.

Both companies will provide the selected startups with dedicated technical support and hands-on workshops to develop digital health applications with the NVIDIA technology stack on Azure.

Supporting Startups at Every Stage

Hundreds of companies are already part of both NVIDIA Inception and Microsoft for Startups, using the combination of accelerated computing infrastructure and cutting-edge AI to advance their work.

Artisight, for example, is a smart hospital startup using AI to improve operational efficiency, documentation and care coordination in order to reduce the administrative burden on clinical staff and improve the patient experience. Its smart hospital network includes over 2,000 cameras and microphones at Northwestern Medicine in Chicago and over 200 other hospitals. The company uses speech recognition models that can automate patient check-in with voice-enabled kiosks, and computer vision models that can alert nurses when a patient is at risk of falling. Its products use software including NVIDIA Riva for conversational AI, NVIDIA DeepStream for vision AI and NVIDIA Triton Inference Server to simplify AI inference in production.

"Access to the latest AI technologies is critical to developing smart hospital solutions that are reliable enough to be deployed in real-world clinical settings," said Andrew Gostine, founder and CEO of Artisight. "The support of NVIDIA Inception and Microsoft for Startups has enabled our company to scale our products to help top U.S. hospitals care for thousands of patients."

Another company, Pangaea Data, is helping healthcare organizations and pharmaceutical companies identify patients who remain undertreated or untreated despite available intelligence in their existing medical records. The company's PALLUX platform supports clinicians at the point of care by finding more patients for screening and treatment. Deployed with NVIDIA GPUs on Azure's HIPAA-compliant, secure cloud environment, PALLUX uses the NVIDIA FLARE federated learning framework to preserve patient privacy while driving improvement in health outcomes.

PALLUX helped one healthcare provider find 6x more cancer patients with cachexia, a condition characterized by loss of weight and muscle mass, for treatment and clinical trials. Pangaea Data's platform achieved 90% accuracy and was deployed on the provider's existing infrastructure within 12 weeks.

"By building our platform on a trusted cloud environment, we're offering healthcare providers and pharmaceutical companies a solution to uncover insights from existing health records and realize the true promise of precision medicine and preventative healthcare," said Pangaea Data CEO Vibhor Gupta. "Microsoft and NVIDIA have supported our work with powerful virtual machines and AI software, enabling us to focus on advancing our platform, rather than infrastructure management."

Other startups participating in both programs and using NVIDIA GPUs on Azure include:

Artificial, a lab orchestration startup that enables researchers to digitize end-to-end scientific workflows with AI tools that optimize scheduling, automate data entry tasks and guide scientists in real time using virtual assistants. The company is exploring the use of NVIDIA BioNeMo, an AI platform for drug discovery.

BeeKeeperAI, which enables secure computing on sensitive data, including regulated data that can't be anonymized or de-identified. Its EscrowAI platform integrates trusted execution environments with confidential computing and other privacy-enhancing technologies, including NVIDIA H100 Tensor Core GPUs, to meet data protection requirements and protect data sovereignty, individual privacy and intellectual property.

Niramai, a startup that has developed an AI-powered medical device for early breast cancer detection. Its Thermalytix solution is a low-cost, portable screening tool that has been used to help screen over 250,000 women in 18 countries.

Building on a Trove of Healthcare Resources

Microsoft earlier this year announced a collaboration with NVIDIA to boost healthcare and life sciences organizations with generative AI, accelerated computing and the cloud. Aimed at supporting projects in clinical research, drug discovery, medical imaging and precision medicine, this collaboration brought together Microsoft Azure with NVIDIA DGX Cloud, an end-to-end, scalable AI platform for developers.

It also provides users of NVIDIA DGX Cloud on Azure access to NVIDIA Clara, including domain-specific resources such as NVIDIA BioNeMo, a generative AI platform for drug discovery; NVIDIA MONAI, a suite of enterprise-grade AI for medical imaging; and NVIDIA Parabricks, a software suite designed to accelerate processing of sequencing data for genomics applications.

Join the Microsoft for Startups Founders Hub and the NVIDIA Inception program.
  • BLOGS.NVIDIA.COM
    NVIDIA Works With Deloitte to Deploy Digital AI Agents for Healthcare
    Ahead of a visit to the hospital for a surgical procedure, patients often have plenty of questions about what to expect, and they can be plenty nervous.

To help minimize presurgery jitters, NVIDIA and Deloitte are developing AI agents using NVIDIA AI to bring the next generation of digital, frontline teammates to patients before they even step foot inside the hospital. These virtual teammates can have natural, human-like conversations with patients, answer a wide range of questions and provide supporting guidance prior to preadmission appointments at hospitals.

This demo shows one virtual representative in action, answering patient questions: https://blogs.nvidia.com/wp-content/uploads/2024/10/TOH-PAU-DEMO.mp4

Working with NVIDIA, Deloitte has developed Frontline AI Teammate for use in settings like hospitals, where the digital avatar can have practical conversations in any language, giving the end user, such as a patient, instant answers to pressing questions. Powered by the NVIDIA AI Enterprise software platform, Frontline AI Teammate includes avatars, generative AI and large language models.

"Avatar-based conversational AI agents offer an incredible opportunity to reduce the productivity paradox that our healthcare system faces with digitization," said Niraj Dalmia, partner at Deloitte Canada. "It could possibly be the complementary innovation that reduces administrative burden, complements our healthcare human resources to free up capacity and helps solve for patient experience challenges."

Next-Gen Technologies Powering Digital Humans

Digital humans can provide lifelike interactions that enhance experiences for doctors and patients. Developers can tap into NVIDIA NIM microservices, which streamline the path for developing AI-powered applications and moving AI models into production, to craft digital humans for healthcare industry applications.

NIM includes an easily adaptable NIM Agent Blueprint developers can use to create interactive, AI-driven avatars that are ideal for telehealth, as well as NVIDIA NeMo Retriever, an industry-leading embedding, retrieval and re-ranking model that allows for fast responses based on up-to-date healthcare data.

Customizable digital humans, like James, an interactive demo developed by NVIDIA, can handle tasks such as scheduling appointments, filling out intake forms and answering questions about upcoming health services. This can make healthcare services more efficient and more accessible to patients. In addition to NIM microservices, James uses NVIDIA ACE and ElevenLabs digital human technologies to provide natural, low-latency responses.

NVIDIA ACE is a suite of AI, graphics and simulation technologies for bringing digital humans to life. It can integrate every aspect of a digital human into healthcare applications, from speech and translation abilities capable of understanding diverse accents and languages to realistic animations of facial and body movements.

Deloitte's Frontline AI Teammate, powered by the NVIDIA AI Enterprise platform and built on Deloitte's Conversational AI Framework, is designed to deliver human-to-machine experiences in healthcare settings. Developed within the NVIDIA Omniverse platform, Deloitte's lifelike avatar can respond to complex, domain-specific questions that are pivotal in healthcare delivery. The avatar uses NVIDIA Riva for fluid, multilingual communication, helping ensure no patient is left behind due to language barriers. It's also equipped with the NeMo Megatron-Turing 530B large language model for accurate understanding and processing of patient data. These advanced capabilities can make clinical visits less intimidating, especially for patients who may feel uneasy about medical environments.

Personalized Experiences for Hospital Patients

Patients can get overwhelmed by the amount of pre-operative information. Typically, they have only one preadmission appointment, many weeks before the surgery, which can leave them with lingering questions and escalating concerns. The stress of a serious diagnosis may prevent them from asking all the necessary questions during these brief interactions. This can result in patients arriving unprepared for their preadmission appointments, lacking knowledge about the appointment's purpose, duration, location and necessary documents, and potentially leading to delays or even rescheduling of their surgeries.

To enhance patient preparation and reduce pre-procedure anxiety, The Ottawa Hospital is using AI agents, powered by NVIDIA and Deloitte technologies, to provide more consistent, accurate and continuous access to information. With the digital teammate, patients can experience benefits including:

24/7 access to the digital teammate using a smartphone, tablet or home computer.

Reliable, preapproved answers to detailed questions, including information around anesthesia or the procedure itself.

Post-surgery consultation to resolve any questions about the recovery process, potentially improving treatment adherence and health outcomes.

In user acceptance testing of the digital teammate conducted this summer, a majority of testers noted that the responses provided were clear, relevant and met the needs of the given interaction.

"The Frontline AI Teammate offers a novel and innovative solution to help combat our health human resource crisis. It has the potential to reduce the administrative burden, giving back time to healthcare providers to provide the quality care our population deserves and expects from The Ottawa Hospital," said Mathieu LeBreton, digital experience lead at The Ottawa Hospital. "The opportunity to explore these technologies is well-timed, given the planning of the New Campus Development, a new hospital project in Ottawa. Proper identification of the problems we are trying to solve is imperative to ensure this is done responsibly and transparently."

Deloitte is working with other hospitals and healthcare institutions to deploy digital agents. A patient-facing pilot with The Ottawa Hospital is expected to go live by the end of the year. Developers can get started by accessing the digital human NIM Agent Blueprint.
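NIM microservices expose an OpenAI-compatible HTTP interface, so a patient-facing frontend like the ones described above would typically send a chat-completions request shaped like the one assembled below. The endpoint URL, model name and system prompt are illustrative placeholders, not the actual Ottawa Hospital or Deloitte configuration, and the request is only built here, not sent.

```python
# Sketch of a chat-completions request a patient-facing app might send to a
# NIM microservice. NIM endpoints follow the OpenAI-compatible API shape;
# the URL, model name and prompt below are illustrative placeholders.
import json

NIM_URL = "http://localhost:8000/v1/chat/completions"  # placeholder endpoint

def build_request(patient_question):
    return {
        "model": "meta/llama-3.1-8b-instruct",  # whichever model the NIM serves
        "messages": [
            {"role": "system",
             "content": "You answer preadmission questions using only "
                        "preapproved hospital guidance."},
            {"role": "user", "content": patient_question},
        ],
        "temperature": 0.2,  # keep answers consistent for clinical guidance
    }

payload = build_request("What should I bring to my preadmission appointment?")
print(json.dumps(payload, indent=2))
```

Constraining the system prompt to preapproved guidance, and retrieving that guidance with a component like NeMo Retriever, is what keeps answers within the "reliable, preapproved" boundary the post describes.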
  • BLOGS.NVIDIA.COM
    Get Ready to Slay: Dragon Age: The Veilguard to Soar Into GeForce NOW at Launch
    Bundle up this fall with GeForce NOW and Dragon Age: The Veilguard with a special, limited-time promotion just for members. The highly anticipated role-playing game (RPG) leads 10 titles joining the ever-growing GeForce NOW library of over 2,000 games.

A Heroic Bundle

The mother of dragon bundles.

Fight for Thedas' future at Ultimate quality this fall: new and existing members who purchase six months of GeForce NOW Ultimate can get BioWare and Electronic Arts' epic RPG Dragon Age: The Veilguard for free when it releases on Oct. 31.

Rise as Rook, Dragon Age's newest hero. Lead a team of seven companions, each with their own unique story, against a new evil rising in Thedas. The latest entry in the legendary Dragon Age franchise lets players customize their characters and engage with new romanceable companions whose stories unfold over time. Band together and become the Veilguard.

Ultimate members can experience BioWare's latest entry at full GeForce quality, with support for NVIDIA DLSS 3, low-latency gameplay with NVIDIA Reflex, and enhanced image quality and immersion with ray-traced ambient occlusion and reflections. Ultimate members can also play popular PC games at up to 4K resolution with extended session lengths, even on low-spec devices.

Move fast: this bundle is only available for a limited time, until Oct. 30.

Supernatural Thrills, Super New Games

Eternal adventure, instant access.

New World: Aeternum is the latest content for Amazon Games' hit action RPG. Available for members to stream at launch this week, it offers a thrilling action RPG experience in a vast, perilous world. Explore the mysterious island, encounter diverse creatures, face supernatural dangers and uncover ancient secrets. The game's action-oriented combat system and wide variety of weapons allow for diverse playstyles, while the crafting and progression systems offer depth for long-term engagement. Then, grab the gaming squad for intense combat and participate in large-scale battles for territorial control.

Members can look for the following games available to stream in the cloud this week:

Neva (New release on Steam, Oct. 15)

MechWarrior 5: Clans (New release on Steam and Xbox, available on PC Game Pass, Oct. 16)

A Quiet Place: The Road Ahead (New release on Steam, Oct. 17)

Assassin's Creed Mirage (New release on Steam, Oct. 17)

Artisan TD (Steam)

ASKA (Steam)

Dungeon Tycoon (Steam)

South Park: The Fractured But Whole (Available on PC Game Pass, Oct. 16. Members will need to activate access)

Spirit City: Lofi Sessions (Steam)

Star Trucker (Xbox, available on Game Pass)

What are you planning to play this weekend? Let us know on X or in the comments below.

"bundle up" (NVIDIA GeForce NOW, @NVIDIAGFN, October 16, 2024)
  • BLOGS.NVIDIA.COM
    Sustainable Manufacturing and Design: How Digital Twins Are Driving Efficiency and Cutting Emissions
    Improving the sustainability of manufacturing involves optimizing entire product lifecycles from material sourcing and transportation to design, production, distribution and end-of-life disposal.According to the International Energy Agency, reducing the carbon footprint of industrial production by just 1% could save 90 million tons of CO emissions annually. Thats equivalent to taking more than 20 million gasoline-powered cars off the road each year.Technologies such as digital twins and accelerated computing are enabling manufacturers to reduce emissions, enhance energy efficiency and meet the growing demand for environmentally conscious production.Siemens and NVIDIA are at the forefront of developing technologies that help customers achieve their sustainability goals and improve production processes.Key Challenges in Sustainable ManufacturingBalancing sustainability with business objectives like profitability remains a top concern for manufacturers. A study by Ernst & Young in 2022 found that digital twins can reduce construction costs by up to 35%, underscoring the close link between resource consumption and construction expenses.Yet, one of the biggest challenges in driving sustainable manufacturing and reducing overhead is the presence of silos between departments, different plants within the same organization and across production teams. 
These silos arise from a variety of issues, including conflicting priorities and incentives, a lack of common energy-efficiency metrics and language, and the need for new skills and solutions to bridge these gaps.Data management also presents a hurdle, with many manufacturers struggling to turn vast amounts of data into actionable insights particularly those that can impact sustainability goals.According to a case study by The Manufacturer, a quarter of respondents surveyed acknowledged that their data shortcomings negatively impact energy efficiency and environmental sustainability, with nearly a third reporting that data is siloed to local use cases.Addressing these challenges requires innovative approaches that break down barriers and use data to drive sustainability. Acting as a central hub for information, digital twin technology is proving to be an essential tool in this effort.The Role of Digital Twins in Sustainable ManufacturingIndustrial-scale digital twins built on the NVIDIA Omniverse development platform and Universal Scene Description (OpenUSD) are transforming how manufacturers approach sustainability and scalability.These technologies power digital twins that take engineering data from various sources and contextualize it as it would appear in the real world. This breaks down information silos and offers a holistic view that can be shared across teams from engineering to sales and marketing.This enhanced visibility enables engineers and designers to simulate and optimize product designs, facility layouts, energy use and manufacturing processes before physical production begins. 
That allows for deeper insights and collaboration by helping stakeholders make more informed decisions to improve efficiency and reduce costly errors and last-minute changes that can result in significant waste.

To further transform how products and experiences are designed and manufactured, Siemens is integrating NVIDIA Omniverse Cloud application programming interfaces into its Siemens Xcelerator platform, starting with Teamcenter X, its cloud-based product lifecycle management software. These integrations enable Siemens to bring the power of photorealistic visualization to complex engineering data and workflows, allowing companies to create physics-based digital twins that help eliminate workflow waste and errors.

Siemens and NVIDIA have demonstrated how companies like HD Hyundai, a leader in sustainable ship manufacturing, are using these new capabilities to visualize and interact with complex engineering data at new levels of scale and fidelity. HD Hyundai is unifying and visualizing complex engineering projects directly within Teamcenter X.

Physics-based digital twins are also being used to test and validate robotics and physical AI before they're deployed into real-world manufacturing facilities. Foxconn, the world's largest electronics manufacturer, has introduced a virtual plant that pushes the boundaries of industrial automation.
Foxconn's digital twin platform, built on Omniverse and NVIDIA Isaac, replicates a new factory in the Guadalajara, Mexico, electronics hub, allowing engineers to optimize processes and train robots for efficient production of NVIDIA Blackwell systems. By simulating the factory environment, engineers can determine the best placement for heavy robotic arms, optimize movement and maximize safe operations, while strategically positioning thousands of sensors and video cameras to monitor the entire production process. Foxconn's virtual factory uses a digital twin powered by the NVIDIA Omniverse and NVIDIA Isaac platforms to produce NVIDIA Blackwell systems.

The use of digital twins, like those in Foxconn's virtual factory, is becoming increasingly common in industrial settings for simulation and testing. Foxconn's chairman, Young Liu, highlighted how the digital twin will lead to enhanced automation and efficiency, resulting in significant savings in time, cost and energy. The company expects to increase manufacturing efficiency while reducing energy consumption by over 30% annually.

By connecting data from Siemens Xcelerator software to its platform built on NVIDIA Omniverse and OpenUSD, the virtual plant allows Foxconn to design and train robots in a realistic, simulated environment, revolutionizing its approach to automation and sustainable manufacturing.

Making Every Watt Count

One consideration for industries everywhere is how the rising demand for AI is outpacing the adoption of renewable energy.
This means business leaders, particularly manufacturing plant and data center operators, must maximize energy efficiency and ensure every watt is utilized effectively to balance decarbonization efforts alongside AI growth.

The best and simplest means of optimizing energy use is to accelerate every possible workload. Using accelerated computing platforms that integrate both GPUs and CPUs, manufacturers can significantly enhance computational efficiency. GPUs, specifically designed for handling complex calculations, can outperform traditional CPU-only systems in AI tasks; these systems can be up to 20x more energy efficient when it comes to AI inference and training. This leap in efficiency has fueled substantial gains over the past decade, enabling AI to address more complex challenges while maintaining energy-efficient operations.

Building on these advances, businesses can further reduce their environmental impact by adopting key energy management strategies. These include implementing energy demand management and efficiency measures, scaling battery storage for short-duration power outages, securing renewable energy sources for baseload electricity, using renewable fuels for backup generation and exploring innovative ideas like heat reuse.

Join the Siemens and NVIDIA session at the 7X24 Exchange 2024 Fall Conference to discover how digital twins and AI are driving sustainable solutions across data centers.

The Future of Sustainable Manufacturing: Industrial Digitalization

The next frontier in manufacturing is the convergence of the digital and physical worlds in what is known as industrial digitalization, or the industrial metaverse. Here, digital twins become even more immersive and interactive, allowing manufacturers to make data-driven decisions faster than ever.

"We will revolutionize how products and experiences are designed, manufactured and serviced," said Roland Busch, president and CEO of Siemens AG.
"On the path to the industrial metaverse, this next generation of industrial software enables customers to experience products as they would in the real world: in context, in stunning realism and, in the future, to interact with them through natural language input."

Leading the Way With Digital Twins and Sustainable Computing

Siemens and NVIDIA's collaboration showcases the power of digital twins and accelerated computing for reducing the environmental impact caused by the manufacturing industry every year. By leveraging advanced simulations, AI insights and real-time data, manufacturers can reduce waste and increase energy efficiency on their path to decarbonization.

Learn more about how Siemens and NVIDIA are accelerating sustainable manufacturing. Read about NVIDIA's sustainable computing efforts and check out the energy-efficiency calculator to discover potential energy and emissions savings.
  • BLOGS.NVIDIA.COM
    Waterways Wonder: Clearbot Autonomously Cleans Waters With Energy-Efficient AI
    What started as two classmates seeking a free graduation trip to Bali subsidized by a university project ended up as an AI-driven sea-cleaning boat prototype built of empty water bottles, hobbyist helicopter blades and a GoPro camera.University of Hong Kong grads Sidhant Gupta and Utkarsh Goel have since then made a splash with their Clearbot autonomous trash collection boats enabled by NVIDIA Jetson.We came up with the idea to clean the water there because there are a lot of dirty beaches, and the local community depends on them to be clean for their tourism business, said Gupta, who points out the same is true for touristy regions of Hong Kong and India, where they do business now.Before launching Clearbot, in 2021, the university friends put up their proof-of-concept waste collection boat on a website and then just forgot about it, he said, starting work after graduation. A year later, a marine construction company proposed a water cleanup project, and the pair developed their prototype around the effort to remove three tons of trash daily from a Hong Kong marine construction site.They were using a big boat and a crew of three to four people every day, at a cost of about $1,000 per day thats when we realized we can build this and do it better and at lower cost, said Gupta.Plastic makes up about 85% of ocean litter, with an estimated 11 million metric tons entering oceans every year, according to the United Nations Environment Programme. Clearbot aims to remove waste from waterways before it gets into oceans.Cleaning Waters With Energy-Efficient Jetson XavierClearbot, based in Hong Kong and India, has 24 employees developing and deploying its water-cleaning electric-powered boats that can self-dock at solar charging stations. We believe that humanitys relationship with the ocean is sort of broken the question is can we make that better and is there a better future outcome? 
The ocean vessels, ranging in length from 10 to 16 feet, have two cameras: one for navigation and another to identify the waste the boats have scooped up. The founders trained garbage models on cloud and desktop NVIDIA GPUs from images they took in their early days, and now they have large libraries of images from collecting on cleanup sites. They've also trained models that enable the Clearbot to autonomously navigate away from obstacles.

The energy-efficient Jetson Xavier NX allows the water-cleaning boats, propelled by battery-driven motors, to collect for eight hours at a time before returning to recharge.

Harbors and other waterways frequented by tourists and businesses often rely on diesel-powered boats with workers using nets to remove garbage, said Gupta. Traditionally, a crew of 50 people in such scenarios can run about 15 or 20 boats, he estimates. With Clearbot, a crew of 50 people can run about 150 boats, boosting intake, he said.

"We believe that humanity's relationship with the ocean is sort of broken. The question is: can we make that better, and is there a better future outcome?" said Gupta. "We can do it 100% emissions-free, so you're not creating pollution while you're cleaning pollution."

Customers Harnessing Clearbot for Environmental Benefits

Kingspan, a maker of building materials, is working with Clearbot to clean up trash and oil in rivers and lakes in Nongstoin, India. So far, the work has resulted in the removal of 1.2 tons of waste per month in the area.

Umiam Lake in Meghalaya, India, has long been a tourist destination and place for fishing.
However, it's become so polluted that areas of the water's surface aren't visible beneath all of the floating trash. The region's leadership is working with Clearbot, in a project with the University of California, Berkeley, Haas School of Business, to help remove the trash from the lake. Since the program began three months ago, Clearbot has collected 15 tons of waste.

Mitigating Environmental Impacts With Clearbot Data

Clearbot has expanded its services beyond trash collection to address environmental issues more broadly. The company is now assisting in marine pollution control for sewage, oil, gas and other chemical spills, as well as undersea inspections for dredging projects, examination of algae growth and many other areas where its autonomous boats can capture data.

Unforeseen by Clearbot's founders, they have discovered that the data about garbage collection and other environmental pollutants can be used in mitigation strategies. The images they collect are geotagged, so if somebody is trying to find the source of a problem, backtracking through the data on findings from Clearbot's software dashboard is a good place to start. For example, if there's a concentration of plastic bottle waste of a particular type in a particular area, local agencies could track back to where it's coming from. This could allow local governments to mitigate the waste by reaching out to the polluter to put a stop to the activity that is causing it, said Gupta.

"Let's say I'm a municipality and I want to ban plastic bags in my area. You need the NGOs, the governments and the change makers to acquire the data to back their justifications for why they want to close down the plastic plant up the stream," said Gupta. "That data is being generated on board your NVIDIA Jetson Xavier."

Learn about NVIDIA Jetson Xavier and Earth-2.
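The geotag-driven backtracking Gupta describes can be sketched as a simple aggregation: bin each geotagged detection into a coarse latitude/longitude grid and surface the densest cell for a given waste type as the place to start investigating. Everything below (the record layout, field names and the `hotspot` helper) is an illustrative assumption for this sketch, not Clearbot's actual dashboard code, which is not public.

```python
# Hypothetical sketch: finding a pollution hotspot from geotagged detections.
# Records are (waste_type, latitude, longitude) tuples; real systems would
# carry timestamps, boat IDs and detection confidences as well.
from collections import Counter
from math import floor

def hotspot(detections, waste_type, cell_deg=0.01):
    """Return ((lat_cell, lon_cell), count) for the grid cell with the most
    detections of `waste_type`; cells are cell_deg-degree bins (~1 km)."""
    cells = Counter(
        (floor(lat / cell_deg), floor(lon / cell_deg))
        for kind, lat, lon in detections
        if kind == waste_type
    )
    return cells.most_common(1)[0] if cells else None

# Toy data: plastic-bottle sightings clustered near one inflow point.
sightings = [
    ("plastic_bottle", 25.565, 91.886),
    ("plastic_bottle", 25.566, 91.887),
    ("plastic_bottle", 25.565, 91.885),
    ("fishing_net",    25.600, 91.900),
]
cell, count = hotspot(sightings, "plastic_bottle")
```

The densest cell's index can then be mapped back to a location upstream of the drift, giving agencies a concrete starting point rather than a full survey of the lake.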
    "We Would Like to Achieve Superhuman Productivity," NVIDIA CEO Says as Lenovo Brings Smarter AI to Enterprises
Moving to accelerate enterprise AI innovation, NVIDIA founder and CEO Jensen Huang joined Lenovo CEO Yuanqing Yang on stage Tuesday during the keynote at Lenovo Tech World 2024. Together, they introduced the Lenovo Hybrid AI Advantage with NVIDIA, a full-stack platform for building and deploying AI capabilities across the enterprise that drive speed, innovation and productivity.

"We would like to achieve essentially superhuman productivity," Huang told a crowd gathered in person and online for Lenovo's Seattle event. "And these AI agents are helping employees across industries to be more efficient and productive."

They also unveiled a new high-performance AI server featuring Lenovo's Neptune liquid-cooling technology and NVIDIA Blackwell, marking a leap forward in sustainability and energy efficiency for AI systems.

"This is going to be the largest of industrial revolutions we've ever seen," Huang noted, highlighting the profound impact AI is having on industries worldwide. "And we're seeing, in the last 12 months or so, just an extraordinary awakening in every single industry, every single company, every single country."

Lenovo Unveils Hybrid AI Advantage With NVIDIA

The Lenovo Hybrid AI Advantage with NVIDIA is built on Lenovo's services and infrastructure capabilities with NVIDIA AI software and accelerated computing.
It enables organizations to create agentic AI and physical AI that transform data into actionable business outcomes more efficiently.

"Our strategy is to combine modularization with customization so that we can respond quickly to customer needs while tailoring our solutions for them," Yang said.

Introducing Lenovo AI Fast Start and Hybrid AI Solutions

As part of the Lenovo Hybrid AI Advantage, Lenovo has introduced Lenovo AI Fast Start, a service designed to help organizations rapidly build generative AI solutions. Leveraging the NVIDIA AI Enterprise software platform, which includes NVIDIA NIM microservices and NVIDIA NeMo for building AI agents, Lenovo AI Fast Start enables customers to prove the business value of AI use cases across personal, enterprise and public AI platforms within weeks. By giving organizations access to AI assets, experts and partners, the service helps tailor solutions to meet the needs of each business, speeding up deployment at scale.

This platform also includes the Lenovo AI Service Library and uses NVIDIA AI Enterprise software, including NVIDIA NIM, NVIDIA NeMo and NVIDIA NIM Agent Blueprints for agentic AI, as well as support for NVIDIA Omniverse for physical AI. The AI Service Library offers a collection of preconfigured AI solutions that can be customized for different needs. When these offerings are combined with NIM Agent Blueprints, businesses can rapidly develop and deploy AI agents tailored to their specific needs, accelerating AI adoption across industries. With the addition of NeMo for large language model optimization and Omniverse for digital twin simulations, enterprises can use cutting-edge AI technologies for both agentic and physical AI applications.

Energy Efficiency and AI Infrastructure

Yang and Huang emphasized the critical need for energy-efficient AI infrastructure.

"Speed is sustainability. Speed is performance."
"Speed is energy efficiency," Huang said, stressing how performance improvements directly contribute to reducing energy consumption and increasing efficiency.

Lenovo's 6th Generation Neptune Liquid Cooling solution supports AI computing and high-performance computing while delivering better energy efficiency, Yang said. By reducing data center power consumption by up to 40%, Neptune allows businesses to efficiently run accelerated AI workloads while lowering operational costs and environmental impact.

In line with this, Lenovo's TruScale infrastructure services offer a scalable, cloud-based model that gives organizations access to AI computing power without the need for large upfront investments in physical infrastructure, ensuring businesses can scale deployments as needed.

Introducing Lenovo ThinkSystem SC777 V4 Neptune With NVIDIA Blackwell

The CEOs revealed the ThinkSystem SC777 V4 Neptune server, featuring NVIDIA GB200 Grace Blackwell. This 100% liquid-cooled system requires no fans or specialized data center air conditioning. It fits into a standard rack and runs on standard power.

"To an engineer, this is sexy," Huang said, referring to the ThinkSystem SC777 V4 Neptune server he and Yang had just unveiled.

The SC777 includes next-gen NVIDIA NVLink interconnect, supporting NVIDIA Quantum-2 InfiniBand or Spectrum-X Ethernet networking. It also supports NVIDIA AI Enterprise software with NIM microservices.

"Our partnership spans from infrastructure to software and to service level," Yang said. "Together, we deploy enterprise AI agents to our customers."
    MAXimum AI: RTX-Accelerated Adobe AI-Powered Features Speed Up Content Creation
At the Adobe MAX creativity conference this week, Adobe announced updates to its Adobe Creative Cloud products, including Premiere Pro and After Effects, as well as to Substance 3D products and the Adobe video ecosystem. These apps are accelerated by NVIDIA RTX and GeForce RTX GPUs in the cloud or running locally on RTX AI PCs and workstations.

One of the most highly anticipated features is Generative Extend in Premiere Pro (beta), which uses generative AI to seamlessly add frames to the beginning or end of a clip. Powered by the Firefly Video Model, it's designed to be commercially safe and trained only on content Adobe has permission to use, so artists can create with confidence.

Adobe Substance 3D Collection apps offer numerous RTX-accelerated features for 3D content creation, including ray tracing, AI delighting and upscaling, and image-to-material workflows powered by Adobe Firefly. Substance 3D Viewer, entering open beta at Adobe MAX, is designed to unlock 3D in 2D design workflows by allowing 3D files to be opened, viewed and used across design teams. This will improve interoperability with other RTX-accelerated Adobe apps like Photoshop.

Adobe Firefly integrations have also been added to Substance 3D Collection apps, including Text to Texture, Text to Pattern and Image to Texture tools in Substance 3D Sampler, as well as Generative Background in Substance 3D Stager, to further enhance 3D content creation with generative AI.

The October NVIDIA Studio Driver, designed to optimize creative apps, will be available for download tomorrow.
For automatic Studio Driver notifications, as well as easy access to apps like NVIDIA Broadcast, download the NVIDIA app beta.

Video Editing Evolved

Adobe Premiere Pro has transformed video editing workflows over the last four years with features like Auto Reframe and Scene Edit Detection. The recently launched GPU-accelerated Enhance Speech, AI Audio Category Tagging and Filler Word Detection features allow editors to use AI to intelligently cut and modify video scenes.

The Adobe Firefly Video Model, now available in limited beta at Firefly.Adobe.com, brings generative AI to video, marking the next advancement in video editing. It allows users to create and edit video clips using simple text prompts or images, helping fill in content gaps without having to reshoot, extend or reframe takes. It can also be used to create video clip prototypes as inspiration for future shots.

Topaz Labs has introduced a new plug-in for Adobe After Effects that uses AI models to improve video quality, giving users access to enhancement and motion deblur models for sharper, clearer footage. Accelerated on GeForce RTX GPUs, these models run nearly 2.5x faster on the GeForce RTX 4090 Laptop GPU compared with the MacBook Pro M3 Max. Stay tuned for NVIDIA TensorRT enhancements and more Topaz Video AI effects coming to the After Effects plug-in soon.

3D Super Powered

The Substance 3D Collection is revolutionizing the ideation stage of 3D creation with powerful generative AI features in Substance 3D Sampler and Stager. Sampler's Text to Texture, Text to Pattern and Image to Texture tools, powered by Adobe Firefly, allow artists to rapidly generate reference images from simple prompts that can be used to create parametric materials.

Stager's Generative Background feature helps designers explore backgrounds for staging 3D models, using text descriptions to generate images.
Stager can then match lighting and camera perspective, allowing designers to explore more variations faster when iterating and mocking up concepts.

Substance 3D Viewer also offers a connected workflow with Photoshop: 3D models can be placed into Photoshop projects, and edits made to the model in Viewer are automatically sent back to the Photoshop project. GeForce RTX GPU hardware acceleration and ray tracing provide smooth movement in the viewport, producing up to 80% higher frames per second on the GeForce RTX 4060 Laptop GPU compared with the MacBook M3 Pro.

There are also new Firefly-powered features in Substance 3D Viewer, like Text to 3D and 3D Model to Image, that combine text prompts and 3D objects to give artists more control when generating new scenes and variations.

The latest After Effects release features an expanded range of 3D tools that enable creators to embed 3D animations, cast ultra-realistic shadows on 2D objects and isolate effects in 3D space. After Effects now also has an RTX GPU-powered Advanced 3D Renderer that accelerates the processing-intensive, time-consuming task of applying HDRI lighting, lowering creative barriers to entry while improving content realism. Rendering can be done 30% faster on a GeForce RTX 4090 GPU over the previous generation.

Pairing Substance 3D with After Effects' native and fast 3D integration allows artists to significantly boost the visual quality of 3D in After Effects with precision texturing and access to more than 20,000 parametric 3D materials, IBL environment lights and 3D models.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what's new and what's next by subscribing to the AI Decoded newsletter.
    NVIDIA AI Summit Panel Outlines Autonomous Driving Safety
The autonomous driving industry is shaped by rapid technological advancements and the need for standardized guidelines to ensure the safety of both autonomous vehicles (AVs) and their interaction with human-driven vehicles.

At the NVIDIA AI Summit this week in Washington, D.C., industry experts shared viewpoints on the AV safety landscape from regulatory and technology perspectives. Danny Shapiro, vice president of automotive at NVIDIA, led the wide-ranging conversation with Mark Rosekind, former administrator of the National Highway Traffic Safety Administration, and Marco Pavone, director of AV research at NVIDIA.

To frame the discussion, Shapiro kicked off with a sobering comment about the high number of crashes, injuries and fatalities on the world's roadways. Human error remains a serious problem and the primary cause of these incidents.

"Improving safety on our roads is critical," Shapiro said, noting that NVIDIA has been working with the auto industry for over two decades, including on advanced driver assistance systems and fully autonomous driving technology development.

NVIDIA's approach to AV development is centered on the integration of three computers: one for training the AI, one for simulation to test and validate the AI, and one in the vehicle to process sensor data in real time to make safe driving decisions.
Together, these systems enable continuous development cycles, always improving the AV software in performance and safety.

Rosekind, a highly regarded automotive safety expert, spoke about the patchwork of regulations that exists across the U.S., explaining that federal agencies focus on the vehicle, while the states focus on the operator, including driver education, insurance and licensing.

Pavone commented on the emergence of new tools that allow researchers and developers to rethink how AV development is carried out, a result of the explosion of new technologies related to generative AI and neural rendering, among others. These technologies are enabling new developments in simulation, for example to generate complex scenarios aimed at stress-testing vehicles for safety purposes. "And they're harnessing foundation models, such as vision language models, to allow developers to build more robust autonomy software," Pavone said.

One of the most relevant and timely topics discussed during the panel was an announcement made during the AI Summit by MITRE, a government-sponsored nonprofit research organization. MITRE announced its partnership with Mcity at the University of Michigan to develop a virtual and physical AV validation platform for industry deployment. MITRE will use Mcity's simulation tools and a digital twin of its Mcity Test Facility, a real-world AV test environment, in its digital proving ground.
The jointly developed platform will deliver physically based sensor simulation enabled by NVIDIA Omniverse Cloud Sensor RTX application programming interfaces. By combining these simulation capabilities with the MITRE digital proving ground's reporting and analysis framework, developers will be able to perform exhaustive testing in a simulated world to safely validate AVs before real-world deployment.

Rosekind commented: "The MITRE announcement represents an opportunity to have a trusted source who's done this in many other areas, especially in aviation, to create an independent, neutral setting to test safety assurance."

"One of the most exciting things about this endeavor is that simulation is going to have a key role," added Pavone. "Simulation allows you to test very dangerous conditions in a repeatable and varied way, so you can simulate different cases at scale."

"That's the beauty of simulation," said Shapiro. "It's repeatable, it's controllable. We can control the weather in the simulation. We can change the time of day, and then we can control all the scenarios and inject hazards. Once the simulation is created, we can run it over and over, and as the software develops, we can ensure we are solving the problem, and can fine-tune as necessary."

The panel wrapped up with a reminder that the key goal of autonomous driving is one that businesses and regulators alike share: to reduce deaths and injuries on our roadways.

Watch a replay of the session. (Registration required.)

To learn more about NVIDIA's commitment to bringing safety to our roads, read the NVIDIA Self-Driving Safety Report.
    Game-Changer: How the World's First GPU Leveled Up Gaming and Ignited the AI Era
In 1999, fans lined up at Blockbuster to rent chunky VHS tapes of The Matrix. Y2K preppers hoarded cash and canned Spam, fearing a worldwide computer crash. Teens gleefully downloaded Britney Spears and Eminem on Napster. But amid the caffeinated fizz of turn-of-the-millennium tech culture, something more transformative was unfolding.

The release of NVIDIA's GeForce 256 twenty-five years ago today, overlooked by all but hardcore PC gamers and tech enthusiasts at the time, would go on to lay the foundation for today's generative AI. The GeForce 256 wasn't just another graphics card: it was introduced as the world's first GPU, setting the stage for future advancements in both gaming and computing.

With hardware transform and lighting (T&L), it took the load off the CPU, a pivotal advancement. As Tom's Hardware emphasized: "[The GeForce 256] can take the strain off the CPU, keep the 3D-pipeline from stalling, and allow game developers to use much more polygons, which automatically results in greatly increased detail."

Where Gaming Changed Forever

For gamers, starting up Quake III Arena on a GeForce 256 was a revelation.
"Immediately after firing up your favorite game, it feels like you've never even seen the title before this moment," as the enthusiasts at AnandTech put it.

The GeForce 256 paired beautifully with breakthrough titles such as Unreal Tournament, one of the first games with realistic reflections, which would go on to sell more than 1 million copies in its first year.

Over the next quarter-century, the collaboration between game developers and NVIDIA would continue to push boundaries, driving advancements such as increasingly realistic textures, dynamic lighting and smoother frame rates, innovations that delivered far more than just immersive experiences for gamers. NVIDIA's GPUs evolved into a platform that transformed new silicon and software into powerful, visceral innovations that reshaped the gaming landscape.

In the decades to come, NVIDIA GPUs drove ever higher frame rates and visual fidelity, allowing for smoother, more responsive gameplay. This leap in performance was embraced by platforms such as Twitch, YouTube Gaming and Facebook, as gamers were able to stream content with incredible clarity and speed. These performance boosts not only transformed the gaming experience but also turned players into entertainers.
This helped fuel the global growth of esports. Major events like The International (Dota 2), the League of Legends World Championship and the Fortnite World Cup attracted millions of viewers, solidifying esports as a global phenomenon and creating new opportunities for competitive gaming.

From Gaming to AI: The GPU's Next Frontier

As gaming worlds grew in complexity, so too did the computational demands. The parallel power that transformed gaming graphics caught the attention of researchers, who realized these GPUs could also unlock massive computational potential in AI, enabling breakthroughs far beyond the gaming world.

Deep learning, a software model that relies on billions of neurons and trillions of connections, requires immense computational power. Traditional CPUs, designed for sequential tasks, couldn't efficiently handle this workload. But GPUs, with their massively parallel architecture, were perfect for the job.

By 2011, AI researchers had discovered NVIDIA GPUs and their ability to handle deep learning's immense processing needs. Researchers at Google, Stanford and New York University began using NVIDIA GPUs to accelerate AI development, achieving performance that previously required supercomputers.

In 2012, a breakthrough came when Alex Krizhevsky from the University of Toronto used NVIDIA GPUs to win the ImageNet image recognition competition. His neural network, AlexNet, trained on a million images, crushed the competition, beating handcrafted software written by vision experts. This marked a seismic shift in technology.
What once seemed like science fiction, computers learning and adapting from vast amounts of data, was now a reality, driven by the raw power of GPUs.

By 2015, AI had reached superhuman levels of perception, with Google, Microsoft and Baidu surpassing human performance in tasks like image recognition and speech understanding, all powered by deep neural networks running on GPUs.

In 2016, NVIDIA CEO Jensen Huang donated the first NVIDIA DGX-1 AI supercomputer, a system packed with eight cutting-edge GPUs, to OpenAI, which would harness GPUs to train ChatGPT, launched in November 2022.

In 2018, NVIDIA debuted GeForce RTX (20 Series) with RT Cores and Tensor Cores, designed specifically for real-time ray tracing and AI workloads. This innovation accelerated the adoption of ray-traced graphics in games, bringing cinematic realism to gaming visuals, along with AI-powered features like NVIDIA DLSS, which enhanced gaming performance by leveraging deep learning.

Meanwhile, ChatGPT would go on to reach more than 100 million users within months of its launch, demonstrating how NVIDIA GPUs continue to drive the transformative power of generative AI.

Today, GPUs aren't only celebrated in the gaming world. They've become icons of tech culture, appearing in Reddit memes, Twitch streams and T-shirts at Comic-Con, and even being immortalized in custom PC builds and digital fan art.

Shaping the Future

The revolution that began with the GeForce 256 continues to unfold today: in gaming and entertainment, in personal computing, where AI powered by NVIDIA GPUs is now part of everyday life, and inside the trillion-dollar industries building next-generation AI into the core of their businesses. GPUs are not just enhancing gaming; they are designing the future of AI itself.

And now, with innovations like NVIDIA DLSS, which uses AI to boost gaming performance and deliver sharper images, and NVIDIA ACE, designed to bring more lifelike interactions to in-game characters, AI is once again reshaping the gaming world.

The GeForce 256 laid the bedrock for a future where gaming, computing and AI are not just evolving together; they're transforming the world.