• NVIDIA TensorRT Boosts Stable Diffusion 3.5 Performance on NVIDIA GeForce RTX and RTX PRO GPUs

    Generative AI has reshaped how people create, imagine and interact with digital content.
    As AI models continue to grow in capability and complexity, they require more VRAM, or video random access memory. The base Stable Diffusion 3.5 Large model, for example, uses over 18GB of VRAM — limiting the number of systems that can run it well.
    By applying quantization to the model, noncritical layers can be removed or run with lower precision. NVIDIA GeForce RTX 40 Series and the Ada Lovelace generation of NVIDIA RTX PRO GPUs support FP8 quantization to help run these quantized models, and the latest-generation NVIDIA Blackwell GPUs also add support for FP4.
    NVIDIA collaborated with Stability AI to quantize its latest model, Stable Diffusion (SD) 3.5 Large, to FP8 — reducing VRAM consumption by 40%. Further optimizations to SD3.5 Large and Medium with the NVIDIA TensorRT software development kit (SDK) double performance.
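    For a back-of-the-envelope sense of why precision matters, weight memory scales linearly with bits per parameter, so halving precision halves the weight footprint. The figures below are illustrative assumptions (an ~8B-parameter model with a 15% runtime overhead factor), not Stability AI's measurements; the article's reported 40% end-to-end saving is smaller than a straight halving because not every layer is quantized.

```python
def weight_memory_gb(num_params: float, bits_per_param: int, overhead: float = 1.15) -> float:
    """Rough estimate of model memory in GB.

    overhead is an assumed multiplier for activations/workspace on top of weights.
    """
    weight_bytes = num_params * bits_per_param / 8
    return weight_bytes * overhead / 1e9

# Hypothetical 8-billion-parameter model, similar in scale to SD3.5 Large:
fp16 = weight_memory_gb(8e9, 16)  # 16-bit weights
fp8 = weight_memory_gb(8e9, 8)    # 8-bit weights: half the weight footprint
print(f"FP16 ≈ {fp16:.1f} GB, FP8 ≈ {fp8:.1f} GB")
```

    Under these assumptions the FP16 estimate lands near the article's 18GB figure, while full FP8 would halve it; partial quantization explains why the measured requirement is 11GB rather than ~9GB.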
    In addition, TensorRT has been reimagined for RTX AI PCs, combining its industry-leading performance with just-in-time, on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. TensorRT for RTX is now available as a standalone SDK for developers.
    RTX-Accelerated AI
    NVIDIA and Stability AI are boosting the performance and reducing the VRAM requirements of Stable Diffusion 3.5, one of the world’s most popular AI image models. With NVIDIA TensorRT acceleration and quantization, users can now generate and edit images faster and more efficiently on NVIDIA RTX GPUs.
    Stable Diffusion 3.5 quantized to FP8 generates images in half the time with similar quality to FP16. Prompt: A serene mountain lake at sunrise, crystal clear water reflecting snow-capped peaks, lush pine trees along the shore, soft morning mist, photorealistic, vibrant colors, high resolution.
    To address the VRAM limitations of SD3.5 Large, the model was quantized with TensorRT to FP8, reducing the VRAM requirement by 40% to 11GB. As a result, five GeForce RTX 50 Series GPU models can run the model entirely from memory, up from just one.
    SD3.5 Large and Medium models were also optimized with TensorRT, an AI backend for taking full advantage of Tensor Cores. TensorRT optimizes a model’s weights and graph — the instructions on how to run a model — specifically for RTX GPUs.
    FP8 TensorRT boosts SD3.5 Large performance by 2.3x vs. BF16 PyTorch, with 40% less memory use. For SD3.5 Medium, BF16 TensorRT delivers a 1.7x speedup.
    Combined, FP8 TensorRT delivers a 2.3x performance boost on SD3.5 Large compared with running the original models in BF16 PyTorch, while using 40% less memory. And in SD3.5 Medium, BF16 TensorRT provides a 1.7x performance increase compared with BF16 PyTorch.
    The optimized models are now available on Stability AI’s Hugging Face page.
    NVIDIA and Stability AI are also collaborating to release SD3.5 as an NVIDIA NIM microservice, making it easier for creators and developers to access and deploy the model for a wide range of applications. The NIM microservice is expected to be released in July.
    TensorRT for RTX SDK Released
    Announced at Microsoft Build — and already available as part of the new Windows ML framework in preview — TensorRT for RTX is now available as a standalone SDK for developers.
    Previously, developers needed to pre-generate and package TensorRT engines for each class of GPU — a process that would yield GPU-specific optimizations but required significant time.
    With the new version of TensorRT, developers can create a generic TensorRT engine that’s optimized on device in seconds. This JIT compilation can run in the background during installation or the first time the user runs the feature.
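    The build-once, cache, reuse pattern described above can be sketched generically. This is not the TensorRT for RTX API — `build_engine` is a hypothetical stand-in for the on-device specialization step, and the cache location is an assumption for illustration:

```python
import hashlib
from pathlib import Path

CACHE_DIR = Path("engine_cache")  # hypothetical on-device cache location


def build_engine(model_id: str, gpu_arch: str) -> bytes:
    """Hypothetical stand-in for an engine build: specialize a generic
    model for the GPU it will actually run on (kernel selection, tuning)."""
    return f"engine:{model_id}:{gpu_arch}".encode()


def get_engine(model_id: str, gpu_arch: str) -> bytes:
    """JIT pattern: build on first use, then reuse the cached artifact."""
    CACHE_DIR.mkdir(exist_ok=True)
    key = hashlib.sha256(f"{model_id}|{gpu_arch}".encode()).hexdigest()
    cached = CACHE_DIR / f"{key}.engine"
    if cached.exists():
        return cached.read_bytes()             # fast path: reuse prior build
    engine = build_engine(model_id, gpu_arch)  # slow path: first use only
    cached.write_bytes(engine)
    return engine
```

    The first call pays the build cost; every later call for the same model and GPU returns the cached engine, which is why the SDK can do this work in the background during installation without blocking later runs.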
    The easy-to-integrate SDK is now 8x smaller and can be invoked through Windows ML — Microsoft’s new AI inference backend in Windows. Developers can download the new standalone SDK from the NVIDIA Developer page or test it in the Windows ML preview.
    For more details, read this NVIDIA technical blog and this Microsoft Build recap.
    Join NVIDIA at GTC Paris
    At NVIDIA GTC Paris at VivaTech — Europe’s biggest startup and tech event — NVIDIA founder and CEO Jensen Huang yesterday delivered a keynote address on the latest breakthroughs in cloud AI infrastructure, agentic AI and physical AI. Watch a replay.
    GTC Paris runs through Thursday, June 12, with hands-on demos and sessions led by industry leaders. Whether attending in person or joining online, there’s still plenty to explore at the event.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
  • NVIDIA helps Germany lead Europe’s AI manufacturing race

    Germany and NVIDIA are building possibly the most ambitious European tech project of the decade: the continent’s first industrial AI cloud.
    NVIDIA has been on a European tour over the past month, with CEO Jensen Huang charming audiences at London Tech Week before dazzling the crowds at Paris’s VivaTech. But it was his meeting with German Chancellor Friedrich Merz that might prove the most consequential stop.
    The resulting partnership between NVIDIA and Deutsche Telekom isn’t just another corporate handshake; it’s potentially a turning point for European technological sovereignty.
    An “AI factory” will be created with a focus on manufacturing, which is hardly surprising given Germany’s renowned industrial heritage. The facility aims to give European industrial players the computational firepower to revolutionise everything from design to robotics.
    “In the era of AI, every manufacturer needs two factories: one for making things, and one for creating the intelligence that powers them,” said Huang. “By building Europe’s first industrial AI infrastructure, we’re enabling the region’s leading industrial companies to advance simulation-first, AI-driven manufacturing.”
    It’s rare to hear such urgency from a telecoms CEO, but Deutsche Telekom’s Timotheus Höttges added: “Europe’s technological future needs a sprint, not a stroll. We must seize the opportunities of artificial intelligence now, revolutionise our industry, and secure a leading position in the global technology competition. Our economic success depends on quick decisions and collaborative innovations.”
    The first phase alone will deploy 10,000 NVIDIA Blackwell GPUs spread across various high-performance systems. That makes this Germany’s largest AI deployment ever; a statement that the country isn’t content to watch from the sidelines as AI transforms global industry.
    A Deloitte study recently highlighted the critical importance of AI technology development to Germany’s future competitiveness, particularly noting the need for expanded data centre capacity. When you consider that demand is expected to triple within just five years, this investment seems less like ambition and more like necessity.
    Robots teaching robots
    One of the early adopters is NEURA Robotics, a German firm that specialises in cognitive robotics. They’re using this computational muscle to power something called the Neuraverse, essentially a connected network where robots can learn from each other.
    Think of it as a robotic hive mind for skills ranging from precision welding to household ironing, with each machine contributing its learnings to a collective intelligence.
    “Physical AI is the electricity of the future — it will power every machine on the planet,” said David Reger, founder and CEO of NEURA Robotics. “Through this initiative, we’re helping build the sovereign infrastructure Europe needs to lead in intelligent robotics and stay in control of its future.”
    The implications of this AI project for manufacturing in Germany could be profound. This isn’t just about making existing factories slightly more efficient; it’s about reimagining what manufacturing can be in an age of intelligent machines.
    AI for more than just Germany’s industrial titans
    What’s particularly promising about this project is its potential reach beyond Germany’s industrial titans. The famed Mittelstand – the network of specialised small and medium-sized businesses that forms the backbone of the German economy – stands to benefit.
    These companies often lack the resources to build their own AI infrastructure but possess the specialised knowledge that makes them perfect candidates for AI-enhanced innovation. Democratising access to cutting-edge AI could help preserve their competitive edge in a challenging global market.
    Academic and research institutions will also gain access, potentially accelerating innovation across numerous fields. The approximately 900 Germany-based startups in NVIDIA’s Inception program will be eligible to use these resources, potentially unleashing a wave of entrepreneurial AI applications.
    However impressive this massive project is, it’s viewed merely as a stepping stone towards something even more ambitious: Europe’s AI gigafactory. This planned initiative, backed by the EU and Germany and powered by 100,000 GPUs, won’t come online until 2027, but it represents Europe’s determination to carve out its own technological future.
    As other European telecom providers follow suit with their own AI infrastructure projects, we may be witnessing the beginning of a concerted effort to establish technological sovereignty across the continent.
    For a region that has often found itself caught between American tech dominance and Chinese ambitions, building indigenous AI capability represents more than economic opportunity. Whether this bold project in Germany will succeed remains to be seen, but one thing is clear: Europe is no longer content to be a passive consumer of AI technology developed elsewhere.
    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
    #nvidia #helps #germany #lead #europes
    NVIDIA helps Germany lead Europe’s AI manufacturing race
    Germany and NVIDIA are building possibly the most ambitious European tech project of the decade: the continent’s first industrial AI cloud.NVIDIA has been on a European tour over the past month with CEO Jensen Huang charming audiences at London Tech Week before dazzling the crowds at Paris’s VivaTech. But it was his meeting with German Chancellor Friedrich Merz that might prove the most consequential stop.The resulting partnership between NVIDIA and Deutsche Telekom isn’t just another corporate handshake; it’s potentially a turning point for European technological sovereignty.An “AI factory”will be created with a focus on manufacturing, which is hardly surprising given Germany’s renowned industrial heritage. The facility aims to give European industrial players the computational firepower to revolutionise everything from design to robotics.“In the era of AI, every manufacturer needs two factories: one for making things, and one for creating the intelligence that powers them,” said Huang. “By building Europe’s first industrial AI infrastructure, we’re enabling the region’s leading industrial companies to advance simulation-first, AI-driven manufacturing.”It’s rare to hear such urgency from a telecoms CEO, but Deutsche Telekom’s Timotheus Höttges added: “Europe’s technological future needs a sprint, not a stroll. We must seize the opportunities of artificial intelligence now, revolutionise our industry, and secure a leading position in the global technology competition. Our economic success depends on quick decisions and collaborative innovations.”The first phase alone will deploy 10,000 NVIDIA Blackwell GPUs spread across various high-performance systems. 
That makes this Germany’s largest AI deployment ever; a statement the country isn’t content to watch from the sidelines as AI transforms global industry.A Deloitte study recently highlighted the critical importance of AI technology development to Germany’s future competitiveness, particularly noting the need for expanded data centre capacity. When you consider that demand is expected to triple within just five years, this investment seems less like ambition and more like necessity.Robots teaching robotsOne of the early adopters is NEURA Robotics, a German firm that specialises in cognitive robotics. They’re using this computational muscle to power something called the Neuraverse which is essentially a connected network where robots can learn from each other.Think of it as a robotic hive mind for skills ranging from precision welding to household ironing, with each machine contributing its learnings to a collective intelligence.“Physical AI is the electricity of the future—it will power every machine on the planet,” said David Reger, Founder and CEO of NEURA Robotics. “Through this initiative, we’re helping build the sovereign infrastructure Europe needs to lead in intelligent robotics and stay in control of its future.”The implications of this AI project for manufacturing in Germany could be profound. This isn’t just about making existing factories slightly more efficient; it’s about reimagining what manufacturing can be in an age of intelligent machines.AI for more than just Germany’s industrial titansWhat’s particularly promising about this project is its potential reach beyond Germany’s industrial titans. The famed Mittelstand – the network of specialised small and medium-sized businesses that forms the backbone of the German economy – stands to benefit.These companies often lack the resources to build their own AI infrastructure but possess the specialised knowledge that makes them perfect candidates for AI-enhanced innovation. 
Democratising access to cutting-edge AI could help preserve their competitive edge in a challenging global market.Academic and research institutions will also gain access, potentially accelerating innovation across numerous fields. The approximately 900 Germany-based startups in NVIDIA’s Inception program will be eligible to use these resources, potentially unleashing a wave of entrepreneurial AI applications.However impressive this massive project is, it’s viewed merely as a stepping stone towards something even more ambitious: Europe’s AI gigafactory. This planned 100,000 GPU-powered initiative backed by the EU and Germany won’t come online until 2027, but it represents Europe’s determination to carve out its own technological future.As other European telecom providers follow suit with their own AI infrastructure projects, we may be witnessing the beginning of a concerted effort to establish technological sovereignty across the continent.For a region that has often found itself caught between American tech dominance and Chinese ambitions, building indigenous AI capability represents more than economic opportunity. Whether this bold project in Germany will succeed remains to be seen, but one thing is clear: Europe is no longer content to be a passive consumer of AI technology developed elsewhere.Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.Explore other upcoming enterprise technology events and webinars powered by TechForge here. #nvidia #helps #germany #lead #europes
    WWW.ARTIFICIALINTELLIGENCE-NEWS.COM
    NVIDIA helps Germany lead Europe’s AI manufacturing race
    Germany and NVIDIA are building possibly the most ambitious European tech project of the decade: the continent’s first industrial AI cloud.NVIDIA has been on a European tour over the past month with CEO Jensen Huang charming audiences at London Tech Week before dazzling the crowds at Paris’s VivaTech. But it was his meeting with German Chancellor Friedrich Merz that might prove the most consequential stop.The resulting partnership between NVIDIA and Deutsche Telekom isn’t just another corporate handshake; it’s potentially a turning point for European technological sovereignty.An “AI factory” (as they’re calling it) will be created with a focus on manufacturing, which is hardly surprising given Germany’s renowned industrial heritage. The facility aims to give European industrial players the computational firepower to revolutionise everything from design to robotics.“In the era of AI, every manufacturer needs two factories: one for making things, and one for creating the intelligence that powers them,” said Huang. “By building Europe’s first industrial AI infrastructure, we’re enabling the region’s leading industrial companies to advance simulation-first, AI-driven manufacturing.”It’s rare to hear such urgency from a telecoms CEO, but Deutsche Telekom’s Timotheus Höttges added: “Europe’s technological future needs a sprint, not a stroll. We must seize the opportunities of artificial intelligence now, revolutionise our industry, and secure a leading position in the global technology competition. Our economic success depends on quick decisions and collaborative innovations.”The first phase alone will deploy 10,000 NVIDIA Blackwell GPUs spread across various high-performance systems. 
That makes this Germany’s largest AI deployment ever; a statement the country isn’t content to watch from the sidelines as AI transforms global industry.A Deloitte study recently highlighted the critical importance of AI technology development to Germany’s future competitiveness, particularly noting the need for expanded data centre capacity. When you consider that demand is expected to triple within just five years, this investment seems less like ambition and more like necessity.Robots teaching robotsOne of the early adopters is NEURA Robotics, a German firm that specialises in cognitive robotics. They’re using this computational muscle to power something called the Neuraverse which is essentially a connected network where robots can learn from each other.Think of it as a robotic hive mind for skills ranging from precision welding to household ironing, with each machine contributing its learnings to a collective intelligence.“Physical AI is the electricity of the future—it will power every machine on the planet,” said David Reger, Founder and CEO of NEURA Robotics. “Through this initiative, we’re helping build the sovereign infrastructure Europe needs to lead in intelligent robotics and stay in control of its future.”The implications of this AI project for manufacturing in Germany could be profound. This isn’t just about making existing factories slightly more efficient; it’s about reimagining what manufacturing can be in an age of intelligent machines.AI for more than just Germany’s industrial titansWhat’s particularly promising about this project is its potential reach beyond Germany’s industrial titans. The famed Mittelstand – the network of specialised small and medium-sized businesses that forms the backbone of the German economy – stands to benefit.These companies often lack the resources to build their own AI infrastructure but possess the specialised knowledge that makes them perfect candidates for AI-enhanced innovation. 
Democratising access to cutting-edge AI could help preserve their competitive edge in a challenging global market.Academic and research institutions will also gain access, potentially accelerating innovation across numerous fields. The approximately 900 Germany-based startups in NVIDIA’s Inception program will be eligible to use these resources, potentially unleashing a wave of entrepreneurial AI applications.However impressive this massive project is, it’s viewed merely as a stepping stone towards something even more ambitious: Europe’s AI gigafactory. This planned 100,000 GPU-powered initiative backed by the EU and Germany won’t come online until 2027, but it represents Europe’s determination to carve out its own technological future.As other European telecom providers follow suit with their own AI infrastructure projects, we may be witnessing the beginning of a concerted effort to establish technological sovereignty across the continent.For a region that has often found itself caught between American tech dominance and Chinese ambitions, building indigenous AI capability represents more than economic opportunity. Whether this bold project in Germany will succeed remains to be seen, but one thing is clear: Europe is no longer content to be a passive consumer of AI technology developed elsewhere.(Photo by Maheshkumar Painam)Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.Explore other upcoming enterprise technology events and webinars powered by TechForge here.
  • NVIDIA and Deutsche Telekom Partner to Advance Germany’s Sovereign AI

    Industrial AI isn’t slowing down. Germany is ready.
    Following London Tech Week and GTC Paris at VivaTech, NVIDIA founder and CEO Jensen Huang’s European tour continued with a stop in Germany to discuss with Chancellor Friedrich Merz — pictured above — new partnerships poised to bring breakthrough innovations on the world’s first industrial AI cloud.
    This AI factory, to be located in Germany and operated by Deutsche Telekom, will enable Europe’s industrial leaders to accelerate manufacturing applications including design, engineering, simulation, digital twins and robotics.
    “In the era of AI, every manufacturer needs two factories: one for making things, and one for creating the intelligence that powers them,” said Jensen Huang, founder and CEO of NVIDIA. “By building Europe’s first industrial AI infrastructure, we’re enabling the region’s leading industrial companies to advance simulation-first, AI-driven manufacturing.”
    “Europe’s technological future needs a sprint, not a stroll,” said Timotheus Höttges, CEO of Deutsche Telekom AG. “We must seize the opportunities of artificial intelligence now, revolutionize our industry and secure a leading position in the global technology competition. Our economic success depends on quick decisions and collaborative innovations.”
    This AI infrastructure — Germany’s single largest AI deployment — is an important leap for the nation in establishing its own sovereign AI infrastructure and providing a launchpad to accelerate AI development and adoption across industries. In its first phase, it’ll feature 10,000 NVIDIA Blackwell GPUs — spanning NVIDIA DGX B200 systems and NVIDIA RTX PRO Servers — as well as NVIDIA networking and AI software.
    NEURA Robotics’ training center for cognitive robots.
    NEURA Robotics, a Germany-based global pioneer in physical AI and cognitive robotics, will use the computing resources to power its state-of-the-art training centers for cognitive robots — a tangible example of how physical AI can evolve through powerful, connected infrastructure.
    At this work’s core is the Neuraverse, a seamlessly networked robot ecosystem that allows robots to learn from each other across a wide range of industrial and domestic applications. This platform creates an app-store-like hub for robotic intelligence — for tasks like welding and ironing — enabling continuous development and deployment of robotic skills in real-world environments.
    “Physical AI is the electricity of the future — it will power every machine on the planet,” said David Reger, founder and CEO of NEURA Robotics. “Through this initiative, we’re helping build the sovereign infrastructure Europe needs to lead in intelligent robotics and stay in control of its future.”
    Critical to Germany’s competitiveness is AI technology development, including the expansion of data center capacity, according to a Deloitte study. This is strategically important because demand for data center capacity is expected to triple over the next five years to 5 gigawatts.
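    The cited projection implies two figures worth making explicit. A back-of-the-envelope calculation (derived only from the "triple over five years to 5 gigawatts" claim above, not from the Deloitte study itself) puts current capacity around 1.7 GW and implied growth near 25% per year:

```python
# Back-of-the-envelope check on the projection cited above: demand for
# data center capacity triples over five years to 5 GW. All figures here
# are derived from that single claim, not taken from the Deloitte study.
projected_gw = 5.0    # projected German data center demand in five years
growth_factor = 3.0   # "expected to triple"
years = 5

current_gw = projected_gw / growth_factor          # implied starting capacity
annual_growth = growth_factor ** (1 / years) - 1   # compound annual growth rate

print(f"Implied current capacity: {current_gw:.2f} GW")        # about 1.67 GW
print(f"Implied annual growth: {annual_growth:.1%} per year")  # about 24.6%
```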
    Driving Germany’s Industrial Ecosystem
    Deutsche Telekom will operate the AI factory and provide AI cloud computing resources to Europe’s industrial ecosystem.
    Customers will be able to run NVIDIA CUDA-X libraries, as well as NVIDIA RTX- and Omniverse-accelerated workloads from leading software providers such as Siemens, Ansys, Cadence and Rescale.
    Many more stand to benefit. From the country’s robust small- and medium-sized businesses, known as the Mittelstand, to academia, research and major enterprises — the AI factory offers strategic technology leaps.
    A Speedboat Toward AI Gigafactories
    The industrial AI cloud will accelerate AI development and adoption from European manufacturers, driving simulation-first, AI-driven manufacturing practices and helping prepare for the country’s transition to AI gigafactories, the next step in Germany’s sovereign AI infrastructure journey.
    The AI gigafactory initiative is a 100,000 GPU-powered program backed by the European Union, Germany and partners.
    Poised to go online in 2027, it’ll provide state-of-the-art AI infrastructure that gives enterprises, startups, researchers and universities access to accelerated computing through the establishment and expansion of high-performance computing centers.
    As of March, there are about 900 Germany-based members of the NVIDIA Inception program for cutting-edge startups, all of which will be eligible to access the AI resources.
    NVIDIA offers learning courses through its Deep Learning Institute to promote education and certification in AI across the globe, and those resources are broadly available across Germany’s computing ecosystem to offer upskilling opportunities.
    Additional European telcos are building AI infrastructure for regional enterprises to build and deploy agentic AI applications.
    Learn more about the latest AI advancements by watching Huang’s GTC Paris keynote in replay.
  • Alienware Aurora R16 RTX 5080 Gaming PCs Start at Just $2,450 Shipped

    Alienware is offering competitive prices on RTX 5080-equipped gaming PCs to kick off June. Right now you can pick up an Alienware Aurora R16 RTX 5080 gaming PC from $2,449.99 shipped. This is a good price for a well-engineered gaming rig with powerful current-generation components, 240mm AIO water cooling, and a sensible airflow design that can handle 4K gaming at high frame rates. In the current market, buying a prebuilt gaming PC is the only way to score an RTX 5080 GPU without paying an exorbitant markup. If you were to try to find a 5080 GPU for a do-it-yourself PC build, you'd probably spend nearly as much on the GPU as you would on an entire system.
    Alienware Aurora RTX 5080 Gaming PCs From $2,450
    Prices on the Alienware Aurora R16 range from $2,349.99 at the bottom end to $3,249.99 at the top. Each tier up offers either a CPU upgrade or increased RAM and/or storage:
    $2,349.99 - Intel Core Ultra 7 265F, 16GB RAM, 1TB SSD
    $2,799.99 - Intel Core Ultra 9 285K, 32GB RAM, 2TB SSD
    $3,249.99 - Intel Core Ultra 9 285K, 64GB RAM, 4TB SSD
    Alienware Area-51 RTX 5080 Gaming PC for $3,599.99
    New for 2025, Dell unveiled the Alienware Area-51 gaming PC at CES 2025. The chassis looks similar to the 2024 R16 system, with aesthetic and cooling redesigns and updated components. The I/O panel is positioned at the top of the case instead of the front, and the tempered glass window now spans the entire side panel instead of just a smaller cutout. As a result, the side panel vents are gone; instead, air intakes are located at the bottom as well as the front of the case. Alienware is now pushing a positive airflow design (more intake than exhaust), which means a less dusty interior. The internal components have been refreshed with a new motherboard, faster RAM, and a more powerful power supply to accommodate the new generation of CPUs and GPUs.
    The GeForce RTX 5080 GPU will run any game in 4K
    The RTX 5080 is the second-best Blackwell graphics card, surpassed only by the $2,000 RTX 5090. It's about 5%-10% faster than the previous-generation RTX 4080 Super, which is discontinued and no longer available. In games that support the new DLSS 4 with multi-frame generation, exclusive to Blackwell cards, the gap widens.
    Nvidia GeForce RTX 5080 FE Review, by Jacqueline Thomas
    "If you already have a high-end graphics card from the last couple of years, the Nvidia GeForce RTX 5080 doesn’t make a lot of sense – it just doesn’t have much of a performance lead over the RTX 4080, though the extra frames from DLSS 4 Multi-Frame Generation do make things look better in games that support it. However, for gamers with an older graphics card who want a significant performance boost, the RTX 5080 absolutely provides – doubly so if you’re comfortable with Nvidia’s AI goodies."
    Eric Song is the IGN commerce manager in charge of finding the best gaming and tech deals every day. When Eric isn't hunting for deals for other people at work, he's hunting for deals for himself during his free time.
  • NVIDIA Blackwell Delivers Breakthrough Performance in Latest MLPerf Training Results

    NVIDIA is working with companies worldwide to build out AI factories — speeding the training and deployment of next-generation AI applications that use the latest advancements in training and inference.
    The NVIDIA Blackwell architecture is built to meet the heightened performance requirements of these new applications. In the latest round of MLPerf Training — the 12th since the benchmark’s introduction in 2018 — the NVIDIA AI platform delivered the highest performance at scale on every benchmark and powered every result submitted on the benchmark’s toughest large language model-focused test: Llama 3.1 405B pretraining.
    The NVIDIA platform was the only one that submitted results on every MLPerf Training v5.0 benchmark — underscoring its exceptional performance and versatility across a wide array of AI workloads, spanning LLMs, recommendation systems, multimodal LLMs, object detection and graph neural networks.
    The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built using NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. In addition, NVIDIA collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs.
    On the new Llama 3.1 405B pretraining benchmark, Blackwell delivered 2.2x greater performance compared with the previous-generation architecture at the same scale.
    On the Llama 2 70B LoRA fine-tuning benchmark, NVIDIA DGX B200 systems, powered by eight Blackwell GPUs, delivered 2.5x more performance compared with a submission using the same number of GPUs in the prior round.
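    To make those multipliers concrete, a speedup factor divides directly into time-to-train. The sketch below uses hypothetical baseline times (MLPerf reports measured time-to-train per submission; only the 2.2x and 2.5x factors come from the results above):

```python
# Translate the MLPerf speedup factors quoted above into time-to-train.
# Baseline minutes are hypothetical placeholders for illustration; only
# the 2.2x and 2.5x factors come from the published results.
def time_with_speedup(baseline_minutes: float, speedup: float) -> float:
    """Time-to-train after applying a speedup factor at the same scale."""
    return baseline_minutes / speedup

# Llama 3.1 405B pretraining: 2.2x vs. the previous generation at the same scale.
print(time_with_speedup(220.0, 2.2))  # about 100 min for a hypothetical 220-min baseline

# Llama 2 70B LoRA fine-tuning: 2.5x vs. the prior round with the same GPU count.
print(time_with_speedup(25.0, 2.5))   # 10 min for a hypothetical 25-min baseline
```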
    These performance leaps highlight advancements in the Blackwell architecture, including high-density liquid-cooled racks, 13.4TB of coherent memory per rack, fifth-generation NVIDIA NVLink and NVIDIA NVLink Switch interconnect technologies for scale-up and NVIDIA Quantum-2 InfiniBand networking for scale-out. Plus, innovations in the NVIDIA NeMo Framework software stack raise the bar for next-generation multimodal LLM training, critical for bringing agentic AI applications to market.
    These agentic AI-powered applications will one day run in AI factories — the engines of the agentic AI economy. These new applications will produce tokens and valuable intelligence that can be applied to almost every industry and academic domain.
    The NVIDIA data center platform includes GPUs, CPUs, high-speed fabrics and networking, as well as a vast array of software like NVIDIA CUDA-X libraries, the NeMo Framework, NVIDIA TensorRT-LLM and NVIDIA Dynamo. This highly tuned ensemble of hardware and software technologies empowers organizations to train and deploy models more quickly, dramatically accelerating time to value.
    The NVIDIA partner ecosystem participated extensively in this MLPerf round. Beyond the submission with CoreWeave and IBM, other compelling submissions were from ASUS, Cisco, Dell Technologies, Giga Computing, Google Cloud, Hewlett Packard Enterprise, Lambda, Lenovo, Nebius, Oracle Cloud Infrastructure, Quanta Cloud Technology and Supermicro.
    Learn more about MLPerf benchmarks.
    #nvidia #blackwell #delivers #breakthrough #performance
    NVIDIA Blackwell Delivers Breakthrough Performance in Latest MLPerf Training Results
    NVIDIA is working with companies worldwide to build out AI factories — speeding the training and deployment of next-generation AI applications that use the latest advancements in training and inference. The NVIDIA Blackwell architecture is built to meet the heightened performance requirements of these new applications. In the latest round of MLPerf Training — the 12th since the benchmark’s introduction in 2018 — the NVIDIA AI platform delivered the highest performance at scale on every benchmark and powered every result submitted on the benchmark’s toughest large language model-focused test: Llama 3.1 405B pretraining. The NVIDIA platform was the only one that submitted results on every MLPerf Training v5.0 benchmark — underscoring its exceptional performance and versatility across a wide array of AI workloads, spanning LLMs, recommendation systems, multimodal LLMs, object detection and graph neural networks. The at-scale submissions used two AI supercomputers powered by the NVIDIA Blackwell platform: Tyche, built using NVIDIA GB200 NVL72 rack-scale systems, and Nyx, based on NVIDIA DGX B200 systems. In addition, NVIDIA collaborated with CoreWeave and IBM to submit GB200 NVL72 results using a total of 2,496 Blackwell GPUs and 1,248 NVIDIA Grace CPUs. On the new Llama 3.1 405B pretraining benchmark, Blackwell delivered 2.2x greater performance compared with previous-generation architecture at the same scale. On the Llama 2 70B LoRA fine-tuning benchmark, NVIDIA DGX B200 systems, powered by eight Blackwell GPUs, delivered 2.5x more performance compared with a submission using the same number of GPUs in the prior round. These performance leaps highlight advancements in the Blackwell architecture, including high-density liquid-cooled racks, 13.4TB of coherent memory per rack, fifth-generation NVIDIA NVLink and NVIDIA NVLink Switch interconnect technologies for scale-up and NVIDIA Quantum-2 InfiniBand networking for scale-out. 
Plus, innovations in the NVIDIA NeMo Framework software stack raise the bar for next-generation multimodal LLM training, critical for bringing agentic AI applications to market. These agentic AI-powered applications will one day run in AI factories — the engines of the agentic AI economy. These new applications will produce tokens and valuable intelligence that can be applied to almost every industry and academic domain. The NVIDIA data center platform includes GPUs, CPUs, high-speed fabrics and networking, as well as a vast array of software like NVIDIA CUDA-X libraries, the NeMo Framework, NVIDIA TensorRT-LLM and NVIDIA Dynamo. This highly tuned ensemble of hardware and software technologies empowers organizations to train and deploy models more quickly, dramatically accelerating time to value. The NVIDIA partner ecosystem participated extensively in this MLPerf round. Beyond the submission with CoreWeave and IBM, other compelling submissions were from ASUS, Cisco, Dell Technologies, Giga Computing, Google Cloud, Hewlett Packard Enterprise, Lambda, Lenovo, Nebius, Oracle Cloud Infrastructure, Quanta Cloud Technology and Supermicro. Learn more about MLPerf benchmarks. #nvidia #blackwell #delivers #breakthrough #performance
    BLOGS.NVIDIA.COM
    NVIDIA Blackwell Delivers Breakthrough Performance in Latest MLPerf Training Results
  • MSI GeForce RTX 5060 Ti Gaming Trio OC 16 GB GPU Review – Premium Cooling & Design

    Product Info
    Model: MSI GeForce RTX 5060 Ti Gaming Trio OC
    Released: April 2025
    Type: Graphics Card
    Price: $429 US

    It's been two years since NVIDIA introduced its Ada Lovelace GPUs, kicking things off with the RTX 4090 and finishing up the initial lineup with the SUPER family. At CES, the company unveiled its new RTX 50 "Blackwell" family, which features a brand-new architecture and several changes such as new cores, AI accelerators, new memory standards, and the latest video/display capabilities.
    NVIDIA recently released the fifth entry in its "RTX 50" portfolio, the GeForce RTX 5060 Ti. The GeForce RTX 5060 Ti is positioned in the mainstream segment, with the green team promising great value for gamers at a starting MSRP of $429 for the 16 GB models. Today, we will be trying out the MSI GeForce RTX 5060 Ti Gaming Trio OC, which retails at the MSRP of $429 US.
    NVIDIA GeForce GPU Segment/Tier Prices (launch MSRP per generation)

    Titan Tier
      2025: GeForce RTX 5090 ($1999 US)
      2023-2024: GeForce RTX 4090 ($1599 US)
      2022-2023: GeForce RTX 4090 ($1599 US)
      2021-2022: GeForce RTX 3090 Ti ($1999 US), GeForce RTX 3090 ($1499 US)
      2020-2021: GeForce RTX 3090 ($1499 US)
      2019-2020: Titan RTX ($2499 US)
      2018-2019: Titan V ($2999 US)
      2017-2018: Titan Xp ($1199 US)

    Ultra Enthusiast Tier
      2025: GeForce RTX 5080 ($999 US)
      2023-2024: GeForce RTX 4080 SUPER ($999 US)
      2022-2023: GeForce RTX 4080 ($1199 US)
      2021-2022: GeForce RTX 3080 Ti ($1199 US)
      2020-2021: GeForce RTX 3080 Ti ($1199 US)
      2019-2020: GeForce RTX 2080 Ti ($999 US)
      2018-2019: GeForce RTX 2080 Ti ($999 US)
      2017-2018: GeForce GTX 1080 Ti ($699 US)

    Enthusiast Tier
      2025: GeForce RTX 5070 Ti ($749 US)
      2023-2024: GeForce RTX 4070 Ti SUPER ($799 US)
      2022-2023: GeForce RTX 4070 Ti ($799 US)
      2021-2022: GeForce RTX 3080 12 GB ($799 US)
      2020-2021: GeForce RTX 3080 10 GB ($699 US)
      2019-2020: GeForce RTX 2080 SUPER ($699 US)
      2018-2019: GeForce RTX 2080 ($699 US)
      2017-2018: GeForce GTX 1080 ($549 US)

    High-End Tier
      2025: GeForce RTX 5070 ($549 US)
      2023-2024: GeForce RTX 4070 SUPER ($599 US), GeForce RTX 4070 ($549 US)
      2022-2023: GeForce RTX 4070 ($599 US), GeForce RTX 4060 Ti 16 GB ($499 US)
      2021-2022: GeForce RTX 3070 Ti ($599 US), GeForce RTX 3070 ($499 US)
      2020-2021: GeForce RTX 3070 Ti ($599 US), GeForce RTX 3070 ($499 US)
      2019-2020: GeForce RTX 2070 SUPER ($499 US)
      2018-2019: GeForce RTX 2070 ($499 US)
      2017-2018: GeForce GTX 1070 ($379 US)

    Mainstream Tier
      2025: GeForce RTX 5060 Ti 16 GB ($429 US), GeForce RTX 5060 Ti 8 GB ($379 US)
      2023-2024: GeForce RTX 4060 Ti ($449 US), GeForce RTX 4060 ($299 US)
      2022-2023: GeForce RTX 4060 Ti ($399 US), GeForce RTX 4060 ($299 US)
      2021-2022: GeForce RTX 3060 Ti ($399 US), GeForce RTX 3060 12 GB ($329 US)
      2020-2021: GeForce RTX 3060 Ti ($399 US), GeForce RTX 3060 12 GB ($329 US)
      2019-2020: GeForce RTX 2060 SUPER ($399 US), GeForce RTX 2060 ($349 US), GeForce GTX 1660 Ti ($279 US), GeForce GTX 1660 SUPER ($229 US), GeForce GTX 1660 ($219 US)
      2018-2019: GeForce GTX 1060 ($249 US)
      2017-2018: GeForce GTX 1060 ($249 US)

    Entry Tier
      2025: GeForce RTX 5060 ($299 US)
      2023-2024: RTX 3050 8 GB ($229 US), RTX 3050 6 GB ($179 US)
      2022-2023: RTX 3050 ($249 US)
      2021-2022: RTX 3050 ($249 US)
      2020-2021: GTX 1650 SUPER ($159 US), GTX 1650 ($149 US)
      2019-2020: GTX 1650 SUPER ($159 US), GTX 1650 ($149 US)
      2018-2019: GTX 1050 Ti ($139 US), GTX 1050 ($109 US)
      2017-2018: GTX 1050 Ti ($139 US), GTX 1050 ($109 US)
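    One way to read the tier table above is to track how each tier's launch price has moved across generations. A short, hypothetical Python sketch (the dictionary is transcribed from the table; only the leading card of each tier is included) computing the change from the 2017-2018 column to 2025:

```python
# Launch prices (USD) for the leading card of each tier, per the table above.
tier_prices = {
    "Titan":            {"2017-2018": 1199, "2025": 1999},  # Titan Xp -> RTX 5090
    "Ultra Enthusiast": {"2017-2018": 699,  "2025": 999},   # GTX 1080 Ti -> RTX 5080
    "Enthusiast":       {"2017-2018": 549,  "2025": 749},   # GTX 1080 -> RTX 5070 Ti
    "High-End":         {"2017-2018": 379,  "2025": 549},   # GTX 1070 -> RTX 5070
    "Mainstream":       {"2017-2018": 249,  "2025": 429},   # GTX 1060 -> RTX 5060 Ti 16 GB
    "Entry":            {"2017-2018": 139,  "2025": 299},   # GTX 1050 Ti -> RTX 5060
}

for tier, p in tier_prices.items():
    pct = (p["2025"] - p["2017-2018"]) / p["2017-2018"] * 100
    print(f"{tier}: ${p['2017-2018']} -> ${p['2025']} ({pct:+.0f}%)")
```

    By this reading, the mainstream and entry tiers have seen the largest relative price growth over the period covered by the table.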

    NVIDIA GeForce RTX 50 Gaming Graphics Cards
    With Blackwell, NVIDIA is going full-on into the AI segment with loads of optimizations & AI-specific accelerators.

    The Blackwell GPU does many traditional things that we would expect from a GPU, but simultaneously breaks the barrier when it comes to untraditional GPU operations. To sum up some features:

    New Streaming Multiprocessor (SM)
    New 5th Gen Tensor Cores
    New 4th Gen RT (Ray Tracing) Cores
    AI Management Processor
    Max-Q Mode for Desktops & Laptops
    New GDDR7 High-Performance Memory Subsystem
    New DP2.1b Display Engine & Next-Gen NVENC/NVDEC


    The technologies mentioned above are some of the main building blocks of the Blackwell GPU, but there's more within the graphics core itself, which we will talk about in detail, so let's get started.

    WCCFTECH.COM
    MSI GeForce RTX 5060 Ti Gaming Trio OC 16 GB GPU Review – Premium Cooling & Design
CGShares https://cgshares.com