• NVIDIA’s New AI: Next Level Games Are Coming!

    WWW.YOUTUBE.COM
  • Overlapping vertices?

    Author

    Hi GameDev! :D So I'm using some existing models from other games for a PRIVATE mod I'm working on (so no redistributing – I don't want to rip off talented artists, and I'm using existing meshes from games due to cost). But when I import them into Blender or 3ds Max, the modeling software tells me the meshes have overlapping vertices. Is this normal with game models, or is every vertex supposed to be welded? Kind regards!

    Maybe. They might not be duplicates; it could be that additional information was lost, such as two points that had different normals or texture coordinates even though they sit at the same position.

    It could be normal for that project, but in general duplicate vertices, overlapping vertices, degenerate triangles, and the like can cause rendering issues and are often flagged by tools. If it is something you extracted, the duplicates might be the result of processing rather than coming from the original – for example, a script that strips the non-duplicate information, or one that traverses the mesh more than once.

    Most likely your warning is exactly the same one the game's artists would receive, and the vertices just need to be welded, fused, or otherwise processed back into place.


    It's normal. Reasons to split a mesh edge between geometrically adjacent triangles are:

    - Differing materials / textures / UV coordinates
    - The edge should show a discontinuity in lighting (e.g. a cube) instead of smooth shading (e.g. a sphere)
    - Dividing the mesh into smaller pieces for fine-grained culling

    Thus, splitting models and duplicating vertices is a post-process necessary to use them in game engines, while artists keep the original models for making changes and for archival. Turning such assets back into editable models requires welding with a tolerance of zero, or possibly a very small number. Issues might still remain.
    Other things, e.g. the original cage of a subdivision model, or NURBS control points, can't be reconstructed that easily.
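The first reason can be made concrete with a tiny sketch (hypothetical Python, not tied to any engine or importer): a cube has only 8 corner positions, but because every corner is shared by three faces with different normals, the exported mesh needs 24 distinct (position, normal) vertices – exactly the kind of "overlapping vertices" an importer will flag.

```python
from itertools import product

corners = list(product((-1, 1), repeat=3))   # 8 unique corner positions
face_normals = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                (0, -1, 0), (0, 0, 1), (0, 0, -1)]

render_verts = set()
for nx, ny, nz in face_normals:
    for px, py, pz in corners:
        # A corner belongs to the face whose nonzero normal
        # component matches that coordinate of the corner.
        if (nx and px == nx) or (ny and py == ny) or (nz and pz == nz):
            render_verts.add(((px, py, pz), (nx, ny, nz)))

print(len(corners))       # 8 positions
print(len(render_verts))  # 24 (position, normal) pairs
```

Each of the 8 positions appears three times in the render mesh, once per adjacent face normal – duplicated on purpose, not by accident.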

    Author

    Hi guys! So I usually follow a tutorial when I get overlapping vertices. The reason I'm asking is: does it matter whether faces are welded or not if I convert them to meshlets, like NVIDIA's Asteroids demo? Or should they still be welded then? And does it matter how small or large the mesh is when welding by distance? Kind regards!

    That is another “it depends on the details” question. There might be visual artifacts, or not, depending on the details. There can be performance differences, depending on the details. The reasons were already covered: a vertex can carry far more than just position data, which makes two vertices different even when both sit at the same location. There are details and choices beyond just the vertex positions overlapping.

    Newgamemodder said:
    Does it matter if faces are welded or not if i convert them to meshlets like Nvidias asteroids?

    Usually no. You need to regenerate the meshlets anyway after editing a model. It's done by a preprocessing tool, and the usual asset pipeline is: model from artist → automated tool to split edges where needed to get one mesh per material, compute meshlet clusters, quantize for compression, reorder vertices for cache efficiency, etc. → save as an asset to ship with the game.

    So meshlets do not add to the risks you already have from welding vertices (e.g. accidental corruption of UV coordinates or merging of material groups). Artwork is not affected by meshlets in general. However, this applies to games production, not to modding. Things like Nanite and meshlets of course make it even harder to mod existing assets, since modders don't have those automated preprocessing tools if the devs don't provide them.

    Newgamemodder said:
    Does it matter how small/large the mesh is when welding by distance?

    Yes. Usually you give a distance threshold for the welding, so the scale of the model matters.
    My advice is to use the smallest threshold possible and to check the UVs, which should not change from the welding operation.
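A minimal sketch of weld-by-distance makes the scale point concrete (hypothetical Python; real tools such as Blender's Merge by Distance use spatial hashing rather than this O(n²) scan). The threshold is an absolute distance, so whether two points merge depends on the model's scale:

```python
import math

def weld_by_distance(verts, threshold=1e-6):
    """Merge vertices closer than `threshold` apart.
    Returns the welded vertex list and an index remap table."""
    welded, remap = [], []
    for v in verts:
        for i, w in enumerate(welded):
            if math.dist(v, w) <= threshold:   # close enough: reuse vertex i
                remap.append(i)
                break
        else:                                  # no neighbor found: keep it
            remap.append(len(welded))
            welded.append(v)
    return welded, remap

verts = [(0.0, 0.0, 0.0), (0.0, 0.0, 1e-7), (1.0, 0.0, 0.0)]
welded, remap = weld_by_distance(verts, threshold=1e-6)
print(len(welded))  # 2 – the first two points merge
print(remap)        # [0, 0, 1]
```

Scale the same model up by 100× and those first two points land 1e-5 apart – beyond the threshold, so they no longer merge. That is why the smallest workable threshold, checked against the UVs, is the safe choice.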
  • NVIDIA’s New AI: From Video Games to Reality!

    WWW.YOUTUBE.COM
  • AMD’s RX 9060 XT 8GB Gamble: Why Gamers Are Furious, and They’re Not Wrong

    Key Takeaways

    AMD’s RX 9060 XT is set to launch on June 5th, 2025 in both 8GB and 16GB versions under the same name, creating confusion and backlash.
    Reviewers and gamers say 8GB of VRAM isn’t enough for modern gaming, especially at 1440p.
    AMD’s decision to showcase only the 16GB model in benchmarks raised concerns about transparency.
    This move mirrors Nvidia’s controversial RTX 4060 Ti rollout, suggesting an industry trend of misleading GPU marketing.

    It all started with a new GPU announcement. The AMD Radeon RX 9060 XT is set to launch, and on paper, it looks like a solid move.
    A graphics card with 16GB of VRAM? Not bad. That’s more memory than some RTX 4070 cards. Sounds like AMD might finally be delivering some value again, right? 
    Well, yes and no. 
    Because right alongside that 16GB version, AMD is also releasing a cheaper 8GB version. Same name, same chip, half the memory. And that’s where the internet lost it. 
    Déjà Vu: We’ve Seen This Trick Before
    If this sounds familiar, it’s because Nvidia pulled the same move with the RTX 4060 Ti. 
    They sold both 8GB and 16GB versions under the same branding, at different prices. The RTX 4060 Ti 8GB launched in May 2023, and the 16GB variant followed in July. 

    Gamers hated the confusion. Reviewers criticized the 8GB version’s lack of performance, especially in memory-heavy games, and the way Nvidia tried to sweep the difference under the rug. 
    Performance dipped significantly at 1440p, and stuttering was a problem even in some 1080p titles.
    The backlash was swift. Tech media slammed Nvidia for deceptive marketing, and buyers were left second-guessing which version they were getting. 
    We’ve seen this pattern before in Nvidia’s review restrictions around the RTX 5060, where early coverage was shaped by what reviewers were allowed to test – and what they weren’t. 
    It led to a mess of misinformation, bad value perceptions, and a very clear message: don’t confuse your customers. So naturally, AMD did it too. 
    It’s like watching two billion-dollar companies playing a game of ‘Who Can Confuse the Customer More.’ It’s not just about the money. It’s about trust, and AMD just dumped a bunch of it off a cliff. 
    Frank Azor Lights the Fuse on X
    The backlash started when AMD’s Director of Gaming Marketing, Frank Azor, took to X to defend the 8GB card. 

    He said that most gamers don’t need more than 8GB of VRAM and that the cheaper card still serves the mainstream crowd just fine. 
    It’s the same reasoning Nvidia used last year with the RTX 4060 Ti. That didn’t work then, and it isn’t working now. 
    Because when Steve from Hardware Unboxed sees a bad take like that, you know a flamethrower video is coming. And oh boy, did it come. 
    Hardware Unboxed Fires Back
    The backlash against AMD’s 8GB RX 9060 XT took off after a post from Hardware Unboxed on X called out the company’s defense of limited VRAM. 
    In response to AMD’s claim that most gamers don’t need more than 8GB of memory, Hardware Unboxed accused them of misleading buyers and building weaker products just to hit certain price points.

    The criticism gained traction fast. Tech YouTuber Vex picked up the story and added fuel to the fire by showing side-by-side gameplay comparisons. 
    In multiple games, the 8GB RX 9060 XT showed serious performance issues – stuttering, frame drops, and VRAM bottlenecks – while the 16GB version handled the same titles smoothly. 
    And yet, during the GPU’s official reveal, AMD only showed performance data for the 16GB card. There were no benchmarks for the 8GB version – not a single chart. That omission wasn’t lost on anyone.
    If AMD truly believed the 8GB model held up under modern gaming loads, they would have shown it. The silence speaks volumes. 
    Why This Actually Matters
    You might be thinking: ‘So what? Some games still run fine on 8GB. I only play Valorant.’ Sure. But the problem is bigger than that.

    Games are getting heavier. Even titles like Cyberpunk 2077, released in 2020, can eat up more than 8GB of VRAM. And with GTA 6 on the horizon, do you really think game developers are going to keep optimizing for 8GB cards in 2025?
    That’s not how game development works. Developers target the most common setups, yes. But hardware also shapes software. 
    If everyone’s stuck with 8GB, games will be designed around that limit. That holds back progress for everyone. 
    It’s like trying to make a movie with a flip phone because some people still own one.
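To see how quickly 8GB disappears, here is an illustrative back-of-envelope calculation (assumed texture counts and compression figures, not measurements from any particular game):

```python
# Illustrative VRAM budget math – assumed figures, not game data.
def texture_mb(width, height, bytes_per_pixel=1.0, mips=True):
    """Approximate GPU memory for one texture.
    BC7 block compression is ~1 byte per pixel;
    a full mip chain adds roughly one third on top."""
    base = width * height * bytes_per_pixel
    return base * (4 / 3 if mips else 1.0) / (1024 ** 2)

one_4k = texture_mb(4096, 4096)     # ~21 MB per compressed 4K texture
streamed = 300 * one_4k / 1024      # ~6.25 GB for 300 resident textures
print(f"{one_4k:.1f} MB per texture, {streamed:.2f} GB for 300 textures")
# Add geometry, render targets, and frame buffers, and an 8GB card
# is already out of headroom at high settings.
```

The exact numbers are hypothetical, but the shape of the math is why reviewers keep finding 8GB cards stuttering at 1440p: texture budgets alone can consume most of the card before anything else is allocated.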
    Same Name, Different Game
    Another big issue is how these cards are named and sold. 
    The RX 9060 XT 16GB and RX 9060 XT 8GB are not clearly labeled as different products. They’re just two versions of the same GPU. 
    But that extra memory makes a huge difference. 
    In some games, the 8GB card performs dramatically worse. And yet, unless you know what to look for, you might walk into a store and buy the 8GB version thinking you’re getting the same performance. 
    You’re not. You’re getting a watered-down version with the same name and a silent asterisk.
    This Isn’t Just AMD’s Problem
    Nvidia started this mess with the 4060 Ti naming confusion. AMD just saw the outrage and decided to walk straight into the same buzzsaw. 
    It’s hard not to feel like both companies are treating consumers like they’re too dumb to notice.
    Spoiler: they noticed.
    And this whole ‘VRAM doesn’t matter’ argument? It’s already been debunked by dozens of reviewers. 
    If you’re spending real money on a graphics card in 2025, it needs to last more than a year or two. 8GB cards are already struggling. Buying one now is like buying a smartphone in 2025 with 64GB of storage. Sure, it works. Until it doesn’t.
    Steam Data Doesn’t Help AMD’s Case
    AMD and Nvidia both love to point at the Steam Hardware Survey. They say, ‘See? Most people still play at 1080p.’ And that’s true – for now.

    But what they leave out is that 1440p gaming is growing fast. More gamers are upgrading their setups because 1440p monitors are getting a lot more affordable. 
    Take the Pixio PXC277 Advanced, for instance – a 27-inch curved 1440p monitor with a 165Hz refresh rate and 1ms response time. A few years ago, a screen like that would’ve cost you double. Now it’s entry-level.
    Gamers are ready to step up their experience. The only thing holding them back is GPU hardware that’s still stuck in 2020. 
    Planned Obsolescence in Disguise
    Here’s the worst part. Companies know full well that 8GB won’t cut it in 2026. 
    But they still sell it, knowing many gamers will only find out when it’s too late – when the stutters kick in, the textures disappear, or the next big title becomes unplayable.
    It’s planned obsolescence disguised as ‘choice.’ And while it’s great to have options at different price points, it should be clear which option is built to last – and which one is built to frustrate. 
    So, Is AMD Actually Screwed? 
    Not right now. In fact, they’re playing the game better than they used to. 
    They’ve learned from past pricing disasters and figured out how to get better launch-day headlines – even if it means faking the MSRP and letting street prices run wild. 
    But this kind of marketing comes at a cost. If AMD keeps making decisions that prioritize short-term wins over long-term trust, they’ll lose the very crowd that once rooted for them. 
    We don’t need two Nvidias. We need AMD to be different – to be better. 
    One Name, Two Very Different Cards
    The RX 9060 XT 16GB might be a good deal. But it’s being overshadowed by the 8GB version’s drama. And the longer AMD keeps playing games with memory and naming, the more it chips away at its hard-earned goodwill. 
    This whole mess could’ve been avoided with one simple move: name the 8GB card something else. Call it the RX 9055. Call it Lite or whatever. Just don’t make it look like the same card when it isn’t. 
    Until then, buyers beware. There’s more going on behind the box art than meets the eye. 

    Anya Zhukova is an in-house tech and crypto writer at Techreport with 10 years of hands-on experience covering cybersecurity, consumer tech, digital privacy, and blockchain. She’s known for turning complex topics into clear, useful advice that regular people can actually understand and use. 
    Her work has been featured in top-tier digital publications including MakeUseOf, Online Tech Tips, Help Desk Geek, Switching to Mac, and Make Tech Easier. Whether she’s writing about the latest privacy tools or reviewing a new laptop, her goal is always the same: help readers feel confident and in control of the tech they use every day.  Anya holds a BA in English Philology and Translation from Tula State Pedagogical University and also studied Mass Media and Journalism at Minnesota State University, Mankato. That mix of language, media, and tech has given her a unique lens to look at how technology shapes our daily lives. 
    Over the years, she’s also taken courses and done research in data privacy, digital security, and ethical writing – skills she uses when tackling sensitive topics like PC hardware, system vulnerabilities, and crypto security.  Anya worked directly with brands like Framework, Insta360, Redmagic, Inmotion, Secretlab, Kodak, and Anker, reviewing their products in real-life scenarios. Her testing process involves real-world use cases – whether it's stress-testing laptops for creative workloads, reviewing the battery performance of mobile gaming phones, or evaluating the long-term ergonomics of furniture designed for hybrid workspaces. 
    In the world of crypto, Anya covers everything from beginner guides to deep dives into hardware wallets, DeFi protocols, and Web3 tools. She helps readers understand how to use multisig wallets, keep their assets safe, and choose the right platforms for their needs.  Her writing often touches on financial freedom and privacy – two things she strongly believes should be in everyone’s hands.
    Outside of writing, Anya contributes to editorial style guides focused on privacy and inclusivity, and she mentors newer tech writers on how to build subject matter expertise and write responsibly.  She sticks to high editorial standards, only recommends products she’s personally tested, and always aims to give readers the full picture.  You can find her on LinkedIn, where she shares more about her work and projects. 
    Key Areas of Expertise: Consumer TechCybersecurity and Digital Privacy PC/PC Hardware Blockchain, Crypto Wallets, and DeFi In-Depth Product Reviews and Buying Guides Whether she’s reviewing a new wallet or benchmarking a PC build, Anya brings curiosity, care, and a strong sense of responsibility to everything she writes. Her mission? To make the digital world a little easier – and safer – for everyone. 


    Our editorial process

    The Tech Report editorial policy is centered on providing helpful, accurate content that offers real value to our readers. We only work with experienced writers who have specific knowledge in the topics they cover, including latest developments in technology, online privacy, cryptocurrencies, software, and more. Our editorial policy ensures that each topic is researched and curated by our in-house editors. We maintain rigorous journalistic standards, and every article is 100% written by real authors.
    TECHREPORT.COM
    AMD’s RX 9060 XT 8GB Gamble: Why Gamers Are Furious, and They’re Not Wrong
Key Takeaways

- AMD’s RX 9060 XT is set to launch on June 5th, 2025 in both 8GB and 16GB versions under the same name, creating confusion and backlash.
- Reviewers and gamers say 8GB of VRAM isn’t enough for modern gaming, especially at 1440p.
- AMD’s decision to showcase only the 16GB model in benchmarks raised concerns about transparency.
- This move mirrors Nvidia’s controversial RTX 4060 Ti rollout, suggesting an industry trend of misleading GPU marketing.

It all started with a new GPU announcement. The AMD Radeon RX 9060 XT is set to launch, and on paper, it looks like a solid move. A $349 graphics card with 16GB of VRAM? Not bad. That’s more memory than some RTX 4070 cards. Sounds like AMD might finally be delivering some value again, right?

Well, yes and no. Right alongside that 16GB version, AMD is also releasing an 8GB version for $299. Same name, same chip, half the memory. And that’s where the internet lost it.

Déjà Vu: We’ve Seen This Trick Before

If this sounds familiar, it’s because Nvidia pulled the same move with the RTX 4060 Ti. They sold both 8GB and 16GB versions with the same branding, but a $100 price difference. The RTX 4060 Ti 8GB launched in May 2023, and the 16GB variant followed in July. (Source: Nvidia)

Gamers hated the confusion. Reviewers criticized the 8GB version’s lack of performance, especially in memory-heavy games, and the way Nvidia tried to sweep the difference under the rug. Performance dipped significantly at 1440p, and stuttering was a problem even in some 1080p titles. The backlash was swift. Tech media slammed Nvidia for deceptive marketing, and buyers were left second-guessing which version they were getting.

We’ve seen this pattern before in Nvidia’s review restrictions around the RTX 5060, where early coverage was shaped by what reviewers were allowed to test – and what they weren’t. It led to a mess of misinformation, bad value perceptions, and a very clear message: don’t confuse your customers.

So naturally, AMD did it too. It’s like watching two billion-dollar companies play a game of ‘Who Can Confuse the Customer More.’ It’s not just about the money. It’s about trust, and AMD just dumped a bunch of it off a cliff.

Frank Azor Lights the Fuse on X

The backlash started when AMD’s Director of Gaming Marketing, Frank Azor, took to X to defend the 8GB card. He said that most gamers don’t need more than 8GB of VRAM and that the cheaper card still serves the mainstream crowd just fine. It’s the same reasoning Nvidia used last year with the RTX 4060 Ti. That didn’t work then, and it isn’t working now. Because when Steve from Hardware Unboxed sees a bad take like that, you know a flamethrower video is coming. And oh boy, did it come.

Hardware Unboxed Fires Back

The backlash against AMD’s 8GB RX 9060 XT took off after a post from Hardware Unboxed on X called out the company’s defense of limited VRAM. In response to AMD’s claim that most gamers don’t need more than 8GB of memory, Hardware Unboxed accused them of misleading buyers and building weaker products just to hit certain price points.

The criticism gained traction fast. Tech YouTuber Vex picked up the story and added fuel to the fire by showing side-by-side gameplay comparisons. In multiple games, the 8GB RX 9060 XT showed serious performance issues – stuttering, frame drops, and VRAM bottlenecks – while the 16GB version handled the same titles smoothly.

And yet, during the GPU’s official reveal, AMD only showed performance data for the 16GB card. There were no benchmarks for the 8GB version – not a single chart. That omission wasn’t lost on anyone. If AMD truly believed the 8GB model held up under modern gaming loads, they would have shown it. The silence speaks volumes.

Why This Actually Matters

You might be thinking: ‘So what? Some games still run fine on 8GB. I only play Valorant.’ Sure. But the problem is bigger than that. (Source: AMD)

Games are getting heavier. Even titles like Cyberpunk 2077, released in 2020, can eat up more than 8GB of VRAM. And with GTA 6 (still) on the horizon, do you really think game developers are going to keep optimizing for 8GB cards in 2025? That’s not how game development works. Developers target the most common setups, yes. But hardware also shapes software. If everyone’s stuck with 8GB, games will be designed around that limit. That holds back progress for everyone. It’s like trying to make a movie with a flip phone because some people still own one.

Same Name, Different Game

Another big issue is how these cards are named and sold. The RX 9060 XT 16GB and RX 9060 XT 8GB are not clearly labeled as different products. They’re just two versions of the same GPU. But that extra memory makes a huge difference. In some games, the 8GB card performs dramatically worse. And yet, unless you know what to look for, you might walk into a store and buy the 8GB version thinking you’re getting the same performance. You’re not. You’re getting a watered-down version with the same name and a silent asterisk.

This Isn’t Just AMD’s Problem

Nvidia started this mess with the 4060 Ti naming confusion. AMD just saw the outrage and decided to walk straight into the same buzzsaw. It’s hard not to feel like both companies are treating consumers like they’re too dumb to notice. Spoiler: they noticed. And this whole ‘VRAM doesn’t matter’ argument? It’s already been debunked by dozens of reviewers. If you’re spending over $300 on a graphics card in 2025, it needs to last more than a year or two. 8GB cards are already struggling. Buying one now is like buying a smartphone in 2025 with 64GB of storage. Sure, it works. Until it doesn’t.

Steam Data Doesn’t Help AMD’s Case

AMD and Nvidia both love to point at the Steam Hardware Survey. They say, ‘See? Most people still play at 1080p.’ And that’s true – for now. (Source: Nvidia) But what they leave out is that 1440p gaming is growing fast. More gamers are upgrading their setups because 1440p monitors are getting a lot more affordable. Take the Pixio PXC277 Advanced, for instance – a 27-inch curved 1440p monitor with a 165Hz refresh rate and 1ms response time, all for $219.99. A few years ago, a screen like that would’ve cost you double. Now it’s entry-level. Gamers are ready to step up their experience. The only thing holding them back is GPU hardware that’s still stuck in 2020.

Planned Obsolescence in Disguise

Here’s the worst part. Companies know full well that 8GB won’t cut it in 2026. But they still sell it, knowing many gamers will only find out when it’s too late – when the stutters kick in, the textures disappear, or the next big title becomes unplayable. It’s planned obsolescence disguised as ‘choice.’ And while it’s great to have options at different price points, it should be clear which option is built to last – and which one is built to frustrate.

So, Is AMD Actually Screwed?

Not right now. In fact, they’re playing the game better than they used to. They’ve learned from past pricing disasters and figured out how to get better launch-day headlines – even if it means faking the MSRP and letting street prices run wild. But this kind of marketing comes at a cost. If AMD keeps making decisions that prioritize short-term wins over long-term trust, they’ll lose the very crowd that once rooted for them. We don’t need two Nvidias. We need AMD to be different – to be better.

One Name, Two Very Different Cards

The RX 9060 XT 16GB might be a good deal. But it’s being overshadowed by the 8GB version’s drama. And the longer AMD keeps playing games with memory and naming, the more it chips away at its hard-earned goodwill. This whole mess could’ve been avoided with one simple move: name the 8GB card something else. Call it the RX 9055. Call it Lite or whatever. Just don’t make it look like the same card when it isn’t. Until then, buyers beware.
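To make the resolution argument above concrete, here is a rough back-of-envelope sketch of how full-resolution buffer memory scales from 1080p to 1440p. All numbers are illustrative assumptions – real engines budget VRAM very differently – but the scaling factor itself is pure pixel arithmetic:

```python
# Rough sketch: how full-resolution GPU buffer costs scale with display
# resolution. Numbers are illustrative only; real engines vary widely.

def framebuffer_bytes(width, height, bytes_per_pixel=4, buffers=3):
    """Raw memory for a set of full-resolution render targets."""
    return width * height * bytes_per_pixel * buffers

cost_1080p = framebuffer_bytes(1920, 1080)
cost_1440p = framebuffer_bytes(2560, 1440)
ratio = cost_1440p / cost_1080p

print(f"1080p buffers: {cost_1080p / 2**20:.0f} MiB")
print(f"1440p buffers: {cost_1440p / 2**20:.0f} MiB")
print(f"Scaling factor: {ratio:.2f}x")  # every full-res buffer grows ~1.78x
```

Every full-resolution resource – G-buffers, depth targets, post-processing chains – grows by that same ~1.78x factor, which is why a card that scrapes by at 1080p can fall off a cliff at 1440p.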
There’s more going on behind the box art than meets the eye.

Anya Zhukova is an in-house tech and crypto writer at Techreport with 10 years of hands-on experience covering cybersecurity, consumer tech, digital privacy, and blockchain.
  • Huawei Supernode 384 disrupts Nvidia’s AI market hold

Huawei’s AI capabilities have made a breakthrough in the form of the company’s Supernode 384 architecture, marking an important moment in the global processor wars amid US-China tech tensions. The Chinese tech giant’s latest innovation emerged from last Friday’s Kunpeng Ascend Developer Conference in Shenzhen, where company executives demonstrated how the computing framework directly challenges Nvidia’s long-standing market dominance, even as the company continues to operate under severe US-led trade restrictions.

Architectural innovation born from necessity

Zhang Dixuan, president of Huawei’s Ascend computing business, articulated the fundamental problem driving the innovation during his conference keynote: “As the scale of parallel processing grows, cross-machine bandwidth in traditional server architectures has become a critical bottleneck for training.”

The Supernode 384 abandons Von Neumann computing principles in favour of a peer-to-peer architecture engineered specifically for modern AI workloads. The change proves especially powerful for Mixture-of-Experts models (machine-learning systems that use multiple specialised sub-networks to solve complex computational challenges).

Huawei’s CloudMatrix 384 implementation showcases impressive technical specifications: 384 Ascend AI processors spanning 12 computing cabinets and four bus cabinets, generating 300 petaflops of raw computational power paired with 48 terabytes of high-bandwidth memory – a leap in integrated AI computing infrastructure.

Performance metrics challenge industry leaders

Real-world benchmark testing reveals the system’s competitive positioning against established solutions. Dense AI models like Meta’s LLaMA 3 achieved 132 tokens per second per card on the Supernode 384 – 2.5 times the per-card throughput of traditional cluster architectures. Communications-intensive applications demonstrate even more dramatic improvements: models from Alibaba’s Qwen and DeepSeek families reached 600 to 750 tokens per second per card, revealing the architecture’s optimisation for next-generation AI workloads.

The performance gains stem from fundamental infrastructure redesigns. Huawei replaced conventional Ethernet interconnects with high-speed bus connections, improving communications bandwidth by 15 times while reducing single-hop latency from 2 microseconds to 200 nanoseconds – a tenfold improvement.

Geopolitical strategy drives technical innovation

The Supernode 384’s development cannot be divorced from broader US-China technological competition. American sanctions have systematically restricted Huawei’s access to cutting-edge semiconductor technologies, forcing the company to maximise performance within existing constraints. Industry analysis from SemiAnalysis suggests the CloudMatrix 384 uses Huawei’s latest Ascend 910C AI processor – an assessment that acknowledges inherent performance limitations but highlights architectural advantages: “Huawei is a generation behind in chips, but its scale-up solution is arguably a generation ahead of Nvidia and AMD’s current products in the market.” It shows how Huawei’s AI computing strategy has evolved beyond traditional hardware specifications toward system-level optimisation and architectural innovation.

Market implications and deployment reality

Beyond laboratory demonstrations, Huawei has operationalised CloudMatrix 384 systems in multiple Chinese data centres in Anhui Province, Inner Mongolia, and Guizhou Province. Such practical deployments validate the architecture’s viability and establish an infrastructure framework for broader market adoption. The system’s scalability potential – supporting tens of thousands of linked processors – positions it as a compelling platform for training increasingly sophisticated AI models, addressing growing industry demands for massive-scale AI deployment across diverse sectors.

Industry disruption and future considerations

Huawei’s architectural breakthrough introduces both opportunities and complications for the global AI ecosystem. While providing a viable alternative to Nvidia’s market-leading solutions, it simultaneously accelerates the fragmentation of international technology infrastructure along geopolitical lines. The success of Huawei’s AI computing initiatives will depend on developer-ecosystem adoption and sustained performance validation; the company’s aggressive developer-conference outreach indicates a recognition that technical innovation alone cannot guarantee market acceptance.

For organisations evaluating AI infrastructure investments, the Supernode 384 represents a new option that combines competitive performance with independence from US-controlled supply chains. However, long-term viability remains contingent on continued innovation cycles and improved geopolitical stability.

(Image from Pixabay)

See also: Oracle plans $40B Nvidia chip deal for AI facility in Texas

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
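The throughput and latency figures quoted in the article are internally consistent; a quick illustrative check of the arithmetic:

```python
# Sanity-check the ratios quoted in the article (illustrative arithmetic only).

supernode_tps = 132                     # LLaMA 3 tokens/s per card on Supernode 384
speedup = 2.5                           # claimed gain vs. traditional clusters
baseline_tps = supernode_tps / speedup  # implied traditional-cluster throughput

latency_old_ns = 2_000                  # 2 microseconds single-hop (Ethernet)
latency_new_ns = 200                    # 200 nanoseconds (bus interconnect)
latency_gain = latency_old_ns / latency_new_ns

print(f"Implied baseline: {baseline_tps:.1f} tokens/s per card")
print(f"Single-hop latency improvement: {latency_gain:.0f}x")  # the 'tenfold' claim
```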
  • NVIDIA’s Bartley Richardson on How Teams of AI Agents Provide Next-Level Automation

    Building effective agentic AI systems requires rethinking how technology interacts and delivers value across organizations.
    Bartley Richardson, senior director of engineering and AI infrastructure at NVIDIA, joined the NVIDIA AI Podcast to discuss how enterprises can successfully deploy agentic AI systems.
    “When I talk with people about agents and agentic AI, what I really want to say is automation,” Richardson said. “It is that next level of automation.”

    Richardson explains that AI reasoning models play a critical role in these systems by “thinking out loud” and enabling better planning capabilities.
    “Reasoning models have been trained and tuned in a very specific way to think — almost like thinking out loud,” Richardson said. “It’s kind of like when you’re brainstorming with your colleagues or family.”
    What makes NVIDIA’s Llama Nemotron models distinctive is that they give users the ability to toggle reasoning on or off within the same model, optimizing for specific tasks.
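NVIDIA's public model cards describe driving this toggle through the system prompt. The sketch below assumes the documented "detailed thinking on/off" strings and uses a placeholder model name, so treat it as an illustration of the pattern rather than a definitive API call:

```python
# Hypothetical sketch of per-request reasoning control for a Llama Nemotron
# style model. The system-prompt strings and model name are assumptions;
# consult the model card for the exact interface.

def build_request(user_prompt: str, reasoning: bool) -> dict:
    """Build a chat payload with reasoning toggled via the system prompt."""
    system = "detailed thinking on" if reasoning else "detailed thinking off"
    return {
        "model": "llama-nemotron",  # placeholder model identifier
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user_prompt},
        ],
    }

# Spend reasoning tokens on planning; skip them for mechanical formatting.
plan_req = build_request("Break this migration into ordered steps.", reasoning=True)
format_req = build_request("Return the step list as JSON.", reasoning=False)
```

The appeal of a single toggleable model is operational: one deployment serves both the slow, deliberate planning steps and the fast, cheap formatting steps, switched per request.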
    Enterprise IT leaders must acknowledge the multi-vendor reality of modern environments, Richardson explained, saying organizations will have agent systems from various sources working together simultaneously.
    “You’re going to have all these agents working together, and the trick is discovering how to let them all mesh together in a somewhat seamless way for your employees,” Richardson said.
    To address this challenge, NVIDIA developed the AI-Q Blueprint for developing advanced agentic AI systems. Teams can build AI agents to automate complex tasks, break down operational silos and drive efficiency across industries. The blueprint uses the open-source NVIDIA Agent Intelligence (AIQ) toolkit to evaluate and profile agent workflows, making it easier to optimize and ensure interoperability among agents, tools and data sources.
    “We have customers that optimize their tool-calling chains and get 15x speedups through their pipeline using AI-Q,” Richardson said.
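The kind of tool-chain profiling behind such speedups can be sketched in generic terms. This is not the AI-Q or Agent Intelligence toolkit API, just an illustration of instrumenting each tool call so the slow links in an agent's tool-calling chain can be found; all names are hypothetical:

```python
import time
from collections import defaultdict

# Illustrative profiler sketch (not the NVIDIA AI-Q / Agent Intelligence
# toolkit API): wraps each tool so every call's wall-clock duration is
# recorded, then reports total time per tool, slowest first.

class ToolProfiler:
    def __init__(self):
        self.calls = defaultdict(list)  # tool name -> list of durations (s)

    def wrap(self, name, fn):
        """Return a wrapped tool that records its duration on every call."""
        def wrapped(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                self.calls[name].append(time.perf_counter() - start)
        return wrapped

    def report(self):
        """(tool, total seconds) pairs, slowest tool first."""
        return sorted(
            ((name, sum(d)) for name, d in self.calls.items()),
            key=lambda item: item[1],
            reverse=True,
        )

profiler = ToolProfiler()
search = profiler.wrap("search", lambda q: f"results for {q}")
search("agent frameworks")
```

With every tool in the chain wrapped this way, the report immediately shows where optimization effort (caching, batching, swapping a tool) would pay off.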
    He also emphasized the importance of maintaining realistic expectations that still provide significant business value.
    “Agentic systems will make mistakes,” Richardson added. “But if it gets you 60%, 70%, 80% of the way there, that’s amazing.”
    Time Stamps
    1:15 – Defining agentic AI as the next evolution of enterprise automation.
    4:06 – How reasoning models enhance agentic system capabilities.
    12:41 – Enterprise considerations for implementing multi-vendor agent systems.
    19:33 – Introduction to the NVIDIA Agent Intelligence toolkit for observability and traceability.
    You Might Also Like… 
    NVIDIA’s Rama Akkiraju on How AI Platform Architects Help Bridge Business Vision and Technical Execution
    Enterprises are exploring AI to rethink problem-solving and business processes. These initiatives require the right infrastructure, such as AI factories, which allow businesses to convert data into tokens and outcomes. Rama Akkiraju, vice president of IT for AI and machine learning at NVIDIA, joined the AI Podcast to discuss how enterprises can build the right foundations for AI success, and the critical role of AI platform architects in designing and building AI infrastructure based on specific business needs.
    Roboflow Helps Unlock Computer Vision for Every Kind of AI Builder
    Roboflow’s mission is to make the world programmable through computer vision. By simplifying computer vision development, the company helps bridge the gap between AI and people looking to harness it. Cofounder and CEO Joseph Nelson discusses how Roboflow empowers users in manufacturing, healthcare and automotive to solve complex problems with visual AI.
    NVIDIA’s Jacob Liberman on Bringing Agentic AI to Enterprises
    Agentic AI enables developers to create intelligent multi-agent systems that reason, act and execute complex tasks with a degree of autonomy. Jacob Liberman, director of product management at NVIDIA, explains how agentic AI bridges the gap between powerful AI models and practical enterprise applications.
    BLOGS.NVIDIA.COM
  • NVIDIA’s New AI: Impossible Weather Graphics!

    WWW.YOUTUBE.COM
  • RT CORSAIR: Who says creator/AI PC builds have to be boring? Housing all of our components alongside the @NVIDIAGeForce RTX 5090 Founders Edition and ...

    RT CORSAIR: Who says creator/AI PC builds have to be boring? Housing all of our components alongside the @NVIDIAGeForce RTX 5090 Founders Edition and @ASUS ProArt X870E-CREATOR WIFI, this FRAME 4000D PROTOTYPE is adorned with a newly designed aluminum front panel, new front I/O, and a @SingularityC Powerboard, as well as a special edition of the HX1000i SHIFT 👀 With @NVIDIAStudio, GeForce RTX 50 Series GPUs also provide the creative advantage by unlocking transformative performance in video editing, 3D rendering, and graphic design. Full Build: https://www.corsair.com/us/en/explorer/builds/computex-2025/frame-4000d-prototype-nvidia-creator-ai-pc/
    X.COM
CGShares https://cgshares.com