• Gigabyte Radeon RX 9060 XT: Unbeatable Value for Money

    AMD, GPU, graphics card, gaming, budget, performance, Radeon RX 9060 XT, value, review

    ## Introduction

    In a world where video gaming has become an essential pastime, it is unacceptable that players should constantly have to hunt for the best graphics card without breaking the bank. So why on earth should we settle for less when we have the *Gigabyte Radeon RX 9060 XT* at our disposal? This graphics card is presented as the best value...
  • AMD’s RX 9060 XT 8GB Gamble: Why Gamers Are Furious, and They’re Not Wrong

    Key Takeaways

    AMD’s RX 9060 XT is set to launch on June 5th, 2025 in both 8GB and 16GB versions under the same name, creating confusion and backlash.
    Reviewers and gamers say 8GB of VRAM isn’t enough for modern gaming, especially at 1440p.
    AMD’s decision to showcase only the 16GB model in benchmarks raised concerns about transparency.
    This move mirrors Nvidia’s controversial RTX 4060 Ti rollout, suggesting an industry trend of misleading GPU marketing.

    It all started with a new GPU announcement. The AMD Radeon RX 9060 XT is set to launch, and on paper, it looks like a solid move.
    A $349 graphics card with 16GB of VRAM? Not bad. That’s more memory than some RTX 4070 cards. Sounds like AMD might finally be delivering some value again, right?
    Well, yes and no. 
    Because right alongside that 16GB version, AMD is also releasing an 8GB version for $299. Same name, same chip, half the memory. And that’s where the internet lost it.
    Déjà Vu: We’ve Seen This Trick Before
    If this sounds familiar, it’s because Nvidia pulled the same move with the RTX 4060 Ti. 
    They sold both 8GB and 16GB versions with the same branding, but a $100 price difference. The RTX 4060 Ti 8GB launched in May 2023, and the 16GB variant followed in July.

    Source: Nvidia
    Gamers hated the confusion. Reviewers criticized the 8GB version’s lack of performance, especially in memory-heavy games, and the way Nvidia tried to sweep the difference under the rug. 
    Performance dipped significantly at 1440p, and stuttering was a problem even in some 1080p titles.
    The backlash was swift. Tech media slammed Nvidia for deceptive marketing, and buyers were left second-guessing which version they were getting. 
    We’ve seen this pattern before in Nvidia’s review restrictions around the RTX 5060, where early coverage was shaped by what reviewers were allowed to test – and what they weren’t. 
    It led to a mess of misinformation, bad value perceptions, and a very clear message: don’t confuse your customers. So naturally, AMD did it too. 
    It’s like watching two billion-dollar companies playing a game of ‘Who Can Confuse the Customer More.’ It’s not just about the money. It’s about trust, and AMD just dumped a bunch of it off a cliff. 
    Frank Azor Lights the Fuse on X
    The backlash started when AMD’s Director of Gaming Marketing, Frank Azor, took to X to defend the 8GB card. 

    He said that most gamers don’t need more than 8GB of VRAM and that the cheaper card still serves the mainstream crowd just fine. 
    It’s the same reasoning Nvidia used back in 2023 with the RTX 4060 Ti. That didn’t work then, and it isn’t working now.
    Because when Steve from Hardware Unboxed sees a bad take like that, you know a flamethrower video is coming. And oh boy, did it come. 
    Hardware Unboxed Fires Back
    The backlash against AMD’s 8GB RX 9060 XT took off after a post from Hardware Unboxed on X called out the company’s defense of limited VRAM. 
    In response to AMD’s claim that most gamers don’t need more than 8GB of memory, Hardware Unboxed accused them of misleading buyers and building weaker products just to hit certain price points.

    The criticism gained traction fast. Tech YouTuber Vex picked up the story and added fuel to the fire by showing side-by-side gameplay comparisons. 
    In multiple games, the 8GB RX 9060 XT showed serious performance issues – stuttering, frame drops, and VRAM bottlenecks – while the 16GB version handled the same titles smoothly. 
    And yet, during the GPU’s official reveal, AMD only showed performance data for the 16GB card. There were no benchmarks for the 8GB version – not a single chart. That omission wasn’t lost on anyone.
    If AMD truly believed the 8GB model held up under modern gaming loads, they would have shown it. The silence speaks volumes. 
    Why This Actually Matters
    You might be thinking: ‘So what? Some games still run fine on 8GB. I only play Valorant.’ Sure. But the problem is bigger than that.

    Source: AMD
    Games are getting heavier. Even titles like Cyberpunk 2077, released in 2020, can eat up more than 8GB of VRAM. And with GTA 6 still on the horizon, do you really think game developers are going to keep optimizing for 8GB cards in 2025?
    That’s not how game development works. Developers target the most common setups, yes. But hardware also shapes software. 
    If everyone’s stuck with 8GB, games will be designed around that limit. That holds back progress for everyone. 
    It’s like trying to make a movie with a flip phone because some people still own one.
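    For a rough sense of how a modern game fills an 8GB buffer, here is a back-of-the-envelope sketch. Every asset count and size in it is an illustrative assumption, not a measurement from Cyberpunk 2077 or any other specific title.

```python
# Back-of-the-envelope VRAM budget for a hypothetical game scene.
# All asset counts and sizes are illustrative assumptions.

def texture_mib(width, height, bytes_per_texel, mip_overhead=1.33):
    """Approximate texture size in MiB, including ~33% extra for mipmaps."""
    return width * height * bytes_per_texel * mip_overhead / 2**20

# Block-compressed (BC7-style) material textures use roughly 1 byte per
# texel; render targets are uncompressed and carry no mip chain.
scene = {
    "150 x 4K compressed textures": 150 * texture_mib(4096, 4096, 1),
    "400 x 2K compressed textures": 400 * texture_mib(2048, 2048, 1),
    "6 x 1440p render targets (8 B/px)": 6 * texture_mib(2560, 1440, 8, mip_overhead=1.0),
    "Geometry, shadow maps, misc (assumed)": 1500.0,
}

total = sum(scene.values())
for name, mib in scene.items():
    print(f"{name:40s} {mib:8.0f} MiB")
print(f"{'Total':40s} {total:8.0f} MiB (~{total / 1024:.1f} GiB)")
```

    Even with block-compressed textures, this hypothetical budget already lands around 7 GiB, before the operating system and background applications claim their share of the same 8GB pool.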
    Same Name, Different Game
    Another big issue is how these cards are named and sold. 
    The RX 9060 XT 16GB and RX 9060 XT 8GB are not clearly labeled as different products. They’re just two versions of the same GPU. 
    But that extra memory makes a huge difference. 
    In some games, the 8GB card performs dramatically worse. And yet, unless you know what to look for, you might walk into a store and buy the 8GB version thinking you’re getting the same performance. 
    You’re not. You’re getting a watered-down version with the same name and a silent asterisk.
    This Isn’t Just AMD’s Problem
    Nvidia started this mess with the 4060 Ti naming confusion. AMD just saw the outrage and decided to walk straight into the same buzzsaw. 
    It’s hard not to feel like both companies are treating consumers like they’re too dumb to notice.
    Spoiler: they noticed.
    And this whole ‘VRAM doesn’t matter’ argument? It’s already been debunked by dozens of reviewers. 
    If you’re spending over $300 on a graphics card in 2025, it needs to last more than a year or two. 8GB cards are already struggling. Buying one now is like buying a smartphone in 2025 with 64GB of storage. Sure, it works. Until it doesn’t.
    Steam Data Doesn’t Help AMD’s Case
    AMD and Nvidia both love to point at the Steam Hardware Survey. They say, ‘See? Most people still play at 1080p.’ And that’s true – for now.

    Source: Nvidia
    But what they leave out is that 1440p gaming is growing fast. More gamers are upgrading their setups because 1440p monitors are getting a lot more affordable. 
    Take the Pixio PXC277 Advanced, for instance – a 27-inch curved 1440p monitor with a 165Hz refresh rate and 1ms response time, all for $219.99. A few years ago, a screen like that would’ve cost you double. Now it’s entry-level.
    Gamers are ready to step up their experience. The only thing holding them back is GPU hardware that’s still stuck in 2020. 
    Planned Obsolescence in Disguise
    Here’s the worst part. Companies know full well that 8GB won’t cut it in 2026. 
    But they still sell it, knowing many gamers will only find out when it’s too late – when the stutters kick in, the textures disappear, or the next big title becomes unplayable.
    It’s planned obsolescence disguised as ‘choice.’ And while it’s great to have options at different price points, it should be clear which option is built to last – and which one is built to frustrate. 
    So, Is AMD Actually Screwed? 
    Not right now. In fact, they’re playing the game better than they used to. 
    They’ve learned from past pricing disasters and figured out how to get better launch-day headlines – even if it means faking the MSRP and letting street prices run wild. 
    But this kind of marketing comes at a cost. If AMD keeps making decisions that prioritize short-term wins over long-term trust, they’ll lose the very crowd that once rooted for them. 
    We don’t need two Nvidias. We need AMD to be different – to be better. 
    One Name, Two Very Different Cards
    The RX 9060 XT 16GB might be a good deal. But it’s being overshadowed by the 8GB version’s drama. And the longer AMD keeps playing games with memory and naming, the more it chips away at its hard-earned goodwill. 
    This whole mess could’ve been avoided with one simple move: name the 8GB card something else. Call it the RX 9055. Call it Lite or whatever. Just don’t make it look like the same card when it isn’t. 
    Until then, buyers beware. There’s more going on behind the box art than meets the eye. 

  • AMD Octa-core Ryzen AI Max Pro 385 Processor Spotted On Geekbench: Affordable Strix Halo Chips Are About To Enter The Market

    AMD Octa-core Ryzen AI Max Pro 385 Processor Spotted On Geekbench: Affordable Strix Halo Chips Are About To Enter The Market

    Sarfraz Khan •
    May 31, 2025 at 08:29am EDT

    After a long wait, we finally get a glimpse of a mid-range Zen 5 Strix Halo chip, something that is still rare to find in devices.
    AMD Ryzen AI Max Pro 385 Benchmarked on Geekbench; Scores 2489 Points in Single and 14136 Points in Multi-Core Tests
    It has been quite a while since AMD launched its premium-segment Zen 5 mobile chips. Strix Halo is the strongest lineup yet based on the Zen 5 architecture, offering up to a staggering 16-core/32-thread configuration. However, most machines, including AI mini PCs and laptops, are equipped with the flagship model, even though AMD also launched several other SKUs in the lineup.
    AMD's Ryzen AI Max Pro 395 is usually found in high-end mobile devices, boasting 16 cores/32 threads and a Radeon 8060S, but this is probably the first time we have seen a mid-range 8-core/16-thread SKU. It's the Ryzen AI Max Pro 385, which offers eight Zen 5 cores clocked at 3.6 GHz with a boost clock of up to 5 GHz and carries RDNA 3.5-based Radeon 8050S graphics. Unlike the Pro 395, the Pro 385 is significantly weaker in the CPU department but is fairly strong when it comes to integrated graphics.

    While the iGPU still needs to be tested for a proper comparison, the Radeon 8050S is only 8 Compute Units behind the Radeon 8060S. That will surely keep the 8060S noticeably ahead in graphics performance, but the Radeon 8050S is likely the second-strongest iGPU you will currently find on the mainstream market. As for the chip itself, the Ryzen AI Max Pro 385 shows up in an HP ZBook Ultra G1a 14-inch mobile workstation laptop. We have seen such laptops and mini PCs with faster Strix Halo variants, but it's good to see that manufacturers are going to offer more affordable options.

    The processor scored 2489 points in the single-core and 14136 points in the multi-core test, though that will vary considerably from run to run, and we all know that Geekbench 6 isn't exactly a reliable test platform for CPUs. What's more important is that we can finally see cheaper alternatives to Ryzen AI Max Pro 395-based devices that still offer excellent iGPU performance. Mini PCs and laptops equipped with the Ryzen AI Max Pro 385 can easily play games at 1080p with playable framerates, but keep in mind that many of these devices will be targeted at professionals and content creators.
    Able to offer up to 50 NPU TOPS and over 100 TOPS of AI performance overall, the Ryzen AI Max Pro 385 can be a solid option for AI workloads as well. Ryzen AI Max Pro 395-based laptops and mini PCs usually cost nearly $2,000 and above, but with the availability of the Ryzen AI Max Pro 385, we can finally see sub-$1,500 systems.
    News Sources: Geekbench, Videocardz

  • AMD RX 9070 AI performance benchmark review vs 9070 XT, 7800 XT, Nvidia RTX 5070, 4070


    AMD RX 9070 AI performance benchmark review vs 9070 XT, 7800 XT, Nvidia RTX 5070, 4070

    Sayan Sen

    Neowin
    @ssc_combater007 ·

    May 24, 2025 14:40 EDT

    Earlier this month, we shared the first part of our review of AMD's new RX 9070. It was about the gaming performance of the GPU, and we gave it a 7.5 out of 10. The 9070 XT, in contrast, received a full 10 out of 10.
    The main reason for the lower score on the non-XT was the relatively high price and thus the poorer value it offered compared to the XT. We thought the price was much closer to the XT than it needed to be.
    While the RX 9070 proved to be more power efficient than the XT, for a desktop gaming graphics card, value and performance typically take the front seat compared to something like power efficiency.

    However, that may not be the case for productivity, where factors like power savings carry more weight. So, similar to what we did for the XT model, we are doing a dedicated productivity review for the RX 9070 as well, comparing it against the 9070 XT and 7800 XT, as well as Nvidia's RTX 5070 and 4070.
    AI performance is an important metric these days, and AMD promised big gains here thanks to its underlying architectural improvements. We already had a taste of that with the XT model, so now it's time to see how well the non-XT does.

    Before we get underway: this review is a collaboration between Sayan Sen (author) and Steven Parker, who lent us his test PC. Speaking of which, here are the specs of the test PC:

    Cooler Master MasterBox NR200P MAX
    ASRock Z790 PG-ITX/TB4
    Intel Core i7-14700K with Thermal Grizzly Carbonaut Pad

    T-FORCE Delta RGB DDR5 (2x16GB) 7600MT/s CL36 (XMP Profile)
    2TB Kingston Fury Renegade SSD
    Windows 11 24H2 (Build 26100.3194)

    Drivers used for the 7800 XT, 9070 XT, and 9070 were Adrenalin v24.30.31.03 / 25.3.1 RC (press driver provided by AMD), and for the Nvidia RTX 5070 and 4070, GeForce v572.47 was used.

    (From the left) Sapphire Pulse 9070 XT, Nvidia 5070 FE, and Pulse 9070

    First up, we have Geekbench AI running on ONNX.
    The RTX 5070 gets beaten by both the 9070 XT and 9070 in quantized and single precision (FP32) performance. Similarly, the 4070 gets close to the 9070 in half-precision (FP16) performance, but the latter is an enormous 30% faster in the quantized score and nearly 12.2% better in single precision.
    The reason for this beatdown is the amount of memory available to each card. The Nvidia GPUs have 12GB each and thus only do better in the FP16 precision tests since the other ones are more VRAM-intensive.
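    To put rough numbers on that, here is a minimal sketch of how much memory the same set of weights occupies at FP32 versus FP16. The parameter count is an illustrative assumption, not a figure from Geekbench AI's actual test models.

```python
# Rough weight-memory footprint of the same model at FP32 vs FP16.
# The 3-billion-parameter count is an assumption for illustration only.

params = 3_000_000_000

for name, bytes_per_param in (("FP32 (single precision)", 4), ("FP16 (half precision)", 2)):
    gib = params * bytes_per_param / 2**30
    print(f"{name:24s} ~{gib:4.1f} GiB of weights")

# Prints roughly 11.2 GiB for FP32 vs 5.6 GiB for FP16 -- before
# activations and runtime overhead, the FP32 copy alone already
# crowds a 12 GB card, while a 16 GB card keeps some headroom.
```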
    Next up, we move to the UL Procyon suite, starting with the image generation benchmark.
    We chose the Stable Diffusion XL FP16 test since it is the most intense workload available in the Procyon suite. Similar to what we saw in Geekbench AI, the Nvidia GPUs do relatively better here because it is an FP16 (half precision) workload, which means less VRAM is used.
    So this is something to keep in mind again: if you want to run float32 AI workloads, graphics cards with more than 12 GB of VRAM are likely to come out ahead.
    There is still a big improvement on the RX 9070 compared to the 7800 XT, as we see a ~54% gain. This boost comes from improvements to the core architecture itself, since the VRAM capacities of both cards are the same at 16 GB.
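    For readers who want to try a comparable workload at home, here is a minimal sketch that generates a single SDXL image in FP16 with Hugging Face diffusers. It is not UL's Procyon harness, just a stand-in for the same kind of load; on Radeon cards it assumes a ROCm build of PyTorch (which also exposes the "cuda" device), and the model ID is the public SDXL base checkpoint.

```python
# Minimal SDXL image-generation run in FP16 using Hugging Face diffusers.
# This approximates the kind of load Procyon's SDXL FP16 test exercises;
# it is not UL's benchmark harness.
import time

import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,   # half precision keeps VRAM usage lower
).to("cuda")

start = time.perf_counter()
image = pipe(
    prompt="a studio photo of a graphics card on a desk",
    num_inference_steps=30,
    height=1024,
    width=1024,
).images[0]
print(f"Generated one 1024x1024 image in {time.perf_counter() - start:.1f} s")
image.save("sdxl_fp16_test.png")
```

    Switching torch_dtype to torch.float32 roughly doubles the memory the weights occupy, which is exactly where the 12 GB cards start to struggle.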
    Following image generation, we move to the text generation benchmark.

    In this workload, we see the least impressive gain for the 9070 over the 7800 XT: the former is only up to ~7.25% faster here. The 9070 also doesn't perform as well as the Nvidia 4070 in the Phi and Mistral models, although it does do better in both Llama tests.
    Another odd result stood out here: the 5070 underperformed all the other cards, including the 7800 XT, in Llama 2. We ran each test three times and took the best score, so we are not exactly sure what happened there.
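    Procyon runs the Phi, Mistral, and Llama models through its own harness; as a rough home-grown stand-in, a tokens-per-second measurement with Hugging Face transformers looks something like the sketch below. The model ID is just a small, openly available placeholder, and on Radeon cards this again assumes a ROCm build of PyTorch.

```python
# Rough tokens-per-second measurement for local text generation.
# Not Procyon's harness; the model below is a small placeholder.
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # swap in the model you want to measure

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

inputs = tokenizer("Explain why VRAM capacity matters for GPU inference.",
                   return_tensors="pt").to("cuda")

start = time.perf_counter()
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
elapsed = time.perf_counter() - start

new_tokens = output.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens} tokens in {elapsed:.1f} s -> {new_tokens / elapsed:.1f} tok/s")
```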
    Wrapping up AI testing, we measured OpenCL throughput in Geekbench compute benchmark.
    The RX 9070 did not fare well here at all, even falling behind the 7800 XT, and it is significantly slower than the other three cards. Interestingly, even the RTX 5070 could not beat the 4070 in OpenCL, which perhaps suggests that OpenCL optimization has not been a priority for either AMD or Nvidia this time around. It could also be an issue with Geekbench itself.
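    Geekbench's OpenCL compute score can't be reproduced in a few lines, but when a result looks this odd it is worth at least confirming which OpenCL platform, driver, and device the benchmark is actually seeing. A quick query with pyopencl (an assumption on our part: any OpenCL info tool such as clinfo works just as well) looks like this:

```python
# Enumerate OpenCL platforms/devices and the key properties they report.
# This only inspects the runtime; it does not rerun Geekbench's workloads.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        print(f"  Device: {device.name}")
        print(f"    Compute units : {device.max_compute_units}")
        print(f"    Global memory : {device.global_mem_size / 2**30:.1f} GiB")
        print(f"    Driver version: {device.driver_version}")
```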
    Conclusion
    We reach the end of our productivity performance review of the 9070, and we have to say we are fairly impressed, though with a slight tinge of disappointment. It is clear that the 9070, as well as the 9070 XT, really shines when the inferencing precision is higher, and that is thanks to the larger memory buffers they have compared to the Nvidia 5070. On FP16, though, the Nvidia cards pull ahead.
    Still, RDNA 4, including the RX 9070, sees a big boost over RDNA 3. As we noted in the image generation benchmark, which is an intense load, there is over a 50% gain.
    So what do we make of the RX 9070 as productivity hardware? We think it's a good card. If someone is looking for a GPU around this price that can do both gaming and crunch through some AI tasks, this is a good card to pick up, especially if you are dealing with single-precision workloads or other VRAM-intensive tasks. And we already know it is efficient, so there's that too.
    For those looking for a GPU that can handle more, however, AMD recently unveiled the Radeon AI PRO R9700, which is essentially a 32 GB refresh of the 9070 XT with some additional workstation-oriented optimizations.
    Considering everything, we rate AMD's RX 9070 a 9 out of 10 for its AI performance. Price is less of a factor for those eyeing productivity use cases than for those considering the GPU for gaming, and as such, we felt it did quite decently overall; it can be especially handy if you need more than 12 GB of VRAM.
    Purchase links: RX 9070 / XT. As an Amazon Associate, we earn from qualifying purchases.

    Tags

    Report a problem with article

    Follow @NeowinFeed
    #amd #performance #benchmark #review #nvidia
    AMD RX 9070 AI performance benchmark review vs 9070 XT, 7800 XT, Nvidia RTX 5070, 4070
    Review  When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works. AMD RX 9070 AI performance benchmark review vs 9070 XT, 7800 XT, Nvidia RTX 5070, 4070 Sayan Sen Neowin @ssc_combater007 · May 24, 2025 14:40 EDT Earlier this month, we shared the first part of our review of AMD's new RX 9070. It was about the gaming performance of the GPU, and we gave it a 7.5 out of 10. The 9070 XT, in contrast, received a full 10 out of 10. The main reason for the lower score on the non-XT was the relatively high price and thus the poorer value it offered compared to the XT. We thought the price was much closer to the XT than it needed to be. While the RX 9070 proved to be more power efficient than the XT, for a desktop gaming graphics card, value and performance typically take the front seat compared to something like power efficiency. However, that may not be the case in terms of productivity which also takes into account things like power savings. Thus, similar to the one we did for the XT model, we are doing a dedicated productivity review for the RX 9070 as well where we compare to against the 9070 XT, 7800 XT, as well as Nvidia's 5070 and 4070. AI performance is a very important metric in today's world and AMD also promised big improvements thanks to its underlying architectural improvements. We already had a taste of that with the XT model so now it's time to see how good the non-XT does here. Before we get underway, this is a collaboration between Sayan Sen, and Steven Parker who lent us their test PC for this review. Speaking of which, here are the specs of the test PC: Cooler Master MasterBox NR200P MAX ASRock Z790 PG-ITX/TB4 Intel Core i7-14700K with Thermal Grizzly Carbonaut Pad T-FORCE Delta RGB DDR57600MT/s CL362TB Kingston Fury Renegade SSD Windows 11 24H2Drivers used for the 7800 XT, 9070 XT and 9070 were Adrenaline v24.30.31.03 / 25.3.1 RC, and for the Nvidia RTX 5070 and 4070, GeForce v572.47 was used.Sapphire Pulse 9070 XT, Nvidia 5070 FE, and Pulse 9070First up, we have Geekbench AI running on ONNX. The RTX 5070 gets beaten by both the 9070 XT and 9070 in quantized and single precisionperformance. Similarly, the 4070 gets close to the 9070 in half-precisionperformance, but the latter is an enormous 30% faster in quantized score and nearly 12.2% better in single precision. The reason for this beatdown is the amount of memory available to each card. The Nvidia GPUs have 12GB each and thus only do better in the FP16 precision tests since the other ones are more VRAM-intensive. Next up, we move to UL Procyon suite starting with the Image generation benchmark. We chose the Stable Diffusion XL FP16 test since this is the most intense workload available on Procyon suite. Similar to what we saw on Geekbench AI, the Nvidia GPUs to relatively better here as it is FP16 or half precision which means the used VRAM is lower. So this is something to keep in mind again, if you wish to float32 AI workloads, it is likely that graphics cards with greater than 12 GB buffers would emerge as victors. There is still a big improvement on the RX 9070 compared to the 7800 XT as we see a ~54% gain. This boost is due to improvements to the core architecture itself as VRAM capacities of both cards are the same at 16 Gigs. Following image generation, we move to the text generation benchmark. In this workload, we see the least impressive performance of the 9070 in terms of how much it improves over the 7800 XT. The former is up to ~7.25% faster here. 
The 9070 is also not as well-performing as the Nvidia 4070 in Phi and Mistral models, although it does do better in both the Llama tests. Another odd result stood out here where the 5070 underperformed all the cards including the 7800 XT in Llama 2. We ran each test three times and considered the best score and so we are not exactly sure what happened here. Wrapping up AI testing, we measured OpenCL throughput in Geekbench compute benchmark. The RX 9070 did not fare well here at all even falling behind the 7800 XT and it is significantly slower than the three other cards. Interestingly, even the RTX 5070 could not beat the 4070 on OpenCL so perhaps this suggests that OpenCL optimization has not been a priority for either AMD or Nvidia this time. It could also be an issue with Geekbench itself. Conclusion We reach the end of our productivity performance review of the 9070 and we have to say we are fairly impressed but there is also a slight bit of disappointment. It is clear that the 9070 as well as the 9070 XT really shine when inferencing precision is higher, and that is due to the higher memory buffers they possess compared to the Nvidia 5070. But on FP16, the Nvidia cards pull ahead. Still RNDA 4, including the RX 9070, see big boost over RDNA 3. As we noted in the image generation benchmark, which is an intense load, there is over a 50% gain. So what do we make of the RX 9070 as a productivity hardware? We think it's a good card. If someone was looking for a GPU around that can do both gaming and crunch through some AI tasks this is a good card to pick up especially if you are dealing with single precision situations or some other VRAM-intense tasks. And we already know it is efficient so there's that too. For those however looking for a GPU that can deal with more, AMD recently unveiled the Radeon AI PRO R9700 which is essentially a 32 GB refresh of the 9070 XT with some additional workstation-based optimizations. Considering everything, we rate AMD's RX 9070 a 9 out of 10 for its AI performance. Price is less of a factor for those looking at productivity cases compared to ones considering the GPU for gaming, and as such, we felt it did quite decent overall and can be especially handy if you need more than 12 GB. Purchase links: RX 9070 / XTAs an Amazon Associate we earn from qualifying purchases. Tags Report a problem with article Follow @NeowinFeed #amd #performance #benchmark #review #nvidia
    WWW.NEOWIN.NET
    AMD RX 9070 AI performance benchmark review vs 9070 XT, 7800 XT, Nvidia RTX 5070, 4070
Review · Sayan Sen, Neowin (@ssc_combater007) · May 24, 2025 14:40 EDT

Earlier this month, we shared the first part of our review of AMD's new RX 9070. It covered the gaming performance of the GPU, and we gave it a 7.5 out of 10. The 9070 XT, in contrast, received a full 10 out of 10. The main reason for the lower score on the non-XT was its relatively high price and thus the poorer value it offered compared to the XT; we thought the price was much closer to the XT than it needed to be.

While the RX 9070 proved to be more power efficient than the XT, for a desktop gaming graphics card, value and performance typically take the front seat over something like power efficiency. That may not be the case for productivity, however, where power savings also count. Thus, similar to the one we did for the XT model, we are doing a dedicated productivity review for the RX 9070 as well, where we compare it against the 9070 XT and 7800 XT, as well as Nvidia's 5070 and 4070. AI performance is a very important metric in today's world, and AMD also promised big improvements thanks to its underlying architectural changes. We already had a taste of that with the XT model, so now it's time to see how well the non-XT does here.

Before we get underway: this review is a collaboration between Sayan Sen (author) and Steven Parker, who lent us their test PC. Here are the specs of the test PC:

- Cooler Master MasterBox NR200P MAX
- ASRock Z790 PG-ITX/TB4
- Intel Core i7-14700K with Thermal Grizzly Carbonaut Pad
- T-FORCE Delta RGB DDR5 (2x16GB) 7600MT/s CL36 (XMP Profile)
- 2TB Kingston Fury Renegade SSD
- Windows 11 24H2 (Build 26100.3194)

Drivers used for the 7800 XT, 9070 XT and 9070 were Adrenalin v24.30.31.03 / 25.3.1 RC (press driver provided by AMD), and for the Nvidia RTX 5070 and 4070, GeForce v572.47 was used.

(From the left) Sapphire Pulse 9070 XT, Nvidia 5070 FE, and Pulse 9070

First up, we have Geekbench AI running on ONNX. The RTX 5070 gets beaten by both the 9070 XT and 9070 in quantized and single-precision (FP32) performance. Similarly, the 4070 gets close to the 9070 in half-precision (FP16) performance, but the latter is an enormous 30% faster in quantized score and nearly 12.2% better in single precision (FP32). The reason for this beatdown is the amount of memory available to each card: the Nvidia GPUs have 12GB each and thus only do better in the FP16 tests, since the other workloads are more VRAM-intensive.

Next up, we move to the UL Procyon suite, starting with the image generation benchmark. We chose the Stable Diffusion XL FP16 test since this is the most intense workload available in the Procyon suite. Similar to what we saw in Geekbench AI, the Nvidia GPUs do relatively better here because it is an FP16 (half-precision) workload, which means less VRAM is used. So this is something to keep in mind: if you wish to run float32 AI workloads, it is likely that graphics cards with buffers larger than 12 GB will emerge as victors. There is still a big improvement on the RX 9070 compared to the 7800 XT, as we see a ~54% gain. This boost is due to improvements to the core architecture itself, as the VRAM capacities of both cards are the same at 16 GB.

Following image generation, we move to the text generation benchmark. In this workload, we see the least impressive improvement of the 9070 over the 7800 XT: the former is up to ~7.25% faster here. The 9070 also does not perform as well as the Nvidia 4070 in the Phi and Mistral models, although it does do better in both Llama tests. Another odd result stood out here: the 5070 underperformed all the cards, including the 7800 XT, in Llama 2. We ran each test three times and considered the best score, so we are not exactly sure what happened there.

Wrapping up AI testing, we measured OpenCL throughput in the Geekbench compute benchmark. The RX 9070 did not fare well here at all, even falling behind the 7800 XT, and it is significantly slower than the three other cards. Interestingly, even the RTX 5070 could not beat the 4070 in OpenCL, so perhaps this suggests that OpenCL optimization has not been a priority for either AMD or Nvidia this time. It could also be an issue with Geekbench itself.

Conclusion

We reach the end of our productivity performance review of the 9070, and we have to say we are fairly impressed, though there is also a slight bit of disappointment. It is clear that the 9070, as well as the 9070 XT, really shine when inferencing precision is higher, and that is due to the larger memory buffers they possess compared to the Nvidia 5070. On FP16, though, the Nvidia cards pull ahead. Still, RDNA 4 cards, including the RX 9070, see a big boost over RDNA 3 (the 7800 XT): as we noted in the image generation benchmark, which is an intense load, there is over a 50% gain.

So what do we make of the RX 9070 as productivity hardware? We think it's a good card. If someone is looking for a GPU around $550 that can do both gaming and crunch through some AI tasks, this is a good card to pick up, especially for single-precision or other VRAM-intensive work. And we already know it is efficient, so there's that too. For those looking for a GPU that can deal with more, AMD recently unveiled the Radeon AI PRO R9700, which is essentially a 32 GB refresh of the 9070 XT with some additional workstation-focused optimizations.

Considering everything, we rate AMD's RX 9070 a 9 out of 10 for its AI performance. Price is less of a factor for productivity use cases than for gaming, and as such, we felt it did quite decently overall and can be especially handy if you need more than 12 GB.
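To make the VRAM point above concrete, here is a minimal Python sketch (not from the review) that estimates the weight footprint of a model at different inference precisions. The 4-billion-parameter model size and the bytes-per-parameter figures are illustrative assumptions, and real workloads also need memory for activations and runtime buffers, so treat the numbers only as a rough lower bound when judging whether a job fits a 12 GB or 16 GB card.

```python
# Rough lower-bound estimate of model weight memory at different precisions.
# Assumption: the footprint is dominated by weights; activations, KV caches and
# runtime buffers (which real benchmarks also allocate) are ignored here.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8 (quantized)": 1}

def weight_footprint_gb(num_params: float, precision: str) -> float:
    """Approximate weight memory in GB for a given parameter count and precision."""
    return num_params * BYTES_PER_PARAM[precision] / 1e9

if __name__ == "__main__":
    params = 4e9  # hypothetical 4B-parameter model, chosen only for illustration
    for precision in BYTES_PER_PARAM:
        gb = weight_footprint_gb(params, precision)
        verdict = "fits" if gb < 12 else "exceeds"
        print(f"{precision:>17}: ~{gb:.0f} GB of weights ({verdict} a 12 GB buffer)")
```

By this arithmetic, the same hypothetical model that fits comfortably in 12 GB at FP16 spills past it at FP32, which matches the pattern above where the 12 GB Nvidia cards hold up in the half-precision tests but fall behind in the more VRAM-hungry ones.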
  • 4 graphics cards you should consider instead of the RTX 5060

    Nvidia’s RTX 5060 is finally here, and many people hoped it’d put up a fight against some of the best graphics cards. Does it really, though? Reviewers are split on the matter. Alas, I’m not here to judge the card. I’m here to show you some alternatives.
    While Nvidia’s xx60 cards typically become some of the most popular GPUs of any given generation, they’re not the only option you have right now. The RTX 5060 might not even be the best option at that price point. Below, I’ll walk you through four GPUs that I think you should buy instead of the RTX 5060.

    Nvidia GeForce RTX 4060
    Jacob Roach / Digital Trends
    I’m not sure whether this will come as a surprise or not, but based on current pricing and benchmarks, the GPU I recommend buying instead of the RTX 5060 is its last-gen equivalent.
The RTX 4060 is one of the last RTX 40-series graphics cards that are still readily available around MSRP. I found one for $329 at Newegg, and it’s an overclocked model, meaning slightly faster performance than the base version. However, you might as well just buy a used RTX 4060 if you find it from a trustworthy source, as that’ll cost you a whole lot less.
The RTX 5060 and the RTX 4060 have a lot in common. Spec-wise, they’re not at all far apart, although Nvidia’s newer Blackwell architecture and the switch to GDDR7 VRAM give the newer GPU a bit more oomph. But, unfortunately, both cards share the same 8GB of VRAM — an increasingly small amount in today’s gaming world — and the same narrow 128-bit bus.
    Some reviewers note that the RTX 5060 isn’t far ahead of the RTX 4060 in raw performance. The newer card gets the full benefit of Nvidia’s Multi-Frame Generation, though. Overall, they’re pretty comparable, but if you can score a used RTX 4060 for cheap, I’d go for it.
AMD Radeon RX 7600 XT (or the RX 9060 XT)
Jacob Roach / Digital Trends
    I wasn’t a big fan of the RX 7600 XT 16GB upon launch, and I still have some beef with that card. Much like Nvidia’s options, AMD equipped its mainstream GPU with a really narrow memory interface, stifling the bandwidth and holding back its performance. Still, in the current climate, I’ll take that 16GB with the 128-bit bus over a card that has the same interface and only sports 8GB VRAM.
The cheapest RX 7600 XT 16GB costs around $360, and you can find it on the shelves with ease. But it’s the same scenario here — if you can find it used from a trustworthy source, it might be worth it, assuming you’re on a tight budget. The state of the GPU market as of late has made me appreciate second-hand GPUs a lot more.
    The RX 7600 XT is slower than the RTX 5060, and it’ll fall behind in ray tracing, but it gives you plenty of RAM where Nvidia’s card offers very little. That alone makes it worthy of your consideration.
AMD’s upcoming RX 9060 XT could be a great option here, too. I expect it to offer better ray tracing capabilities than the RX 7600 XT, and it’ll have the same $300 price tag as Nvidia’s RTX 5060.
    Nvidia GeForce RTX 5060 Ti 16GB
    Gigabyte
If your budget is a little bit flexible, you could go one level up and get the RTX 5060 Ti with 16GB of RAM. Unfortunately, the cheapest options are at around $479 right now, which is well over the MSRP and a whopping $180 more than the RTX 5060. However, for that price, you’ll get yourself a GPU that’s better suited to stand the test of time.
    With 16GB of video memory and the full benefit of GDDR7 RAM, the RTX 5060 Ti 16GB offers an upgrade over the last-gen version. It’s not perfect by any stretch, though. Reviewers put the GPU below the RX 9070 non-XT, the RTX 5070, and even the RTX 4070 when you consider pure rasterization. This means no so-called “fake frames,” which is what Nvidia’s DLSS 4 delivers.
    That leaves the RTX 5060 Ti in an odd spot. Basically, if your budget can stretch to it, the RX 9070 and the RTX 5070 are both better cards; they’re also a lot more expensive.
    Intel Arc B580
    Jacob Roach / Digital Trends
Less demanding gamers might find an option in Intel’s Arc B580. Upon launch, the GPU surprised pretty much everyone with its excellent performance-per-dollar ratio. The downside? That ratio is now a lot less impressive, because unexpected demand and low stock levels brought the price of the Arc B580 far above its $250 recommended list price (MSRP).
    The Arc B580 is a little bit slower than the RTX 4060 Ti, so it’ll be slower than the RTX 5060, too. It also can’t put up a fight as far as ray tracing goes. But it’s a budget-friendly GPU and a solid alternative to the RTX 5060 if you’d rather pick up something else this time around.
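As a rough way to line up the options above on memory alone, here is a small Python sketch (my own illustration, not from the article) that computes dollars per gigabyte of VRAM from the street and list prices quoted in this piece. The Arc B580’s 12GB capacity comes from Intel’s published spec rather than this article, prices will drift, and the metric deliberately ignores raw performance, ray tracing and upscaling features, so treat it as one data point rather than a verdict.

```python
# Dollars per GB of VRAM for the cards discussed above.
# Prices are the street/list figures quoted in the article and will change over time.
cards = {
    "RTX 4060 (8GB)":     {"price_usd": 329, "vram_gb": 8},
    "RX 7600 XT (16GB)":  {"price_usd": 360, "vram_gb": 16},
    "RTX 5060 Ti (16GB)": {"price_usd": 479, "vram_gb": 16},
    "Arc B580 (12GB)":    {"price_usd": 250, "vram_gb": 12},  # MSRP; street price is currently higher
    "RTX 5060 (8GB)":     {"price_usd": 299, "vram_gb": 8},
}

# Sort from best to worst dollars-per-GB ratio.
for name, c in sorted(cards.items(), key=lambda kv: kv[1]["price_usd"] / kv[1]["vram_gb"]):
    print(f"{name:<20} ${c['price_usd'] / c['vram_gb']:.0f} per GB of VRAM")
```

Unsurprisingly, the 16GB cards and the B580 (at MSRP) look far better per gigabyte than the 8GB options, which is essentially the argument running through this whole list.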
    My advice? Wait it out
    Jacob Roach / Digital Trends
    It’s not a great time to buy a GPU.
The more successful and impressive cards from this generation, such as AMD’s RX 9070 XT or Nvidia’s RTX 5070 Ti, keep selling above MSRP. Those that aren’t quite as exciting may stick around MSRP (which is where the RTX 5060 sits right now, mere days after launch)… but that doesn’t make up for their shortcomings.
    Given the fact that reviews of the RTX 5060 are still pretty scarce, I’d wait it out for a week or two. Read some comparisons, check out the prices, and then decide. Gambling on a GPU just because the previous generations were solid doesn’t work anymore, and that’s now clearer than ever.
WWW.DIGITALTRENDS.COM
  • Nvidia GeForce RTX 5060 review: better than console performance - but not enough VRAM

The RTX 5060 is here, finally completing the 50-series lineup that debuted five months ago with the 5090. The new "mainstream" graphics card is far from cheap at $299/£270, but ought to offer reasonable performance and efficiency while adding the multi frame generation feature that's exclusive to this generation of GPUs. However, the 5060 also ships with just 8GB of VRAM, which could be a big limitation for those looking to play the latest graphics showcases.
    Before we get into our results, it's worth mentioning why this review is a little later than normal, coming a few days after the cards officially went on sale on May 19th. Normally, Nvidia or their partners send a graphics card and the necessary drivers anywhere from a couple of days to a week before the embargo date, which is typically a day before the cards go on sale. That's good for us, because it allows us to do the in-depth analysis that we prefer and still publish at the same time as other outlets, and it's good for potential buyers, as they can get a sense of value and performance and therefore make an informed decision about whether to buy a card or not - from what is often a limited supply at launch.
    For the RTX 5060 launch, Nvidia - via Asus - delivered a card in good time ahead of its release, but the drivers weren't released to reviewers until the card went on sale on May 19th, coinciding with Nvidia's Computex presentation. Without the drivers, the card is a paperweight, so any launch day coverage is necessarily limited - and in many cases, graphics cards went out of stock before the usual tranche of reviews went live from the tech press. It's a frustrating situation all around, and I doubt that even Nvidia's PR department will be thrilled that most reviews start with the same complaint.

Nvidia's GeForce RTX 5060 gets the Digital Foundry video review treatment. Watch on YouTube.
    Following the public release of the drivers, we've been benchmarking around the clock to figure out just how performant the new RTX 5060 is, where its strengths and weaknesses lie, and where it falls compared to the rest of the 50-series line-up, prior generation RTX cards and competing AMD models.
    Looking at the specs, you can see that the RTX 5060 is based around a cut-down version of the same GB206 die that powered the RTX 5060 Ti. The 5060 has 83 percent of the core count and rated power of the full-fat 5060 Ti design, with an innocuous three percent drop to boost clocks and the same 448GB/s of memory bandwidth.
    Unlike the 5060 Ti, however, which debuted in 8GB and 16GB models, the 5060 is only available with 8GB of frame buffer memory - a limitation we'll discuss in some depth later. For your 16.6 percent reduction to core count and TGP versus the 5060 Ti, you pay around 20 percent less - so the 5060 ought to be slightly better value.

Marvel's Spider-Man 2 and Monster Hunter World - 1440p resolution. We aren't at native resolution, and we aren't on ultra settings, but both the 8GB RTX 5060 and RTX 5060 Ti see performance collapse. The 16GB RTX 5060 Ti works fine and delivers good performance - proof positive that 8GB is too much of a limiting factor for these cards. | Image credit: Digital Foundry

| | RTX 5070 Ti | RTX 5070 | RTX 5060 Ti | RTX 5060 |
| --- | --- | --- | --- | --- |
| Processor | GB203 | GB205 | GB206 | GB206 |
| Cores | 8,960 | 6,144 | 4,608 | 3,840 |
| Boost Clock | 2.45GHz | 2.51GHz | 2.57GHz | 2.50GHz |
| Memory | 16GB GDDR7 | 12GB GDDR7 | 16GB or 8GB GDDR7 | 8GB GDDR7 |
| Memory Bus Width | 256-bit | 192-bit | 128-bit | 128-bit |
| Memory Bandwidth | 896GB/s | 672GB/s | 448GB/s | 448GB/s |
| Total Graphics Power | 300W | 250W | 180W | 150W |
| PSU Recommendation | 750W | 650W | 450W | 450W |
| Price | $749/£729 | $549/£539 | $429/£399 (16GB), $379/£349 (8GB) | $299/£270 |
| Release Date | February 20th | March 5th | April 16th | May 19th |
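The memory bandwidth row in the table follows directly from bus width times per-pin data rate. As a sanity check, here is a small Python sketch of my own, using only the bus widths and bandwidth figures quoted above, that recovers the effective GDDR7 data rate the table implies:

```python
# Bandwidth (GB/s) = bus_width_bits / 8 (bytes) * per-pin data rate (Gbps).
# Rearranged here to recover the effective data rate implied by the table.
specs = {
    "RTX 5070 Ti": {"bus_bits": 256, "bandwidth_gbs": 896},
    "RTX 5070":    {"bus_bits": 192, "bandwidth_gbs": 672},
    "RTX 5060 Ti": {"bus_bits": 128, "bandwidth_gbs": 448},
    "RTX 5060":    {"bus_bits": 128, "bandwidth_gbs": 448},
}

for name, s in specs.items():
    data_rate_gbps = s["bandwidth_gbs"] * 8 / s["bus_bits"]  # Gbit/s per pin
    print(f"{name:<12} {s['bus_bits']}-bit bus at {data_rate_gbps:.0f} Gbps -> {s['bandwidth_gbs']} GB/s")
```

All four cards work out to the same ~28 Gbps effective rate, which is why the 5060 and 5060 Ti share an identical 448GB/s figure: the 128-bit bus, not the memory speed, is the limit they have in common.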

    There's no RTX 5060 Founders Edition, as you'd perhaps expect for a mainstream model, with various third-party cards available in a range of sizes. The RTX 5060 model we received is the Asus Prime model, an over-engineered 2.5-slot, tri-fan design that is nonetheless described as "SFF-ready" due to its relatively modest 268mm length. On top of the robust industrial design, the card features a dual BIOS with "quiet" and "performance" options - always useful. In this case however, the cooler is so large that even the "performance" option is very, very quiet. The card ships with this preset and we recommend it stays there.
    Hilariously, the manufacturer product page recommends a 750W or 850W Asus power supply, though the specs page for the same model makes a more sane 550W recommendation. Regardless, you'll be good to go with a single eight-pin power connector. In terms of ports, we're looking at the RTX 50-series standard assortment, including one HDMI 2.1b and three DisplayPort 2.1b.
    Like the RTX 5060 Ti - but not AMD's just-announced Radeon RX 9060 XT - the RTX 5060 uses a PCIe 8x connection. That's perfectly fine on a modern PCIe 5.0 or 4.0 slot, but potentially problematic on earlier motherboards with PCIe 3.0 slots - something we'll test out in more detail on page eight.
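To put the x8 link concern into numbers, here is a short Python sketch comparing the approximate one-way bandwidth of an eight-lane link across PCIe generations. The per-lane throughput values are the standard published figures after encoding overhead, not measurements from this review.

```python
# Approximate usable one-way bandwidth per PCIe lane, in GB/s, after encoding overhead.
GBS_PER_LANE = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969, "PCIe 5.0": 3.938}
LANES = 8  # the RTX 5060 exposes an x8 electrical connection

for gen, per_lane in GBS_PER_LANE.items():
    print(f"{gen}: x{LANES} link ~ {per_lane * LANES:.1f} GB/s one way")
```

On a PCIe 3.0 board, an x8 card gets roughly half the link bandwidth it would have on a 4.0 slot, and with only 8GB of VRAM the 5060 is more likely to shuffle data over that link, which is exactly the scenario examined later in the review.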
    For our testing, we'll be pairing the RTX 5060 with a bleeding-edge system based around the fastest gaming CPU - the AMD Ryzen 7 9800X3D. We also have 32GB of Corsair DDR5-6000 CL30 memory, a high-end Asus ROG Crosshair X870E Hero motherboard and a 1000W Corsair PSU.
    With all that said, let's get into the benchmarks.
    Nvidia GeForce RTX 5060 Analysis

WWW.EUROGAMER.NET
  • SteamOS adds support for Lenovo Legion Go, Asus ROG Ally, and other AMD handheld PCs

    In brief: Valve has moved forward with its long-teased plan to bring SteamOS to more devices beyond the Steam Deck. The stable build adds support for systems from Lenovo, Asus, and others, offering a more stable Windows alternative for handheld gaming PCs. It also introduces new features and bug fixes.
    Now available for installation, SteamOS 3.7.8 introduces official support for the Lenovo Legion Go S and improves compatibility with other handheld gaming PCs, including the Asus ROG Ally. The release could dramatically alter the user experience on these devices.
    SteamOS's gamepad-friendly interface is better suited to small screens than Windows 11, giving the Steam Deck a notable edge over more powerful portable PCs. Recent updates to the Linux-based system have gradually revealed Valve's broader release plans, with the latest patch opening it up for public testing. The operating system now uses a newer Arch Linux base, the Linux kernel has been upgraded to version 6.11, and desktop mode runs on Plasma 6.2.5.

    Early adopters should download a SteamOS recovery image and follow Valve's setup instructions. Valve optimized 3.7.8 specifically for handheld devices, but it currently supports only PCs with AMD hardware and an NVMe SSD.
    The system requirements likely cover devices built on similar AMD APUs, including the Ryzen 8840U, Ryzen AI HX 370, Ryzen Z1, and Ryzen Z2. Valve provides specific installation instructions for the original Lenovo Legion Go and the ROG Ally. However, the latest build should theoretically support handhelds such as the GPD Win, Ayaneo 3, and Zotac Zone. The Intel-powered MSI Claw remains the only unsupported portable PC, and whether Valve plans to add Intel support is still unclear.

    Adventurous testers will likely explore whether "AMD hardware" extends to desktops or other Ryzen-based form factors. It remains unclear if SteamOS supports Radeon dedicated GPUs and alternative wireless chips.
The update has a useful new feature that lets users set a battery charge limit, with Valve recommending an 80 percent cap to extend battery life on frequently docked devices. It is functionally similar to the optional 90 percent cap Nintendo recently introduced for the upcoming Switch 2. SteamOS 3.7.8 also adds frame rate limits for internal and external VRR displays, while Bluetooth controllers can now wake the original LCD Steam Deck from sleep. As this is the first version 3.7 build to reach the stable channel, numerous features have also exited beta.
    Lenovo plans to launch the Legion Go S with SteamOS pre-installed in the coming weeks, offering it at a significantly lower price than the Windows 11 version.
WWW.TECHSPOT.COM