• Asus ROG Xbox Ally, ROG Xbox Ally X to Start Pre-Orders in August, Launch in October – Rumour

    A new report indicates that the ROG Xbox Ally will be priced at around €599, while the more powerful ROG Xbox Ally X will cost €899.

    Posted By Joelle Daniels | On 16th, Jun. 2025

    While Microsoft and Asus have unveiled the ROG Xbox Ally and ROG Xbox Ally X handheld gaming systems, the companies have yet to confirm prices or release dates for the two systems. While the announcement mentioned that they will launch later this year, a new report, courtesy of leaker Extas1s, indicates that pre-orders for both devices will kick off in August, with the launch then happening in October. As noted by Extas1s, the lower-powered ROG Xbox Ally is expected to be priced at around €599; the leaker claims to have corroborated the pricing with two different Europe-based retailers. The more powerful ROG Xbox Ally X, on the other hand, is expected to be priced at €899, which would put it in line with Asus's own ROG Ally X.

    Previously, Asus senior manager of marketing content for gaming, Whitson Gordon, had revealed that pricing and power use were the two biggest reasons why neither the ROG Xbox Ally nor the ROG Xbox Ally X features an OLED display. Instead, both systems come equipped with 7-inch 1080p 120 Hz LCD displays with variable refresh rate (VRR) support.

    “We did some R&D and prototyping with OLED, but it’s still not where we want it to be when you factor VRR into the mix and we aren’t willing to give up VRR,” said Gordon. “I’ll draw that line in the sand right now. I am of the opinion that if a display doesn’t have variable refresh rate, it’s not a gaming display in the year 2025 as far as I’m concerned, right? That’s a must-have feature, and OLED with VRR right now draws significantly more power than the LCD that we’re currently using on the Ally and it costs more.”

    Explaining further that the decision ultimately also came down to keeping pricing for both systems at reasonable levels, since buyers often tend to get handheld gaming systems as their secondary machines, Gordon noted that both handhelds would carry much higher price tags if OLED displays were used.

    “That’s all I’ll say about price,” said Gordon. “You have to align your expectations with the market and what we’re doing here. Adding 32GB, OLED, Z2 Extreme, and all of those extra bells and whistles would cost a lot more than the price bracket you guys are used to on the Ally, and the vast majority of users are not willing to pay that kind of price.”

    Shortly after the announcement, Microsoft and Asus released a video in which the two companies spoke about the various features of the ROG Xbox Ally and ROG Xbox Ally X. The video also shows an early hardware prototype of the handheld built inside a cardboard box. The ROG Xbox Ally runs on an AMD Ryzen Z2A chip, and has 16 GB of LPDDR5X-6400 RAM and 512 GB of storage. The ROG Xbox Ally X, on the other hand, runs on an AMD Ryzen Z2 Extreme chip, and has 24 GB of LPDDR5X-8000 RAM and 1 TB of storage. Both systems run on Windows.
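    The memory speeds quoted above translate directly into peak bandwidth, which matters on an APU that shares RAM between CPU and GPU. A back-of-the-envelope sketch (the 128-bit bus width is an assumption based on current Ally-class hardware, not something the report confirms):

```python
# Peak theoretical bandwidth = transfer rate (MT/s) x bus width in bytes.
# The 128-bit (16-byte) bus is an assumption, not from the report.
BUS_BYTES = 128 // 8

def peak_bandwidth_gbs(mt_per_s: int) -> float:
    """Peak bandwidth in GB/s for a given LPDDR5X transfer rate (MT/s)."""
    return mt_per_s * BUS_BYTES / 1000

ally = peak_bandwidth_gbs(6400)    # ROG Xbox Ally:   LPDDR5X-6400
ally_x = peak_bandwidth_gbs(8000)  # ROG Xbox Ally X: LPDDR5X-8000

print(f"Ally:   {ally:.1f} GB/s")    # 102.4 GB/s
print(f"Ally X: {ally_x:.1f} GB/s")  # 128.0 GB/s
print(f"Uplift: {(ally_x / ally - 1) * 100:.0f}%")  # 25%
```

    Under that assumption, the Ally X's faster RAM alone buys a roughly 25% bandwidth advantage before the Z2 Extreme's bigger GPU enters the picture.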

    GAMINGBOLT.COM
  • You can snag this 64GB Crucial DDR5-5600 SO-DIMM kit for just $130 at Amazon

    According to data from CamelCamelCamel, this 64GB Crucial DDR5-5600 kit is now only $130 at Amazon, its lowest price to date.
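    A quick way to judge a RAM deal like this is cost per gigabyte; a small sketch using the $130 / 64GB figures from the article:

```python
# Cost per gigabyte for the 64GB Crucial DDR5-5600 kit at its $130 sale price.
price_usd = 130
capacity_gb = 64

cost_per_gb = price_usd / capacity_gb
print(f"${cost_per_gb:.2f}/GB")  # prints "$2.03/GB"
```

    Around $2/GB for a DDR5-5600 SO-DIMM kit is the kind of figure worth comparing against other kits before buying.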
    WWW.TOMSHARDWARE.COM
  • The latest iPhone 17 rumors: A18 chip, smaller Dynamic Island, more

    According to a recent investor note from GF Securities analyst Jeff Pu, Apple might have a pair of surprises in store for the iPhone 17 lineup this fall. Most notably, Pu has seemingly changed his prediction that the entire iPhone 17 lineup will use a version of the A19 chip. Instead, he now believes the base model iPhone 17 will use the same A18 chip that’s used in the iPhone 16.

    Pu had previously predicted that the entire iPhone 17 lineup would use the A19 chip and the differentiating factor would be the RAM. The analyst had said the iPhone 17 and iPhone 17 Air would have 8GB, while the iPhone 17 Pro and iPhone 17 Pro Max would have 12GB of RAM.
    Now, Pu claims that the iPhone 17 will use 8GB of RAM, while the iPhone 17 Air will feature 12GB of RAM just like the Pro models.
    Also in this analyst note, Pu says that all four iPhone 17 models will adopt a new metalens technology for the proximity sensor. The “metalens” technology may help Apple dramatically reduce the size of the Face ID sensor. Previously, Pu had said this change would only come to the iPhone 17 Pro Max. If it pans out, this means the entire iPhone 17 lineup will feature a smaller Dynamic Island.
    Notably, this contradicts Ming-Chi Kuo, who reported in January that the Dynamic Island across the iPhone 17 lineup will be “largely unchanged” compared to the iPhone 16 models.
    Here is Pu’s full breakdown of the iPhone 17 lineup specifications:
    Spec         | iPhone 17        | iPhone 17 Air     | iPhone 17 Pro                            | iPhone 17 Pro Max
    Launch       | Sep-25           | Sep-25            | Sep-25                                   | Sep-25
    Display      | 6.1″             | 6.6″              | 6.3″                                     | 6.9″
    Processor    | A18, N3E         | A19, N3P          | A19 Pro, N3P                             | A19 Pro, N3P
    DRAM         | LPDDR5 8GB       | LPDDR5 12GB       | LPDDR5X 12GB                             | LPDDR5X 12GB
    Front Camera | 24MP, 6P         | 24MP, 6P          | 24MP, 6P                                 | 24MP, 6P
    Rear Camera  | 48MP 7P, 12MP 5P | 48MP, 7P          | 48MP 7P; Periscope 48MP 1G+3P; 48MP 6P   | 48MP 7P; Periscope 48MP 1G+3P; 48MP 6P
    Face ID      | Structured light | Structured light  | Structured light                         | Structured light
    Casing       | Aluminum         | Titanium          | Aluminum                                 | Aluminum
    Modem        | Qualcomm         | Qualcomm or Apple | Qualcomm                                 | Qualcomm
    Charging     | 35W              | 35W               | 35W                                      | 35W
    NPI          | Foxconn          | Foxconn           | Foxconn                                  | ICT
    9TO5MAC.COM
  • Save $600 Off the Alienware Area-51 GeForce RTX 5090 Prebuilt Gaming PC for Memorial Day

    If you're seeking the absolute best of the best in PC gaming performance, look no further. As part of its Memorial Day Sale, Dell has dropped the price of its flagship Alienware Area-51 prebuilt gaming PC, equipped with the Nvidia GeForce RTX 5090 graphics card, to the lowest price I've seen. This particular model normally retails for $5,499.99, but a new $600 instant discount drops it to $4,899.99 with free shipping. The RTX 5090 is undisputedly the most powerful graphics card on the market and is pretty much impossible to find for under $3,000 by itself.

    Memorial Day Deal: Alienware Area-51 RTX 5090 Gaming PC

    This Alienware Area-51 gaming PC configuration drops to $4,899.99 after a $600 instant discount. Specs include an Intel Core Ultra 9 285K processor, 32GB of DDR5-6400 RAM, and 2TB of SSD storage. The Core Ultra 9 285K is Intel's latest flagship CPU and offers stellar workstation and gaming performance. It's not quite the performance uplift we wanted over the i9-14900K, but it's still the best all-around CPU Intel has on offer. The processor is cooled by a massive 360mm all-in-one liquid cooler, and the system is powered by a 1,500W Platinum power supply.

    New for 2025: The Alienware Area-51 Chassis

    Dell unveiled the new Alienware Area-51 gaming PC at CES 2025. The chassis looks similar to the 2024 R16 system, with aesthetic and cooling redesigns and updated components. The I/O panel is positioned at the top of the case instead of the front, and the tempered glass window now spans the entire side panel instead of just a smaller cutout. As a result, the side panel vents are gone; air intakes are instead located at the bottom as well as the front of the case. Alienware is now pushing a positive airflow design (more intake than exhaust airflow), which means a less dusty interior. The internal components have been refreshed with a new motherboard, faster RAM, and a more powerful power supply to accommodate the new generation of CPUs and GPUs.

    The RTX 5090 Is the Most Powerful Graphics Card Ever

    The Nvidia GeForce RTX 5090 has emerged as the most powerful consumer GPU on the market. Although Nvidia has prioritized software updates, AI features, and DLSS 4 technology to improve gameplay performance, the 5090 still boasts an impressive 25%-30% uplift over the RTX 4090 in terms of pure hardware-based raster performance. The 5090 also has more, and faster, VRAM compared to the 4090. This GPU is extremely difficult to find at retail price and is currently selling for well above that on eBay.

    From the Nvidia GeForce RTX 5090 FE review by Jackie Thomas: "The Nvidia GeForce RTX 5090 has officially taken the performance crown from the RTX 4090, but with less force than previous generations. When it comes to traditional non-AI gaming performance, the RTX 5090 provides one of the smallest generational uplifts in recent memory. However, in games that support it, DLSS 4 really does deliver huge performance gains – you just have to make your peace with the fact that 75% of the frames are generated with AI."

    More Alienware Prebuilt Gaming PC Deals

    Alienware Aurora R16 Intel Core Ultra 7 265F RTX 5080 Gaming PC (at Alienware)
    Alienware Aurora R16 Intel Core Ultra 9 285K RTX 5080 Gaming PC
    Alienware Area-51 Intel Core Ultra 9 285K RTX 5090 Gaming PC (new release)
    Alienware Aurora R16 Intel Core Ultra 7 265F RTX 5070 Gaming PC (at Alienware)
    Alienware Aurora R16 Intel Core Ultra 9 285KF RTX 5070 Gaming PC
    Alienware Area-51 Intel Core Ultra 7 265 RTX 5080 Gaming PC (new for 2025)
    Alienware Aurora R16 Intel Core Ultra 9 285K RTX 5080 Gaming PC

    Why Should You Trust IGN's Deals Team?

    IGN's deals team has a combined 30+ years of experience finding the best discounts in gaming, tech, and just about every other category. We don't try to trick our readers into buying things they don't need at prices that aren't worth paying. Our ultimate goal is to surface the best possible deals from brands we trust and that our editorial team has personal experience with. You can check out our deals standards here for more information on our process, or keep up with the latest deals we find on IGN's Deals account on Twitter. Eric Song is the IGN commerce manager in charge of finding the best gaming and tech deals every day. When Eric isn't hunting for deals for other people at work, he's hunting for deals for himself during his free time.
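    For context, the $600 instant discount on the $5,499.99 list price quoted in the deal works out to roughly an 11% cut; a quick check:

```python
# Relative size of the Memorial Day discount on the Area-51 RTX 5090 build.
list_price = 5499.99
discount = 600.00

sale_price = list_price - discount
pct_off = discount / list_price * 100
print(f"${sale_price:,.2f} after {pct_off:.1f}% off")  # $4,899.99 after 10.9% off
```

    A modest percentage on paper, but $600 is more than many mid-range GPUs cost on their own.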
    WWW.IGN.COM
    Save $600 Off the Alienware Area-51 GeForce RTX 5090 Prebuilt Gaming PC for Memorial Day
    If you're seeking the absolute best of the best in PC gaming performance, look no further. As part of its Memorial Day Sale, Dell has dropped the price of its flagship Alienware Area-51 prebuilt gaming PC, equipped with the Nvidia GeForce RTX 5090 graphics card, to the lowest price I've seen. This particular model normally retails for $5,499.99, but a new $600 instant discount drops it to $4,899.99 with free shipping. The RTX 5090 is undisputedly the most powerful graphics card on the market and is pretty much impossible to find for under $3,000 by itself.

    Memorial Day Deal: Alienware Area-51 RTX 5090 Gaming PC

    New Release: Alienware Area-51 Intel Core Ultra 9 285K RTX 5090 Gaming PC (32GB/2TB)

    This Alienware Area-51 gaming PC configuration drops to $4,899.99 after a $600 instant discount. Specs include an Intel Core Ultra 9 285K processor, 32GB of DDR5-6400MHz RAM, and 2TB of SSD storage. The Core Ultra 9 285K is Intel's latest flagship CPU and offers stellar workstation and gaming performance. It's not quite the generational uplift we wanted over the i9-14900K, but it's still the best all-around CPU Intel has on offer. The processor is cooled by a massive 360mm all-in-one liquid cooler, and the system is powered by a 1,500W Platinum power supply.

    New for 2025: The Alienware Area-51 Chassis

    Dell unveiled the new Alienware Area-51 gaming PC at CES 2025. The chassis looks similar to the 2024 R16 system, with aesthetic and cooling redesigns and updated components. The I/O panel is positioned at the top of the case instead of the front, and the tempered glass window now spans the entire side panel instead of just a smaller cutout. As a result, the side panel vents are gone; air intakes are instead located at the bottom and front of the case. Alienware is now pushing a positive-airflow design (more intake than exhaust), which means a less dusty interior.
    The internal components have been refreshed with a new motherboard, faster RAM, and a more powerful power supply to accommodate the new generation of CPUs and GPUs.

    The RTX 5090 Is the Most Powerful Graphics Card Ever

    The Nvidia GeForce RTX 5090 has emerged as the most powerful consumer GPU on the market. Although Nvidia has prioritized software updates, AI features, and DLSS 4 technology to improve gameplay performance, the 5090 still boasts an impressive 25%-30% uplift over the RTX 4090 in pure hardware-based raster performance. The 5090 also has more (32GB vs. 24GB) and faster (GDDR7 vs. GDDR6) VRAM than the 4090. This GPU is extremely difficult to find at retail price and is currently selling for $3,500-$4,000 on eBay.

    Nvidia GeForce RTX 5090 FE Review by Jackie Thomas

    "The Nvidia GeForce RTX 5090 has officially taken the performance crown from the RTX 4090, but with less force than previous generations. When it comes to traditional non-AI gaming performance, the RTX 5090 provides one of the smallest generational uplifts in recent memory.
    However, in games that support it, DLSS 4 really does deliver huge performance gains – you just have to make your peace with the fact that 75% of the frames are generated with AI."

    More Alienware Prebuilt Gaming PC Deals

    Alienware Aurora R16 Intel Core Ultra 7 265F RTX 5080 Gaming PC (16GB/1TB): $2,349.99 at Alienware
    Alienware Aurora R16 Intel Core Ultra 9 285K RTX 5080 Gaming PC (32GB/2TB)
    New Release: Alienware Area-51 Intel Core Ultra 9 285K RTX 5090 Gaming PC (32GB/2TB)
    Alienware Aurora R16 Intel Core Ultra 7 265F RTX 5070 Gaming PC: $1,849.99 at Alienware
    Alienware Aurora R16 Intel Core Ultra 9 285KF RTX 5070 Gaming PC (32GB/2TB)
    New for 2025: Alienware Area-51 Intel Core Ultra 7 265 RTX 5080 Gaming PC (32GB/1TB)
    Alienware Aurora R16 Intel Core Ultra 9 285K RTX 5080 Gaming PC (64GB/2TB)
    Alienware Aurora R16 Intel Core Ultra 9 285K RTX 5080 Gaming PC (64GB/4TB)

    Why Should You Trust IGN's Deals Team?

    IGN's deals team has a combined 30+ years of experience finding the best discounts in gaming, tech, and just about every other category. We don't try to trick our readers into buying things they don't need at prices that aren't worthwhile. Our ultimate goal is to surface the best possible deals from brands we trust and that our editorial team has personal experience with. You can check out our deals standards here for more information on our process, or keep up with the latest deals we find on IGN's Deals account on Twitter.

    Eric Song is the IGN commerce manager in charge of finding the best gaming and tech deals every day. When Eric isn't hunting for deals for other people at work, he's hunting for deals for himself during his free time.
  • Intel Announces Entry-Level “Core Ultra 200” Workstation Desktop And Laptop CPUs

    Intel has unveiled a wide range of entry-level workstation solutions for consumers, featuring the Intel Arrow Lake "Core Ultra 200" CPUs.
    Intel Debuts Affordable Workstations Based on Arrow Lake "Core Ultra 200" CPUs: Claims Superior Performance Compared to Its Rivals
    Intel's client segment now includes entry-level workstation systems for consumers, offering competitive performance on a budget. Intel has announced both desktop and laptop workstations equipped with the latest Intel Core Ultra processors, claiming noticeable performance uplifts over its rivals.

    In the desktop segment, Intel claims up to 13% higher multithreaded performance for the Core Ultra 200S in programs like Cinebench 2024 (multicore) versus AMD's flagship Ryzen 9 9950X. This is reportedly achieved at 11% better performance per watt than the AMD CPU, with both operating at a 125W TDP. The desktop workstation systems will offer up to 256 GB of DDR5-6400 ECC memory, WiFi 6E, and features such as remote KVM, Intel vPro, and Pro Codec support.
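    Performance-per-watt claims like these are just benchmark score divided by average package power. A minimal sketch with hypothetical numbers (not Intel's published figures) shows how the two percentages relate:

```python
def perf_per_watt(score: float, avg_power_w: float) -> float:
    """Benchmark score per watt of average package power."""
    return score / avg_power_w

# Hypothetical multicore scores and measured power draws, chosen only
# to illustrate the arithmetic behind "X% faster at Y% better perf/W".
score_a, power_a = 2260.0, 140.0  # chip A
score_b, power_b = 2000.0, 138.0  # chip B

perf_uplift = score_a / score_b - 1.0
ppw_uplift = perf_per_watt(score_a, power_a) / perf_per_watt(score_b, power_b) - 1.0

print(f"performance uplift: {perf_uplift:.1%}")  # 13.0%
print(f"perf/W uplift:      {ppw_uplift:.1%}")   # 11.4%
```

    Note that if both chips drew exactly the same power, the perf/W uplift would equal the raw performance uplift; a 13%/11% split implies the measured draws differ slightly even under the same TDP cap.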

    The laptop workstation segment brings the Intel Core Ultra 200HX and Core Ultra 200H processors for high-performance and thin-and-light laptops, respectively. The Core Ultra 200HX reportedly delivers up to 8% higher single-threaded and 42% higher multithreaded performance than the Ryzen AI 9 HX 375. With 41% better power efficiency than Meteor Lake, 200HX laptops can deliver superior performance at the same wattage.


    The HP ZBook Fury 18 is one of the first of these workstation laptops; it is scheduled for a retail launch in June and will offer up to 256 GB of ECC DDR5 memory and a built-in NPU for local AI workloads. Intel has also shared benchmarks of the flagship Core Ultra 9 285HX against the previous-generation Core i9-14900HX to showcase the uplifts the Core Ultra 200HX brings in popular professional workloads.

    Then there are the Intel Core Ultra 200H-based budget workstation laptops, such as the Dell Pro Max 16, which deliver up to 22% faster performance than the Ryzen AI 9 365 in the Geekbench 6.3 multicore workload and a staggering 21+ hours of battery life. They feature Arc 140T integrated graphics, which can handle both professional workloads and gaming. Compared to the Ryzen 9 8945HS, a Zen 4 chip, the Core Ultra 200H delivers up to 36% better performance across 9 applications.


    Similarly, the flagship Core Ultra 9 285H delivers up to 26% better performance than the Zen 5-based Ryzen AI 9 365 across 6 different applications. Intel also showcased the Arc 140T in apps like Autodesk Inventor and Chaos V-Ray for Cinema 4D, where the 200H chip is 2.15x and 1.30x faster, respectively, than the iGPU in the Core Ultra 185H. Those who crave even more graphical performance can opt for the latest Arc Pro B60 24 GB or Arc Pro B50 16 GB workstation GPUs, though these can only be used in desktops.

    WCCFTECH.COM
  • AMD RX 9070 AI performance benchmark review vs 9070 XT, 7800 XT, Nvidia RTX 5070, 4070

    Review

     When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works.

    AMD RX 9070 AI performance benchmark review vs 9070 XT, 7800 XT, Nvidia RTX 5070, 4070

    Sayan Sen

    Neowin
    @ssc_combater007 ·

    May 24, 2025 14:40 EDT

    Earlier this month, we shared the first part of our review of AMD's new RX 9070. It was about the gaming performance of the GPU, and we gave it a 7.5 out of 10. The 9070 XT, in contrast, received a full 10 out of 10.
    The main reason for the lower score on the non-XT was the relatively high price and thus the poorer value it offered compared to the XT. We thought the price was much closer to the XT than it needed to be.
    While the RX 9070 proved more power efficient than the XT, for a desktop gaming graphics card, value and performance typically take the front seat over something like power efficiency.

    However, that may not be the case for productivity, where power savings also factor in. Thus, similar to what we did for the XT model, we are doing a dedicated productivity review for the RX 9070 as well, comparing it against the 9070 XT and 7800 XT, as well as Nvidia's 5070 and 4070.
    AI performance is a very important metric in today's world, and AMD promised big improvements thanks to underlying architectural changes. We already had a taste of that with the XT model, so now it's time to see how the non-XT fares here.

    Before we get underway, this is a collaboration between Sayan Sen and Steven Parker, who lent us their test PC for this review. Speaking of which, here are the specs of the test PC:

    Cooler Master MasterBox NR200P MAX
    ASRock Z790 PG-ITX/TB4
    Intel Core i7-14700K with Thermal Grizzly Carbonaut Pad

    T-FORCE Delta RGB DDR5 (2x16GB) 7600MT/s CL36 (XMP Profile)
    2TB Kingston Fury Renegade SSD
    Windows 11 24H2 (Build 26100.3194)

    Drivers used for the 7800 XT, 9070 XT, and 9070 were Adrenalin v24.30.31.03 / 25.3.1 RC (press driver provided by AMD); for the Nvidia RTX 5070 and 4070, GeForce v572.47 was used.

    (From the left) Sapphire Pulse 9070 XT, Nvidia 5070 FE, and Pulse 9070

    First up, we have Geekbench AI running on ONNX.
    The RTX 5070 gets beaten by both the 9070 XT and 9070 in quantized and single precision (FP32) performance. Similarly, the 4070 gets close to the 9070 in half-precision (FP16) performance, but the latter is an enormous 30% faster in quantized score and nearly 12.2% better in single precision.
    The reason for this beatdown is the amount of memory available to each card. The Nvidia GPUs have 12GB each and thus only do better in the FP16 precision tests since the other ones are more VRAM-intensive.
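    The VRAM argument is easy to sanity-check: weight storage scales linearly with bytes per parameter, so halving precision halves the footprint. A rough sketch with an illustrative model size (real usage adds activations and framework overhead on top):

```python
# Bytes needed per parameter at each precision.
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1}

def weights_gb(n_params: float, precision: str) -> float:
    """Approximate VRAM needed just to hold the model weights, in GB."""
    return n_params * BYTES_PER_PARAM[precision] / 1e9

n = 3.5e9  # hypothetical 3.5B-parameter model
for prec in ("fp32", "fp16", "int8"):
    gb = weights_gb(n, prec)
    print(f"{prec}: {gb:4.1f} GB -> fits 12 GB card: {gb <= 12}, fits 16 GB card: {gb <= 16}")
```

    With these illustrative numbers, the FP32 weights (14 GB) overflow a 12 GB card but fit in 16 GB, while the FP16 weights (7 GB) fit everywhere, which is exactly the pattern the benchmarks show.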
    Next up, we move to UL Procyon suite starting with the Image generation benchmark.
    We chose the Stable Diffusion XL FP16 test since it is the most intense workload available in the Procyon suite. Similar to what we saw in Geekbench AI, the Nvidia GPUs do relatively better here, as FP16 (half precision) keeps VRAM usage lower.
    So this is something to keep in mind: if you wish to run float32 AI workloads, graphics cards with more than 12 GB of VRAM will likely emerge as the victors.
    There is still a big improvement on the RX 9070 compared to the 7800 XT, as we see a ~54% gain. This boost is due to improvements in the core architecture itself, as both cards have the same 16 GB of VRAM.
    Following image generation, we move to the text generation benchmark.

    In this workload, we see the least impressive improvement from the 9070 over the 7800 XT: the former is up to ~7.25% faster here. The 9070 also trails the Nvidia 4070 in the Phi and Mistral models, although it does better in both Llama tests.
    Another odd result stood out here: the 5070 underperformed all the cards, including the 7800 XT, in Llama 2. We ran each test three times and took the best score, so we are not exactly sure what happened here.
    Wrapping up AI testing, we measured OpenCL throughput in Geekbench compute benchmark.
    The RX 9070 did not fare well here at all, even falling behind the 7800 XT, and it is significantly slower than the three other cards. Interestingly, even the RTX 5070 could not beat the 4070 in OpenCL, so perhaps OpenCL optimization has not been a priority for either AMD or Nvidia this time. It could also be an issue with Geekbench itself.
    Conclusion
    We reach the end of our productivity performance review of the 9070, and we have to say we are fairly impressed, though with a slight bit of disappointment. It is clear that the 9070, as well as the 9070 XT, really shines when inference precision is higher, thanks to the larger memory buffers they possess compared to the Nvidia 5070. But at FP16, the Nvidia cards pull ahead.
    Still, RDNA 4, including the RX 9070, sees a big boost over RDNA 3 (7800 XT). As we noted in the image generation benchmark, which is an intense load, there is over a 50% gain.
    So what do we make of the RX 9070 as productivity hardware? We think it's a good card. If someone is looking for a GPU around $550 that can do both gaming and crunch through some AI tasks, this is a good card to pick up, especially for single-precision or other VRAM-intensive tasks. And we already know it is efficient, so there's that too.
    For those however looking for a GPU that can deal with more, AMD recently unveiled the Radeon AI PRO R9700 which is essentially a 32 GB refresh of the 9070 XT with some additional workstation-based optimizations.
    Considering everything, we rate AMD's RX 9070 a 9 out of 10 for its AI performance. Price is less of a factor for productivity use cases than for gaming, and as such, we felt it did quite decently overall; it can be especially handy if you need more than 12 GB.

    Purchase links: RX 9070 / XT (Amazon US)

    As an Amazon Associate we earn from qualifying purchases.

    #amd #performance #benchmark #review #nvidia
    AMD RX 9070 AI performance benchmark review vs 9070 XT, 7800 XT, Nvidia RTX 5070, 4070
    Review  When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works. AMD RX 9070 AI performance benchmark review vs 9070 XT, 7800 XT, Nvidia RTX 5070, 4070 Sayan Sen Neowin @ssc_combater007 · May 24, 2025 14:40 EDT Earlier this month, we shared the first part of our review of AMD's new RX 9070. It was about the gaming performance of the GPU, and we gave it a 7.5 out of 10. The 9070 XT, in contrast, received a full 10 out of 10. The main reason for the lower score on the non-XT was the relatively high price and thus the poorer value it offered compared to the XT. We thought the price was much closer to the XT than it needed to be. While the RX 9070 proved to be more power efficient than the XT, for a desktop gaming graphics card, value and performance typically take the front seat compared to something like power efficiency. However, that may not be the case in terms of productivity which also takes into account things like power savings. Thus, similar to the one we did for the XT model, we are doing a dedicated productivity review for the RX 9070 as well where we compare to against the 9070 XT, 7800 XT, as well as Nvidia's 5070 and 4070. AI performance is a very important metric in today's world and AMD also promised big improvements thanks to its underlying architectural improvements. We already had a taste of that with the XT model so now it's time to see how good the non-XT does here. Before we get underway, this is a collaboration between Sayan Sen, and Steven Parker who lent us their test PC for this review. 
Speaking of which, here are the specs of the test PC: Cooler Master MasterBox NR200P MAX ASRock Z790 PG-ITX/TB4 Intel Core i7-14700K with Thermal Grizzly Carbonaut Pad T-FORCE Delta RGB DDR57600MT/s CL362TB Kingston Fury Renegade SSD Windows 11 24H2Drivers used for the 7800 XT, 9070 XT and 9070 were Adrenaline v24.30.31.03 / 25.3.1 RC, and for the Nvidia RTX 5070 and 4070, GeForce v572.47 was used.Sapphire Pulse 9070 XT, Nvidia 5070 FE, and Pulse 9070First up, we have Geekbench AI running on ONNX. The RTX 5070 gets beaten by both the 9070 XT and 9070 in quantized and single precisionperformance. Similarly, the 4070 gets close to the 9070 in half-precisionperformance, but the latter is an enormous 30% faster in quantized score and nearly 12.2% better in single precision. The reason for this beatdown is the amount of memory available to each card. The Nvidia GPUs have 12GB each and thus only do better in the FP16 precision tests since the other ones are more VRAM-intensive. Next up, we move to UL Procyon suite starting with the Image generation benchmark. We chose the Stable Diffusion XL FP16 test since this is the most intense workload available on Procyon suite. Similar to what we saw on Geekbench AI, the Nvidia GPUs to relatively better here as it is FP16 or half precision which means the used VRAM is lower. So this is something to keep in mind again, if you wish to float32 AI workloads, it is likely that graphics cards with greater than 12 GB buffers would emerge as victors. There is still a big improvement on the RX 9070 compared to the 7800 XT as we see a ~54% gain. This boost is due to improvements to the core architecture itself as VRAM capacities of both cards are the same at 16 Gigs. Following image generation, we move to the text generation benchmark. In this workload, we see the least impressive performance of the 9070 in terms of how much it improves over the 7800 XT. The former is up to ~7.25% faster here. 
The 9070 is also not as well-performing as the Nvidia 4070 in Phi and Mistral models, although it does do better in both the Llama tests. Another odd result stood out here where the 5070 underperformed all the cards including the 7800 XT in Llama 2. We ran each test three times and considered the best score and so we are not exactly sure what happened here. Wrapping up AI testing, we measured OpenCL throughput in Geekbench compute benchmark. The RX 9070 did not fare well here at all even falling behind the 7800 XT and it is significantly slower than the three other cards. Interestingly, even the RTX 5070 could not beat the 4070 on OpenCL so perhaps this suggests that OpenCL optimization has not been a priority for either AMD or Nvidia this time. It could also be an issue with Geekbench itself. Conclusion We reach the end of our productivity performance review of the 9070 and we have to say we are fairly impressed but there is also a slight bit of disappointment. It is clear that the 9070 as well as the 9070 XT really shine when inferencing precision is higher, and that is due to the higher memory buffers they possess compared to the Nvidia 5070. But on FP16, the Nvidia cards pull ahead. Still RNDA 4, including the RX 9070, see big boost over RDNA 3. As we noted in the image generation benchmark, which is an intense load, there is over a 50% gain. So what do we make of the RX 9070 as a productivity hardware? We think it's a good card. If someone was looking for a GPU around that can do both gaming and crunch through some AI tasks this is a good card to pick up especially if you are dealing with single precision situations or some other VRAM-intense tasks. And we already know it is efficient so there's that too. For those however looking for a GPU that can deal with more, AMD recently unveiled the Radeon AI PRO R9700 which is essentially a 32 GB refresh of the 9070 XT with some additional workstation-based optimizations. 
Considering everything, we rate AMD's RX 9070 a 9 out of 10 for its AI performance. Price is less of a factor for those looking at productivity cases compared to ones considering the GPU for gaming, and as such, we felt it did quite decent overall and can be especially handy if you need more than 12 GB. Purchase links: RX 9070 / XTAs an Amazon Associate we earn from qualifying purchases. Tags Report a problem with article Follow @NeowinFeed #amd #performance #benchmark #review #nvidia
    WWW.NEOWIN.NET
    AMD RX 9070 AI performance benchmark review vs 9070 XT, 7800 XT, Nvidia RTX 5070, 4070
    Review  When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works. AMD RX 9070 AI performance benchmark review vs 9070 XT, 7800 XT, Nvidia RTX 5070, 4070 Sayan Sen Neowin @ssc_combater007 · May 24, 2025 14:40 EDT Earlier this month, we shared the first part of our review of AMD's new RX 9070. It was about the gaming performance of the GPU, and we gave it a 7.5 out of 10. The 9070 XT, in contrast, received a full 10 out of 10. The main reason for the lower score on the non-XT was the relatively high price and thus the poorer value it offered compared to the XT. We thought the price was much closer to the XT than it needed to be. While the RX 9070 proved to be more power efficient than the XT, for a desktop gaming graphics card, value and performance typically take the front seat compared to something like power efficiency. However, that may not be the case in terms of productivity which also takes into account things like power savings. Thus, similar to the one we did for the XT model, we are doing a dedicated productivity review for the RX 9070 as well where we compare to against the 9070 XT, 7800 XT, as well as Nvidia's 5070 and 4070. AI performance is a very important metric in today's world and AMD also promised big improvements thanks to its underlying architectural improvements. We already had a taste of that with the XT model so now it's time to see how good the non-XT does here. Before we get underway, this is a collaboration between Sayan Sen (author), and Steven Parker who lent us their test PC for this review. 
Speaking of which, here are the specs of the test PC: Cooler Master MasterBox NR200P MAX ASRock Z790 PG-ITX/TB4 Intel Core i7-14700K with Thermal Grizzly Carbonaut Pad T-FORCE Delta RGB DDR5 (2x16GB) 7600MT/s CL36 (XMP Profile) 2TB Kingston Fury Renegade SSD Windows 11 24H2 (Build 26100.3194) Drivers used for the 7800 XT, 9070 XT and 9070 were Adrenaline v24.30.31.03 / 25.3.1 RC (press driver provided by AMD), and for the Nvidia RTX 5070 and 4070, GeForce v572.47 was used. (From the left) Sapphire Pulse 9070 XT, Nvidia 5070 FE, and Pulse 9070First up, we have Geekbench AI running on ONNX. The RTX 5070 gets beaten by both the 9070 XT and 9070 in quantized and single precision (FP32) performance. Similarly, the 4070 gets close to the 9070 in half-precision (FP16) performance, but the latter is an enormous 30% faster in quantized score and nearly 12.2% better in single precision (FP32). The reason for this beatdown is the amount of memory available to each card. The Nvidia GPUs have 12GB each and thus only do better in the FP16 precision tests since the other ones are more VRAM-intensive. Next up, we move to UL Procyon suite starting with the Image generation benchmark. We chose the Stable Diffusion XL FP16 test since this is the most intense workload available on Procyon suite. Similar to what we saw on Geekbench AI, the Nvidia GPUs to relatively better here as it is FP16 or half precision which means the used VRAM is lower. So this is something to keep in mind again, if you wish to float32 AI workloads, it is likely that graphics cards with greater than 12 GB buffers would emerge as victors. There is still a big improvement on the RX 9070 compared to the 7800 XT as we see a ~54% gain. This boost is due to improvements to the core architecture itself as VRAM capacities of both cards are the same at 16 Gigs. Following image generation, we move to the text generation benchmark. 
In this workload, we see the least impressive performance of the 9070 in terms of how much it improves over the 7800 XT. The former is up to ~7.25% faster here. The 9070 is also not as well-performing as the Nvidia 4070 in Phi and Mistral models, although it does do better in both the Llama tests. Another odd result stood out here where the 5070 underperformed all the cards including the 7800 XT in Llama 2. We ran each test three times and considered the best score and so we are not exactly sure what happened here. Wrapping up AI testing, we measured OpenCL throughput in Geekbench compute benchmark. The RX 9070 did not fare well here at all even falling behind the 7800 XT and it is significantly slower than the three other cards. Interestingly, even the RTX 5070 could not beat the 4070 on OpenCL so perhaps this suggests that OpenCL optimization has not been a priority for either AMD or Nvidia this time. It could also be an issue with Geekbench itself. Conclusion We reach the end of our productivity performance review of the 9070 and we have to say we are fairly impressed but there is also a slight bit of disappointment. It is clear that the 9070 as well as the 9070 XT really shine when inferencing precision is higher, and that is due to the higher memory buffers they possess compared to the Nvidia 5070. But on FP16, the Nvidia cards pull ahead. Still RNDA 4, including the RX 9070, see big boost over RDNA 3 (7800 XT). As we noted in the image generation benchmark, which is an intense load, there is over a 50% gain. So what do we make of the RX 9070 as a productivity hardware? We think it's a good card. If someone was looking for a GPU around $550 that can do both gaming and crunch through some AI tasks this is a good card to pick up especially if you are dealing with single precision situations or some other VRAM-intense tasks. And we already know it is efficient so there's that too. 
For those however looking for a GPU that can deal with more, AMD recently unveiled the Radeon AI PRO R9700 which is essentially a 32 GB refresh of the 9070 XT with some additional workstation-based optimizations. Considering everything, we rate AMD's RX 9070 a 9 out of 10 for its AI performance. Price is less of a factor for those looking at productivity cases compared to ones considering the GPU for gaming, and as such, we felt it did quite decent overall and can be especially handy if you need more than 12 GB. Purchase links: RX 9070 / XT (Amazon US) As an Amazon Associate we earn from qualifying purchases. Tags Report a problem with article Follow @NeowinFeed
  • Nvidia GeForce RTX 5060 review: better than console performance - but not enough VRAM

    The RTX 5060 is here, finally completing the 50-series lineup that debuted five months ago with the 5090. The new "mainstream" graphics card is far from cheap at $299/£270, but ought to offer reasonable performance and efficiency while adding the multi frame generation feature that's exclusive to this generation of GPUs. However, the 5060 also ships with just 8GB of VRAM, which could be a big limitation for those looking to play the latest graphics showcases.
    Before we get into our results, it's worth mentioning why this review is a little later than normal, coming a few days after the cards officially went on sale on May 19th. Normally, Nvidia or their partners send a graphics card and the necessary drivers anywhere from a couple of days to a week before the embargo date, which is typically a day before the cards go on sale. That's good for us, because it allows us to do the in-depth analysis that we prefer and still publish at the same time as other outlets, and it's good for potential buyers, as they can get a sense of value and performance and therefore make an informed decision about whether to buy a card or not - from what is often a limited supply at launch.
    For the RTX 5060 launch, Nvidia - via Asus - delivered a card in good time ahead of its release, but the drivers weren't released to reviewers until the card went on sale on May 19th, coinciding with Nvidia's Computex presentation. Without the drivers, the card is a paperweight, so any launch day coverage is necessarily limited - and in many cases, graphics cards went out of stock before the usual tranche of reviews went live from the tech press. It's a frustrating situation all around, and I doubt that even Nvidia's PR department will be thrilled that most reviews start with the same complaint.

    Nvidia's GeForce RTX 5060 gets the Digital Foundry video review treatment. (Watch on YouTube.)
    Following the public release of the drivers, we've been benchmarking around the clock to figure out just how performant the new RTX 5060 is, where its strengths and weaknesses lie, and where it falls compared to the rest of the 50-series line-up, prior generation RTX cards and competing AMD models.
    Looking at the specs, you can see that the RTX 5060 is based around a cut-down version of the same GB206 die that powered the RTX 5060 Ti. The 5060 has 83 percent of the core count and rated power of the full-fat 5060 Ti design, with an innocuous three percent drop to boost clocks and the same 448GB/s of memory bandwidth.
    Unlike the 5060 Ti, however, which debuted in 8GB and 16GB models, the 5060 is only available with 8GB of frame buffer memory - a limitation we'll discuss in some depth later. For your 16.6 percent reduction to core count and TGP versus the 5060 Ti, you pay around 20 percent less - so the 5060 ought to be slightly better value.
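The cut-down ratios quoted above are easy to verify from the spec table. A quick sanity check (using the US dollar launch prices from the table below):

```python
cores_5060, cores_5060ti = 3840, 4608
tgp_5060, tgp_5060ti = 150, 180
price_5060, price_5060ti = 299, 379  # USD launch prices (5060 vs 8GB 5060 Ti)

core_cut  = 1 - cores_5060 / cores_5060ti   # ~16.7% fewer cores
tgp_cut   = 1 - tgp_5060 / tgp_5060ti       # ~16.7% lower TGP
price_cut = 1 - price_5060 / price_5060ti   # ~21% cheaper
```

Core count and TGP both drop by exactly one sixth, while the price drops by about 21 percent, which is where the "slightly better value" claim comes from.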

    Marvel's Spider-Man 2 and Monster Hunter World - 1440p resolution. We aren't at native resolution or on ultra settings, but both the 8GB RTX 5060 and RTX 5060 Ti see performance collapse. The 16GB RTX 5060 Ti works fine and delivers good performance - proof positive that 8GB is too much of a limiting factor for these cards. | Image credit: Digital Foundry

                          RTX 5070 Ti    RTX 5070       RTX 5060 Ti    RTX 5060
    Processor             GB203          GB205          GB206          GB206
    Cores                 8,960          6,144          4,608          3,840
    Boost Clock           2.45GHz        2.51GHz        2.57GHz        2.50GHz
    Memory                16GB GDDR7     12GB GDDR7     16GB GDDR7     8GB GDDR7
    Memory Bus Width      256-bit        192-bit        128-bit        128-bit
    Memory Bandwidth      896GB/s        672GB/s        448GB/s        448GB/s
    Total Graphics Power  300W           250W           180W           150W
    PSU Recommendation    750W           650W           450W           450W
    Price                 $749/£729      $549/£539      $429/£399      $299/£270
    Release Date          February 20th  March 5th      April 16th     May 19th

    There's no RTX 5060 Founders Edition, as you'd perhaps expect for a mainstream model, with various third-party cards available in a range of sizes. The RTX 5060 model we received is the Asus Prime model, an over-engineered 2.5-slot, tri-fan design that is nonetheless described as "SFF-ready" due to its relatively modest 268mm length. On top of the robust industrial design, the card features a dual BIOS with "quiet" and "performance" options - always useful. In this case however, the cooler is so large that even the "performance" option is very, very quiet. The card ships with this preset and we recommend it stays there.
    Hilariously, the manufacturer product page recommends a 750W or 850W Asus power supply, though the specs page for the same model makes a more sane 550W recommendation. Regardless, you'll be good to go with a single eight-pin power connector. In terms of ports, we're looking at the RTX 50-series standard assortment, including one HDMI 2.1b and three DisplayPort 2.1b.
    Like the RTX 5060 Ti - but not AMD's just-announced Radeon RX 9060 XT - the RTX 5060 uses a PCIe 8x connection. That's perfectly fine on a modern PCIe 5.0 or 4.0 slot, but potentially problematic on earlier motherboards with PCIe 3.0 slots - something we'll test out in more detail on page eight.
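For context on why the x8 link matters on older boards, here is a rough per-direction bandwidth comparison. These are theoretical figures accounting only for the 128b/130b line encoding used since PCIe 3.0, ignoring packet-level protocol overhead:

```python
def pcie_gbps(gt_per_s, lanes):
    """Theoretical per-direction PCIe bandwidth in GB/s.
    PCIe 3.0 and later use 128b/130b encoding."""
    return gt_per_s * lanes * (128 / 130) / 8

gen3_x8  = pcie_gbps(8, 8)    # ~7.9 GB/s - what the 5060 gets in a PCIe 3.0 slot
gen4_x8  = pcie_gbps(16, 8)   # ~15.8 GB/s - the same card in a PCIe 4.0 slot
gen4_x16 = pcie_gbps(16, 16)  # ~31.5 GB/s - a full x16 Gen4 link, for reference
```

Halving both the link generation and the lane count leaves an 8GB card with roughly a quarter of a Gen4 x16 slot's bandwidth, which is exactly when VRAM spill-over to system memory starts to hurt.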
    For our testing, we'll be pairing the RTX 5060 with a bleeding-edge system based around the fastest gaming CPU - the AMD Ryzen 7 9800X3D. We also have 32GB of Corsair DDR5-6000 CL30 memory, a high-end Asus ROG Crosshair X870E Hero motherboard and a 1000W Corsair PSU.
    With all that said, let's get into the benchmarks.
    Nvidia GeForce RTX 5060 Analysis

    WWW.EUROGAMER.NET