Doom: The Dark Ages, 36 GPU Benchmark
Doom: The Dark Ages is the latest first-person shooter developed by id Software, marking the eighth main entry in the franchise and the third installment of the modern series, following 2020's Doom Eternal. As you can imagine, we won't be reviewing the game here; rather, we'll be benchmarking the "hell" out of it, and no doubt a few configurations will be doomed.
The Dark Ages is powered by the id Tech 8 engine, which features full dynamic lighting with ray-traced global illumination and ray-traced reflections. As a result, a GPU with hardware RT support is required to play the game. Path tracing is also planned, but it will arrive in a post-launch update. That means no FPS-crushing option just yet for all you RTX 5090 owners.
For testing, Nvidia provided driver version 576.31, and AMD provided driver version 25.5.1. Both are game-ready drivers optimized for Doom: The Dark Ages. After several hours of gameplay, we selected a section called "Siege Part 1, Chapter 6, Aspiring Slayer" for benchmarking. The test begins at a checkpoint just before a large battle sequence, and most of the benchmark takes place during that fight, so this should be quite a demanding test.
If you were to monitor frame rate performance in a less intense section of the game, such as walking through near-empty corridors, you could expect higher frame rates than what we're showing here.
The game offers half a dozen preset options, and while we normally test three or four presets at 1080p, 1440p, and 4K, for this test we're focusing on just two. This is because there was almost no performance difference between the maximum preset (Ultra Nightmare), Nightmare, Ultra, and High.
Look ma, no scaling...
In fact, looking at the RTX 5080 and 9070 XT, we see that the GeForce GPU experienced just a 2% performance increase when going from Ultra Nightmare to High, while the 9070 XT saw a modest 5% uplift.
Oddly, the 9070 XT outperformed the RTX 5080 using the provided drivers. This was unexpected, given Nvidia granted us early access to the game and requested that we highlight the RTX features of their GeForce 50 series. We agreed to do so, since we are, after all, benchmarking RTX graphics cards, and those features are relevant.
But anyway, even with the Medium preset, the Radeon GPU saw only an additional 8% boost, while the GeForce GPU improved by just 2%. Dropping to Low yielded a further 14% increase for the 9070 XT and a 13% gain for the RTX 5080. This means that, compared to the highest quality preset, the Low preset offers just an 18% performance improvement for the RTX 5080 and 29% for the 9070 XT.
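Those totals follow directly from the step-by-step gains: consecutive uplifts compound multiplicatively rather than adding. Here's a quick sketch of that arithmetic in Python, using the figures quoted above:

```python
# Compound the per-step preset gains into a total uplift.
# Consecutive gains are relative to the previous preset, so they multiply.

def compound(gains_percent):
    """Turn a list of step gains (in %) into a total uplift (in %)."""
    total = 1.0
    for g in gains_percent:
        total *= 1 + g / 100
    return (total - 1) * 100

# RTX 5080: Ultra Nightmare -> High (+2%), -> Medium (+2%), -> Low (+13%)
print(f"RTX 5080:   {compound([2, 2, 13]):.0f}%")   # ~18%

# RX 9070 XT: Ultra Nightmare -> High (+5%), -> Medium (+8%), -> Low (+14%)
print(f"RX 9070 XT: {compound([5, 8, 14]):.0f}%")   # ~29%
```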
It's a bit surprising to see such limited performance scaling across the presets. There's virtually no difference among the top four settings, a small gain with Medium, and another modest boost with Low. This will be disappointing news for those with lower-end hardware hoping the Low and Medium presets would provide more substantial performance improvements.
Now, the reason for this minimal scaling becomes clear when examining the visual differences between presets. So let's take a quick look at those now…
Image Quality Comparison
In this first example, there's almost no noticeable difference among the six presets, and that's not because of image compression – there simply isn't much difference. Even when viewing the native presentation in-game, we couldn't spot a difference between the top four presets; they all looked exactly the same to us.
Even when comparing Medium to Ultra Nightmare, any differences are minimal. Perhaps the distant shadow quality is slightly better, but it's very hard to tell.
Next, we have the Low preset, and once again, the overall presentation is extremely similar to Ultra Nightmare. The textures on the weapon appear slightly softer, and the distant tower shows a reduction in lighting quality, but otherwise, the two look quite alike.
This second example highlights close-up details, and again, there's almost no visible difference between Low and Ultra Nightmare. We restarted the game between changes to ensure each setting was properly applied, but it made no difference.
The most noticeable variation we found was in this example featuring a muddy surface. That said, the Ultra Nightmare, Nightmare, Ultra, High, and even Medium presets all looked identical to us. With the Low preset, some detail is lost and reflections are reduced, but overall, the difference between the highest and lowest quality settings is extremely minimal.
Before moving on, here's a comparison of a scene with intense fire effects and close-up rubble. Again, it's very difficult to identify any real differences.
The same applies to this scene – we wouldn't blame you for thinking we accidentally showed the same preset six times. But rest assured, that's not the case.
For a clearer view, here's a side-by-side comparison of the Ultra Nightmare and Low presets. If you look closely, you may spot some very minor differences, but they are subtle. Once again, the mud stands out as the area with the most visible variation, though even the Low setting looks respectable.
Given these results, we've decided to focus our benchmarks on the Ultra Nightmare and Medium presets. Performance is virtually the same across Ultra Nightmare, Nightmare, Ultra, and High settings. For the test system, we're using the Ryzen 7 9800X3D paired with 32 GB of DDR5-6000 memory and the latest display drivers.
Test System Specs
Before publishing this review, we received Nvidia's latest driver (version 576.40), which did not meaningfully improve performance beyond what we're showing here – at most, we observed a 2-3% uplift. We asked Nvidia what kind of performance gains users should expect with the new driver, but they were unable to provide an answer.
CPU: AMD Ryzen 7 9800X3D
Motherboard: MSI MPG X870E Carbon WiFi (BIOS 7E49v1A23, ReBAR enabled)
Memory: G.Skill Trident Z5 RGB DDR5-6000 [CL30-38-38-96]
Graphics Cards:
GeForce RTX 3060
GeForce RTX 3060 Ti
GeForce RTX 3070
GeForce RTX 3080
GeForce RTX 3090
GeForce RTX 4060
GeForce RTX 4060 Ti
GeForce RTX 4070
GeForce RTX 4070 Super
GeForce RTX 4070 Ti
GeForce RTX 4070 Ti Super
GeForce RTX 4080
GeForce RTX 4080 Super
GeForce RTX 4090
GeForce RTX 5060 Ti 8GB
GeForce RTX 5060 Ti 16GB
GeForce RTX 5070
GeForce RTX 5080
GeForce RTX 5090
Radeon RX 6600
Radeon RX 6650 XT
Radeon RX 6750 XT
Radeon RX 6800
Radeon RX 6800 XT
Radeon RX 6950 XT
Radeon RX 7600
Radeon RX 7600 XT
Radeon RX 7700 XT
Radeon RX 7800 XT
Radeon RX 7900 GRE
Radeon RX 7900 XT
Radeon RX 7900 XTX
Radeon RX 9070
Radeon RX 9070 XT
Intel Arc A770
Intel Arc B580
ATX Case: MSI MEG Maestro 700L PZ
Power Supply: Kolink Regulator Gold ATX 3.0 1200W
Storage: TeamGroup T-Force Cardea Z44Q 4TB
Operating System: Windows 11 24H2
Display Drivers: Nvidia GeForce Game Ready 576.31, AMD Radeon Adrenalin 25.5.1
Benchmarks
Ultra Nightmare @ 1080p
Starting with the Ultra Nightmare quality settings and using native TAA (we'll look at upscaling data shortly), the RTX 5090 delivered just 151 fps, making it a mere 3% faster than the RTX 4090. It's unclear what's going on with the GeForce 50 series GPUs, but for the most part, they offer little to no performance gain over their predecessors. In fact, the RTX 5080 was 8% slower than the 4080, which is puzzling.
We performed clean installs of the GeForce driver – multiple times, in fact – but were unable to improve the performance of Nvidia's 50 series GPUs. This suggests either a driver issue, a game patch is needed, or the 50 series simply doesn't provide meaningful performance improvements over the 40 series – and in some cases, it's slower.
Interestingly, both AMD's RDNA3 and RDNA4 GPUs performed exceptionally well. The 7900 XTX and 9070 XT matched the RTX 4080 Super, coming in just 16% behind the RTX 5090.
The RX 9070 also matched the RTX 5080 and outperformed the 4070 Ti Super, while the 7900 XT was on par with the 5070 Ti and beat the 4070 Super.
The older 6800 XT slightly edged out the new 5060 Ti, while the 7700 XT delivered similar performance with 68 fps. Below 60 fps, we find the 3060 Ti, 4060, 6750 XT, and Arc B580. Below that, frame rates dip into the mid-40s, leading to a subpar experience.
Ultra Nightmare @ 1440p
Jumping to 1440p with Ultra Nightmare settings, the RTX 5090 dropped to an average of 125 fps – a 6% improvement over the 4090, and 30% faster than the 7900 XTX and 9070 XT. Meanwhile, AMD's flagship GPUs once again slightly edged out the RTX 4080 Super and RTX 5080.
For an average of around 60 fps, the 7800 XT, 6950 XT, RTX 4070, and 5070 performed well. Even the RTX 3080 delivered 65 fps on average. The 7700 XT landed between the 8 GB and 16 GB versions of the 5060 Ti. It's worth noting that 8 GB of VRAM is sufficient for these quality settings – though, as we'll show soon, certain configurations may still run into issues, even with upscaling at 1440p.
Ultra Nightmare @ 4K
At native 4K, most users will likely want to enable upscaling, but for an apples-to-apples comparison, we're looking at native performance. The RTX 5090 managed 82 fps on average – solid performance – while the 4090 delivered 74 fps and the 9070 XT just 56 fps. So unless you're using a 4090 or 5090, expect sub-60 fps performance without upscaling.
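Part of what makes native 4K so demanding is raw pixel count, though frame rates rarely drop in direct proportion to it. A quick illustrative sketch, pairing the pixel counts with the RTX 5090's Ultra Nightmare results from above:

```python
# Pixel counts per resolution vs. measured fps: performance rarely
# drops linearly with pixel count in this game.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
fps_5090 = {"1080p": 151, "1440p": 125, "4K": 82}  # RTX 5090, Ultra Nightmare

base = 1920 * 1080
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name:>5}: {pixels / 1e6:5.2f} MP ({pixels / base:.2f}x 1080p) "
          f"-> {fps_5090[name]} fps")
# 4K pushes 4x the pixels of 1080p, yet the 5090 only drops from 151 to 82 fps.
```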
Medium @ 1080p
Switching to the Medium preset at 1080p, most GPUs delivered over 60 fps, including the Arc B580 and the older Radeon RX 6750 XT. However, AMD's RX 7600 series underperformed, with just 52 fps on average, while the RTX 4060 managed 64 fps.
Medium @ 1440p
At 1440p, the RTX 5090 reached an average of 135 fps using the Medium preset, with the RTX 4090 close behind at 130 fps. Again, the 40 and 50 series GPUs are performing much closer than expected. For example, the RTX 5080 was 8% slower than the 4080, allowing the RX 9070 to match it.
The older 7900 XT also matched the 5070 Ti, while the RTX 3090 was 7% faster than the 6950 XT. The 6800 XT and RTX 3080 were nearly identical. Below this level, performance begins to fall under 60 fps.
Medium @ 4K
Finally, at 4K using Medium settings, the RTX 5090 achieved 89 fps – only a 9% improvement over its Ultra Nightmare result. Lower-end cards like the RTX 5070 saw an 11% uplift, but that's still disappointing. Typically, a drop from maximum to medium quality would yield at least a 40% performance gain. Instead, the 5070 only managed 41 fps, meaning even with upscaling, the experience won't be great.
FSR and DLSS Upscaling Performance
Here's a quick look at how FSR and DLSS upscaling compare using the 9070 XT and RTX 5080. Once again, the Radeon GPU is faster, delivering 11% more performance when rendering at native resolution. With the Quality upscaling option, that margin extends to 16%, as the Radeon GPU becomes 40% faster and the GeForce GPU sees a 34% boost.
The performance gap narrows slightly to 15% with the Balanced setting, 14% with Performance, and returns to 11% with Ultra Performance. At 1440p, we recommend the Quality preset for both GPUs, as it provides the best balance between visuals and performance.
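If you're wondering how a 40% gain on one GPU and a 34% gain on the other stretches an 11% native margin to 16%, the ratios simply multiply. A quick sketch:

```python
# The native margin shifts because each GPU gains a different amount
# from upscaling; the ratios multiply.
native_margin = 1.11  # 9070 XT is 11% faster than the 5080 at native res
radeon_gain   = 1.40  # 9070 XT: +40% with Quality upscaling
geforce_gain  = 1.34  # RTX 5080: +34% with Quality upscaling

quality_margin = native_margin * radeon_gain / geforce_gain
print(f"Quality-mode margin: {(quality_margin - 1) * 100:.0f}%")  # ~16%
```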
VRAM Debate: 8GB vs. 16GB
Now let's talk about 8 GB GPUs in Doom: The Dark Ages, specifically the new 8 GB version of the RTX 5060 Ti. For the most part, 8 GB GPUs perform reasonably well in this game, and it's clear the developer has put effort into optimizing for that configuration. This makes sense, as the majority of PC gamers are still stuck on 8 GB GPUs, largely because AMD and Nvidia continue to ship low-VRAM models – at times it seems they're actively trying to kill PC gaming, but we digress.
As was the case with Space Marine 2, this game could greatly benefit from a proper 4K texture pack. While some textures look excellent, many appear low-resolution and lack detail when viewed at higher resolutions. We made similar comments about Space Marine 2, which some pushed back on – until the 4K texture pack was released. At that point, the game looked dramatically better but became unplayable on 8 GB cards.
To illustrate the difference, here's a look at how the 8 GB and 16 GB versions of the RTX 5060 Ti perform at 4K using DLSS Balanced upscaling during a large horde battle. While the 16 GB model's frame rate isn't great, the game is at least playable. In contrast, the 8 GB version is completely broken in this scenario – though this is an extreme case, meant to test VRAM limits.
Now, if we enable DLSS Quality upscaling at 1440p, the 8 GB 5060 Ti sees very little improvement over native performance, while the 16 GB model is roughly 40% faster. To further test VRAM saturation, we moved beyond the 30-second benchmark pass and played for several minutes.
Initially, the 16 GB card was about 42% faster. But a few minutes in, VRAM usage overwhelmed the 8 GB model, tanking its performance. This resulted in the 16 GB model delivering an 82% higher average frame rate – and over 200% better 1% lows.
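As an aside for anyone reproducing these numbers: 1% lows are typically derived from recorded frame times, commonly as the average frame rate of the slowest 1% of frames. A minimal sketch of that calculation, assuming a simple frame-time list (exact definitions vary between capture tools):

```python
# Average fps and 1% lows from a list of frame times (in ms).
# "Slowest 1% of frames" is one common definition; capture tools
# differ slightly in how they compute this metric.

def summarize(frame_times_ms):
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
    slowest = sorted(frame_times_ms, reverse=True)
    worst_1pct = slowest[: max(1, len(slowest) // 100)]
    low_1pct_fps = 1000 * len(worst_1pct) / sum(worst_1pct)
    return avg_fps, low_1pct_fps

# Hypothetical capture: mostly ~12 ms frames with occasional long
# stalls, the pattern a VRAM-saturated 8 GB card tends to produce.
frames = [12.0] * 990 + [80.0] * 10
avg, lows = summarize(frames)
print(f"avg: {avg:.0f} fps, 1% lows: {lows:.0f} fps")  # avg ~79, lows ~13
```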
We observed similar differences with the Nightmare and Ultra presets. Even the High preset showed some discrepancy, though Medium provided nearly identical performance and visuals, making it a more viable option for 8 GB GPUs.
2GB vs 1.5GB Texture Pool Size
However, this appeared to become a non-issue once we discovered that lowering the texture pool size from the default 2 GB to 1.5 GB drastically improved performance on the 8 GB 5060 Ti at 1440p with upscaling. It matched the 16 GB model's performance with no noticeable visual degradation. We also saw no increased texture or detail pop-in, raising questions about the real benefits of the higher texture pool setting. While the game suggests that allocating more VRAM improves performance, we found no supporting evidence in our testing.
This texture pool setting is especially important for users with 8 GB GPUs – particularly when using features like frame generation. With the default 2 GB pool size, DLSS multi-frame generation often failed to activate, sometimes requiring multiple restarts. Even then, performance was inconsistent.
For example, in an early test of multi-frame generation, it worked as expected on the 16 GB model but reduced performance on the 8 GB version. By moving to a less demanding area, looking down at the ground, and enabling multi-frame generation from the menu, we could sometimes get it to work. It seemed we had to lower VRAM usage first, a problem that never occurred with the 16 GB model.
However, dropping to the Medium preset and reducing the texture pool to 1.5 GB allowed multi-frame generation to work flawlessly on the 8 GB 5060 Ti. In fact, in that specific scenario, the 8 GB model slightly outperformed the 16 GB version, though this is likely down to run-to-run variance.
Based on our testing, multi-frame generation is buggy on the 8 GB model at the default settings and flawless on the 16 GB model, so reducing the texture pool size to 1.5 GB on an 8 GB GPU is essential. For those wondering, the texture pool size can be increased to 4 GB on a 16 GB GPU without any performance hit, though we couldn't identify any visual or performance improvements as a result.
What We Learned
Doom: The Dark Ages is a well-optimized game that plays smoothly at 1440p with upscaling enabled. Quality upscaling can boost performance by around 30% to 40%, allowing GPUs such as the Radeon 7700 XT and GeForce RTX 5060 Ti to average over 60 fps, which should be considered the bare minimum for modern PC gaming.
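That 30% to 40% gain comes from rendering internally at a reduced resolution before upscaling to the output. As a rough guide, here's a sketch of the internal render resolutions at 1440p using the commonly published DLSS/FSR per-axis scale factors (individual titles can deviate slightly):

```python
# Internal render resolution for the common upscaling modes.
# Per-axis scale factors are the widely published DLSS/FSR values;
# individual titles can deviate slightly.
MODES = {
    "Quality": 0.667,
    "Balanced": 0.58,
    "Performance": 0.50,
    "Ultra Performance": 0.333,
}

output_w, output_h = 2560, 1440  # 1440p output
for mode, scale in MODES.items():
    w, h = round(output_w * scale), round(output_h * scale)
    print(f"{mode:>17}: {w}x{h} (~{scale * scale * 100:.0f}% of output pixels)")
```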
The main optimization issue lies in the lack of performance scaling. At most, we saw a 30% improvement in frame rate when dropping from the maximum to the minimum quality settings. That's why the Radeon RX 6600 managed just 41 fps at native 1080p using the Medium preset. The game still looks great, but there's not much additional performance gained by lowering settings.
We'd like to see an option tailored for older or lower-end GPUs that sacrifices visual fidelity more aggressively, allowing GPUs like the RX 6600 to push beyond 60 fps at 1080p.
Visually, the game looks very good overall, with many impressive effects (explosions are particularly satisfying). However, as noted earlier, texture quality isn't consistently high. It's reminiscent of what we observed with Space Marine 2, a game that benefited greatly from a 4K texture pack.
At this stage, it seems mainstream GPUs may be holding PC gaming back. Developers are faced with a tough decision: either optimize games to fit within 8 GB of VRAM, compromising visual quality, or prioritize higher-fidelity visuals and leave behind players limited to 8 GB GPUs.
Doom: The Dark Ages isn't the first game that works well on 8 GB cards but suffers in other ways because of it. We saw similar trade-offs with Black Myth: Wukong, for example. We believe games like this could look significantly better, without demanding more compute power, if 16 GB was the base VRAM configuration for PC gaming.
Hopefully, as with Space Marine 2, we'll see a high-resolution texture pack released for Doom, as the results could be transformative.
As for general stability, we experienced some crashes when using GeForce GPUs, especially the RTX 50 Blackwell models. While stability wasn't terrible, the game did hard-lock a few times, and we encountered multiple crashes when using frame generation on 8 GB GeForce GPUs.
Reducing the texture pool size to 1.5 GB seemed to help in those cases. In contrast, we didn't encounter a single crash or issue while testing with Radeon GPUs. We were impressed by how well the 9070 series ran the game, as well as the performance of the RDNA 3 series.
It will be interesting to see whether the game receives performance-related patches post-launch. We already know that path tracing is coming soon, so that will definitely be worth exploring.