Like it or not, Nvidia stole the show at CES 2025
Great, here's the entitled journalist telling me that the $2,000 graphics card won CES 2025. I've seen plenty of strong opinions about Nvidia's CES announcements online, but even ignoring the bloated price of the new RTX 5090, Nvidia won this year's show. And it kind of won by default. Between Intel's barebones announcements and an overstuffed AMD presentation that ignored what might be AMD's most important GPU launch ever, it's not surprising that Team Green came out ahead. But that's despite the insane price of the RTX 5090, not because of it.

Nvidia introduced a new range of graphics cards and the impressive multi-frame generation of DLSS 4, but its announcements this year were much more significant than that. It all comes down to the ways that Nvidia is leveraging AI to make PC games better, and the fruits of that labor may not pay off immediately.

There are the developer-facing tools like Neural Materials and Neural Texture Compression, both of which Nvidia briefly touched on during its CES 2025 keynote. For me, however, the standout is neural shaders. They certainly aren't as exciting as a new graphics card, at least on the surface, but neural shaders have massive implications for the future of PC games. Even without the RTX 5090, that announcement alone is significant enough for Nvidia to steal this year's show.

Neural shaders aren't some buzzword, though I'd forgive you for thinking that given the force-feeding of AI we've all experienced over the past couple of years. First, let's start with the shader. If you aren't familiar, shaders are essentially the programs that run on your GPU. Decades ago, you had fixed-function shaders; they could only do one thing. In the early 2000s, Nvidia introduced programmable shaders that had far greater capabilities. Now, we're getting neural shaders.

In short, neural shaders allow developers to add small neural networks to shader code. Then, when you're playing a game, those neural networks can be deployed on the Tensor cores of your graphics card. That unlocks a boatload of computing horsepower that, up to this point, has had fairly minimal applications in PC games. Until now, the Tensor cores were really just fired up for DLSS. (I've put a toy sketch of the idea just below.)

Nvidia has announced three uses for neural shaders so far: the aforementioned Neural Materials and Neural Texture Compression, plus Neural Radiance Cache. I'll start with the last one because it's the most interesting. Neural Radiance Cache essentially allows AI to guess what an infinite number of light bounces in a scene would look like. Real-time path tracing today can only handle so many light bounces; after a certain point, it becomes too demanding. Neural Radiance Cache not only unlocks more realistic lighting with far more bounces but also improves performance, according to Nvidia. That's because it only requires one or two traced light bounces. The rest are inferred from the neural network.

Similarly, Neural Materials compresses dense shader code that would normally be reserved for offline rendering, allowing what Nvidia calls film-quality assets to be rendered in real time.
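To make the neural shader idea concrete, here's a toy sketch in Python. To be clear, this is my own illustration and not Nvidia's code: real neural shaders run inside GPU shader code on Tensor cores, numpy merely stands in here, and the function name and layer sizes are invented.

```python
# A hypothetical "neural shader": a tiny MLP evaluated per pixel in place
# of a hand-written shading function. In a real game this would execute on
# Tensor cores inside the shader; numpy is just a stand-in.
import numpy as np

rng = np.random.default_rng(0)

# Weights for a two-layer MLP; in practice the developer trains these offline.
W1, b1 = rng.standard_normal((16, 5)) * 0.1, np.zeros(16)
W2, b2 = rng.standard_normal((3, 16)) * 0.1, np.zeros(3)

def neural_shade(normal, view_dir):
    """Map shading inputs (surface normal, view direction) to an RGB color."""
    x = np.concatenate([normal, view_dir[:2]])       # 5 input features
    h = np.maximum(W1 @ x + b1, 0.0)                 # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))      # RGB in [0, 1]

print(neural_shade(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])))
```

The point is that the shader is no longer hand-written math; it's a learned function whose weights ship with the game and get evaluated on the GPU's AI hardware as you play.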
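Neural Radiance Cache is easier to grasp the same way. Below is a minimal sketch, again mine rather than Nvidia's implementation, of the core idea: trace one or two bounces explicitly, then ask a learned cache for everything past them. Both `trace_fn` and `learned_radiance` are hypothetical stand-ins.

```python
# Sketch of the Neural Radiance Cache idea: path-trace a couple of bounces,
# then infer the contribution of all remaining bounces from a learned cache.
import numpy as np

def learned_radiance(position, direction):
    # Stand-in for the trained cache; Nvidia's version is a neural network
    # evaluated on Tensor cores. Here it just returns a flat estimate.
    return np.full(3, 0.25)

def shade(ray_origin, ray_dir, trace_fn, explicit_bounces=2):
    radiance, throughput = np.zeros(3), np.ones(3)
    origin, direction = ray_origin, ray_dir
    for _ in range(explicit_bounces):
        hit = trace_fn(origin, direction)        # user-supplied intersector
        if hit is None:
            return radiance                      # ray escaped the scene
        radiance += throughput * hit["emission"]
        throughput *= hit["albedo"]
        origin, direction = hit["position"], hit["bounce_dir"]
    # Every bounce past this point is inferred instead of traced.
    return radiance + throughput * learned_radiance(origin, direction)

def toy_trace(origin, direction):
    # Hypothetical one-surface scene: every ray hits a gray, non-emissive wall.
    return {"emission": np.zeros(3), "albedo": np.full(3, 0.8),
            "position": origin + direction, "bounce_dir": -direction}

print(shade(np.zeros(3), np.array([0.0, 0.0, 1.0]), toy_trace))
```

That split is where the claimed performance win comes from: the expensive ray tracing stops after a bounce or two, and the cache fills in the rest.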
Neural Texture Compression applies AI to texture compression, which Nvidia says saves 7x the memory compared to traditional block-based compression without any loss in quality. (For the curious, I've put a toy sketch of the idea at the end of this piece.)

That's just three applications of neural networks being deployed in PC games, and there are already big implications for how well games can run and how good they can look. It's important to remember that this is the starting line, too: AMD, Intel, and Nvidia all have AI hardware on their GPUs now, and I suspect there will be quite a lot of development on what kinds of neural networks can go into a shader in the future.

Maybe there are cloth or physics simulations that are normally run on the CPU that could instead run through a neural network on Tensor cores. Or maybe you can expand the complexity of meshes by inferring triangles that the GPU doesn't need to account for. There are the visible applications of AI, such as non-playable characters, but neural shaders open up a world of invisible AI that makes rendering more efficient, and therefore more powerful.

It's easy to get lost in the sauce of CES. If you were to believe every executive keynote, you would walk away with literally thousands of ground-breaking innovations that barely manage to move a patch of dirt. Neural shaders don't fit into that category. There are already three very practical applications of neural shaders that Nvidia is introducing, and people much smarter than me will likely dream up hundreds more.

I should be clear, though: that won't come right away. We're only scratching the surface of what neural shaders could be capable of, and it'll likely be multiple years and graphics card generations down the road before their impact is felt. But looking at the landscape of announcements from AMD, Nvidia, and Intel, only one company introduced something that could really be worthy of that ground-breaking title, and that's Nvidia.
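And here's the toy Neural Texture Compression sketch I promised. It's my own illustration of the general idea, storing a coarse latent grid plus a tiny shared decoder instead of full texels; the sizes are invented, and the 7x figure is Nvidia's claim about its trained compressor, not something this toy demonstrates.

```python
# Illustrative neural texture compression: keep a small latent grid and a
# tiny decoder in memory, and reconstruct each texel on demand instead of
# storing (or block-compressing) the full-resolution texture.
import numpy as np

rng = np.random.default_rng(1)
GRID, LATENT_DIM = 64, 4

# "Compressed" texture: a coarse latent grid (learned per texture in practice).
latents = rng.standard_normal((GRID, GRID, LATENT_DIM)).astype(np.float32)

# Tiny shared decoder (also learned in practice).
W = rng.standard_normal((3, LATENT_DIM)).astype(np.float32) * 0.1

def sample_texel(u, v):
    """Decode the RGB texel at normalized coordinates (u, v) in [0, 1)."""
    z = latents[int(v * GRID), int(u * GRID)]
    return 1.0 / (1.0 + np.exp(-(W @ z)))     # decoded RGB in [0, 1]

print(sample_texel(0.5, 0.5))
```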