• Godot 4.5 Dev5 Released

    Godot 4.5 Dev5 Released / News / June 5, 2025 / Godot, Release

    We have a new dev release of the open-source Godot game engine, Godot 4.5 Dev5. This is expected to be the last dev release before the move to a beta schedule, and it more or less represents the final set of new features we can expect to see in Godot 4.5. We have covered all of the previous dev releases: Dev1, Dev2, Dev3 and Dev4, if you are looking for other features to expect in the upcoming Godot 4.5 release. There is also a new Godot Engine asset bundle showcased in the video, the Godot StarNova Bundle on Gumroad; be sure to use the code SN40 at checkout to save $40 off!
    Highlighted new features of the Godot 4.5 Dev5 release include:

    Native visionOS Support: Introduces official platform integration for visionOS, enabling users to connect with the Godot XR community.
    GDScript Abstract Classes: Allows the creation of abstract classes using the abstract keyword.
    Shader Baker: An optional feature that significantly speeds up shader compilation, particularly for Apple devices and D3D12.
    WebAssembly SIMD Support: Integrated by default for web builds, enabling parallel processing in single-threaded environments.
    Inline Color Pickers: Displays the color represented by a variable directly within the script editor.
    SMAA 1x: Now a full engine feature, giving another screen-space anti-aliasing option.
    Bent Normal Maps: Improve specular occlusion and indirect lighting over traditional normal maps.
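    The abstract-class addition in particular can be sketched in a few lines of GDScript (a hedged illustration based on the abstract keyword described above; the class and method names are invented for this example):

    ```gdscript
    # enemy.gd: an abstract base class that cannot be instantiated directly.
    abstract class_name Enemy
    extends Node2D

    # Shared behavior lives on the base; concrete scripts build on it.
    func take_damage(amount: int) -> void:
        if amount > 0:
            queue_free()
    ```

    A concrete script then simply declares "extends Enemy" and can be instantiated as usual.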

    Key Links
    Godot 4.5 Dev5
    Interactive Change Log
    You can learn more about the Godot 4.5 Dev5 release in the video below.
    #godot #dev5 #released
    GAMEFROMSCRATCH.COM
  • Vicon Launches Active Crown Camera Tracking Solution

    Vicon today announced the launch of Active Crown, a virtual production camera tracking solution that provides users with flexible tracking for cameras positioned anywhere on In-Camera Visual Effects stages. 
    “Virtual production teams experience a unique set of challenges due to the complexity of the various technologies we combine to create immersive virtual sets,” said Tim Doubleday, head of on-set virtual production at Dimension. “Camera tracking is a critical problem to manage, allowing for accurate presentation of the scene in real-time whilst minimizing occlusion and interference within busy stages. With Active Crown, we saw a huge improvement in rotational noise right off the bat, and the setup workflow felt very nice, especially the magnetic detachable stalks. You could tell it had been through Vicon’s thoughtful product development.”
    The launch comes on the heels of Vicon's markerless motion capture launch earlier this year, which allows users to instantly visualize ideas with the Vanguard markerless motion-tracking camera and new software incorporating advanced computer vision, machine learning, and algorithms.
    Source: Vicon

    Journalist, antique shop owner, aspiring gemologist—L'Wren brings a diverse perspective to animation, where every frame reflects her varied passions.
    #vicon #launches #active #crown #camera
    WWW.AWN.COM
  • Dev snapshot: Godot 4.5 dev 5

    Replicube
    A game by Walaber Entertainment LLC

    Dev snapshot: Godot 4.5 dev 5
    By: Thaddeus Crews
    2 June 2025
    Pre-release

    Brrr… Do you feel that? That’s the cold front of the feature freeze just around the corner. It’s not upon us just yet, but this is likely to be our final development snapshot of the 4.5 release cycle. As we enter the home stretch of new features, bugs are naturally going to follow suit, meaning bug reports and feedback will be especially important for a smooth beta timeframe.

    Jump to the Downloads section and give it a spin right now, or continue reading to learn more about improvements in this release. You can also try the Web editor or the Android editor for this release. If you are interested in the latter, please request to join our testing group to get access to pre-release builds.

    The cover illustration is from Replicube, a programming puzzle game where you write code to recreate voxelized objects. It is developed by Walaber Entertainment LLC (Bluesky, Twitter). You can get the game on Steam.

    Highlights

    In case you missed them, see the 4.5 dev 1, 4.5 dev 2, 4.5 dev 3, and 4.5 dev 4 release notes for an overview of some key features which were already in those snapshots, and are therefore still available for testing in dev 5.

    Native visionOS support

    Normally, our featured highlights in these development blogs come from long-time contributors. This makes sense, of course, as it’s generally those users that have the familiarity necessary for the kinds of major changes or additions that commonly serve as highlights. That’s why it might surprise you to hear that visionOS support comes to us from Ricardo Sanchez-Saez, whose pull request GH-105628 is his very first contribution to the engine! It might not surprise you to hear that Ricardo is part of the visionOS engineering team at Apple, which certainly helps get his foot in the door, but that still makes visionOS the first officially supported platform integration in about a decade.

    For those unfamiliar, visionOS is Apple’s XR environment.
    We’re no strangers to XR as a concept (see our recent XR blogpost highlighting the latest Godot XR Game Jam), but XR platforms are as distinct from one another as traditional platforms. visionOS users have expressed a strong interest in integrating with our ever-growing XR community, and now we can make that happen. See you all in the next XR Game Jam!

    GDScript: Abstract classes

    While the Godot Engine frequently makes use of abstract classes (classes that cannot be directly instantiated), this was only ever supported internally. Thanks to the efforts of Aaron Franke, this paradigm is now available to GDScript users (GH-67777). If a user wants to introduce their own abstract class, they merely need to declare it via the new abstract keyword:

        abstract class_name MyAbstract extends Node

    The purpose of an abstract class is to create a baseline for other classes to derive from:

        class_name ExtendsMyAbstract extends MyAbstract
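    Following the snippets above, instantiation behaves as the name implies (a hedged sketch: the error on direct instantiation is implied by “cannot be directly instantiated” rather than shown in the post):

    ```gdscript
    # In a third script, reusing MyAbstract / ExtendsMyAbstract from above:
    var ok := ExtendsMyAbstract.new()  # fine: a concrete subclass
    # var bad := MyAbstract.new()      # error: abstract classes cannot be instantiated
    ```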
    Shader baker

    From the technical gurus behind implementing ubershaders, Darío Samo and Pedro J. Estébanez bring us another miracle of rendering via GH-102552: shader baker exporting. This is an optional feature that can be enabled at export time to massively speed up shader compilation. It works with ubershaders automatically, without any work from the user. Using shader baking is strongly recommended when targeting Apple devices or D3D12, since it makes the biggest difference there (over a 20× decrease in load times in the TPS demo)!

    However, it comes with tradeoffs:

    Export time will be much longer.
    Build size will be much larger, since the baked shaders can take up a lot of space.
    We have removed several MoltenVK bug workarounds from the Forward+ shader, so we no longer guarantee support for the Forward+ renderer on Intel Macs. If you are targeting Intel Macs, you should use the Mobile or Compatibility renderers.
    Baking for Vulkan can be done from any device, but baking for D3D12 needs to be done from a Windows device, and baking for Apple .metallib requires a Metal compiler (macOS with Xcode / Command Line Tools installed).

    Web: WebAssembly SIMD support

    As you might recall, Godot 4.0 initially released under the assumption that multi-threaded web support would become the standard, and only supported that format for web builds. That assumption unfortunately proved to be wishful thinking, and it was reverted in 4.3 by allowing single-threaded builds once more. However, this doesn’t mean that single-threaded environments are inherently incapable of parallel processing; it just requires alternative implementations. One such implementation, SIMD, is a perfect candidate thanks to its support across all major browsers. To that end, web-wiz Adam Scott has integrated this implementation for our web builds by default (GH-106319).

    Inline color pickers

    While it’s always been possible to see what color is assigned to an exported variable in the inspector, some users have expressed a keen interest in having this functionality within the script editor itself. This means seeing what color is represented by a variable without it needing to be exposed, as well as making it more intuitive at a glance what color a name or code corresponds to. Koliur Rahman has blessed us with this quality-of-life goodness, which adds an inline color picker (GH-105724). Now, no matter where a color is declared, users will be able to immediately and intuitively see what it actually represents, in a non-intrusive manner.

    Rendering goodies

    The renderer got a fair amount of love this snapshot; not from any one PR, but rather from a multitude of community members bringing long-awaited features to light. Raymond DiDonato helped SMAA 1x make its transition from addon to fully-fledged engine feature (GH-102330). Capry brings bent normal maps to further enhance specular occlusion and indirect lighting (GH-89988). Our very own Clay John converted our Compatibility backend to use a fragment shader copy instead of a blit copy, working around common sample rate issues on mobile devices (GH-106267).
    More technical information on these rendering changes can be found in their associated PRs.

    SMAA comparison: off / on
    Bent normal map comparison: before / after

    And more!

    There are too many exciting changes to list them all here, but here’s a curated selection:

    Animation: Add alphabetical sorting to Animation Player (GH-103584).
    Animation: Add animation filtering to animation editor (GH-103130).
    Audio: Implement seek operation for Theora video files, improve multi-channel audio resampling (GH-102360).
    Core: Add --scene command line argument (GH-105302).
    Core: Overhaul resource duplication (GH-100673).
    Core: Use Grisu2 algorithm in String::num_scientific to fix serializing (GH-98750).
    Editor: Add “Quick Load” button to EditorResourcePicker (GH-104490).
    Editor: Add PROPERTY_HINT_INPUT_NAME for use with @export_custom to allow using input actions (GH-96611).
    Editor: Add named EditorScripts to the command palette (GH-99318).
    GUI: Add file sort to FileDialog (GH-105723).
    I18n: Add translation preview in editor (GH-96921).
    Import: Add Channel Remap settings to ResourceImporterTexture (GH-99676).
    Physics: Improve performance with non-monitoring areas when using Jolt Physics.
    Porting: Android: Add export option for custom theme attributes.
    Porting: Android: Add support for 16 KB page sizes, update to NDK r28b.
    Porting: Android: Remove the gradle_build/compress_native_libraries export option.
    Porting: Web: Use actual PThread pool size for get_default_thread_pool_size.
    Porting: Windows/macOS/Linux: Use SSE 4.2 as a baseline when compiling Godot.
    Rendering: Add new StandardMaterial properties to allow users to control FPS-style objects.
    Rendering: FTI - Optimize SceneTree traversal.

    Changelog

    109 contributors submitted 252 fixes for this release. See our interactive changelog for the complete list of changes since the previous 4.5-dev4 snapshot. This release is built from commit 64b09905c.

    Downloads

    Godot exists thanks to donations from people like you. Help us continue our work: Make a Donation

    Standard build: includes support for GDScript and GDExtension.
    .NET build: includes support for C#, as well as GDScript and GDExtension.

    While engine maintainers try their best to ensure that each preview snapshot and release candidate is stable, this is by definition a pre-release piece of software. Be sure to make frequent backups, or use a version control system such as Git, to preserve your projects in case of corruption or data loss.

    Known issues

    Windows executables have been signed with an expired certificate. You may see warnings from Windows Defender’s SmartScreen when running this version, or be outright prevented from running the executables with a double-click. Running Godot from the command line can circumvent this. We will soon have a renewed certificate, which will be used for future builds.

    With every release, we accept that there are going to be various issues which have already been reported but haven’t been fixed yet. See the GitHub issue tracker for a complete list of known bugs.

    Bug reports

    As a tester, we encourage you to open bug reports if you experience issues with this release. Please check the existing issues on GitHub first, using the search function with relevant keywords, to ensure the bug you experienced is not already known. In particular, any change that would cause a regression in your projects is very important to report.

    Support

    Godot is a non-profit, open-source game engine developed by hundreds of contributors in their free time, as well as a handful of part- and full-time developers hired thanks to generous donations from the Godot community. A big thank you to everyone who has contributed their time or their financial support to the project!

    If you’d like to support the project financially and help us secure our future hires, you can do so using the Godot Development Fund. Donate now
    #dev #snapshot #godot
    GODOTENGINE.ORG
    Dev snapshot: Godot 4.5 dev 5
    Replicube A game by Walaber Entertainment LLCDev snapshot: Godot 4.5 dev 5By: Thaddeus Crews2 June 2025Pre-releaseBrrr… Do you feel that? That’s the cold front of the feature freeze just around the corner. It’s not upon us just yet, but this is likely to be our final development snapshot of the 4.5 release cycle. As we enter the home stretch of new features, bugs are naturally going to follow suit, meaning bug reports and feedback will be especially important for a smooth beta timeframe.Jump to the Downloads section, and give it a spin right now, or continue reading to learn more about improvements in this release. You can also try the Web editor or the Android editor for this release. If you are interested in the latter, please request to join our testing group to get access to pre-release builds.The cover illustration is from Replicube, a programming puzzle game where you write code to recreate voxelized objects. It is developed by Walaber Entertainment LLC (Bluesky, Twitter). You can get the game on Steam.HighlightsIn case you missed them, see the 4.5 dev 1, 4.5 dev 2, 4.5 dev 3, and 4.5 dev 4 release notes for an overview of some key features which were already in those snapshots, and are therefore still available for testing in dev 5.Native visionOS supportNormally, our featured highlights in these development blogs come from long-time contributors. This makes sense of course, as it’s generally those users that have the familiarity necessary for major changes or additions that are commonly used for these highlights. That’s why it might surprise you to hear that visionOS support comes to us from Ricardo Sanchez-Saez, whose pull request GH-105628 is his very first contribution to the engine! 
It might not surprise you to hear that Ricardo is part of the visionOS engineering team at Apple, which certainly helps get his foot in the door, but that still makes visionOS the first officially-supported platform integration in about a decade.For those unfamiliar, visionOS is Apple’s XR environment. We’re no strangers to XR as a concept (see our recent XR blogpost highlighting the latest Godot XR Game Jam), but XR platforms are as distinct from one another as traditional platforms. visionOS users have expressed a strong interest in integrating with our ever-growing XR community, and now we can make that happen. See you all in the next XR Game Jam!GDScript: Abstract classesWhile the Godot Engine utilizes abstract classes—a class that cannot be directly instantiated—frequently, this was only ever supported internally. Thanks to the efforts of Aaron Franke, this paradigm is now available to GDScript users (GH-67777). Now if a user wants to introduce their own abstract class, they merely need to declare it via the new abstract keyword:abstract class_name MyAbstract extends Node The purpose of an abstract class is to create a baseline for other classes to derive from:class_name ExtendsMyAbstract extends MyAbstract Shader bakerFrom the technical gurus behind implementing ubershaders, Darío Samo and Pedro J. Estébanez bring us another miracle of rendering via GH-102552: shader baker exporting. This is an optional feature that can be enabled at export time to speed up shader compilation massively. This feature works with ubershaders automatically without any work from the user. 
    Using shader baking is strongly recommended when targeting Apple devices or D3D12, since it makes the biggest difference there (over a 20× decrease in load times in the TPS demo)!

    However, it comes with tradeoffs:

    - Export time will be much longer.
    - Build size will be much larger, since the baked shaders can take up a lot of space.
    - We have removed several MoltenVK bug workarounds from the Forward+ shader, so we no longer guarantee support for the Forward+ renderer on Intel Macs. If you are targeting Intel Macs, you should use the Mobile or Compatibility renderers.
    - Baking for Vulkan can be done from any device, but baking for D3D12 needs to be done from a Windows device, and baking for Apple .metallib requires a Metal compiler (macOS with Xcode / Command Line Tools installed).

    Web: WebAssembly SIMD support

    As you might recall, Godot 4.0 initially released under the assumption that multi-threaded web support would become the standard, and only supported that format for web builds. This assumption unfortunately proved to be wishful thinking, and was reverted in 4.3 by allowing single-threaded builds once more. However, this doesn’t mean that single-threaded environments are inherently incapable of parallel processing; it just requires alternative implementations. One such implementation, SIMD, is a perfect candidate thanks to its support across all major browsers. To that end, web wizard Adam Scott has integrated SIMD into our web builds by default (GH-106319).

    Inline color pickers

    While it’s always been possible to see what kind of variable is assigned to an exported color in the inspector, some users have expressed a keen interest in having this functionality within the script editor itself. This would mean seeing what color is represented by a variable without it needing to be exposed, as well as making it more intuitive at a glance what color a name or code corresponds to. 
    Koliur Rahman has blessed us with this quality-of-life goodness, adding an inline color picker (GH-105724). Now, no matter where a color is declared, users will be able to immediately and intuitively see what is actually represented, in a non-intrusive manner.

    Rendering goodies

    The renderer got a fair amount of love this snapshot; not from any one PR, but rather from a multitude of community members bringing some long-awaited features to light. Raymond DiDonato helped SMAA 1x make its transition from addon to fully-fledged engine feature (GH-102330). Capry brings bent normal maps to further enhance specular occlusion and indirect lighting (GH-89988). Our very own Clay John converted our Compatibility backend to use a fragment shader copy instead of a blit copy, working around common sample rate issues on mobile devices (GH-106267). More technical information on these rendering changes can be found in their associated PRs.

    (SMAA off/on and bent normal map before/after comparison screenshots omitted.)

    And more!

    There are too many exciting changes to list them all here, but here’s a curated selection:

    - Animation: Add alphabetical sorting to Animation Player (GH-103584).
    - Animation: Add animation filtering to animation editor (GH-103130).
    - Audio: Implement seek operation for Theora video files, improve multi-channel audio resampling (GH-102360).
    - Core: Add --scene command line argument (GH-105302).
    - Core: Overhaul resource duplication (GH-100673).
    - Core: Use Grisu2 algorithm in String::num_scientific to fix serializing (GH-98750).
    - Editor: Add “Quick Load” button to EditorResourcePicker (GH-104490).
    - Editor: Add PROPERTY_HINT_INPUT_NAME for use with @export_custom to allow using input actions (GH-96611).
    - Editor: Add named EditorScripts to the command palette (GH-99318).
    - GUI: Add file sort to FileDialog (GH-105723).
    - I18n: Add translation preview in editor (GH-96921).
    - Import: Add Channel Remap settings to ResourceImporterTexture (GH-99676).
    - Physics: Improve performance with non-monitoring areas when using Jolt Physics (GH-106490).
    - Porting: Android: Add export option for custom theme attributes (GH-106724).
    - Porting: Android: Add support for 16 KB page sizes, update to NDK r28b (GH-106358).
    - Porting: Android: Remove the gradle_build/compress_native_libraries export option (GH-106359).
    - Porting: Web: Use actual PThread pool size for get_default_thread_pool_size() (GH-104458).
    - Porting: Windows/macOS/Linux: Use SSE 4.2 as a baseline when compiling Godot (GH-59595).
    - Rendering: Add new StandardMaterial properties to allow users to control FPS-style objects (hands, weapons, tools close to the camera) (GH-93142).
    - Rendering: FTI: Optimize SceneTree traversal (GH-106244).

    Changelog

    109 contributors submitted 252 fixes for this release. See our interactive changelog for the complete list of changes since the previous 4.5-dev4 snapshot. This release is built from commit 64b09905c.

    Downloads

    Godot exists thanks to donations from people like you. Help us continue our work: Make a Donation.

    The Standard build includes support for GDScript and GDExtension. The .NET build (marked as mono) includes support for C#, as well as GDScript and GDExtension.

    While engine maintainers try their best to ensure that each preview snapshot and release candidate is stable, this is by definition a pre-release piece of software. Be sure to make frequent backups, or use a version control system such as Git, to preserve your projects in case of corruption or data loss.

    Known issues

    Windows executables (both the editor and export templates) have been signed with an expired certificate. You may see warnings from Windows Defender’s SmartScreen when running this version, or outright be prevented from running the executables with a double-click (GH-106373). Running Godot from the command line can circumvent this. 
    We will soon have a renewed certificate which will be used for future builds.

    With every release, we accept that there are going to be various issues which have already been reported but haven’t been fixed yet. See the GitHub issue tracker for a complete list of known bugs.

    Bug reports

    As a tester, we encourage you to open bug reports if you experience issues with this release. Please check the existing issues on GitHub first, using the search function with relevant keywords, to ensure that the bug you experience is not already known. In particular, any change that causes a regression in your projects is very important to report (e.g. if something that worked fine in previous 4.x releases no longer works in this snapshot).

    Support

    Godot is a non-profit, open source game engine developed by hundreds of contributors in their free time, as well as a handful of part- and full-time developers hired thanks to generous donations from the Godot community. A big thank you to everyone who has contributed their time or their financial support to the project! If you’d like to support the project financially and help us secure our future hires, you can do so using the Godot Development Fund. Donate now.
  • The multiplayer stack behind MMORPG Pantheon: Rise of the Fallen

    Finding your own path is at the core of gameplay in Pantheon: Rise of the Fallen – players can go anywhere, climb anything, forge new routes, and follow their curiosity to find adventure. It’s not that different from how its creator, Visionary Realms, approaches building this MMORPG – they’re doing it their own way.

    Transporting players to the fantasy world of Terminus, Pantheon: Rise of the Fallen harkens back to classic MMOs, where accidental discovery while wandering through an open world and social interactions with other players are at the heart of the game experience.

    Creating any multiplayer game is a challenge – but a highly social online game at this scale is an epic quest. We sat down with lead programmer Kyle Olsen to talk about how the team is using Unity to connect players in this MMORPG fantasy world.

    So what makes Pantheon: Rise of the Fallen unique compared to other MMO games?

    It’s definitely the social aspect. You have to experience the world and move through it naturally. It can be a bit more of a grind in a way, but I think it connects you more to your character, to the game, and the world, instead of just sort of teleporting everywhere, joining LFG systems, or being placed in a dungeon. You learn the land a bit better; you have to navigate, and you use your eyes more than just bouncing around like a pinball from objective to objective, following quest markers and stuff. It’s more of a thought game.

    How are you managing synchronization between the player experience and specific world instances?

    We have our own network library, called ViNL, that we built for the socket transport layer. That’s the bread and butter for all of the zone communications, between zones and player to zone. SQL Server in the back end – kind of standard stuff there. 
    But most of the transports are handled by our own network library.

    How do you approach asset loading for this giant world?

    We’ve got a step where we bake our continents out into tiles, and we’ve got different backends that we can plug into that. We’ve got one that just outputs standard Prefabs, one that outputs the subscenes we were using before Unity 6, and then actual full-on Unity scenes that you can load additively, so you can choose how you want to output your content. Before Unity 6, we had moved away from Prefabs and started loading the DOTS subscenes, built on BRG.

    We also have an output that can render directly to our own custom batch render group as well, just using scriptable objects and managing our own data. So we’ve been able to experiment with the different ones and see what yields the best client performance. Prior to Unity 6, we were outputting and rendering the entire continent with subscenes, but with Unity 6 we actually switched back to using Prefabs with Instantiate Async and Addressables to manage everything.

    We’re using the Resident Drawer and GPU occlusion culling, which ended up yielding even better performance than subscenes and our own batch render group – I’m assuming because GPU occlusion culling just isn’t supported by some of the other render paths at the moment. So we’ve bounced around quite a bit, and we landed on Addressables for managing all the memory and asset loading; regular Instantiate of Prefabs with the GPU Resident Drawer seems to give the best client-side performance at the moment.

    Did you upgrade to Unity 6 to take advantage of the GPU Resident Drawer, specifically?

    Actually, I really wanted it for the occlusion culling. I wasn’t aware that only certain render paths make use of the occlusion culling, so we were attempting to use it with the same subscene rendering we were using prior to Unity 6 and realized nothing was actually being culled. 
    So we opted to switch back to the Prefab output to see what that looked like with the Resident Drawer, and occlusion culling and FPS went up.

    We had some issues initially, because Instantiate Async wasn’t available before Unity 6, so we had some stalls when we would instantiate our tiles. There were quite a few things being instantiated, but after switching over to Instantiate Async and fixing a couple of bugs, we got rid of the stall on load, and the overall frame rate was higher after load, so it was a win-win.

    Were there any really remarkable productivity gains that came with the switch to Unity 6?

    Everything I’ve talked about so far was client-facing, so our players experienced those wins. On the developer side of things, the stability and performance of the Editor went up quite a bit. Editor stability in Unity 6 has improved pretty substantially – it’s very rare to actually crash now. That alone has been, at least for the coding side, a huge win. It feels more stable in its entirety, for sure.

    How do you handle making changes and updates without breaking everything?

    We build with Addressables, using labels very heavily, and we do the Addressable packaging by labels. So if we edit a specific zone, an asset in a zone, or, say, a VFX that’s associated with a spell, only the bundles that touch that label get updated at all.

    And then there’s our own content delivery system: we have the game available on Steam and through our own patcher, and both handle delta changes, where we’re just delivering small updates through those Addressable bundles. The netcode requires the same version to connect in the first place, so the network library side of that is handled automatically in the handshake process.

    What guidance would you give someone who’s trying to tackle an MMO game or another ambitious multiplayer project?

    You kind of start small, I guess. It’s a step-by-step process. 
    If you’re a small team, you can’t bite off too much – it’d be completely overwhelming. But that holds true with any larger-scale game, not just an MMO. Probably technology selection: making smart choices up front and sticking to them. There’s going to be a lot of middleware and backend tech that you’ll have to wrangle and get working well together, and swapping to the newest cool thing all the time is not going to bode well.

    What’s the most exciting technical achievement for your team with this game?

    I think there aren’t many open-world MMOs, period, that have been pulled off in Unity. We don’t have a huge team, and we’re making a game that is genuinely massive, so we have to focus on little isolated areas, develop them as best we can, and then move on and get feedback.

    The whole package together is fairly new ground – when there is an MMO, it needs to feel like an MMO in spirit, with lots of people all around, doing their own thing. And we’ve pulled that off – I think better than pretty much any Unity MMO ever has. I think we can pat ourselves on the back for that.

    Get more insights from developers on Unity’s Resources page and here on the blog. Check out Pantheon: Rise of the Fallen in Early Access on Steam.
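    The label-based packaging strategy described in the interview – build bundles per label, and only rebuild and re-deliver the bundles whose label an edit touches – can be sketched engine-agnostically. This is a conceptual Python sketch, not Unity’s Addressables API; all bundle and asset names are illustrative:

    ```python
    # Conceptual sketch of label-based delta patching (names are illustrative,
    # not Unity's Addressables API): assets carry labels, bundles are built per
    # label, and an edit only dirties the bundles whose labels it touches.

    # label -> set of asset paths shipped in that bundle
    bundles = {
        "zone_thronefast": {"thronefast/terrain.mesh", "thronefast/keep.prefab"},
        "vfx_spells":      {"vfx/fireball.vfx", "vfx/heal.vfx"},
        "ui_common":       {"ui/font.asset", "ui/icons.atlas"},
    }

    def dirty_bundles(edited_assets: set) -> set:
        """Return the labels whose bundles must be rebuilt and re-delivered."""
        return {
            label
            for label, assets in bundles.items()
            if assets & edited_assets  # any overlap dirties the whole bundle
        }

    # Editing one spell VFX only touches the vfx_spells bundle; the zone and UI
    # bundles are untouched, so the patch stays small.
    print(dirty_bundles({"vfx/fireball.vfx"}))  # {'vfx_spells'}
    ```

    The design choice this illustrates is that patch size is bounded by label granularity: the finer the labels, the smaller each delta update, at the cost of more bundles to track.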
  • The Witcher 3 is Getting One Last Patch This Year, Bringing Cross-Platform Mods to Consoles

    CD Projekt RED has announced that it will be capping off 10 years of post-launch support for critically-acclaimed RPG The Witcher 3: Wild Hunt with one final update. Announced through a post on social media platform X, the studio has said that the update will be coming to PC, PS5 and Xbox Series X/S, and will bring with it cross-platform mod support.
    While mod support has been available in the PC version of The Witcher 3 for a while now, with this update, CD Projekt RED will allow players to create and share mods across the different platforms on which it is available. It is worth noting that this update will not be coming to the Nintendo Switch version of the game. CD Projekt RED hasn’t yet confirmed when this update will be released, saying simply that it will be coming out “later this year”.
    In an FAQ on its official website, the studio has also confirmed that players can upload and download mods to the platform being used – mod.io – at no cost. Players will, however, need to create a mod.io account that is connected to their CD Projekt RED account to access cross-platform mods. The use of mod.io will also not prevent PC players from sticking to Nexus Mods or other modding platforms.
    “As we celebrate the 10th anniversary of The Witcher 3: Wild Hunt, we’re excited to share some good news for our players and modders,” wrote the studio on its official website. “Later this year, we will release one more patch for The Witcher 3: Wild Hunt. This update will introduce cross-platform mod support across PC, PlayStation 5, and Xbox Series X/S via mod.io.”
    “Creating, sharing, and enjoying mods will be easier and more accessible, as players on PC, PS5, and Xbox Series X|S will be able to share a modding ecosystem.”
    Since its release back in 2015, The Witcher 3: Wild Hunt has proven to be an incredibly successful game for CD Projekt RED. The studio announced during its recent earnings report that more than 50 million copies of The Witcher 3 had been sold so far. This includes copies across launch platforms PC, PS4, and Xbox One, as well as platforms that got the game later – PS5, Xbox Series X/S, and Nintendo Switch.
    The current-gen version of the game, dubbed the Complete Edition, brought with it a host of new features to take advantage of modern hardware, including ray-traced global illumination and ambient occlusion, 4K textures, and even new content. These enhancements were also released for the PC version of The Witcher 3 as a free update.
    The studio had also released a video to celebrate a decade of killing monsters since The Witcher 3’s original release. The video focused mostly on the epic journey that players took alongside Geralt through the game’s main storyline as well as some of its more notable side quests.
    In the meantime, CD Projekt RED is working on The Witcher 4 for PC, PS5 and Xbox Series X/S. The game doesn’t yet have a solid release date.

    “#10YearsofTheWitcher3 and one more patch! We will introduce cross-platform mod support for PC, PlayStation 5, and Xbox Series X|S later this year. For the first time, creating, sharing, and enjoying mods for The Witcher 3: Wild Hunt will be easier and more accessible than… pic.twitter.com/qiSh9nqd8i” — The Witcher, May 30, 2025
    #witcher #getting #one #last #patch
    The Witcher 3 is Getting One Last Patch This Year, Bringing Cross-Platform Mods to Consoles
    CD Projekt RED has announced that it will be capping off 10 years of post-launch support for critically-acclaimed RPG The Witcher 3: Wild Hunt with one final update. Announced through a post on social media platform X, the studio has said that the update will be coming to PC, PS5 and Xbox Series X/S, and will bring with it cross-platform mod support. While mod support has been available in the PC version of The Witcher 3 for a while now, with this update, CD Projekt RED will allow players to create and share mods across the different platforms on which it is available. It is worth noting that this update will not be coming to the Nintendo Switch version of the game. CD Projekt RED hasn’t yet confirmed when this update will be released, saying simply that it will be coming out “later this year”. In an FAQ on its official website, the studio has also confirmed that players can upload and download mods to the platform being used – mod.io – at no cost. Players will, however, need to create a mod.io account that is connected to their CD Projekt RED account to access cross-platform mods. The use of mod.io will also not prevent PC players from sticking to Nexus Mods or other modding platforms. “As we celebrate the 10th anniversary of The Witcher 3: Wild Hunt, we’re excited to share some good news for our players and modders,” wrote the studio on its official website. “Later this year, we will release one more patch for The Witcher 3: Wild Hunt. This update will introduce cross-platform mod support across PC, PlayStation 5, and Xbox Series X/S via mod.io.” “Creating, sharing, and enjoying mods will be easier and more accessible, as players on PC, PS5, and Xbox Series X|S will be able to share a modding ecosystem.” Since its release back in 2015, The Witcher 3: Wild Hunt has proven to be an incredibly successful game for CD Projekt RED. The studio had announced during its recent earnings report that more than 50 million copies of The Witcher 3 had been sold so far. 
This includes copies across launch platforms PC, PS4 and Xbox One, as well as platforms that got the game later – PS5, Xbox Series X/S and Nintendo Switch. The current-gen version of the game, dubbed the Complete Edition, brought with it a host of new features to take advantage of modern hardware, including ray-traced global illumination and ambient occlusion, 4K textures, and even new content. These enhancements were also released for the PC version of The Witcher 3 as a free update. The studio had also released a video to celebrate a decade of killing monsters since The Witcher 3’s original release. The video focused mostly on the epic journey that players took alongside Geralt through the game’s main storyline as well as some of its more notable side quests. In the meantime, CD Projekt RED is working on The Witcher 4 for PC, PS5 and Xbox Series X/S. The game doesn’t yet have a solid release date.
“#10YearsofTheWitcher3 and one more patch! 🎉 We will introduce cross-platform mod support for PC, PlayStation 5, and Xbox Series X|S later this year. For the first time, creating, sharing, and enjoying mods for The Witcher 3: Wild Hunt will be easier and more accessible than…” pic.twitter.com/qiSh9nqd8i — The Witcher (@thewitcher), May 30, 2025
    GAMINGBOLT.COM
  • F1 25 PS5 Pro Enhancements Include Quality, Performance, and 8K Resolution Modes

    Codemasters’ next iteration in its flagship Formula One racing series arrives this week with F1 25. Alongside My Team 2.0 and the next chapter of Braking Point, it also offers some new events and modes. But how does it push the series’ boundaries for fidelity, especially on PS5 Pro?
    We spoke to creative director Lee Mathew, who confirmed three modes on the console – Quality, Performance and Resolution. Quality Mode targets 4K/60 Hz with on-track ray tracing and PlayStation Spectral Super Resolution enabled. Performance Mode delivers 4K for 120 Hz screens and offers “a crisp, smooth experience and extra clarity from the increased resolution.”
    Finally, there’s Resolution Mode, which debuted last year when F1 24 received PS5 Pro support. It runs at 8K/60 Hz, and supports ray-traced dynamic diffuse global illumination (RT DDGI) on tracks. Ambient occlusion, reflections, shadow effects and RT DDGI are also viewable in cutscenes, Photo Mode, and replays in 8K/30 Hz.
    If that wasn’t enough, couch co-op in split-screen runs at 60 Hz “without compromise.” F1 25 launches on May 30th for Xbox Series X/S, PS5, and PC, but Iconic Edition owners can start playing on May 27th. Check out our feature for everything you should know.
    GAMINGBOLT.COM
  • Godot 3D Audio – Audio Occlusion

    Audio is such an important component of any game, and 3D audio in 3D games adds massively to immersion. Today we are going to look at two different Godot add-ons that add Audio Occlusion capabilities to the Godot game engine. The first solution is VASTLY easier to use: the aptly named Godot Audio Occlusion Plugin that’s part of the Audio Arsenal bundle by Ovani Sounds. The other is part of the free and open-source Giga Audio plugin, which also provides two different 3D area audio zones.
    The Ovani Godot Audio Occlusion Plugin is described as:

    The Audio Occlusion Plugin for Godot makes sounds in your game behave more realistically. When a sound is behind a wall, door, or obstacle, this plugin will automatically make it sound muffled or filtered—just like it would in real life.
    It works by attaching an AudioOccluder (a Node3D) to any AudioStreamPlayer3D in your scene. The plugin calculates how sound would travel through the environment and adjusts the audio in real time, depending on what’s between the source and the listener.
    To do this, it simplifies your world into a voxel grid—a 3D block-based map—and simulates how sound waves move through it. You can even preview how the plugin sees your world by enabling Voxel Preview in the Inspector (after activating the included plugin.gd).
    You can easily customize settings like:

    Range
    Voxel resolution
    Collision mask
    Detection margin

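    The voxel approach described above is straightforward to reason about: sample along the segment from sound source to listener, count how many solid voxels the line crosses, and map that occlusion amount to a low-pass filter cutoff. Below is a minimal conceptual sketch in Python, not the plugin's actual GDScript API; the grid representation, sample count, and cutoff mapping are all illustrative assumptions.

```python
# Conceptual sketch of voxel-based audio occlusion (NOT the Ovani plugin's code).
# The world is reduced to a 3D occupancy grid; we sample along the
# source -> listener segment and count solid voxels in the way.

def occlusion_factor(grid, src, dst, samples=64):
    """Return 0.0 (clear path) .. 1.0 (fully blocked) between two points.

    `grid` maps integer (x, y, z) voxel coordinates to True for solid voxels.
    """
    hits = 0
    seen = set()
    for i in range(samples + 1):
        t = i / samples
        # Lerp along the segment, then snap to a voxel coordinate.
        voxel = tuple(int(s + (d - s) * t) for s, d in zip(src, dst))
        if voxel in seen:
            continue
        seen.add(voxel)
        if grid.get(voxel, False):
            hits += 1
    return min(1.0, hits / 4.0)  # saturate after ~4 solid voxels

def lowpass_cutoff_hz(occlusion, open_hz=20000.0, blocked_hz=600.0):
    """Map occlusion to a low-pass cutoff: a clear path leaves audio unfiltered."""
    return open_hz + (blocked_hz - open_hz) * occlusion

# A one-voxel wall at x=2 sits between the source and the listener.
wall = {(2, 0, 0): True}
muffle = occlusion_factor(wall, (0, 0, 0), (4, 0, 0))
cutoff = lowpass_cutoff_hz(muffle)
```

    In an actual Godot project, the resulting cutoff would drive something like an AudioEffectLowPassFilter on the sound's bus each frame; the sketch only shows the geometric core of the idea.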
    The Giga Audio plugin is described as:

    Audio Occlusion, Audio Areas, and Audio Depth Areas for your project.

    …yeah, slightly less verbose description there. You will find that as a general trend, the Giga Audio plugin has less documentation and no samples to get you up and going. Don’t worry though, we have that process mostly covered in the video below.
    Key Links
    Audio Arsenal bundle by Ovani Sounds
    Ovani Godot Audio Plugin
    Giga Audio GitHub Repository
    Giga Audio YouTube Video
    Using the links on this page to purchase the bundle helps support GFS and thanks so much if you do! You can learn more about using both of the Godot 4.x audio occlusion add-ons in the video below.
    GAMEFROMSCRATCH.COM
  • CDPR releases 37 minutes of Cyberpunk 2077 Switch 2 video - so what have we learned?

    Previewing the latest port vs Xbox Series S and PlayStation 4.

    Image credit: CD Projekt RED

    Face-off

    by Thomas Morgan
    Senior Staff Writer, Digital Foundry

    Published on May 26, 2025

    Developer CD Projekt RED has uploaded a generous batch of Switch 2 Cyberpunk 2077 footage this week - 37 minutes of direct 4K capture to be exact - giving us an early glimpse at the state of its docked 30fps quality mode. Since it releases on 5th June as a Switch 2 launch title, we don't really have too long to wait to see the real thing in action, though given that this footage comes with no "early build" disclaimer or suchlike it appears CDPR is confident in what it's showing in this material - and for good reason. Poring over all the assets, we have plenty to work with for some preliminary comparisons and even frame-rate analysis. In short, the prospects for this Switch 2 rendition are encouraging overall.

    In terms of content, CDPR is showing all manner of gameplay: driving, combat, major mission set pieces - you name it, it's included. Some clips even reveal, quite openly, the challenges Switch 2 faces in running such a complex open world game - notably for high speed car action. To its credit, frame-rate delivery at 30 frames per second is strong based on this footage overall, with drops into the 20-30fps range mainly being a problem while speeding through Night City's streets. Especially at points where multiple AI cars clog up its roads, it appears drops and traversal hitches are possible, something we're keen to re-test on its release. It's a positive showing overall, though: on-foot exploration around its markets, the bustling parade sequence teeming with NPCs, and even combat during the Phantom Liberty DLC all run at a perfect 30fps here.

    In performance terms, this showing is perhaps best put in the context of what's currently possible on last-gen consoles, and also Series S. In re-testing the base PS4 version today for example, it's sobering to find that open world roaming there still plays out with hitching, geometry pop-in and drops to 20-30fps - certainly more than is evident in this Switch 2 footage. Going hands-on with the final build ourselves is a must for any final word on this, but early signs point to fewer glaring issues in traversal and battle.

    Sit back, relax and enjoy another massive episode of DF Direct Weekly. Watch on YouTube:
    0:00:00 Introduction
    0:00:39 News 1: 37 minutes of Cyberpunk 2077 Switch 2 footage released!
    0:18:51 News 2: AMD introduces 9060 XT
    0:31:43 News 3: AMD teases "FSR Redstone"
    0:44:15 News 4: Doom has hidden performance metrics on Xbox
    0:53:38 News 5: Mario Kart World originally planned for Switch 1
    1:02:49 News 6: Hellblade 2 coming to PS5
    1:11:29 Supporter Q1: What do you make of the Nvidia/Gamers Nexus controversy?
    1:19:41 Supporter Q2: If Microsoft is working on an Xbox emulator for Windows, does that signal the end for traditional Xbox consoles?
    1:28:56 Supporter Q3: Should Nintendo release a non-portable, home-only Switch 2?
    1:35:32 Supporter Q4: Could Switch 2 become a dumping ground for last-gen games?
    1:40:29 Supporter Q5: What are your hopes and concerns for Switch 2?
    On the other hand, Xbox Series S' performance level - in its own 30fps quality mode - is a more aspirational target for Switch 2. We described this version as 'what last-gen should have been' in our original review, thanks to it boasting a broadly rock-solid 30fps experience, and it even went on to receive a 60fps mode post-release. A question mark hovers over the viability of Switch 2's own 40fps performance mode though, where we have no recent assets. More to come on this when we get the game ourselves.

    In terms of comparisons, image quality is a plus point for Switch 2 when compared to the older PS4 release, and even Series S. Much of this boils down to Nvidia's DLSS upscaling technology being available to Switch 2's Tegra 239 processor. CDPR has already confirmed the use of DLSS to hit a 1080p target in docked play in this case (and a 720p target in handheld mode). However, the actual native pixel counts are typically lower than 1080p - with dynamic scaling taking us to 1280x720 at its nadir during the most extreme 20fps drop on record here while driving. More typically though, numbers like 792p, 810p and 864p crop up at less taxing points in the footage, which is a high enough base pixel count for DLSS to work its magic and reconstruct a 1080p frame.

    For perspective, Series S' quality mode renders at a 1296p-1440p range using AMD's FSR 2 as its upscaler (as of a late 2022 patch, following an upgrade from TAA). Meanwhile, base PS4 continues to run at a 720p-900p range using CDPR's own in-house temporal AA solution. In both cases Switch 2 has an advantage in temporal stability, at least. Even though it runs at a lower pixel count than Series S, DLSS more adeptly cleans up the game's visual noise in certain scenarios compared to FSR 2. Shimmer is minimised across the dampened floors of the market area, while during static moments, fences and character detail up-close resolve with added sharpness via Switch 2's upscaler.
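    For a sense of how much reconstruction work DLSS is doing at those figures, the per-axis scale and total pixel budget relative to the 1080p output fall out of simple arithmetic (assuming 16:9 frames throughout, consistent with the 1280x720 nadir reported). A quick sketch:

```python
# Per-axis scale and total pixel ratio for the reported Switch 2 internal
# resolutions versus the 1080p DLSS output. Assumes 16:9 frames throughout.
def scale_to_1080p(height):
    width = height * 16 // 9
    axis_scale = height / 1080
    pixel_ratio = (width * height) / (1920 * 1080)
    return axis_scale, pixel_ratio

for h in (720, 792, 810, 864):
    axis, pixels = scale_to_1080p(h)
    print(f"{h}p -> {axis:.0%} per axis, {pixels:.0%} of a 1080p frame")
```

    At 864p, for instance, DLSS is rebuilding a 1080p frame from 64% of the output's pixels; at the 720p nadir it has only around 44% to work with.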


    On the downside, for all its benefits, DLSS does not always hide its lower base pixel input. Driving at speed reveals blocking artefacts on Switch 2, while a later Johnny Silverhand dialogue sequence shows similar break-up around two background NPCs playing basketball. There are some limits on show, then, but it's a respectably competitive result next to Series S all things considered. In fact, it's similar to what we found with Street Fighter 6 comparisons between these two consoles, where Switch 2 pushes a sharper, less visibly noisy frame via DLSS - and despite Capcom's fighter running at a lower native res in that case.

    Focusing on visual quality, it's a surprise to find Switch 2 is on par with both PS4 and Series S in a great many of its core settings. Paired side-by-side with each, there is scarce evidence of any differences in recreated shots: texture quality is a match, SSR is enabled across the floors, and motion blur is engaged too. There is a difference in ambient occlusion (resulting in thicker pockets of object shading on Switch 2) that needs further investigation - and it's clear that Switch 2 also loses the lens flare effect of the Series S release. That aside, the variance in time of day and NPC placement account for a majority of the differences in the open city - whereas in confined interiors that are perfectly matched, the main difference is again DLSS' impact on image quality.

    It's a positive peek at CDPR's optimisation efforts so far, and it appears to be an improvement on the build I played at Nintendo's Switch 2 event in London last month. We're just ten days away from what's undeniably one of the most technically challenging third party games on Switch 2, and it's certainly a big one for coverage plans at Digital Foundry. In fact, as I type this, there's an ongoing effort to bank as much Cyberpunk 2077 footage as possible on other platforms for comparison. Roll on June 5th!
    WWW.EUROGAMER.NET
    CDPR releases 37 minutes of Cyberpunk 2077 Switch 2 video - so what have we learned?
    CDPR releases 37 minutes of Cyberpunk 2077 Switch 2 video - so what have we learned? Previewing the latest port vs Xbox Series S and PlayStation 4. Image credit: CD Projekt RED Face-off by Thomas Morgan Senior Staff Writer, Digital Foundry Published on May 26, 2025 Developer CD Projekt RED has uploaded a generous batch of Switch 2 Cyberpunk 2077 footage week - 37 minutes of direct 4K capture to be exact - giving us an early glimpse at the state of its docked 30fps quality mode. Since it releases on 5th June as a Switch 2 launch title, we don't really have too long to wait to see the real thing in action, though given that this footage comes with no "early build" disclaimer or suchlike it appears CDPR is confident in what it's showing in this material - and for good reason. Poring over all the assets, we have plenty to work with for some preliminary comparisons and even frame-rate analysis. In short, the prospects for this Switch 2 rendition are encouraging overall. In terms of content, CDPR is showing all manner of gameplay: driving, combat, major mission set pieces - you name it, it's included. Some clips even reveal, quite openly, the challenges Switch 2 faces in running such a complex open world game - notably for high speed car action. To its credit, frame-rate delivery at 30 frames per second is strong based on this footage overall, with drops into the 20-30fps range mainly being a problem while speeding through Night City's streets. Especially at points where multiple AI cars clog up its roads, it appears drops and traversal hitches are possible, something we're keen to re-test on its release. It's a positive showing overall, though: on-foot exploration around its markets, the bustling parade sequence teeming with NPCs, and even combat during the Phantom Liberty DLC all run at a perfect 30fps here. In performance terms, this showing is perhaps best put in the context of what's currently possible on last-gen consoles, and also Series S. 
In re-testing the base PS4 version today for example, it's sobering to find that open world roaming there still plays out with hitching, geometry pop-in and drops to 20-30fps - certainly more than is evident in this Switch 2 footage. Going hands-on with the final build ourselves is a must for any final word on this, but early signs point to fewer glaring issues in traversal and battle. Sit back, relax and enjoy another massive episode of DF Direct Weekly.Watch on YouTube 0:00:00 Introduction 0:00:39 News 1: 37 minutes of Cyberpunk 2077 Switch 2 footage released! 0:18:51 News 2: AMD introduces 9060 XT 0:31:43 News 3: AMD teases "FSR Redstone" 0:44:15 News 4: Doom has hidden performance metrics on Xbox 0:53:38 News 5: Mario Kart World originally planned for Switch 1 1:02:49 News 6: Hellblade 2 coming to PS5 1:11:29 Supporter Q1: What do you make of the Nvidia/Gamers Nexus controversy? 1:19:41 Supporter Q2: If Microsoft is working on an Xbox emulator for Windows, does that signal the end for traditional Xbox consoles? 1:28:56 Supporter Q3: Should Nintendo release a non-portable, home-only Switch 2? 1:35:32 Supporter Q4: Could Switch 2 become a dumping ground for last-gen games? 1:40:29 Supporter Q5: What are your hopes and concerns for Switch 2? On the other hand, Xbox Series S' performance level - in its own 30fps quality mode - is a more aspirational target for Switch 2. We described this version as 'what last-gen should have been' in our original review, thanks to it boasting a broadly rock-solid 30fps experience, and it even went on to receive a 60fps mode post-release. A question mark hovers over the viability of Switch 2's own 40fps performance mode though, where we have no recent assets. More to come on this when we get the game ourselves. In terms of comparisons, image quality is a plus point for Switch 2 when compared to the older PS4 release, and even Series S. 
Much of this boils down to Nvidia's DLSS upscaling technology being available to Switch 2's Tegra 239 processor. CDPR has already confirmed the use of DLSS to hit a 1080p target in docked play (and a 720p target in handheld mode). However, the actual native pixel counts are typically lower than 1080p - with dynamic scaling taking us to 1280x720 at its nadir during the most extreme 20fps drop on record here while driving. More typically, though, numbers like 792p, 810p and 864p crop up at less taxing points in the footage, which is a high enough base pixel count for DLSS to (usually) work its magic and reconstruct a 1080p frame.

For perspective, Series S' quality mode renders at a 1296p-1440p range using AMD's FSR 2 as its upscaler (as of the late 2022 patch 1.61, following an upgrade from TAA). Meanwhile, base PS4 continues to run at a 720p-900p range using CDPR's own in-house temporal AA solution. In both cases Switch 2 has an advantage in temporal stability, at least. Even though it runs at a lower pixel count than Series S, DLSS more adeptly cleans up the game's visual noise in certain scenarios compared to FSR 2. Shimmer is minimised across the dampened floors of the market area, while during static moments, fences and character detail up-close resolve with added sharpness via Switch 2's upscaler.

On the downside, for all its benefits, DLSS does not always hide its lower base pixel input. Driving at speed reveals blocking artefacts on Switch 2, while a later Johnny Silverhand dialogue sequence shows similar break-up around two background NPCs playing basketball. There are some limits on show, then, but it's a respectably competitive result next to Series S all things considered.
In fact, it's similar to what we found in Street Fighter 6 comparisons between these two consoles, where Switch 2 pushes a sharper, less visibly noisy frame via DLSS - and despite Capcom's fighter running at a lower native res in that case.

Focusing on visual quality, it's a surprise to find Switch 2 is on par with both PS4 and Series S in a great many of its core settings. Paired side-by-side with each, there is scarce evidence of any differences in recreated shots: texture quality is a match, SSR is enabled across the floors, and motion blur is engaged too. There is a difference in ambient occlusion (resulting in thicker pockets of object shading on Switch 2) that needs further investigation - and it's clear that Switch 2 also loses the lens flare effect of the Series S release. That aside, variance in time of day and NPC placement accounts for the majority of the differences in the open city - whereas in confined interiors that are perfectly matched, the main difference is again DLSS' impact on image quality.

It's a positive peek at CDPR's optimisation efforts so far, and it appears to be an improvement on the build I played at Nintendo's Switch 2 event in London last month. We're just ten days away from what's undeniably one of the most technically challenging third-party games on Switch 2, and it's certainly a big one for coverage plans at Digital Foundry. In fact, as I type this, there's an ongoing effort to bank as much Cyberpunk 2077 footage as possible on other platforms for comparison. Roll on June 5th!
  • “Baby Botox” and the psychology of cosmetic procedures

    Botox injections used to be a secret for women in their 40s and 50s. But growing numbers of women in their 20s and 30s are turning to “baby Botox,” or smaller doses that are intended to prevent aging rather than combat it.

    Baby Botox is just one intervention that doctors say younger people now frequently seek, and some view the trend with concern. Dr. Michelle Hure, a physician specializing in dermatology and dermatopathology, says younger patients aren’t considering the cost of procedures that require lifetime maintenance, and are expressing dissatisfaction with their looks to a degree that borders on the absurd. Hure traces the demand for “baby Botox” and other procedures to the start of the pandemic.

    “Everyone was basically chronically online,” she told Vox. “They were on Zoom, they were looking at themselves, and there was the rise of TikTok and the filters, and people were really seeing these perceived flaws that either aren’t there or are so minimal and just normal anatomy. And they have really made it front and center where it affects them. It affects their daily life, and I really feel that it has become more of a pathological thing.”

    Hure spoke to Today, Explained co-host Noel King about the rise of “baby Botox” and her concerns with the cosmetic dermatology industry. An excerpt of their conversation, edited for length and clarity, is below. There’s much more in the full podcast, so listen to Today, Explained wherever you get podcasts, including Apple Podcasts and Spotify.
    You told us about a patient that you saw yesterday, and you said you probably wouldn’t keep her on because her mentality really worried you. Would you tell me about that young woman?

    I had this patient who was mid-20s, and really a beautiful girl. I didn’t see a lot of signs of aging on her face, but she was coming in for Botox. There wasn’t a lot for me to treat. And at the end of the session she was asking me, “So what do you think about my nasolabial folds?” Basically, it’s the fold that goes from the corner of your nose down to the corner of your mouth. It’s the barrier between the upper lip and your cheek, and when you smile it kind of folds. Of course, the more you age, the more of the line will be left behind when you’re not smiling. And she was pointing to her cheek as if there was something there, but there was nothing there. And so I had to tell her, “Well, I don’t see that, you’re perfect.” It’s a phantom nasolabial fold. It didn’t exist.

    That sort of mentality where someone is perceiving a flaw that is absolutely not there — providers need to say no. Unfortunately, they’re incentivized not to. Especially if you have a cosmetic office, if you’re a med spa, if you have a cosmetic derm or plastic surgery office, of course you’re incentivized to do what the patient wants. Well, I’m not going to do that. That’s not what I do.

    That means you may get paid for seeing her in that visit, but you’re not getting paid for putting filler in her face. I think what I hear you saying is other doctors would have done that.

    Absolutely. One hundred percent. I know this for a fact because many times those patients will come to my office to get that filler dissolved because they don’t like it. In the larger practices or practices that are private equity-owned, which is a huge problem in medicine, you are absolutely meant to sell as many products, as many procedures as possible. Oftentimes I was told to sell as much filler as possible, because every syringe is several hundred dollars.
And then if they’re there, talk them into a laser. Talk them into this, talk them into that. Then you become a salesman. For my skin check patients, I’m looking for skin cancer. I’m counseling them on how to take care of their skin. I was told, “Don’t talk to them about using sunscreen, because we want them to get skin cancer and come back.” I was pulled out of the room by my boss and reprimanded for explaining why it’s so important to use sunscreen. And so this is why I couldn’t do it anymore. I had to start my own office and be on my own. I can’t do that. That goes against everything that I believe in, in my oath. Because there is potential harm on many different levels for cosmetic procedures.

What are the risks to giving someone a cosmetic procedure that they don’t really need?

This is a medical procedure. There is always risk for any type of intervention, right? What gets me is, like, Nordstrom is talking about having injections in their stores. This is ridiculous! This is a medical procedure. You can get infection, you can get vascular occlusion that can lead to death of the tissue overlying where you inject. It can lead to blindness. This is a big deal. It’s fairly safe if you know what you’re doing. But not everyone knows what they’re doing and knows how to handle the complications that can come about. Honestly, I feel like the psychological aspect of it is a big problem. At some point you become dependent, almost, on these procedures to either feel happy or feel good about yourself. And at what point is it not going to be enough? One of my colleagues actually coined this term. It’s called perception drift. At some point, you will do these little, little, incremental tweaks until you look like a different person. And you might look very abnormal. So even if someone comes to me for something that is legitimate, it’s still: Once you start, it’s going to be hard for you to stop.
If you’re barely able to scrimp together enough to pay for that one thing, and you have it done, great. What about all the rest of your life that you’re going to want to do something? Are you going to be able to manage it?

I wonder how all of this makes you think about your profession. Most people get into medicine, it has always been my assumption, to be helpful. And you’ve laid out a world in which procedures are being done that are not only not helpful, they could be dangerous. And you don’t seem to like it very much.

This is why it is a smaller and smaller percentage of what I do in my office. I love cosmetics to an extent, right? I love to make people love how they look. But when you start using cosmetics as a tool to make them feel better about themselves in a major way, it’s a slippery slope. It should be more of a targeted thing, not making you look like an entirely different person because society has told you you can’t age. It’s really disturbing to me.
  • NPR Project

    May 23rd, 2025
    Code Design, General Development

    Clément Foucault

    Wing it! Early NPR project by Blender Studio.
    In July 2024 the NPR project officially started, with a workshop bringing together Dillo Goo Studio and Blender developers.
    While the use cases were clear, the architecture and overall design were not. To help with this, the team started working on a prototype containing many shading features essential to the NPR workflow.
    This prototype received a lot of attention, with users contributing many nice examples of what is possible with such a system. The feedback showed that there is significant interest from the community in a wide range of effects.
    However, the flexibility made possible by the prototype came at a cost: it locked NPR features within EEVEE, cutting Cycles off from part of the NPR pipeline. It also deviated from the EEVEE architecture, which could limit future feature development.
    After much consideration, the design was modified to address these core issues. The outcome can be summarized as:

    Move filters and color modification to a multi-stage compositing workflow.
    Keep shading features inside the renderer’s material system.

    Multi-stage compositing
    One of the core features needed for NPR is the ability to access and modify the shaded pixels.
    Doing it inside a render engine has been notoriously difficult. The current way of doing it inside EEVEE is to use the ShaderToRGB node, which comes with a lot of limitations. In Cycles, limited effects can be achieved using custom OSL nodes.
    As a result, in production pipelines this is often done through very cumbersome and time-consuming scene-wide compositing. The major downside is that all asset-specific compositing needs to be manually merged and managed inside the scene compositor.
    Instead, the parts of the compositing pipeline that are specific to a certain asset should be defined at the asset level. The reasoning is that these compositing nodes define the appearance of the asset and should be shared between scenes.
    Multi-stage compositing is just that! A part of the compositing pipeline is linked to a specific object or material. This part receives the rendered color, as well as its AOVs and render passes, as input, and outputs the modified rendered color.
    The object-level compositor at the bottom right defines the final appearance of the object.
    In this example the appearance of the Suzanne object is defined at the object level inside its asset file. When linked into a scene with other elements, it is automatically combined with other assets.
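The idea of per-asset compositing stages merged by the scene compositor can be sketched in plain Python. This is purely illustrative pseudocode, not Blender's API: the stage function, pass names and alpha-over merge are all assumptions made for the example.

```python
def toon_stage(passes):
    """Hypothetical object-level compositor stage: quantize the object's
    diffuse light pass into two flat tones (a toon look)."""
    return [0.9 if light > 0.5 else 0.3 for light in passes["diffuse_light"]]

def scene_composite(assets):
    """The scene compositor merges each asset's already-stylized result.
    Here: a simple alpha-over of per-asset colors onto the frame."""
    out = [0.0] * 4  # a 4-pixel "frame" for illustration
    for stage, passes in assets:
        color = stage(passes)            # asset-level stage runs first
        alpha = passes["alpha"]          # the asset's coverage
        out = [c * a + o * (1 - a) for c, a, o in zip(color, alpha, out)]
    return out

# One asset (e.g. Suzanne) carries its own stage inside its asset file;
# linking it into a scene automatically combines it with other assets.
suzanne = (toon_stage, {"diffuse_light": [0.8, 0.2, 0.6, 0.1],
                        "alpha": [1.0, 1.0, 0.0, 1.0]})
frame = scene_composite([suzanne])
```

The key design point mirrored here is that the stylization travels with the asset: the scene compositor never needs to know how each object was shaded.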
    From left to right: Smooth Toon shading with alpha over specular, Pixelate, Half-Tone with Outline
    This new multi-stage compositing will be reusing the compositor nodes, with a different subset of nodes available at the object and material levels. This is an opportunity to streamline the workflow between material nodes editing and compositor nodes.
    Grease Pencil Effects can eventually be replaced by this solution.
    Final render showing 3 objects with different stylizations seamlessly integrated.
    There is a lot more to be said about this feature. For more details, see the associated development task.
    Anti-Aliased output
    A major issue when working with a compositing workflow is anti-aliasing. When compositing anti-aliased input, results often include hard-to-resolve fringes.
    Left: Render Pass, Middle: Object Matte, Right: Extracted Object Render Pass
    The common workaround to this issue is to render at a higher resolution without AA and downscale after compositing. This method is very memory-intensive and only allows for 4x or 9x AA, usually with less than ideal filtering. Another option is to use post-process AA filters, but that often results in flickering animations.
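The supersampling workaround above amounts to a box downscale: rendering at 3x resolution per axis and averaging 3x3 blocks gives 9x AA, at roughly 9x the memory of the final frame. A minimal 1D sketch (illustrative only, not Blender code):

```python
def downscale(hi_res, factor):
    """Box-filter a scanline rendered at `factor`x resolution:
    each output pixel is the average of `factor` input pixels."""
    return [sum(hi_res[i:i + factor]) / factor
            for i in range(0, len(hi_res), factor)]

# A 12-pixel scanline with a hard edge, rendered without AA at 3x
# resolution, then downscaled 3:1 to 4 pixels. The edge pixel ends up
# with fractional coverage - the anti-aliasing this workaround buys.
scanline = [1.0] * 7 + [0.0] * 5
low = downscale(scanline, 3)
```

The 4x/9x limit mentioned above falls out of this scheme: only integer per-axis factors (2x, 3x) are practical, and the box filter is what makes the result "less than ideal".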
    Left: Anti-aliasing done before compositor-based shading. Right: Anti-aliasing done after the compositor.

    The solution to this problem is to run the compositor for each AA step and filter the composited pixels like a renderer would do. This will produce the best image quality with only the added memory usage of the final frame.
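The per-sample scheme can be sketched as follows. This is a toy model, not Blender code: the renderer, the compositing op and the 4-pixel frame are all stand-ins invented for the example.

```python
def render_sample(jitter):
    """Stand-in renderer: a hard edge at x = 1.7, seen through
    stratified sub-pixel jitter (no AA applied yet)."""
    return [1.0 if x + jitter < 1.7 else 0.0 for x in range(4)]

def composite(color):
    """A sharp, non-linear compositing op (a threshold). Run on an
    already anti-aliased image, this would leave fringes on edges."""
    return [1.0 if c > 0.5 else 0.0 for c in color]

def render_with_aa(n_samples):
    acc = [0.0] * 4
    for i in range(n_samples):
        jitter = (i + 0.5) / n_samples                  # AA step offset
        composited = composite(render_sample(jitter))   # composite per sample
        acc = [a + c / n_samples for a, c in zip(acc, composited)]
    return acc  # the AA filtering (averaging) happens last

frame = render_with_aa(8)
```

Because the threshold runs on each aliasing-free sample and only the composited results are averaged, the edge pixel converges to its true coverage instead of being corrupted by a pre-blended input, and memory usage stays at one final-resolution frame.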

    Converged input
    One of the main issues with modern renderers is that their output is noisy. This doesn’t play well with NPR workflows as many effects require applying sharp transformations of the rendered image or light buffers.
    For instance, this is what happens when applying a constant-interpolated color ramp to the ambient occlusion node: the averaging operation runs on the noisy output of the transformation, instead of the noisy input being converged first and the transformation applied afterwards.
    Left: Original AO, Middle: Constant Ramp in material, Right: Ramp applied in compositor

    Doing these effects at compositing time gives us the final converged image as input. However, as explained above, the compositor needs to run before the AA filtering.
    So the multi-stage compositors need to be able to run on converged or denoised inputs while still running before anti-aliasing. In other words, the render samples will be distributed between render pass convergence and final compositor AA.
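One way to read that sample split: each compositor/AA step consumes its own converged set of render passes, so the total budget is the product of the two counts. A small sketch with illustrative numbers (not how Blender actually schedules samples):

```python
def split_budget(total_samples, aa_steps):
    """Distribute a total render-sample budget: each compositor/AA step
    gets `total_samples / aa_steps` samples to converge its input passes."""
    assert total_samples % aa_steps == 0, "budget must divide evenly"
    convergence_samples = total_samples // aa_steps
    return [(step, convergence_samples) for step in range(aa_steps)]

# 64 total samples split over 8 compositor/AA steps: 8 samples converge
# the passes fed to each compositor run before the final AA filtering.
schedule = split_budget(64, 8)
```

The trade-off this exposes is the one described above: more AA steps mean smoother compositor output edges, but noisier (less converged) pass inputs per step, and vice versa.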
    Engine Features
    While improving the compositing workflow is important for stylization flexibility, some features are better suited to the inside of the render engine. This allows built-in interaction with light transport and other renderer features. These features are not exclusive to NPR workflows and fit well inside the engine architecture.
    As such, the following features are planned to be directly implemented inside the render engines:

    Ray Queries
    Portal BSDF
    Custom Shading
    Depth Offset

    The development will start after the Blender 5.0 release, planned for November 2025.
    Meanwhile, to follow the project, subscribe to the development task. For more details about the project, join the announcement thread.

    Support the Future of Blender
    Donate to Blender by joining the Development Fund to support the Blender Foundation’s work on core development, maintenance, and new releases.

    ♥ Donate to Blender
    #npr #project
    NPR Project
    NPR Project May 23rd, 2025 Code Design, General Development Clément Foucault html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "; Wing it! Early NPR project by Blender Studio. In July 2024 the NPRproject officially started, with a workshop with Dillo Goo Studio and Blender developers. While the use-cases were clear, the architecture and overall design were not. To help with this, the team started working in a prototype containing many shading features essential to the NPR workflow. This prototype received a lot of attention, with users contributing a lot of nice examples of what is possible with such system. The feedback showed that there is a big interest from the community for a wide range of effects. However the amount of flexibity made possible with the prototype came with a cost: it locked NPR features within EEVEE, alienating Cycles from part of the NPR pipeline. It also deviated from the EEVEE architecture, which could limit future feature development. After much consideration, the design was modified to address these core issues. The outcome can be summarized as: Move filters and color modification to a multi-stage compositing workflow. Keep shading features inside the renderer’s material system. Multi-stage compositing One of the core feature needed for NPR is the ability to access and modify the shaded pixels. Doing it inside a render engine has been notoriously difficult. The current way of doing it inside EEVEE is to use the ShaderToRGB node, which comes with a lot of limitations. In Cycles, limited effects can be achieved using custom OSL nodes. As a result, in production pipeline, this is often done through very cumbersome and time consuming scene-wide compositing. The major downside is that all asset specific compositing needs to be manually merged and managed inside the scene compositor. Instead, the parts of the compositing pipeline that are specific to a certain asset should be defined at the asset level. 
The reasoning is that these compositing nodes define the appearance of this asset and should be shared between scene. Multi-stage compositing is just that! A part of the compositing pipeline is linked to a specific object or material. This part receives the rendered color as well as its AOVs and render passes as input, and output the modified rendered color. The object level compositor at the bottom right define the final appearance of the object In this example the appearance of the Suzanne object is defined at the object level inside its asset file. When linked into a scene with other elements, it is automatically combined with other assets. From left to right: Smooth Toon shading with alpha over specular, Pixelate, Half-Tone with Outline This new multi-stage compositing will be reusing the compositor nodes, with a different subset of nodes available at the object and material levels. This is an opportunity to streamline the workflow between material nodes editing and compositor nodes. Grease Pencil Effects can eventually be replaced by this solution. Final render showing 3 objects with different stylizations seamlessly integrated. There are a lot more to be said about this feature. For more details see the associated development task. Anti-Aliased output A major issue with working with the a compositing workflow is Anti-Aliasing. When compositing anti-aliased input, results often include hard to resolve fringes. Left: Render Pass, Middle: Object Matte, Right: Extracted Object Render Pass The common workaround to this issue is to render at higher resolution without AA and downscale after compositing. This method is very memory intensive and only allows for 4x or 9x AA with usually less than ideal filtering. Another option is to use post-process AA filters but that often results in flickering animations. Left: Anti-Aliased done before compositor based shadingRight: Anti-Aliasing is done after compositor. 
The solution to this problem is to run the compositor for each AA step and filter the composited pixels like a renderer would. This produces the best image quality with only the added memory usage of the final frame.

Converged input

One of the main issues with modern renderers is that their output is noisy. This doesn’t play well with NPR workflows, as many effects require applying sharp transformations to the rendered image or light buffers. For instance, this is what happens when applying a constant interpolated color ramp to the ambient occlusion output: the averaging operation runs on the transformed noisy output, instead of the noisy input being converged first and the transformation applied afterwards.

Left: Original AO. Middle: Constant Ramp in material. Right: Ramp applied in compositor (desired).

Doing these effects at compositing time gives us the final converged image as input. However, as explained above, the compositor needs to run before the AA filtering. So the multi-stage compositors need to be able to run on converged or denoised inputs while remaining AA free. In other words, the render samples will be distributed between render-pass convergence and final compositor AA.

Engine Features

While improving the compositing workflow is important for stylization flexibility, some features are better suited to the inside of the render engine. This allows built-in interaction with light transport and other renderer features. These features are not exclusive to NPR workflows and fit well inside the engine architecture. As such, the following features are planned to be implemented directly inside the render engines:

Ray Queries
Portal BSDF
Custom Shading
Depth Offset

Development will start after the Blender 5.0 release, planned for November 2025. Meanwhile, to follow the project, subscribe to the development task. For more details about the project, join the announcement thread.
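The converged-input argument can be checked with a one-pixel experiment: averaging a thresholded noisy signal is not the same as thresholding the average. This is a hedged toy sketch with illustrative values, not Blender code:

```python
import numpy as np

# Toy one-pixel experiment: estimate ambient occlusion with many
# one-sample renders, then apply a constant (hard-step) color ramp
# either per sample (material) or after convergence (compositor).
rng = np.random.default_rng(1)
true_ao = 0.45
samples = (rng.random(4096) < true_ao).astype(float)  # noisy 0/1 AO estimates

def constant_ramp(v):
    # Sharp transformation: everything above 0.5 becomes white, else black.
    return np.where(v > 0.5, 1.0, 0.0)

# Ramp inside the material: runs on each noisy sample, and the renderer
# averages afterwards -- the hard edge is destroyed, leaving a mid-gray.
in_material = constant_ramp(samples).mean()

# Ramp in the compositor, on the converged pass: crisp, as intended.
in_compositor = float(constant_ramp(samples.mean()))
```

The `in_material` result converges to the raw AO value instead of a hard black/white split, which is why sharp ramps need converged (or denoised) input, and why the sample budget has to be split between convergence and compositor AA.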
Support the Future of Blender
Donate to Blender by joining the Development Fund to support the Blender Foundation’s work on core development, maintenance, and new releases. ♥ Donate to Blender
#npr #project
    CODE.BLENDER.ORG