Blender
Blender is the free and open source 3D creation suite. Free to use for any purpose, forever.
  • 508 people like this
  • 271 posts
  • 10 photos
  • 104 videos
  • company
Recent updates
  • NPR Project

    NPR Project

    May 23rd, 2025
    Code Design, General Development

    Clément Foucault

    Wing it! Early NPR project by Blender Studio.
    In July 2024 the NPR (Non-Photorealistic Rendering) project officially started with a workshop with Dillo Goo Studio and Blender developers.
    While the use cases were clear, the architecture and overall design were not. To help with this, the team started working on a prototype containing many shading features essential to the NPR workflow (such as filter support, custom shading, and AOV access).
    This prototype received a lot of attention, with users contributing many nice examples of what is possible with such a system. The feedback showed strong interest from the community in a wide range of effects.
    However, the flexibility made possible by the prototype came at a cost: it locked NPR features within EEVEE, cutting Cycles off from part of the NPR pipeline. It also deviated from the EEVEE architecture, which could limit future feature development.
    After much consideration, the design was modified to address these core issues. The outcome can be summarized as:

    Move filters and color modification to a multi-stage compositing workflow.
    Keep shading features inside the renderer’s material system.

    Multi-stage compositing
    One of the core features needed for NPR is the ability to access and modify the shaded pixels.
    Doing this inside a render engine has been notoriously difficult. The current way of doing it inside EEVEE is to use the ShaderToRGB node, which comes with a lot of limitations. In Cycles, limited effects can be achieved using custom OSL nodes.
    As a result, in production pipelines this is often done through very cumbersome and time-consuming scene-wide compositing. The major downside is that all asset-specific compositing needs to be manually merged and managed inside the scene compositor.
    Instead, the parts of the compositing pipeline that are specific to a certain asset should be defined at the asset level. The reasoning is that these compositing nodes define the appearance of the asset and should be shared between scenes.
    Multi-stage compositing is just that! A part of the compositing pipeline is linked to a specific object or material. This part receives the rendered color as well as its AOVs and render passes as input, and outputs the modified rendered color.
    The object-level compositor at the bottom right defines the final appearance of the object
    In this example the appearance of the Suzanne object is defined at the object level inside its asset file. When linked into a scene with other elements, it is automatically combined with other assets.
    From left to right: Smooth Toon shading with alpha over specular, Pixelate, Half-Tone with Outline
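    To make the data flow concrete, here is a minimal plain-Python sketch (the names and the single grayscale "pixel" values are purely illustrative, not Blender's API): each asset owns a compositing stage that receives the rendered color plus its AOVs and returns a modified color, and the scene pipeline simply looks the stage up per object.

        # Hypothetical per-object stage: a hard half-tone cut on the diffuse AOV.
        def half_tone(color, aovs):
            return 1.0 if aovs["diffuse"] > 0.5 else 0.0

        # Stages travel with the asset, not with the scene compositor.
        object_stages = {"Suzanne": half_tone}

        def composite_object(name, rendered_color, aovs):
            # Run the object-level stage if the asset defines one; otherwise
            # pass the rendered color through unchanged.
            stage = object_stages.get(name)
            return stage(rendered_color, aovs) if stage else rendered_color

        print(composite_object("Suzanne", 0.7, {"diffuse": 0.8}))  # -> 1.0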
    This new multi-stage compositing will reuse the compositor nodes, with a different subset of nodes available at the object and material levels. This is an opportunity to streamline the workflow between material node editing and compositor nodes.
    Grease Pencil Effects can eventually be replaced by this solution.
    Final render showing 3 objects with different stylizations seamlessly integrated.
    There is a lot more to be said about this feature. For more details, see the associated development task.
    Anti-Aliased output
    A major issue when working with a compositing workflow is anti-aliasing (AA). When compositing anti-aliased input, results often include hard-to-resolve fringes.
    Left: Render Pass, Middle: Object Matte, Right: Extracted Object Render Pass
    The common workaround to this issue is to render at a higher resolution without AA and downscale after compositing. This method is very memory-intensive and only allows for 4x or 9x AA, usually with less-than-ideal filtering. Another option is to use post-process AA filters, but that often results in flickering animations.
    Left: Anti-aliasing done before compositor-based shading. Right: Anti-aliasing done after the compositor.

    The solution to this problem is to run the compositor for each AA step and filter the composited pixels like a renderer would. This produces the best image quality with only the added memory usage of the final frame.
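    As a rough illustration of that order of operations, here is a plain-Python sketch (all names are hypothetical stand-ins, operating on a single grayscale pixel): the compositor runs once per AA sample, and only the composited results are averaged, just like a renderer's pixel filter, so only one final-resolution frame needs to be kept around.

        import random

        def render_aa_sample(rng):
            # Stand-in for one aliased render sample: the jittered sample
            # either hits the object (1.0) or the background (0.0).
            return 1.0 if rng.random() < 0.3 else 0.0

        def run_compositor(color):
            # Stand-in for a sharp, non-linear NPR transform (e.g. a ramp).
            return 1.0 if color > 0.5 else 0.0

        rng = random.Random(7)
        aa_steps = 16
        # Filter (average) the *composited* samples, not the raw ones.
        pixel = sum(run_compositor(render_aa_sample(rng))
                    for _ in range(aa_steps)) / aa_steps
        print(pixel)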

    Converged input
    One of the main issues with modern renderers is that their output is noisy. This doesn’t play well with NPR workflows, as many effects require applying sharp transformations to the rendered image or light buffers.
    For instance, this is what happens when applying a constant-interpolated color ramp to the ambient occlusion node: the averaging operation runs on the noisy output of the ramp instead of running on the noisy input before the transformation.
    Left: Original AO, Middle: Constant Ramp in material, Right: Ramp applied in compositor (desired)
    Doing these effects at compositing time gives us the final converged image as input. However, as explained above, the compositor needs to run before the AA filtering.
    So the multi-stage compositor needs to be able to run on converged or denoised inputs while remaining AA-free. In other words, the render samples will be distributed between render-pass convergence and final compositor AA.
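    The AO example can be reproduced in a few lines of plain Python (illustrative only): applying the constant ramp per noisy sample and then averaging melts the hard cut into grey, while converging the AO pass first and ramping once keeps the cut crisp, which is what running the ramp in the multi-stage compositor achieves.

        import random, statistics

        def constant_ramp(ao):
            # Constant-interpolated color ramp: a hard black/white cut at 0.5.
            return 1.0 if ao > 0.5 else 0.0

        rng = random.Random(1)
        # Noisy per-sample AO estimates, clamped to [0, 1].
        ao_samples = [min(max(rng.gauss(0.45, 0.2), 0.0), 1.0) for _ in range(256)]

        # Ramp in the material: each noisy sample is ramped, then averaged.
        in_material = statistics.mean(constant_ramp(a) for a in ao_samples)

        # Ramp in the compositor: the AO pass converges first, then one ramp.
        in_compositor = constant_ramp(statistics.mean(ao_samples))

        print(in_material, in_compositor)  # soft grey vs. a clean 0.0/1.0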
    Engine Features
    While improving the compositing workflow is important for stylization flexibility, some features are better suited to living inside the render engine. This allows built-in interaction with light transport and other renderer features. These features are not exclusive to NPR workflows and fit well within the engine architecture.
    As such, the following features are planned to be directly implemented inside the render engines:

    Ray Queries
    Portal BSDF
    Custom Shading
    Depth Offset

    The development will start after the Blender 5.0 release, planned for November 2025.
    Meanwhile, to follow the project, subscribe to the development task. For more details about the project, join the announcement thread.

    Support the Future of Blender
    Donate to Blender by joining the Development Fund to support the Blender Foundation’s work on core development, maintenance, and new releases.

    ♥ Donate to Blender
  • Projects Update – Q2/2025

    Projects Update – Q2/2025

    At the beginning of 2025, several projects were announced as the initial targets for the year. Now that we’re in the middle of the second quarter, let’s take a look at where each project stands.

    Complete
    Vulkan
    Vulkan is now officially supported in the upcoming 4.5 LTS release, offering feature parity with, and performance comparable to, the OpenGL backend.
    The next step is to monitor bug reports and eventually make it the default backend for non-macOS systems.

    Almost Complete
    UV Sync
    All issues from the (five-year-old!) UV Sync design task have been addressed. Development is ongoing in the pr-uv-sync-select branch.
    The remaining work involves finalizing the port of certain selection operators and resolving minor issues. Follow the project at #136817.
    Better integration across node trees
    The compositor is moving closer to feature parity with shading and geometry nodes, thanks to the addition of new nodes such as Vector Math, Vector Rotate, Vector Mix, Value Mix, Clamp, Float Curve, and Blackbody.
    Compositor Assets Mockup.
    The next step is to expose most of the existing node options as socket inputs (#137223). This will enable compositor node assets to be bundled with Blender.
    After that, the focus will remain on simplifying onboarding for new compositor users by making node trees reusable (#135223).

    In Progress
    Project Setup
    The first milestone (Blender variables) was recently merged. The next step is to handle Path Template errors in a more robust way.
    Project Setup Mockup.
    After that, work will begin on the Project Definition phase. Follow the project at #133001.
    Shape Keys Improvements
    What was originally framed as a performance problem has shifted focus toward usability and management of shape keys.
    As part of this, new operators for duplicating and updating shape keys have already been merged, with their mirrored counterparts to follow.
    Additionally, work on multi-select and editing of shape keys is gaining momentum. Follow the project at #136838.
    Remote Asset Libraries
    The project has broadened in scope to address usability improvements, including:

    Preview generation for all selected assets.
    A more compact view with a horizontal list and two-line names.
    Snapping for dragged collection assets.

    Remote Asset Library mockup.
    Meanwhile, the code for handling downloads has been submitted for review but encountered a setback.
    Development is taking place in the remote-asset-library-monolithic branch. Follow the project at #134495.
    Hair Dynamics
    The Hair Dynamics project consists of multiple deliverables:

    Embedded Linked Data (#133801) — still under review
    Bundles and Closures — merged as experimental
    Declarative Systems — published as a design proposal
    Hair Solver — see below

    For the hair (physics) solver, the plan is to use the upcoming Blender Studio project—currently unnamed but focused on a facial rig—as a use case, at least to develop it as an experimental feature.
    This will also involve addressing existing issues with animated hair and integrating animation and simulation for the same character—initially using separate hair objects.

    Design/Prototype
    Texture cache and mipmaps
    Initially unplanned due to limited resources, this project was eventually added to the agenda. A rudimentary prototype is already available in the cycles-tx branch. In the Attic and Bistro benchmark scenes, memory usage is already significantly reduced.
    These scenes were chosen because they include texture cache (.tx) files. To learn how to test it, follow the project at #68917.
    NPR
    The NPR prototype received extensive feedback, helping to map out all planned and unsupported use cases for the project.
    More details, including the final design and development plans, will be shared soon.
    In brief:

    Some features will be implemented as EEVEE nodes.
    Others will be enabled via per-material/object compositing nodes.

    EEVEE features will be prioritized first, while the per-material/object compositing nodes require further design.
    Story Tools
    So far, the focus has been on prototyping and finalizing the design to pull VSE strips out of the scene and create a dedicated sequence data-block.
    Story Tools Mockup.
    The next step is to finalize the design by either:

    Exploring once more the idea of keeping the sequence as part of the scene; or
    Settling on a per-camera and file settings design.

    After that, a technical breakdown will follow, then development. Follow the project at #131329.

    Not Started
    Layered Sculpting
    Layered sculpting hasn’t started yet. The original plan was to first address multi-resolution undo and rebuild issues, followed by fixing propagation spikes.
    However, in recent months the focus shifted to tackling sculpting performance issues present since the 4.3 release, mainly:

    Performance problems with smaller brush strokes.
    Local brush management.

    The performance patches are currently under review and expected in time for the upcoming 4.5 LTS release. Once completed, work on undo and multi-resolution will resume.
    Dynamic Overrides
    The team is currently focused on the 5.0 breaking change targets and other tasks, so they have not yet been able to start the initial changes aimed at simplifying the overrides process.

    And more…
    Beyond these projects, daily activity continues across various development modules. For a more frequent, day-to-day view of progress, check out the Weekly Updates and Module Meetings.
    All this progress is made possible thanks to donations and ongoing community involvement and contributions.

    Support the Future of Blender
    Donate to Blender by joining the Development Fund to support the Blender Foundation’s work on core development, maintenance, and new releases.

    ♥ Donate to Blender
  • Frame Node Improvements

    Frame Node Improvements

    May 19th, 2025
    General Development

    Jacques Lucke

    Larger node setups have a tendency to become hard to understand. One of the main tools at our disposal to organize them are frames. A frame allows grouping nodes together visually and giving them a label.
    Unfortunately, using frames extensively has been rather annoying for various reasons, including hard-to-reach shortcuts (Ctrl+J, Alt+P, F2), bad readability of nested frames, and various smaller bugs.
    In an effort to encourage and help users to build more understandable node groups, we designed and implemented various workflow improvements for dealing with frames. These will be available in Blender 4.5 LTS.

    The F Key
    The most important functionality related to frames has been moved to the F key, which is easy to reach and remember. It has two main functions:

    Create a new Frame around the selected nodes, or a new empty frame. This will also open the rename popup where users can start typing right away to set a label. This makes creating labeled frames as easy as it can be, which is good as those help much more with readability than unlabeled frames.
    Detach and attach selected nodes while transforming them. This makes moving nodes between frames a breeze, unlike before where frames felt like they got in the way.

    The old functionality has been moved to J.

    Visualization Improvements
    Frame Highlighting: It used to be difficult to tell which frame nodes would be attached to while moving them. This has been solved by highlighting the border of the frame under the cursor.
    Better Labels: Frame labels now sit a bit higher so they’re easier to read. The frame size also adjusts better by considering the full area/bounding box of the nodes inside.
    Alternating Frame Colors: Frames are quite dark in the default theme, which made nested frames very hard to see (they were essentially black on black). Now, nested frames use alternating shading depending on the nesting depth. The frame color from the theme is used, just slightly darker or lighter. This keeps all frames visible regardless of the nesting level.
    Frame visualization improvements.

    “Frame First” Workflow
    With all of these improvements, working with frames has become much more enjoyable than before. Our hope is that this lowers the barrier to using frames so much that they are used more frequently, and not just added as an afterthought when cleaning up a node tree later on.
    In fact, I’d even be curious to see how well a “frame first” workflow would work for people. The thought came up when thinking about how when writing code, one usually first writes a function or variable name before writing any of the actual logic. Maybe something similar could work for node systems too, where a labeled frame is created before the first node is added inside. However, it’s not clear yet how well that works in practice right now or if more features would be needed to make that feasible. Let me know!

    Try It!
    Download the latest Blender 4.5 LTS Alpha build and test out the new workflow!

    Support the Future of Blender
    Donate to Blender by joining the Development Fund to support the Blender Foundation’s work on core development, maintenance, and new releases.

    ♥ Donate to Blender
  • Blender at Annecy 2025

    Blender at Annecy 2025
    May 13th, 2025
    Press Releases
    Fiona Cohen
    As is now tradition, a small delegation of the Blender team will attend the Annecy festival, June 9-13.
    This year, though, there will be no booth, but a bigger gathering for people who want to connect.
    Blender for Breakfast – new formula
    Following the success of the last two years, Blender is going to host a “studios” breakfast session (Birds of a Feather-style), this time as part of the official program and in a bigger venue within the Festival-MIFA.
    Focused on connecting producers, developers, TDs and artists working in studios where Blender is part of the pipeline, the goal is to share ongoing Blender development, Blender Studio pipeline insights, and give visibility to teams who wish to share their experience using Blender in production.
    All while being treated to some delicious croissants and coffee!
    The event will take place on Tuesday, June 10 at 09:00 at Le Campus Mifa – Salle 1.
    Limited spots are available and you need an accreditation to enter the Campus – make sure you register if you are interested, and we’ll send confirmations closer to the event.
    See you there!

    Source: https://www.blender.org/press/blender-at-annecy-2025/
  • Declarative Systems in Geometry Nodes

    Declarative Systems in Geometry Nodes

    May 2nd, 2025
    General Development

    Jacques Lucke

    Currently, one of our primary goals for Geometry Nodes is to get to a point where we can build high-level and easy-to-use node group assets for physics simulations. Our general approach was described briefly last year. The goal of this document is to go into a bit more detail.
    For the examples below I’ll use a particle simulation because it’s the easiest, but the same applies to many other simulation types like hair, cloth, and fluids.

    Two Different Views
    There are two different ways to look at a particle simulation:

    Declarative view: Describe the intended behavior of the particles by combining various existing behaviors like emitters, forces, colliders, surface attachment, and more.
    Imperative view: Describe the computation steps to simulate the particles. Such steps are, for example, “integrate forces”, “update positions”, “find collision points”, etc.

    Both are totally valid views of a particle simulation, but depending on what exactly you’re trying to achieve, one view is more practical than the other. Most particle simulations can be described using a combination of various behaviors, in which case the declarative view is more practical. But when you’re building a particle system from scratch or have custom needs, the imperative approach works better because it has more flexibility, at the cost of dealing with low-level details.

    Current State
    Geometry Nodes is already quite decent at building particle systems using the imperative approach. Obviously, there are various missing built-in features to really be able to tackle any kind of simulation, but the overall structure is there.
    Building particle systems using the declarative approach by combining a set of behaviors is much harder, because it requires creating node groups that describe a certain behavior, which are passed to a “solver” that performs the actual computations in the right order.

    Goal
    We are working towards various core features (bundles, closures, lists) that together will allow building declarative systems. The image below shows how individual behaviors could be encapsulated by node groups. All behaviors have the same type, which allows easily combining them to form higher-level behaviors. Also note that the order in which they are passed to the particle solver is largely irrelevant.
    This approach to building particle solvers allows someone to focus on building the actual solver that exposes a specific interface, while anyone else can just build and combine behaviors for that solver without worrying too much about how it works internally.
    Since all behavior node groups have a very similar structure (various inputs, one behavior output), we can also build a higher-level UI for managing them. For example, if the Particle Solver node group is used as a modifier, the UI could look like the following mockup. Note that each panel corresponds to one behavior and that an arbitrary number of behaviors can be added. The individual behaviors would be node group assets coming from an asset library.
    Of course one always has more flexibility in the node editor, but this kind of UI should be capable of solving the most common needs without requiring the node editor at all.
    Lastly, this approach of using declarative systems also simplifies creating add-ons that have a custom high-level UI for some node tree. That’s because it’s way easier to write code that merges a few behaviors together automatically than it is to edit potentially deeply nested node groups.
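    To illustrate the split between the two views, here is a minimal plain-Python sketch (the names are hypothetical, not the Geometry Nodes API): behaviors all share the same shape, so users can combine them in any order and hand them to a solver, while the solver alone owns the imperative computation steps.

        def gravity(particles, dt):
            # Behavior: accelerate every particle downwards.
            for p in particles:
                p["vel"] -= 9.81 * dt

        def ground_collider(particles, dt):
            # Behavior: stop particles that fall below the ground plane.
            for p in particles:
                if p["pos"] < 0.0:
                    p["pos"], p["vel"] = 0.0, 0.0

        def particle_solver(particles, behaviors, dt=0.1, steps=10):
            # The solver owns the "how": it decides when behaviors run and
            # performs the integration; users only declare the "what".
            for _ in range(steps):
                for behavior in behaviors:
                    behavior(particles, dt)
                for p in particles:
                    p["pos"] += p["vel"] * dt

        particles = [{"pos": 1.0, "vel": 0.0}]
        particle_solver(particles, [gravity, ground_collider])
        print(particles)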
    Beyond Simulations
    As mentioned, this behavior-based approach works well for all kinds of physics simulations. However, using declarative systems also makes sense in other cases where the user thinks more about the what and not the how.
    One example of such a system is Line Art. A potential Line Art node is supposed to generate curves based on various features (e.g. silhouette, non-occluded edges, and intersections). Here again, the user does not really care about how exactly the curves are computed, but only what kind of curves are expected. One could build node groups describing the different kinds of curves and pass these to the Line Art node, which then outputs all the requested curves.
    Something similar could be done for landscape generation, where behaviors encapsulate terrain features or asset scattering rules.
    There is always a bit of a trade-off when choosing between the imperative and declarative approach, but for high-level tools used by potentially millions of people, the declarative approach is often better because it significantly reduces the cognitive load at the cost of a little less flexibility.

    How do Bundles and Closures fit in?
    Bundles and closures are currently an experimental feature. They have many interesting uses on their own, but the primary reason we are working on them is to allow building these declarative systems. You can check out the pull request for more details, but in short, a Bundle allows passing multiple values along a single link, and a Closure allows passing around functions that can be evaluated anywhere.
    Each individual behavior will generally be encapsulated as a bundle. The exact structure of that bundle is determined by the solver that uses the behavior. For example, a very basic mesh particle emitter could look like the image below. Note that the Type in the bundle is used by the solver to figure out what kind of behavior this is. The solver calls the Emit callback to actually create the new particles. The inputs and outputs of the callback are also determined by the particle solver.
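    A bundle of this kind can be mimicked with a plain dictionary in Python (again purely illustrative; the real feature is an experimental node socket type, and these names are assumptions): the type field tells the solver what the behavior is, and emit is a closure the solver invokes to create new particles.

        def mesh_emitter(rate):
            def emit(dt):
                # Closure: captures `rate` and returns the particles to
                # spawn during this step.
                return [{"pos": 0.0, "vel": 0.0} for _ in range(int(rate * dt))]
            # The "bundle": several values traveling together as one unit.
            return {"type": "emitter", "emit": emit}

        def solver_step(behaviors, particles, dt=0.1):
            for b in behaviors:
                # The solver dispatches on the bundle's type field.
                if b["type"] == "emitter":
                    particles.extend(b["emit"](dt))
            return particles

        print(len(solver_step([mesh_emitter(rate=50)], [])))  # -> 5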
    Reinforcing the Blender Trademark Guidelines
Reinforcing the Blender Trademark Guidelines

April 18th, 2025
News

Francesco Siddi

As Blender grows in popularity, the number of products and projects using "Blender" as part of their name is growing as well. While there are some short-term benefits to incorporating the "Blender" name in a product, there are often unintended consequences once it reaches a wider audience. For example:

An extension becomes popular and starts charging a fee to download. Using Blender as part of the brand could imply a relationship with, or endorsement from, the Blender project.
A community website or service starts being considered official or affiliated with the project because it contains the Blender name. In reality, the project is fully independent.

With the introduction of the Blender Extensions platform, branding issues become more obvious, as add-ons that have "Blender" in their name can appear right inside the software, creating confusion.

As a general branding guideline for products or services, we recommend that third parties avoid using the name "Blender", especially at the start of a name. Blender Foundation wishes to distinguish official products and services in that way, for example: Blender Conference, Blender Extensions, Blender ID. This is explained in detail on the Trademark Policy page.

While we understand that using the name "Blender" in a product gives quick recognition and short-term benefits, in the mid and long term developing a unique brand of your own will always be more beneficial. It allows you to trademark it, promote it, and expand it beyond Blender itself. Besides that, brand diversity in the Blender ecosystem is highly desirable, as it reflects the amount of independent business happening around Blender.

This is what happened with Blender Market. At the time (15 years ago), the idea of creating a commercial Blender-focused marketplace was truly pioneering, and the initiative was welcomed and supported by Blender Foundation. Initially, bearing Blender's name helped the marketplace with discoverability. Today, not so much, as it often gets mistaken for Blender's actual marketplace. Blender does not have plans to create an add-on marketplace, focusing instead on the free extensions platform, so it becomes important to set a distinction between the two. Last year, Autotroph (the organization behind Blender Market) announced the decision to rebrand the marketplace as "Superhive", in an effort to align with the Blender trademark policies. The rebrand is now in place, and this is something that Blender Foundation highly appreciates.

Existing and new products featuring the Blender name are encouraged to follow Blender Market's example. We understand that websites and products that have been using the name Blender for a long time may not find it practical or desirable to rename. Blender Foundation does not intend to seek legal or public action in cases of such trademark violations. However, Blender Foundation does reserve the right to not advertise confusingly named Blender products on the extensions platform.

For more information, see blender.org/about/trademark-policy.
    Blender 4.4 Release
Blender 4.4 Release

March 18th, 2025
Press Releases

Pablo Vazquez

Blender Foundation and the online developer community proudly present Blender 4.4!

Splash artwork: Flow by Dream Well Studio, Sacrebleu Productions, Take Five. Image licensed under CC-BY-SA. https://flow.movie/

Featuring artwork from Flow, the Oscar-winning film made entirely in Blender. Read the user story with director Gints Zilbalodis.

Showcase Reel

What's New
Blender 4.4 focuses on stability, with over 700 issues tackled during the Winter of Quality. Read more on the Code blog.

Major improvements include a new way to pack multiple data-block animations into one Action using Slots, a full CPU rewrite of the Compositor, Grease Pencil stabilization and feature parity, Video Sequencer text editing improvements, and, as always, better performance and a more refined user interface to enhance the user experience.

Watch the video summary on Blender's YouTube channel. Explore the release notes for an in-depth look at what's new!

Thank you!
This work is made possible thanks to the outstanding contributions of the Blender community, and the support of the over 7,300 individuals and 35 organizations contributing to the Blender Development Fund.

Happy Blending!
The Blender Team
March 18th, 2025

Support the Future of Blender
Donate to Blender by joining the Development Fund to support the Blender Foundation's work on core development, maintenance, and new releases.
    Blender at GDC 2025
Blender at GDC 2025

March 11th, 2025
Events

Francesco Siddi

A small delegation of the Blender team (Dalai Felinto and Francesco Siddi) is going to attend the Game Developers Conference (GDC) in San Francisco on 17-21 March, with the main goal of connecting with individuals and teams using Blender to create interactive content.

This year we are also organizing a Blender Meetup event (Birds of a Feather-style), focused on connecting producers, developers, TDs, and artists working in studios where Blender is part of the pipeline. The goal is to share ongoing Blender development and Blender Studio pipeline insights, and to give visibility to teams who wish to share their experience using Blender in production.

This time we go for an informal setting: Tuesday, 18 March at 10:00 AM at the Yerba Buena Gardens (in front of the Moscone conference center). Look for the Blender flag, bring your own coffee, and see you there!

If you want to connect besides the event, reach out to francesco at blender.org.
    Winter of Quality 2025
During the 2024-2025 winter (northern hemisphere), Blender developers focused on quality and stability. This blog post offers an overview of the work accomplished.

Bugfixes
Between December 1, 2024 and January 31, 2025, more than 500 reported issues were fixed.

(Chart: high severity bugs since January 1st, 2025.)

Here is a per-module breakdown of the fixed reports:

Animation & Rigging: 24
Asset System: 6
Core: 10
Grease Pencil: 82
Modeling: 33
Nodes & Physics: 71
Pipeline & IO: 9
Python API: 3
Render & Cycles: 37
Sculpt, Paint & Texture: 31
User Interface: 82
VFX & Video: 40
Viewport & EEVEE: 76

Additionally, many old reports have been double-checked and either fixed or closed (for example, if the issue could not be reproduced in a recent Blender version). Issues that were not reported by users were also addressed.

Module Work Overview
In addition to fixing bugs, developers also spent time tackling technical debt, updating documentation, and stabilizing certain areas of the code. Let's take a look at the reports from each module.

Animation & Rigging
Multiple bugs have been closed (either fixed, or re-investigated and closed).
Pose Library: added support for slotted actions, including multi-object poses, and for pushing to the pose library.
New tools for slotted actions have been implemented.
Switching assigned actions is now smoother.
Library Overrides for slotted actions have improved.
Code documentation is much better, APIs have been polished, and the new animation system is generally easier to work with for future development.

Compositor
The CPU compositor rewrite has been completed, aiming to improve maintainability by unifying the CPU and GPU compositors. In the process:
Tens of thousands of lines of code were removed.
Performance of many operations was improved.
The behavior of CPU and GPU devices was unified.
The main benefit of this rewrite is that future development of the compositor will be faster and easier.

Grease Pencil
The focus was on stabilization following the significant Grease Pencil v3 changes introduced in Blender 4.3. A total of 91 bug fixes were committed, 86 of them for high-severity bugs, many addressing regressions.

Modeling
The focus was on fixing crashes and older unresolved bugs. 17 issues were addressed, including several long-standing bugs and 8 crashes. In total, 74 issues were closed.

Nodes & Physics
The team kicked off the quality project by developing a tool to categorize all module issues and split them into batches. This tool made the project feel far less daunting, as they were only tackling 20 issues per day instead of 700 at once. A sketch of the batching idea follows below.
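The batching idea itself is simple enough to sketch in a few lines of Python (illustrative only: the team's actual tool is not described in detail in this post, and the issue numbers below are made up):

```python
# Illustrative sketch only: split a large issue backlog into fixed-size
# daily batches, as the Nodes & Physics team describes above. The backlog
# contents and batch size here are hypothetical.

def split_into_batches(issues, batch_size=20):
    """Yield consecutive batches of at most batch_size issues."""
    for start in range(0, len(issues), batch_size):
        yield issues[start:start + batch_size]

backlog = [f"#{120000 + i}" for i in range(700)]  # 700 hypothetical reports
for day, batch in enumerate(split_into_batches(backlog), start=1):
    print(f"Day {day}: {len(batch)} issues")  # 35 days of 20 issues each
```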
The focus was on addressing reports for active projects, rather than end-of-life features like the particle system or inactive areas such as the cloth modifier. In total, they closed about 150 issues and resolved several long-standing, frustrating problems.

They also made time for some important refactors, including:
Reducing the number of changes required to add a new node.
Starting a replacement for the CustomData system.
Implementing various smaller improvements.

Render & Cycles
The team improved test coverage, fixed around 40 bugs, and cleaned up the code to better align with modern C++ standards.

Sculpt, Paint & Texture
The team focused on bug fixes, as well as improving testing and documentation.

Issues:
Closed 25 high-severity reports.
Followed up on issues tagged with "Needs Information from Developers".
Closed 57 additional issues (Design, To Do, and Bugs).
Created a tracking issue for Sculpt/Paint undo problems.

Testing & documentation:
Added a BVH building unit test.
Work is ongoing for a sculpt stroke + render test.
Added a technical documentation overview for mesh painting.

USD (Universal Scene Description)
The team focused on a variety of tasks, including adding test coverage, completing partially implemented features, and upgrading the user manual. They also investigated, fixed, and prototyped performance improvements for USD import, which were also useful in identifying other areas of Blender that should be improved.

User Interface
The team focused on reducing the number of UI issues to a more manageable state. Out of more than 1000 open issues, 275 were closed.

Viewport & EEVEE
The team migrated the overlay, selection, and image engines to the new draw manager API. This migration fixes known, long-standing issues with overlays, and it paves the way for future optimizations, including the removal of a global lock/freeze. The team also focused on improving test cases, fixing high-severity issues, and better platform support.

Video Sequence Editor
The team focused on bug reports, code cleanups, and refactors:
The Sequence -> Strip rename was carried through the codebase (and, to some extent, the Python API) to make the code less confusing.
All movie read/write-related C++ code is now in imbuf/movie, with clearer naming and improved structure.
The code for VSE effects and modifiers was cleaned up and modernized to align more with C++ standards, resulting in a slight performance boost.
Retiming code received fixes, cleanups, and improved developer documentation.
Several UI interactions were polished for better usability.

Future Quality Projects
After a year of exciting new features like EEVEE-Next and Grease Pencil v3, the Blender Winter of Quality was the perfect opportunity to tackle loose ends and enhance stability. While quality is always a core focus of new development and part of the daily work of module teams, having a dedicated period to focus solely on it was great. As a result, future quality projects are being considered, and feedback is currently being gathered to learn about the experience for developers and to identify areas for improvement.

Support the Future of Blender
Donate to Blender by joining the Development Fund to support the Blender Foundation's work on core development, maintenance, and new releases.
Making Flow: Interview with director Gints Zilbalodis
Making Flow: Interview with director Gints Zilbalodis

January 22nd, 2025
User Stories

Francesco Siddi

Flow, the animated feature film following the mystical journey of a dark grey cat and his companions, is a manifestation of Blender's mission: a small, independent team with a limited budget is able to create a story that moves audiences worldwide, achieving recognition with over 60 awards, including a Golden Globe for Best Animation and two Oscar nominations. In this interview, Gints Zilbalodis, writer and director (and more!) of the film, shares how Blender was instrumental in its creation.

Gints: I've done all kinds of animation. I started doing hand-drawn, 2D, digital animation. But after making a few shorts, I realized that I'm not good at drawing, and I switched to 3D because I could model things and move the camera. At first, I used Maya, which was taught at our school at that time.

After finishing my first feature Away, I decided to switch to Blender in 2019, mainly because of EEVEE. I started using the 2.8 beta or even alpha release. It took a while to learn some of the stuff, but it was actually pretty straightforward. Many of the animators on Flow took less than a week to switch to Blender.

EEVEE was interesting to me because even my first feature Away was all playblasted, which is not proper rendering; rather, it's like previews. I was excited to find that workflow in Blender, but in a more advanced way that gave me greater control. Speed is really important to me, not just in rendering but also in working with files, setting up lighting, and creating the overall look. I like to work on multiple aspects at the same time; for example, when setting up the camera, I also need lights in place because lighting influences camera placement and how the scene looks. That's why EEVEE was so appealing to me. I briefly experimented with some game engines, but at least back then, it was really difficult to figure out a workflow for making films in them. Blender was ideal: it had all the tools I needed.

The entire project took about five and a half years. In the first year, I was writing the script, learning Blender, and looking for funding as Dream Well Studio. That was in 2019. In 2020, we secured some funding, and I moved into a co-working studio space with other artists and developers who were using Blender. That's where I connected with Mārtiņš Upītis and Konstantīns Višņevskis. Mārtiņš was one of the first people I approached, not specifically for water simulation, but just to see how he could contribute. However, it quickly became clear that he had a deep expertise in water, unlike anyone else. We were fortunate that, in the early stages, it was just me, so the pandemic didn't affect us much. By the time we moved into full production in 2023, things had stabilized.

I created a short pilot for Flow, about a minute and a half long, where I went through the entire workflow. It was technically basic, but it was useful to test the process. That led to our first teaser, which I never showed publicly. Later, we made another, entirely new teaser, which we used for pitching.

In 2021, we started hiring concept artists and building the team. We brought in riggers and developers to create custom scripts that helped streamline the workflow while I was working on the animatic.
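As an illustration of what such a one-off helper script can look like (a concrete example Gints mentions below is making a deer move in a spiral), here is a minimal sketch using Blender's Python API; the object name and motion values are assumptions, not the production scripts:

```python
import math
import bpy

# Illustrative sketch only (runs inside Blender): keyframe an object along a
# spiral, similar in spirit to the animatic helper scripts described here.
obj = bpy.data.objects["Deer"]  # hypothetical object name
frame_count, turns, radius, height = 120, 3, 4.0, 2.0

for f in range(frame_count + 1):
    t = f / frame_count                       # 0..1 along the spiral
    angle = t * turns * 2.0 * math.pi         # total rotation so far
    obj.location = (radius * math.cos(angle),
                    radius * math.sin(angle),
                    height * t)               # rise while circling
    obj.keyframe_insert(data_path="location", frame=f + 1)
```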
The Latvian studio was relatively small; it all fit in one room. In total, we had around 15 to 20 people, but at any given time there were usually only three to five people working, since different teams handled pre-production and post-production.

We had a set-dressing team. I would design the initial scene in previz, and they would refine it by adding more plants, props, and environmental details. Concept artists sketched out buildings and figured out their construction, incorporating storytelling elements into the environments. Other team members focused on developing tools. Water was a huge part of the film, but only two people handled all the water effects. Mārtiņš had already been researching water simulations and posting his findings on YouTube, but he hadn't yet put everything together. He eventually developed a Blender add-on for water effects. Meanwhile, Konstantīns handled smaller simulations, such as splashes. He also researched techniques for stylized fur and feathers, working on shaders. In addition to that, he did rigging and character modeling, along with other team members.

In 2022, Belgian and French co-producers Take Five and Sacrebleu Productions joined the project to work on sound, character animation, and additional aspects of the film. Expanding the team with experienced character and pipeline TDs, as well as animators working in a well-structured process, was essential to handle the complexity required by the film. This was a truly international co-production. The film premiered at the Cannes Film Festival in the Un Certain Regard selection in 2024.

How did you learn Blender?
I learned a lot online, but it was great to have someone with more experience next to me (Konstantīns). He did a lot of rigging and was much more technical than me, so I could ask him for advice. Sometimes I needed something specific in the animatic, like the deer moving in a spiral, and he would write a script to automate it. This was before Geometry Nodes. I can't write scripts myself, so having someone in the studio to help was invaluable. But learning never really stops. I still feel like there's so much I don't know, about Blender or anything else. And with these long projects, you sometimes forget things you learned five years ago.

Flow was made entirely with Blender and rendered with EEVEE. Each frame took about 0.5 to 10 seconds to render in 4K. We didn't use a render farm; the final render was done on my PC. There was no compositing: all the colors were tweaked and adjusted using shaders.

How does the previz process work?
When creating the previz or animatic, I just try to get things done as quickly as possible. This approach helps me explore ideas efficiently. I'm not great at drawing, so previz works better for me. It's faster, and I like to move the camera a lot. Sometimes I roughly sketch out a building, but it's often very basic. I then hand these files over to a concept artist. Many environmental concept artists use Blender as well, so they can import my files. While they usually rebuild everything from scratch, my files at least provide the correct proportions. Sometimes they paint over my models, but in other cases they design everything directly in 3D. When they send the files back, I ask them to leave assets in place rather than moving them to the center of the scene. That way, I can easily import everything back, and it aligns perfectly.

The animation teams in France and Belgium brought a great deal of organization to the process.
They developed further tools and rigs to deliver character animation. They had to optimize the scenes, removing everything except the assets the characters interacted with, and cleaning them up thoroughly. However, I didn't use these optimized assets directly; I would import their animations back into my heavier scenes.

For lighting, it was just me. We had other people handling different tasks, but I was solely responsible for lighting. This setup made things easier. Since I handled a lot of tasks myself, it was simpler to work with large files where everything was imported. In each file, I made extensive adjustments to assets. For example, when setting up lighting, I tweaked materials for the assets in each shot, making them slightly lighter or darker to get the right look. I know this could be done with library overrides, but I was also working across different computers: my desktop PC and my MacBook. Switching between operating systems sometimes caused issues with linked assets, even when using relative file paths. To avoid breaking links, I found it easier to keep everything within the file itself. Some of the smaller scenes were around 300 MB compressed, while a few of the largest ones reached nearly 2 GB compressed. Maybe I could have figured out a better way to link assets, but during production, speed was the priority. The production timeline required me to move fast, so I opted for the most efficient workflow rather than experimenting with alternatives.

Learn more about the animation of Flow in this Blender Conference presentation by Animation Supervisor Léo Silly-Pélissier.

A glimpse into the water surface system used in Flow.

As an early adopter of Blender 2.8, did you upgrade as new releases became available?
I started with the Blender 2.8 alpha while it was still in development, and I was constantly updating things. I think when the team joined, we were using 2.9 or maybe 3.0. With each major version, we decided to update, since there were only a few of us at the time and we weren't sharing files. That made it safer, because everyone was working on their own files independently, without links. The last version we used was 3.6. EEVEE definitely improved over time, but it wasn't just EEVEE; Geometry Nodes and other features made upgrading worthwhile. Of course, before each update, we ran a lot of tests, opening different files to check for issues. Some things did break, but overall, our workflow remained stable. Early on, when the team was small, updating wasn't a big deal. But once all the animators started in 2023, they worked in 3.3 and stuck with it throughout production. After they finished animation and I moved on to lighting, I imported everything into 3.6, which wasn't a problem.

Which add-ons were part of your workflow?
We used a few. One of them was GeoScatter, a popular scattering add-on for distributing plants and other environmental elements. We also used Animation Layers, not for character animation, but for the camera, specifically to create handheld, shaky camera movements. I created separate layers for a standstill shot, for walking in place, and for running. This allowed me to mix and adjust them as needed. I believe some add-ons have been developed since then specifically for this kind of workflow. To generate camera motion, I also tested VirtuCamera. I experimented with recording live camera movements by walking around, but I found it too imprecise. Instead, I preferred keyframing and layering different types of motion.
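For readers curious what layering procedural shake on top of keyframed camera motion can look like, here is a minimal sketch using a Noise F-modifier in Blender's Python API. This is just one simple approach, not the film's actual Animation Layers setup; the object name and values are assumptions:

```python
import bpy

# Illustrative sketch only (runs inside Blender): layer subtle handheld-style
# shake on top of existing camera keyframes with a Noise F-modifier.
cam = bpy.data.objects["Camera"]  # hypothetical object name
cam.keyframe_insert(data_path="location", frame=1)  # ensure f-curves exist

for fcurve in cam.animation_data.action.fcurves:
    if fcurve.data_path == "location":
        noise = fcurve.modifiers.new(type='NOISE')
        noise.strength = 0.05  # amplitude of the positional jitter
        noise.scale = 25.0     # larger scale = slower, smoother shake
```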
For fluid simulations, we sometimes combined different techniques, starting with large-scale waves using Cell Fluids and then adding details with FLIP Fluids. Other tools we used included the Bagapie Vegetation Generator, the Bagapie Rain Generator, and Copy Global Transform.

What do you love about Blender?
What I love is how fast the files open. It might seem like a small thing, but it actually saves a lot of time and frustration. EEVEE is great. Also, I love how customizable everything is. I created a lot of custom keyboard shortcuts, which worked really well when I was working alone. However, once we started working in the studio, it caused some issues, especially when I had to demonstrate something on someone else's computer. But we figured it out. I also love the amount of resources available online. There are so many tutorials and tools, and I can quickly find answers to almost anything.

What could improve in Blender for indie filmmakers?
Well, there were some challenges with using Blender, but we solved them. Sometimes things weren't clear at first, but once you actually put your mind to it, you can figure it out. That's often the case with Blender: you encounter obstacles, but with enough effort, you find a way through. What I'd love to see, and I think it's already happening, is more focus on NPR (non-photorealistic rendering) workflows, which is great. Further improvements to interactive and real-time rendering would also be a huge benefit. I haven't worked much in Blender over the past six months, but I'm already working on my next project, and I plan to use Blender for it.

Final thoughts?
I've never worked in a big studio, so I don't really know exactly how they operate. But I think that if you're working on a smaller, indie-scale project, you shouldn't try to copy what big studios do. Instead, you should develop a workflow that best suits you and your smaller team. In our case, we didn't rely heavily on concept art. We modeled the characters directly in 3D and found ways to skip certain steps. Many of us wore multiple hats, figuring out how to streamline tasks rather than having separate departments for everything. For me, it's also easier to handle the camera and lighting simultaneously rather than treating them as separate stages. Having a smaller team made the process more flexible and efficient.

When developing my first feature, I structured the story around elements that were relatively easy to animate. I avoided large crowds and complex effects because, in the end, most viewers don't think about how difficult something was to create. I think it's valuable for filmmakers to collaborate with tool developers early on to understand which things are challenging and which are easy. This can actually spark creative ideas rather than feeling like a limitation. Storytelling offers infinite possibilities, but sometimes constraints can be beneficial. For example, deciding to use only four characters and a handful of locations can lead to stronger creative choices. Some of my favorite films take this approach. They don't need an epic scope to be powerful. That said, I think a certain level of naivety is necessary when starting a project. If I had known how difficult it would be, I might never have started. But because I didn't fully grasp the challenges ahead, I just dove in and figured things out along the way.