Blender
Blender is the free and open source 3D creation suite. Free to use for any purpose, forever.
Recent Updates
  • WWW.BLENDER.ORG
    Blender 4.3 Release
    November 19th, 2024, Press Releases, Pablo Vazquez

    The Blender Foundation and the online developers community are proud to present Blender 4.3!

    Blender 4.3 splash artwork by Blender Studio

    What's New
    Blender 4.3 builds on the feature-packed 4.2 LTS with improvements to existing tools, performance enhancements, and the foundations that will shape the years to come. Some highlights:
    • EEVEE: Light & Shadow Linking.
    • Rendering: Metallic BSDF, and a new Gabor Noise texture.
    • Compositor: Support for EEVEE passes, a new White Point Color Balance, and more.
    • Grease Pencil: Complete rewrite to support Layer Groups, Geometry Nodes, a better erase tool, gradients, and much more.
    • Geometry Nodes: For Each Zone, gizmos, packable bakes, new nodes, and UI improvements.
    • Sculpt: Major refactor under the hood to improve performance.
    • UV: New Minimum Stretch (SLIM) unwrapping method.
    • Modeling: The Bevel modifier can now use custom attributes.
    • Brush Assets: All brushes are now assets, to be shared easily between projects.
    • glTF: Draco mesh compression for importing, better export of UDIM tiles, quaternions and matrix attributes.
    • USD: Support for exporting point clouds.
    Plus loads of bug fixes, and so much more! Watch the video summary on Blender's YouTube channel, and explore the release notes for an in-depth look at what's new.

    Thank You!
    This work is made possible thanks to the outstanding contributions of the Blender community, and the support of the over 4,800 individuals and 35 organizations contributing to the Blender Development Fund.

    Happy Blending!
    The Blender Team
    November 19th, 2024

    Support the Future of Blender
    Donate to Blender by joining the Development Fund to support the Blender Foundation's work on core development, maintenance, and new releases.
  • WWW.BLENDER.ORG
    Blender Foundation's 2024 Fundraiser
    November 18th, 2024, Press Releases, Francesco Siddi

    AMSTERDAM, Netherlands. The Blender Foundation today announced its latest fundraising campaign. The campaign, themed "Join the 2 percent", aims to substantially increase incoming donations to support the continued development and maintenance of the beloved free and open-source 3D creation software.

    Blender is free for everyone. However, developing and maintaining the project is not without cost. These costs are covered solely by donations from thousands of individuals and several corporations. While a good relationship with corporations is important, individual donations from users are crucial, as they allow Blender to remain an independent community project with a development focus on end-user benefits.

    Blender is massively popular: 20 million downloads were registered in 2023. Understandably, most people are not in a position to financially support the project, but it's reasonable to estimate that around 2% of users have benefited from Blender in one way or another. It is to these people that the Foundation wishes to reach out: join the 2% of users that donate to Blender and keep it free for everyone!

    About the Blender Foundation
    The Blender Foundation is a public benefit organization with the mission to give everyone access to the world's best 3D CG technology as free/open source tools, by facilitating and supporting the projects at blender.org. Blender is used by millions of artists, designers, filmmakers, and professionals worldwide. The Foundation is committed to ensuring that Blender remains a powerful, free and open-source tool for creative expression and innovation.

    Contact: Ton Roosendaal, Blender Foundation, [emailprotected]
  • CODE.BLENDER.ORG
    This Summer's Sculpt Mode Refactor
    November 7th, 2024, Code Design, General Development, Hans Goudey

    Over the past several months sculpt mode underwent a large rewrite. Now that the project has wrapped up, this post gives an overview of what changed.

    Unlike most other development projects, this one had no effect on the interface: before and after the project, Blender looked exactly the same. Typically that should raise some eyebrows, because it often means developers are prioritizing work based on its effect on the code rather than its utility to users. In this case, problems with the code had made feature development significantly harder over the years, and refactoring came with plenty of potential performance improvements.

    For those who want to skip the technical details: entering sculpt mode in Blender 4.3 is over 5x faster, brushes themselves are about 8x faster, and memory usage is reduced by about 30%. For visible changes to sculpting in 4.3, see brush assets. For a full list of the refactor work, see the task.

    Entering Sculpt Mode
    Entering sculpt mode was known to be quite slow. Based on profiles, it also looked much slower than it should be, since it was completely single threaded.

    A profile of Blender as it enters sculpt mode on a large mesh in 4.2, where each row is a CPU core.

    It turns out Blender was bottlenecked by two things: building the BVH tree that accelerates spatial searches and raycasting, and uploading the mesh data to the GPU for drawing.

    Improving the BVH build time was a months-long iterative process of finding bottlenecks with a profiler, addressing them, and cleaning the code to make further refactoring possible. Adding trivial multi-threading to the calculation of bounds and other temporary data was the most significant improvement, at almost 5x.
    Beyond that, reducing memory usage improved performance by another 30%, and simplifying the spatial partitioning of face indices using the C++ standard library gave another 30%. Finally, changing the BVH from storing triangles to storing faces (a quad mesh has half as many faces as triangles) improved performance by another 2.3x.

    Entering sculpt mode is about 5 times faster compared to 4.2 (from 11 seconds down to 1.9 seconds on a 16 million face mesh with a Ryzen 7950X).

    Lessons for Developers
    • Any array the size of a mesh is far from free. We should think hard about whether all the data in the array is really necessary.
    • Any algorithm should clearly separate serial and parallel parts. Any loop that can be done in parallel should be inside a parallel_for.
    • We shouldn't reimplement common algorithms like partitioning; that makes code so scary and weird that no one touches it for years.

    Drawing
    There is a fundamental cost to uploading geometry data to the GPU, and we will always be bottlenecked to some extent by the large amount of data we need to render. However, as a tweaked version of code from 15 years ago, sculpt mode drawing had enough overhead and complexity that significant improvements were possible.

    The GPU data for the whole mesh is split into chunks, with one chunk per BVH node. One main problem with the old data upload was its outer loop over nodes, which forced all the bookkeeping to be duplicated for every node. Often just simplifying the code gave performance improvements indirectly: removing two levels of function call indirection for multires data upload roughly doubled performance, and removing function calls for every mesh edge gave another 30% improvement.

    The main change to the drawing code was a rewrite to avoid all duplicate work per BVH node, add multi-threading, and change the way we tag changed data.
    The drawing rewrite improved memory usage by roughly 15% (we now calculate viewport wireframe data only if the overlay is actually turned on), and entering sculpt mode became at least 10% faster.

    GPU memory usage was reduced by almost 2x by using indexed drawing to avoid duplicating vertex data for every single triangle. Now vertex data is only duplicated per face corner.

    Previously, sculpting on a BVH node caused every single attribute to be reuploaded to the GPU. Now we only reupload attributes that actually changed; for example, changing face sets only reuploads face sets. Tracking this state costs only a single bit per node.

    BVH Tree Design
    Previously, the sculpt BVH tree, often referred to as the PBVH (Paint Bounding Volume Hierarchy), was a catch-all storage for any data needed anywhere in sculpt mode. To reduce the code's spaghetti factor and clarify the design, we wanted to focus the BVH on its goal of accelerating spatial lookups and raycasting. To do that, we removed references to mesh visibility, topology, positions, colors, masks, the viewport clipping planes, back pointers to the geometry, and so on from the BVH tree. All of this data was stored redundantly in the BVH tree, so whenever it changed, the BVH tree needed to change too. Now the design is more focused, and it's much easier to understand the purpose of the BVH.

    Another fundamental change to the BVH was replacing each node's references to triangles with references to faces. In a typical quad mesh there are twice as many triangles as faces, so this allowed us to halve a good portion of the BVH tree's memory overhead.

    Brush Evaluation
    To evaluate a brush, regions (BVH nodes) of the mesh are first tested roughly for inclusion within its radius. For every vertex in each of these regions, we calculate a position translation and the brush's strength.
    The brush strength for each vertex includes more granular filtering based on the brush radius, mask values, automasking, and other brush settings.

    Meshes are split into multiple BVH nodes, which are used to filter out unaffected geometry.

    Prior to this project, all these calculations were performed vertex by vertex: for each vertex, we retrieved the necessary information, calculated the deformation and the relative strength, and finally applied the brush's change. Because mesh data is stored in large contiguous arrays, it is inefficient from a memory perspective to process all attributes for a particular vertex at once, as this likely results in many cache misses and evictions.

    While the previous code was somewhat concise, handling all three sculpt mesh types (regular meshes, dynamic topology, multires) at once, this generic processing had some significant negative side effects:
    • The old brush code was hard to reason about because of C macros and the combination of multiple data structures in one loop.
    • The structure had little opportunity for improved performance because of runtime switching between data structures and the lowest-common-denominator effect of handling different formats.
    • A "do everything for each vertex" structure has memory access patterns that don't align with the way data is actually stored.

    Brush code now processes a single action for all the vertices in a node at the same time, splitting the code into very simple hot loops which can use SIMD, have much more predictable memory access patterns, and branch significantly less per vertex.

    For further reference, here is a change that refactored the clay thumb brush. Though the new code has more lines, it's more independent, flexible, and easier to change.

    Proxy System
    Previously, brush deformations were accumulated into temporary proxy storage on each BVH node.
    This accumulation occurred for each symmetry iteration until the end of a given brush step, at which point the data was written into the evaluated mesh positions, shape key data, and the base mesh itself.

    We completely removed the proxy system as part of refactoring each brush. Instead, brushes now immediately write their deformation during each symmetry step calculation. This avoids storing temporary data and improves cache access patterns by writing to memory that is already cached. Removing the proxy storage also reduced the size of BVH nodes by around 40%, which aligns with our ongoing goal of improving performance by splitting the mesh into more nodes.

    Profiling revealed a significant bottleneck during brush evaluation: just storing the mesh's initial state for the undo system was taking 60% of the time. When something so simple takes so much time, there is clearly a problem.

    The issue turned out to be that most threads involved in brush evaluation were waiting for a lock while a single thread did a linear search through the undo data, trying to find the values for its BVH node:

    for (std::unique_ptr<undo::Node> &unode : step_data->nodes) {
      if (unode->bvh_node == bvh_node && unode->data_type == type) {
        return unode.get();
      }
    }

    Simply changing the vector to a Map hash table gave us back that time and significantly improved the responsiveness of brushes:

    return step_data->undo_nodes_by_pbvh_node.lookup({node, type});

    Though plenty of refactoring was required to make this possible, the nice part is how often very little time with a profiler is needed to identify significant improvements.

    Undo Data Memory Usage
    Undo steps also became slightly more memory efficient in 4.3. The overhead of each BVH node's undo storage for a brush stroke was reduced 10x, from about 4 KB to about 400 bytes.

    In the future we would like to look into compressing stored undo step data.
    Compressing the stored undo data could require significantly less memory.

    For another example of thread contention, consider the counting of undo step memory usage. Undo data is created from multiple threads, and each thread incremented the same memory usage counter variable. Simply counting memory usage later on with a proper reduction gave a 4% brush evaluation performance improvement. Writing to the same memory from multiple threads at the same time is slow!

    In yet another thread contention problem, writing "true" to a single boolean from multiple threads turned out to be a significant issue for the calculation of the average mesh normal under the cursor. The boolean was logically redundant, so just removing it improved brush evaluation performance by 2x.

    Multi-Resolution Modifier
    Most of these performance improvements were targeted at base mesh sculpting, where there was more low-hanging fruit. However, the multires changes followed the same design, and there were a few more specific optimizations for it too. Most significantly, moving to a struct-of-arrays format for positions, normals, and masks gave a 32% improvement to brush performance and simplified the code. The sculpt-mode multires data structure was optimized the same way meshes were optimized over the past years (see last year's conference talk).

    Some multires workflows still have remaining bottlenecks, though, like subdivision evaluation or bad performance at very high subdivision levels.

    The End!
    Thanks for reading! It was a pleasure to be able to iterate on the internals of sculpt mode. Hopefully the changes can be a solid foundation for many future improvements.
  • CODE.BLENDER.ORG
    Geometry Nodes Workshop: October 2024
    After the Blender Conference, the Geometry Nodes team came together once again to discuss many design topics. This time our main focus was to improve support for physics, especially hair dynamics, in Geometry Nodes. A few other topics were discussed as well. You can also read the raw notes we took during the meetings.

    The following people participated in the workshop (from left to right): Lukas Tönne, Hans Goudey, Simon Thommes (afternoons) and Jacques Lucke. Additionally, Dalai Felinto helped kick off the workshop and Falk David joined in every now and then.

    Previously in Geometry Nodes
    Our last workshop was 5 months ago. This section provides a quick update on the topics we discussed there. Omitted topics don't have any news.
    • Gizmos: Part of the Blender 4.3 release. The next step is to add gizmos to some built-in nodes like the Transform Geometry or Grid nodes.
    • Baking: Bakes can be packed now. The next step is to provide higher level tooling to work with multiple bakes in a scene.
    • Rename Sockets in Nodes: Ctrl+click to rename sockets works in a few nodes now (e.g. Bake, Simulation, Capture Attribute). There are some technical difficulties with making it work with double click and for right-aligned labels.
    • Tools for Node Tree UX: Built-in nodes support socket separators now (used in the For-Each Zone). Support will be added to node groups at some point. The viewer node automatically changes its position now.
    • Asset Embedding: A prototype was built to test the behavior. We agreed on how to solve the technical difficulties, but some UI aspects are still somewhat unclear (e.g. how this is presented to the user as a new import method besides linking and appending).
    • Menu Socket: We improved the error handling when there are invalid links, giving the user more information about what is wrong.
    This applies to menu sockets, but also to other invalid links like invalid type conversions.
    • Socket Shapes: We found a design where everyone is okay with the trade-offs it makes. A prototype was built. The work on it is still ongoing.
    • Grease Pencil: Geometry Nodes can work with Grease Pencil data starting with Blender 4.3.
    • For-Each Zones: There is a new For-Each Element zone in 4.3. Work on other kinds of For-Each zones is ongoing.

    Approaching Physics
    As usual, there are many different perspectives we have to take into account when designing how to approach physics in Geometry Nodes:
    • Using high level node group assets to set up e.g. a hair simulation.
    • Building and/or customizing solvers for specialized effects.
    • The modifier-only workflow.
    • Higher level add-ons which abstract away the node and modifier interface.

    We started out by clarifying that there is a fairly fundamental difference in how to think when chaining multiple geometry operations vs. setting up a physics simulation. When creating a simulation, one thinks about the desired behavior (forces, emitters, colliders, ...) first, and not so much about the order in which the geometry is actually processed. In fact, the majority of users should only have to care about the behavior, without worrying about specific geometry operations.

    We therefore want to provide better ways to separate describing the desired behavior from actually implementing it. We call this the declarative approach. It gives users high level control over a potentially very complex evaluation system that makes all the desired behaviors work.
    The declarative approach can also be very useful for things beyond physics, like building a brush engine or a scattering system.

    To achieve this separation, we will introduce two new socket types, bundles and closures, which are explained in more detail below (exact names are not set in stone yet).

    Bundles
    A bundle is a container that allows packing multiple values into a single socket. Values of different types can be put into a single bundle. A work-in-progress patch is already available.

    Bundles are quite useful to reduce the number of necessary links. For now, we are mostly interested in how they can be used to create a uniform interface for various kinds of simulation behaviors. Each behavior will be a bundle that contains the necessary information for the solver to understand what to do with it.

    Closures
    Closure sockets allow passing around arbitrary functions, including entire node groups. For example, this allows passing a node group as an input into another group, which will then evaluate it. This is an entirely new paradigm in Blender's node systems, and without prior familiarity with the concept of passing functions around as data, it's not trivial to understand. However, it's incredibly powerful and allows building much more flexible and user-friendly high level node groups.

    In programming, the term "closure" refers to functions which may be passed around as data and can capture variables from where they are created. We have not found a good alternative name yet.

    To create closures, we use a new closure zone. It's a bit like creating a small local node group that can then be passed around. Just using existing node groups does not work, because we need the ability to pass data from the outside into the closure (as in all other zone types).
    Another benefit of the closure zone is the ability to build the entire node tree in a single group and see everything at once.

    Position Based Dynamics
    The declarative approach with bundles and closures is generally useful for all kinds of physics simulations. While we want to enable users to build their own solvers, we also want to integrate hair simulation directly into Geometry Nodes.

    The hair simulation is designed around a Position-Based Dynamics (PBD/XPBD) solver. This solver has been applied to soft-body simulation, cloth, hair, granular materials and more. The PBD method is often used for real-time game physics and is relatively easy to implement. It has advantages in terms of speed and accuracy over the linearized velocity-based cloth solver currently used for hair dynamics. There are lots of learning resources and scientific papers on the topic for those who want to learn more; when first looking into this, we found this overview and this video tutorial series particularly useful.

    We will try to implement as much of this as possible using generic geometry nodes. Some parts, like collision detection and constraint grouping, may require new built-in nodes for performance reasons. This will be decided when we get there.

    Lists
    For this project we'll likely need lists in different places, for example to manage a list of behaviors passed into the solver and to process contact points after collision detection. Lists have been a talking point in previous workshops and we don't have much new information that has not been said before. We went over the set of nodes we'd need, but there were no real surprises there.

    Lists are also particularly important for hair, because we need to map generated hair to potentially multiple guide hair strands.
    Currently, there is no good way to store that guide mapping, which makes any workflow that uses guides, especially for simulation, quite unreliable. The main blocker to getting lists into Geometry Nodes is still the socket shapes discussion.

    Socket Shapes
    The last blog post contains an explanation of this topic. Last time, we didn't come to a conclusion on how to deal with socket shapes as we get more types like fields, lists, grids and images. The tricky thing is that we can't show all the information we'd like to with socket shapes alone, so we have to decide what we no longer want to show.

    Some design work has been done on the topic in the last couple of months, and a simple prototype has been built too. We're now at a point where we are all at least okay with the solution's tradeoffs, so we can hopefully make progress on the topic. Once that is resolved, volume grids and lists will be much easier to get into a releasable state.

    For Each Geometry Zone
    Blender 4.3 comes with the For Each Element zone. While that's very useful already, other kinds of for-each zones could be useful too. One of those is a For Each Geometry zone, which we used to call For Each Unique Instance in previous workshops.

    Its purpose is to iterate over each real geometry in a geometry set that may contain an instance hierarchy. Many built-in nodes do this internally already; for example, the Subdivision Surface node applies its effect to all meshes in the input, including those in instances. For various reasons, not all built-in nodes can or should do this. A new For Each Geometry zone would allow adding the same functionality to all built-in nodes and custom node groups, which is impossible currently.

    This is quite different from the Instances mode in the For Each Element zone.
    If the geometry to be processed contains many instances of the same mesh, the existing zone would run for each mesh separately, while this new zone would only run once, because there is only a single mesh. There is already some previous design work available in #123021.

    Modal Node Tools
    We reconfirmed the overall design for modal node tools from a year ago. Since then, we have also noticed that there are currently two kinds of modal operators in Blender:
    • Operators based on the initial state (like bevel). These have redo panels.
    • History dependent operators using the previous state at every modal step (like brushes). These don't have redo panels.

    Both kinds of operators could be created with nodes. However, when we have talked about modal node tools so far, we were mainly concerned with the second type. Many use cases of the first kind can probably be solved with gizmos or a gizmo-like system, because the interactive part of these operators is mostly just used to control some input values for a non-modal operator.

    We also noticed that there are problems caused by the fact that all node tools are just a single operator in the end (geometry.execute_node_group), but none of these seem impossible to solve. For example, we want modal node tools to come with their own keymap, but users should be able to override this keymap like any other keymap in Blender. Typically, there is a mapping from modal operator to keymap, but that does not work well here yet for the mentioned reason. Alternatively, it may be a nice solution to attach keymaps to specific assets in the user preferences instead of just to operators.

    It would also be possible to register a separate operator for each node tool, but that comes with its own problems. For example, it would introduce yet another way to reference specific asset data-blocks (by their operator name) and could easily cause operator name conflicts too.

    Field Context Zone
    We started discussing a new Field Context Zone.
    The overall design is very incomplete and we don't yet have concrete answers to many of the questions surrounding it. The general idea is to give more explicit access to the field evaluation context.

    For example, for a field that's evaluated on a geometry, the new zone would have the context geometry as an input, and would output a field that depends on that geometry. This opens up new opportunities for building fields that would previously have been much more annoying to build.

    The zone would also reduce redundancy in the design of nodes. We have pairs of nodes like Sample Index and Evaluate at Index which are the same except that one has a geometry input and the other does not. A goal of the zone is that the Evaluate at Index node could be built out of the Sample Index node.

    Modifier Outputs
    A limitation of geometry nodes is that it can only output a geometry that is then passed to the next modifier. Sometimes it would be very useful to output other data, like another geometry or single values. Those values could become part of the evaluated state of an object, so that they can be referenced by other objects using nodes or drivers.

    This would allow, for example, outputting a bunch of vectors from Geometry Nodes which are then used to drive an armature. Additionally, we could allow outputting a bundle of values that is then passed into the next modifier. This way it becomes possible to build richer modifier stacks without the limitation of having to encode all information in the geometry passed between modifiers.

    We could even allow outputting fields and closures from objects (probably with some limitations due to the lifetime of some data). This would allow building all kinds of effector objects that encode some behavior that can be understood by other Geometry Nodes setups. This can also be thought of as a generalization of the existing force field object type.

    Internal Data Sockets
    In some cases, we want to add functionality that requires passing around data that we don't want to expose fully.
    A good example would be KD-trees and BVH trees, which speed up algorithms that need to find nearest points or do ray casts. These data structures have well defined APIs that we could expose, but exposing their implementation details could make future optimization much more difficult, because optimizations could require breaking files.

    It does not seem beneficial to add a new socket type for each kind of internal data. So far we think it is good enough to add only a single type (with a single color) that is used to pass around all kinds of internal data.

    Another use case that came up in the past is a Bake Reference socket that passes data from a Bake node to an Import Bake node (once we have that). The tricky thing with an Import Bake node is that it has to be able to read bakes from disk as well as packed bakes, so just giving it a file path input does not work. Reading from files should still be possible of course, but we also need a solution for packed bakes.

    Group Input Defaults
    Every input of a node group has a default value. For some types, the default is currently hardcoded (e.g. an empty geometry). Others can be chosen manually in the sidebar, where some socket types support more complex inputs; for example, vector sockets can default to the position field. However, the set of possible defaults is currently hardcoded. The goal of this topic is to generalize the system for defaults to remove these limitations.

    The overall idea is to have a new Group Defaults node that has an input socket for every input of the node group. The default of any input is specified by simply connecting the value to the node, as in the mockup below. We could also make it possible for some default values to depend on other input values, but it's not clear yet how much complexity this adds, so that may only be done later.

    A tricky aspect is that adding a default to a socket that did not have one yet may override its value in all group nodes that use the group.
    That's kind of the inverse of a problem we already have: changes to group input defaults are not propagated to group nodes at all. The problem is that we don't really know whether a value has already been modified, which becomes even trickier when the node group is linked from another file.

    Context Inputs
    The goal of this topic is to solve the following problems:
    • We want to remove the need for "control" node groups as a way to get global input values (example). While useful in some setups, this approach does not work all that well when building reusable node systems.
    • We have no good way to pass the hair system's surface geometry to the relevant hair nodes.
    • We have no way to override existing contextual input nodes like Mouse Position, Active Camera and Scene Time.
    • We need a more flexible replacement for the Is Viewport node, which is used to control a performance vs. quality trade-off. Making this decision based only on whether we're rendering is not good enough; sometimes the fast mode of a node group should be used in edit mode, and the high quality mode otherwise.

    What all these issues have in common is that we want to pass information into nested node groups without having to set up all the intermediate links, which would cause a lot of annoying boilerplate. Nevertheless, we want to be able to override all these inputs at any intermediate level.

    The proposed solution is to generalize the concept of Context Inputs. There are many existing context input nodes (like Scene Time and Mouse Position) already. We also want to add a Context Input node for custom inputs. Whenever a context node is used in a (nested) node group, it will automatically create an input for the node group. Group nodes at a higher level can then decide to either pass in a specific value for that input or leave it unconnected.
If it's not connected, the context input is propagated further up. If the context value has not been provided by any node, it's propagated up to the Geometry Nodes modifier, where users can again choose to specify it. If not, we could support reading the value from a custom property of the object or scene.

There is a work-in-progress pull request for this feature.

Modifier Inputs

We want to add more features to group inputs in the modifier:

For context inputs, we need the ability to decide whether a specific input should be provided or not.

For geometry inputs, we want to choose whether an object or collection input should be used, and whether the original or relative space is used (like in the Object Info node).

Putting all these choices in the modifier and having them always visible is problematic from a UI perspective. Even now, the button to switch between single value and attribute input adds clutter that is not needed in many cases.

We explored options for how this could work, such as putting the options in the right-click menu, putting them in the manage panel, or having an edit button in the modifier that temporarily shows all additional settings. For now, the approach with the right-click menu seems best, even if it is a little less discoverable at first.

Bundles for Dynamic Socket Counts

When we explored bundles further, we noticed that they may also provide a good solution for another long-standing limitation, which we discussed back in 2022: dynamic socket counts. Since then, quite some effort has gone into improved support for dynamic socket counts, and nowadays we have them in multiple built-in nodes like all the zones, Capture Attribute, and Bake. What's missing is support for building node groups that have a dynamic number of inputs and outputs.

We could allow tagging a bundle input of a node group as an extensible socket. Then, from the outside, one could pass multiple values which become a bundle inside the group.
When that bundle is output from the group, all the values are separated again.

Inside the group, the nodes would have to process all elements in the bundle. Built-in nodes could do that automatically. For example, when the Capture Attribute node has a bundle input, it could recursively capture each contained field and replace it with the captured anonymous attribute field. Something similar can be done in other nodes that already have a dynamic number of sockets.
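The recursive processing just described — a node visiting every field in a bundle and substituting its result — can be sketched as a simple structural map. This is an illustrative model only: bundles are represented as plain nested dicts, and the function and field names are invented:

```python
# Hypothetical sketch (not Blender's implementation) of how a node could
# process every element of a bundle: walk the nested structure, apply the
# node's per-field operation to each leaf, and rebuild the bundle.

def map_bundle(bundle, capture):
    """Recursively apply `capture` to every leaf value in a nested bundle."""
    if isinstance(bundle, dict):
        return {key: map_bundle(value, capture) for key, value in bundle.items()}
    return capture(bundle)   # a leaf: e.g. a field to be captured

# Toy "capture": replace each field with a reference to its stored result.
bundle = {"position": "noise_field", "shading": {"color": "gradient_field"}}
captured = map_bundle(bundle, lambda field: f"captured({field})")
print(captured)
# {'position': 'captured(noise_field)', 'shading': {'color': 'captured(gradient_field)'}}
```

A Capture Attribute node with a bundle input would behave like `map_bundle` with its capture step as the leaf operation; other nodes with dynamic socket counts could reuse the same traversal with a different per-field operation.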
  • CODE.BLENDER.ORG
    New Brush Thumbnails
Since the start of 2023, when the Brush Asset project went into full force, the goal was also to overhaul the brush thumbnails. A lot of thought went into the new design to make it future-proof and fit into the current UI.

This ended up as an active community effort to find a coherent and clear visual language. A big thanks to everyone who gave feedback and helped shape the thumbnails that will now be part of Blender 4.3!

Style Guides & Example Files

To make the process fully transparent and easy, a detailed style guide can be found in the developer documentation. Even though no elaborate setups are needed to create authentic-looking thumbnails, it also links to the repository where the thumbnails were created.

A short snippet of the style guide page

An Open & Future-Proof Style

For about 15 years, since Blender 2.5, the previous brush thumbnails were added to and built upon. Unfortunately, each new addition and iteration created more inconsistencies.

A collage of previous brush thumbnails from Blender 2.5 to 4.2

A primary goal was to create a recognizable and consistent design language for all Blender brushes, across all modes and object types. The thumbnails had to seamlessly fit into the themes of the UI and reuse similar accent colors.

With the addition of Brush Assets, it's easier than ever to create huge brush libraries. This exposed a big issue: previously, it was quite difficult to expand the set of brush thumbnails and icons. The files needed to recreate the original thumbnails or create new ones were not accessible, and the process was opaque. Because of this, many brushes added over the years lacked a thumbnail or reused existing ones.
Even the process of creating new toolbar icons limited how much variation was possible.

That's why the creation of Blender 4.3's new thumbnails had to be easy to reproduce, with results that seamlessly fit in with all other brushes. The built-in set of Essentials brushes was expanded quite a bit with useful presets, all with new, recognizable thumbnails. Users and asset authors should find it just as easy to expand it further.

Various early concepts and ideas

We also explored the idea of automatically generated brush previews during the development of Blender 2.8. But covering all possible 2D and 3D brush types and stroke effects is too complex for a procedural system. Instead, the creation should be in the hands of the user and as straightforward as possible.

Node asset thumbnails for the new hair curves were also created at the same time, and their look was directly affected by this. Ideally, all official Essentials assets should fit into a similarly coherent look.

Iteration Towards Ease of Creation

Over the past two years, the style of the thumbnails kept being shifted and refined. Many aspects were simplified or dropped in favor of making the creation and visuals simpler.

In the original design, the thumbnails were supposed to make use of a set of unique icons in the corner to communicate an otherwise obscure meaning or behavior of the brush types.
This idea slowly evolved into the flat colored arrows and lines on most of the thumbnails, which are much easier to create and be creative with.

Colors stayed a secondary element for identifying brushes, to keep the thumbnails color-blind friendly. All thumbnails were originally supposed to utilize colors, but to keep them clear and focused, regular draw brushes were eventually left without unnecessary colors or strokes.

An example of iteration over the Draw and Snake Hook brushes from start to final result.

There was also testing of different shaders and lighting effects, but the final look always came back to the idea that anybody should be able to create a perfect brush thumbnail on the fly. Some thumbnails are a bit more specific and involved, but the key look of Blender thumbnails should be accessible: simple use of Matcap or flat shading is all you need.

As a direct comparison to the old thumbnails above, here is the collage of the final thumbnail selection that was used as a base reference to create all remaining thumbnails. Many more new brushes, and existing brushes with missing thumbnails, have been added since then.

A focused selection of key brushes from every mode and object type

Try it Out!

More features can be added in future releases to make the creation of custom thumbnails much faster, for example by making screenshots directly within Blender to assign as asset thumbnails, and by adding the exact same Matcap as part of the default selection.

We look forward to how the community will expand the brush selection far more than ever before and share distinct-looking brushes. Download the Blender 4.3 Beta now to test it out.

For feedback and contributing to the Essentials brushes, visit the Call for Content: Default Brushes.
  • WWW.BLENDER.ORG
    Blender Conference 2024 Recap
Blender Conference 2024 Recap
November 1st, 2024
Press Releases
Francesco Siddi

Blender Conference 2024 wrapped one week ago; hopefully we all made it past the post-bcon blues!

As usual, you can enjoy all the recorded presentations on Blender's YouTube channel and on PeerTube. Don't forget to check out the Photo Gallery!

Feedback

Overall, feedback was positive. Compared to previous years, food and venue ratings went up, while overall satisfaction with the event remains very positive. When it comes to the program, satisfaction moved from extremely high to high, due to the average quality of a few sessions. This is something we will definitely focus on improving for next year!

We will also explore additional ways to encourage attendees to engage with one another, and we aim to make the venue even more welcoming and comfortable.

Thank you!

The event was made possible thanks to the contribution of many people, and made memorable thanks to all attendees and speakers. Special thanks to Amerpodia and the Felix Meritis staff, to Faber audiovisuals, and especially to the Blender HQ and remote teams for making this an amazing experience.

See you next year!

Francesco