Blender is the free and open source 3D creation suite. Free to use for any purpose, forever.
Recent Updates

CODE.BLENDER.ORG
This Summer's Sculpt Mode Refactor
November 7th, 2024 | Code Design, General Development | Hans Goudey

Over the past several months sculpt mode underwent a large rewrite. Since the project has wrapped up, this post gives an overview of what changed.

Unlike most other development projects, this had no effect on the interface. Before and after the project, Blender looked exactly the same. Typically this should raise some eyebrows, because it often means developers are prioritizing work based on its effect on the code rather than its utility to users. In this case, problems with the code have made feature development significantly harder over the years, and refactoring came with plenty of potential performance improvements.

Overall, for those who want to skip all the technical details: entering sculpt mode in Blender 4.3 is over 5x faster, brushes themselves are about 8x faster, and memory usage is reduced by about 30%. For actual visible changes to sculpting in 4.3, see brush assets. For a full list of the refactor work, see the task.

Entering Sculpt Mode

Entering sculpt mode was known to be quite slow. Based on profiles, it also looked much slower than it should be, since it was completely single threaded.

A profile of Blender as it enters sculpt mode on a large mesh in 4.2, where each row is a CPU core.

It turns out Blender was bottlenecked by two things: building the BVH tree that accelerates spatial searches and raycasting, and uploading the mesh data to the GPU for drawing.

Improving the BVH build time was a months-long iterative process of finding bottlenecks with a profiler, addressing them, and cleaning the code to make further refactoring possible. Adding trivial multi-threading to the calculation of bounds and other temporary data was the most significant improvement, at almost 5x. Beyond that, reducing memory usage improved performance by another 30%, and simplifying the spatial partitioning of face indices using the C++ standard library gave another 30%. And finally, changing the BVH from storing triangles to storing faces (for a quad mesh there are half as many faces as triangles!) improved performance by another 2.3x.

Entering sculpt mode is about 5 times faster compared to 4.2 (a change from 11 to 1.9 seconds with a 16 million face mesh on a Ryzen 7950X).

Lessons for Developers

- Any array the size of a mesh is far from free. We should think hard about whether all the data in the array is really necessary.
- Any algorithm should clearly separate serial and parallel parts. Any loop that can be done in parallel should be inside a parallel_for.
- We shouldn't be reimplementing common algorithms like partitioning; that makes code so scary and weird that no one touches it for years.
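
To make the second lesson concrete, here is a minimal sketch of that serial/parallel split applied to a bounds update. It is illustrative only: the types are made up and plain C++17 parallel algorithms stand in for Blender's own threading::parallel_for.

    // Illustrative sketch, not Blender's code.
    #include <algorithm>
    #include <array>
    #include <execution>
    #include <vector>

    struct Bounds {
      float min[3] = {1e30f, 1e30f, 1e30f};
      float max[3] = {-1e30f, -1e30f, -1e30f};
    };

    struct Node {
      std::vector<int> vert_indices; /* Vertices owned by this BVH node. */
      Bounds bounds;
    };

    static void merge(Bounds &a, const float v[3])
    {
      for (int i = 0; i < 3; i++) {
        a.min[i] = std::min(a.min[i], v[i]);
        a.max[i] = std::max(a.max[i], v[i]);
      }
    }

    /* Parallel part: each node's bounds only touch that node's own data. */
    void update_node_bounds(std::vector<Node> &nodes,
                            const std::vector<std::array<float, 3>> &positions)
    {
      std::for_each(std::execution::par, nodes.begin(), nodes.end(), [&](Node &node) {
        Bounds b;
        for (const int vert : node.vert_indices) {
          merge(b, positions[vert].data());
        }
        node.bounds = b;
      });
    }

    /* Serial part: combining the per-node results is cheap, so it stays serial. */
    Bounds total_bounds(const std::vector<Node> &nodes)
    {
      Bounds total;
      for (const Node &node : nodes) {
        merge(total, node.bounds.min);
        merge(total, node.bounds.max);
      }
      return total;
    }

The parallel loop only writes to the node it is working on, so no locking is needed, while the cheap combining step stays serial.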
Drawing

There is a fundamental cost to uploading geometry data to the GPU, and we will always be bottlenecked to some extent by the large amount of data we need to render. However, as a tweaked version of code from 15 years ago, sculpt mode drawing had enough overhead and complexity that significant improvements were possible.

The GPU data for the whole mesh is split into chunks, with one chunk per BVH node. One main problem with the old data upload was its outer loop over nodes. That forced all the book-keeping to be duplicated for every node. Often just focusing on simplifying the code gave performance improvements indirectly. Removing two levels of function call indirection for multires data upload roughly doubled the performance, and removing function calls for every mesh edge gave another 30% improvement.

The main change to the drawing code was a rewrite to avoid all duplicate work per BVH node, add multi-threading, and change the way we tag changed data. This improved memory usage by roughly 15% (we now only calculate viewport wireframe data if the overlay is actually turned on), and entering sculpt mode became at least 10% faster.

GPU memory usage was reduced by almost 2x by using indexed drawing to avoid duplicating vertex data for every single triangle. Now vertex data is only duplicated per face corner.

Previously, sculpting on a BVH node would cause every single attribute to be reuploaded to the GPU. Now we only reupload attributes that actually changed. For example, changing face sets only reuploads face sets. Tracking this state only costs a single bit per node.

BVH Tree Design

Previously, the sculpt BVH tree, often referred to as the PBVH (Paint Bounding Volume Hierarchy), was a catch-all storage for any data needed anywhere in sculpt mode. To reduce the code's spaghetti factor and clarify the design, we wanted to focus the BVH on its goal of accelerating spatial lookups and raycasting. To do that we removed references to mesh visibility, topology, positions, colors, masks, the viewport clipping planes, back pointers to the geometry, etc. from the BVH tree. All of this data was stored redundantly in the BVH tree, so whenever it changed, the BVH tree needed to change too. Nowadays the design is more focused and it's much easier to understand the purpose of the BVH.

Another fundamental change to the BVH was replacing each node's references to triangles with references to faces. In a typical quad mesh there are twice as many triangles as faces, so this allowed us to halve a good portion of the BVH tree's memory overhead.

Brush Evaluation

To evaluate a brush, regions (BVH nodes) of the mesh are first tested roughly for inclusion within its radius. For every vertex in each of these regions, we calculate a position translation and the brush's strength. The vertex strength includes more granular filtering based on the brush radius, mask values, automasking, and other brush settings.

Meshes are split into multiple BVH nodes which are used to filter unaffected geometry.

Prior to this project, all these calculations were performed vertex by vertex. For each vertex, we retrieved the necessary information, calculated the deformation and the relative strength, and then finally applied the brush's change.
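
Schematically, the move away from that pattern looks roughly like the sketch below; the reasoning behind it is spelled out in the following paragraphs. The names and the simplified strength calculation are hypothetical and not the actual brush code.

    /* Schematic sketch only: the real brush code handles far more settings. */
    #include <cstddef>
    #include <vector>

    struct float3 { float x, y, z; };

    /* Old shape: everything for one vertex at a time, touching several
     * unrelated arrays in every iteration. */
    void brush_old(std::vector<float3> &positions, const std::vector<float> &masks,
                   const float3 &offset, const float strength)
    {
      for (size_t i = 0; i < positions.size(); i++) {
        const float factor = strength * (1.0f - masks[i]); /* Branchy in practice. */
        positions[i].x += offset.x * factor;
        positions[i].y += offset.y * factor;
        positions[i].z += offset.z * factor;
      }
    }

    /* New shape: one simple hot loop per step, each sweeping a single array. */
    void brush_new(std::vector<float3> &positions, const std::vector<float> &masks,
                   const float3 &offset, const float strength)
    {
      std::vector<float> factors(positions.size(), strength);
      for (size_t i = 0; i < factors.size(); i++) {
        factors[i] *= 1.0f - masks[i]; /* Mask filtering as its own pass. */
      }
      std::vector<float3> translations(positions.size());
      for (size_t i = 0; i < translations.size(); i++) {
        translations[i] = {offset.x * factors[i], offset.y * factors[i], offset.z * factors[i]};
      }
      for (size_t i = 0; i < positions.size(); i++) {
        positions[i].x += translations[i].x;
        positions[i].y += translations[i].y;
        positions[i].z += translations[i].z;
      }
    }

Each of the small loops sweeps one contiguous array at a time, which is what makes auto-vectorization and predictable prefetching possible.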
Because mesh data is stored in large contiguous arrays, it is inefficient from a memory perspective to process all attributes for a particular vertex at once, as this likely results in many cache misses and evictions. While the previous code was somewhat concise, handling all three sculpt mesh types (regular meshes, dynamic topology, multires) at once, this generic processing had some significant negative side effects:

- The old brush code was hard to reason about because of C macros and the combination of multiple data structures in one loop.
- The structure had little opportunity for improved performance because of runtime switching between data structures and the lowest-common-denominator effect of handling different formats.
- A "do everything for each vertex" structure has memory access patterns that don't align with the way data is actually stored.

Brush code now just processes a single action for all the vertices in a node at the same time, splitting the code into very simple hot loops which can use SIMD, use much more predictable memory access patterns, and have significantly less branching per vertex.

For further reference, here is a change that refactored the clay thumb brush. Though the new code has more lines, it's more independent, flexible, and easier to change.

Proxy System

Previously, brush deformations were accumulated into temporary "proxy" storage on each BVH node. This accumulation occurred for each symmetry iteration until the end of a given brush step, at which point the data was written into the evaluated mesh positions, shape key data, and the base mesh itself.

We completely removed the proxy system as part of refactoring each brush. Instead, brushes now immediately write their deformation during each symmetry step calculation. This avoids storing temporary data and improves cache access patterns by writing to memory that is already cached. Removing the proxy storage also reduced the size of BVH nodes by around 40%, which aligns with our ongoing goal of improving performance by splitting the mesh into more nodes.

Profiling revealed a significant bottleneck during brush evaluation: just storing the mesh's initial state for the undo system was taking 60% of the time. When something so simple is taking so much time, there is clearly a problem. The issue turned out to be that most threads involved in brush evaluation were waiting for a lock while a single thread did a linear search through the undo data, trying to find the values for its BVH node.

    for (std::unique_ptr<undo::Node> &unode : step_data->nodes) {
      if (unode->bvh_node == bvh_node && unode->data_type == type) {
        return unode.get();
      }
    }

Simply changing the vector to a Map hash table gave us back that time and significantly improved the responsiveness of brushes.

    return step_data->undo_nodes_by_pbvh_node.lookup({node, type});

Though there was plenty of refactoring required to make this possible, the nice part is how often very little time with a profiler is necessary to identify significant improvements.

Undo Data Memory Usage

Undo steps also became slightly more memory efficient in 4.3. The overhead of each BVH node's undo storage for a brush stroke was reduced 10x, from about 4KB to about 400 bytes. In the future we would like to look into compressing stored undo step data. This could require significantly less memory.

For another example of thread contention, we look to the counting of undo step memory usage. Undo data is created from multiple threads, and each thread incremented the same memory usage counter variable.
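
The shape of that second contention problem, and of the fix described next, looks roughly like this (an illustrative sketch with made-up types, not the actual undo code):

    #include <atomic>
    #include <cstddef>
    #include <functional>
    #include <numeric>
    #include <utility>
    #include <vector>

    struct UndoNode {
      std::vector<float> stored_positions;
      size_t memory_in_bytes() const { return stored_positions.size() * sizeof(float); }
    };

    /* Before: every worker bumps one shared counter while pushing undo data,
     * so all threads serialize on the same cache line. */
    std::atomic<size_t> total_undo_memory{0};
    void push_undo_node_old(std::vector<UndoNode> &nodes, UndoNode node)
    {
      total_undo_memory += node.memory_in_bytes();
      nodes.push_back(std::move(node)); /* Locking for the container omitted here. */
    }

    /* After: push without touching any shared counter, then compute the total
     * once the undo step is finished, as a simple reduction over its nodes. */
    size_t count_undo_memory(const std::vector<UndoNode> &nodes)
    {
      return std::transform_reduce(nodes.begin(), nodes.end(), size_t{0}, std::plus<>{},
                                   [](const UndoNode &node) { return node.memory_in_bytes(); });
    }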
Simply counting memory usage later on with a proper reduction gave a 4% brush evaluation performance improvement. Writing to the same memory from multiple threads at the same time is slow!

In yet another thread contention problem, writing "true" to a single boolean from multiple threads turned out to be a significant issue for the calculation of the average mesh normal under the cursor. The boolean was logically redundant, so just removing it improved brush evaluation performance by 2x.

Multi-Resolution Modifier

Most of these performance improvements were targeted at base mesh sculpting, where there was more low-hanging fruit. However, multires changes followed the same design and there were a few more specific optimizations for it too. Most significantly, moving to a struct-of-arrays format for positions, normals, and masks gave a 32% improvement to brush performance, and simplified code.

The sculpt-mode multires data structure was optimized the same way meshes were optimized over the past years (see last year's conference talk).

Some multires workflows have remaining bottlenecks though, like subdivision evaluation or bad performance at very high subdivision levels.

The End!

Thanks for reading! It was a pleasure to be able to iterate on the internals of sculpt mode. Hopefully the changes can be a solid foundation for many future improvements.


CODE.BLENDER.ORG
Geometry Nodes Workshop: October 2024

After the Blender Conference, the Geometry Nodes team came together once again to discuss many design topics. This time our main focus was to improve support for physics, especially hair dynamics, in Geometry Nodes. A few other topics were discussed as well though. You can also read the raw notes we took during the meetings.

The following people participated in the workshop (from left to right): Lukas Tönne, Hans Goudey, Simon Thommes (afternoons) and Jacques Lucke. Additionally, Dalai Felinto helped kick off the workshop and Falk David joined in every now and then.

Previously in Geometry Nodes

Our last workshop was 5 months ago. This section provides a quick update on the topics we discussed there. Omitted topics don't have any news.

- Gizmos: They are part of the Blender 4.3 release. The next step here is to add gizmos to some built-in nodes like the Transform Geometry or Grid nodes.
- Baking: Bakes can be packed now. The next step is to provide higher level tooling to work with multiple bakes in a scene.
- Rename Sockets in Nodes: Ctrl+click to rename sockets works in a few nodes now (e.g. Bake, Simulation, Capture Attribute). There are some technical difficulties with making it work with double click and for right-aligned labels.
- Tools for Node Tree UX: Built-in nodes support socket separators now (used in the For-Each Zone). Support will be added to node groups at some point. The viewer node automatically changes its position now.
- Asset Embedding: A prototype was built to test the behavior. We agreed on how to solve the technical difficulties with it, but some UI aspects are still somewhat unclear (e.g. how this is presented to the user as a new import method besides linking and appending).
- Menu Socket: We improved the error handling when there are invalid links, giving more information to the user about what is wrong. This applies to menu sockets, but also other invalid links like invalid type conversions.
- Socket Shapes: We found a design where everyone is okay with the trade-offs it makes. A prototype was built. The work on it is still ongoing.
- Grease Pencil: Geometry Nodes can work with Grease Pencil data starting with Blender 4.3.
- For-Each Zones: There is a new For-Each Element zone in 4.3. Work on other kinds of For-Each zones is ongoing.

Approaching Physics

As usual, there are many different perspectives that we have to take into account when designing how we want to approach physics in Geometry Nodes:

- Using high level node group assets to set up e.g. a hair simulation.
- Building and/or customizing solvers for specialized effects.
- The modifier-only workflow.
- Higher level add-ons which abstract away the node and modifier interface.

We started out by clarifying that there is a fairly fundamental difference in how to think when chaining multiple geometry operations vs. setting up a physics simulation. The difference is that when creating a simulation, one thinks about the desired behavior (forces, emitters, colliders, ...) first, and not so much about the order in which the geometry is actually processed. In fact, the majority of users should only have to care about the behavior without worrying about specific geometry operations.

We therefore want to provide better ways to separate describing the desired behavior from actually implementing the behavior. We call this the "declarative" approach.
It gives users high level control over a potentially very complex evaluation system that makes all the desired behaviors work. The declarative approach can also be very useful for things beyond physics, like building a brush engine or scattering system.

To achieve this separation, we will introduce two new socket types: bundles and closures, which are explained in more detail below (exact names are not set in stone yet).

Bundles

A bundle is a container that allows packing multiple values into a single socket. Values of different types can be put into a single bundle. A work-in-progress patch is available already.

Bundles are quite useful to reduce the number of necessary links. For now, we are mostly interested in how they can be used to create a uniform interface for various kinds of simulation behaviors. Each behavior will be a bundle that contains the necessary information for the solver to understand what to do with it.

Closures

Closure sockets allow passing around arbitrary functions, including entire node groups. For example, this allows passing a node group as an input into another group which will then evaluate it. This is an entirely new paradigm in Blender's node systems, and without being already familiar with the concept of passing functions around as data, it's not trivial to understand. However, it's incredibly powerful and allows building much more flexible and user-friendly high level node groups.

In programming, the term "closure" refers to functions which may be passed around as data and can capture variables from where they are created. We have not found a good alternative name yet.

To create closures, we use a new closure zone. It's a bit like creating a small local node group that can then be passed around. Just using existing node groups does not work, because we need the ability to pass data from the outside into the closure (like in all other zone types). Also, it's good to have the ability to build the entire node tree in a single group to see everything at once.

Position Based Dynamics

The declarative approach with bundles and closures is generally useful for all kinds of physics simulations. While we want to enable users to build their own solvers, we also want to integrate hair simulation specifically into Geometry Nodes directly.

The hair simulation is designed around a Position-Based Dynamics (PBD/XPBD) solver. This solver has been applied to soft-body simulation, cloth, hair, granular materials and more. The PBD method is often used for real-time game physics and is relatively easy to implement. It has advantages in terms of speed and accuracy over the linearized velocity-based cloth solver currently used for hair dynamics. There are lots of learning resources and scientific papers on the topic for people to learn more. When first looking into this, we found this overview and this video tutorial series particularly useful.

We will try to implement as much of this as possible using generic geometry nodes. Some parts like collision detection and constraint grouping may require new built-in nodes for performance reasons. This will be decided when we get there.
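
For readers unfamiliar with the method, the core loop of a position-based solver is compact enough to sketch. The following is a generic, textbook-style formulation for a single pinned strand with distance constraints, not the planned Geometry Nodes implementation:

    #include <cmath>
    #include <vector>

    struct float3 {
      float x, y, z;
      float3 operator+(const float3 &o) const { return {x + o.x, y + o.y, z + o.z}; }
      float3 operator-(const float3 &o) const { return {x - o.x, y - o.y, z - o.z}; }
      float3 operator*(float s) const { return {x * s, y * s, z * s}; }
    };

    void simulate_strand(std::vector<float3> &positions, std::vector<float3> &velocities,
                         const float segment_length, const float dt, const int solver_iterations)
    {
      const float3 gravity = {0.0f, 0.0f, -9.81f};
      std::vector<float3> predicted = positions;

      /* 1. Predict positions from external forces. Point 0 is the pinned root. */
      for (size_t i = 1; i < predicted.size(); i++) {
        velocities[i] = velocities[i] + gravity * dt;
        predicted[i] = positions[i] + velocities[i] * dt;
      }
      /* 2. Iteratively project distance constraints between neighboring points. */
      for (int iter = 0; iter < solver_iterations; iter++) {
        for (size_t i = 0; i + 1 < predicted.size(); i++) {
          const float3 delta = predicted[i + 1] - predicted[i];
          const float length = std::sqrt(delta.x * delta.x + delta.y * delta.y + delta.z * delta.z);
          if (length == 0.0f) {
            continue;
          }
          const float3 correction = delta * ((length - segment_length) / length);
          if (i == 0) {
            predicted[i + 1] = predicted[i + 1] - correction; /* Root stays pinned. */
          }
          else {
            predicted[i] = predicted[i] + correction * 0.5f;
            predicted[i + 1] = predicted[i + 1] - correction * 0.5f;
          }
        }
      }
      /* 3. Derive velocities from the corrected positions and commit them. */
      for (size_t i = 1; i < positions.size(); i++) {
        velocities[i] = (predicted[i] - positions[i]) * (1.0f / dt);
        positions[i] = predicted[i];
      }
    }

Stiffness is controlled by the number of solver iterations (or, in XPBD, by a compliance term) rather than by shrinking the time step, which is part of what makes the method attractive for interactive use.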
Lists

For this project we'll likely need lists in different places, for example to manage a list of behaviors passed into the solver and to process contact points after collision detection. Lists have been a talking point in previous workshops and we don't have much new information that has not been said before. We went over the set of nodes we'd need, but there were no real surprises there.

Lists are also particularly important for hair, because we need to map generated hair to potentially multiple guide hair strands. Currently, there is no good way to store that mapping, which makes any workflow that uses guides, especially for simulation, quite unreliable.

The main blocker to get lists into Geometry Nodes is still the socket shapes discussion.

Socket Shapes

The last blog post contains an explanation of the topic. Last time, we didn't come to a conclusion for how to deal with socket shapes when we get more types like fields, lists, grids and images. The tricky thing is that we can't show all the information we'd like to with just socket shapes, so we have to decide what we don't want to show anymore.

Some design work has been done on the topic in the last couple of months and a simple prototype has been built too. We're now at a point where we are all at least okay with the solution's trade-offs, so that we can hopefully progress on the topic. Once that is resolved, volume grids and lists are much easier to get into a releasable state.

For Each Geometry Zone

Blender 4.3 comes with the For Each Element zone. While that's very useful already, there are other kinds of for-each zones that can be useful. One of those is a For Each Geometry zone, which we used to call For Each Unique Instance in previous workshops.

Its purpose is to iterate over each real geometry in a geometry set that may contain an instance hierarchy. Many built-in nodes do this internally already. For example, the Subdivision Surface node applies its effect to all meshes in the input, including those in instances. For various reasons, not all built-in nodes can or should do this. A new For Each Geometry zone would allow adding the same functionality to all built-in nodes and custom node groups, which is impossible currently.

This is quite different from the Instances mode in the For Each Element zone. If the geometry to be processed contains many instances of the same mesh, the existing zone would run for each mesh separately, while this new zone would only run once, because there is only a single mesh.

There is already some previous design work available in #123021.
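
Conceptually, the difference between the two zones comes down to de-duplicating the referenced geometries before running the body of the zone. A rough sketch with made-up types (not how the zone will actually be implemented):

    #include <functional>
    #include <unordered_set>
    #include <vector>

    struct Mesh { /* ... */ };

    struct Instance {
      const Mesh *mesh; /* Many instances may point at the same mesh. */
      float transform[4][4];
    };

    void for_each_unique_geometry(const std::vector<Instance> &instances,
                                  const std::function<void(const Mesh &)> &fn)
    {
      std::unordered_set<const Mesh *> visited;
      for (const Instance &instance : instances) {
        /* A "For Each Element" style loop would call fn for every instance;
         * here instanced copies of the same mesh are processed only once. */
        if (visited.insert(instance.mesh).second) {
          fn(*instance.mesh);
        }
      }
    }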
Modal Node Tools

We reconfirmed the overall design for modal node tools from a year ago. Since then, we also noticed that there are two kinds of modal operators in Blender currently:

- Operators based on the initial state (like bevel). These have redo panels.
- History dependent operators using the previous state at every modal step (like brushes). These don't have redo panels.

Both kinds of operators could be created with nodes. However, when we talked about modal node tools so far, we were mainly concerned with the second type. Many use cases of the first kind can probably be solved with gizmos or a gizmo-like system. That's because the interactive part of these operators is mostly just used to control some input values for a non-modal operator.

We also noticed that there are problems caused by the fact that all node tools are just a single operator in the end (geometry.execute_node_group), but none of these seem impossible to solve. For example, we want modal node tools to come with their own keymap, but users should be able to override this keymap like any other keymap in Blender. Typically, there is a mapping from modal operator to keymap, but that does not work well here yet for the mentioned reason. Alternatively, it may be a nice solution to attach keymaps to specific assets in the user preferences instead of just to operators.

It would also be possible to register a separate operator for each node tool, but that comes with its own problems. For example, that would introduce yet another way to reference specific asset data-blocks by their operator name, and it can easily cause operator name conflicts too.

Field Context Zone

We started discussing a new Field Context Zone. The overall design is very incomplete and we don't have concrete answers to many questions surrounding it yet. The general idea is to give access to the field evaluation context more explicitly.

For example, for a field that's evaluated on a geometry, the new zone would have the context geometry as input, and would output a field that depends on that geometry. This opens up new opportunities for building fields that would be much more annoying to build before.

The zone would also reduce redundancy in the design of nodes. We have pairs of nodes like Sample Index and Evaluate at Index which are the same except that one has a geometry as an input and the other does not. A goal of the zone is that the Evaluate at Index node could be built out of the Sample Index node.

A limitation of Geometry Nodes is that it can currently only output a geometry that is then passed to the next modifier. Sometimes it would be very useful to output other data, like another geometry or single values. Those values could become part of the evaluated state of an object so that they can be referenced by other objects using nodes or drivers.

This would allow outputting a bunch of vectors from Geometry Nodes which are then used to drive an armature. Additionally, we could allow outputting a bundle of values that is then passed into the next modifier. This way it becomes possible to build richer modifier stacks without the limitation of having to encode all information in the geometry passed between modifiers.

We could even allow outputting fields and closures from objects (probably with some limitations due to the lifetime of some data). This would allow building all kinds of "effector" objects that encode some behavior that can be understood by other Geometry Nodes setups. This can also be thought of as a generalization of the existing force field object type.

Internal Data Sockets

In some cases, we want to add functionality that requires passing around data that we don't want to expose fully. A good example would be KD trees and BVH trees, which allow speeding up algorithms that require finding nearest points or doing ray casts. These data structures have well defined APIs that we could expose, but exposing their implementation details could make future optimization much more difficult, because optimizations could require breaking files.

It does not seem beneficial to add a new socket type for each kind of internal data. So far we think that it is good enough to only add a single type (with a single color) that is used to pass around all kinds of internal data.

Another use case that came up in the past is a Bake Reference socket that passes data from a Bake node to an Import Bake node (once we have that). The tricky thing with an Import Bake node is that it has to be able to read bakes from disk as well as packed bakes. So just giving it a file path input does not work. Reading from files should still be possible of course, but we also need a solution for packed bakes.
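
The single-opaque-type idea maps naturally onto type erasure. As a purely conceptual sketch (made-up names, not the planned socket implementation), an internal-data socket value could carry any payload and only hand it back to nodes that know which concrete type to expect:

    #include <any>
    #include <iostream>

    struct KDTree { int point_count = 0; };

    struct InternalDataValue {
      std::any payload; /* The concrete type stays an implementation detail. */

      template<typename T> const T *get() const
      {
        return std::any_cast<T>(&payload); /* nullptr if the type doesn't match. */
      }
    };

    int main()
    {
      InternalDataValue socket_value;
      socket_value.payload = KDTree{1000};

      /* A downstream "find nearest" style node asks for the type it expects. */
      if (const KDTree *tree = socket_value.get<KDTree>()) {
        std::cout << "KD tree with " << tree->point_count << " points\n";
      }
      return 0;
    }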
Group Input Defaults

Every input of a node group has a default value. For some types, the default is currently hardcoded (e.g. an empty geometry). Others can be chosen manually in the sidebar, where some socket types support more complex inputs. For example, vector sockets can use the position field by default. However, the set of possible defaults is currently hardcoded. The goal of this topic is to generalize the system for defaults to remove limitations.

The overall idea is to have a new Group Defaults node that has an input socket for every input of the node group. The default of any input is specified by just connecting the value to the node, like in the mockup below.

We could also make it possible for some default values to depend on other input values, but it's not clear yet how much complexity this adds, so that may only be done later.

A tricky aspect is that adding a default to a socket that did not have one yet may override its value in all group nodes that use this group. That's kind of the inverse of a problem we have already: changes to group input defaults are not propagated to group nodes at all. The problem is that we don't really know if a value has already been modified or not, which becomes even trickier when the node group is linked from another file.

Context Inputs

The goal of this topic is to solve the following problems:

- We want to remove the need for "control" node groups as a way to get global input values (example). While useful in some setups, this approach does not work all that well when building reusable node systems.
- We have no good way to pass the hair system's surface geometry to the relevant hair nodes.
- We have no way to override existing contextual input nodes like Mouse Position, Active Camera and Scene Time.
- We need a more flexible replacement for the Is Viewport node, which is used to control a performance vs. quality trade-off. Just making this decision based on whether we're rendering or not is not good enough. Sometimes the fast mode of a node group should be used when in edit mode, and otherwise the high quality mode.

What all these issues have in common is that we want to pass information into nested node groups without having to set up all the intermediate links, which would cause a lot of annoying boilerplate. Nevertheless, we want to be able to override all these inputs at any intermediate level.

The proposed solution is to generalize the concept of Context Inputs. There are many existing context input nodes (like Scene Time and Mouse Position) already. We also want to add a Context Input node for custom inputs. Whenever a context node is used in a (nested) node group, that will automatically create an input for the node group. Group nodes at a higher level can then decide to either pass in a specific value for that input or to leave it unconnected. If it's not connected, the context input will be propagated further up. If the context value has not been provided by any node, it's propagated up to the Geometry Nodes modifier where again users can choose to specify it. If not, we could support reading the value from a custom property of the object or scene.

There is a work-in-progress pull request for this feature.
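
The lookup rule (the nearest override wins, otherwise keep going up, with the modifier and possibly a custom property as the last resort) behaves like a chain of scopes. Here is a small conceptual analogy in code; it is not Blender's implementation:

    #include <map>
    #include <optional>
    #include <string>

    struct ContextLayer {
      const ContextLayer *parent = nullptr;  /* The enclosing group or the modifier. */
      std::map<std::string, float> provided; /* Values overridden at this level. */

      std::optional<float> lookup(const std::string &name) const
      {
        if (const auto it = provided.find(name); it != provided.end()) {
          return it->second; /* Overridden here, so nested groups see this value. */
        }
        if (parent != nullptr) {
          return parent->lookup(name); /* Not connected here: ask the level above. */
        }
        return std::nullopt; /* Could fall back to an object/scene custom property. */
      }
    };

    int main()
    {
      ContextLayer modifier;
      modifier.provided["Quality"] = 1.0f;
      ContextLayer outer_group{&modifier};
      ContextLayer inner_group{&outer_group}; /* Uses a Context Input node. */
      return inner_group.lookup("Quality").value_or(0.0f) == 1.0f ? 0 : 1;
    }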
Modifier Inputs

We want to add more features to group inputs in the modifier:

- For context inputs, we need the ability to decide whether a specific input should be provided or not.
- For geometry inputs, we want to choose whether an object or collection input should be used, and whether the original or relative space is used (like in the Object Info node).

Putting all these choices in the modifier and having them always visible is problematic from a UI perspective. Even now, the button to switch between single value and attribute input adds clutter that is not needed in many cases.

We explored options for how this could work, like putting the options in the right click menu, in the manage panel, or having an edit button in the modifier that allows temporarily showing all additional settings. For now, the approach with the right click menu seems best, even if it is a little less discoverable at first.

Bundles for Dynamic Socket Counts

When we explored bundles further, we noticed that they may also provide a good solution for another long standing limitation which we discussed back in 2022: dynamic socket counts. Since then, quite some effort went into improved support for dynamic socket counts and nowadays we have them in multiple built-in nodes like all the zones, Capture Attribute and Bake. What's missing is support for building node groups that have a dynamic number of inputs and outputs.

We could allow tagging a bundle input of a node group as an extensible socket. Then, from the outside, one could pass multiple values which will become a bundle inside the group. When outputting that bundle from the group, all the values are separated again.

Inside of the group, the nodes would have to process all elements in the bundle. Built-in nodes could do that automatically. For example, when the Capture Attribute node has a bundle input, it could recursively capture each contained field and replace it with the captured anonymous attribute field. Something similar can be done in other nodes that already have a dynamic number of sockets.
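
To make this a bit more tangible, here is a conceptual sketch of a bundle as an ordered set of named values of arbitrary type. It is illustrative only; the real design, including how fields would be represented inside a bundle, is still being worked out.

    #include <any>
    #include <string>
    #include <utility>
    #include <vector>

    struct Bundle {
      std::vector<std::pair<std::string, std::any>> items;

      void add(std::string name, std::any value)
      {
        items.emplace_back(std::move(name), std::move(value));
      }
    };

    int main()
    {
      /* Values plugged into an extensible group input would be packed together... */
      Bundle behaviors;
      behaviors.add("Gravity", 9.81f);
      behaviors.add("Collision Distance", 0.001f);
      behaviors.add("Substeps", 10);

      /* ...and a node inside the group can then process every element,
       * whatever its type, e.g. capture each contained field in turn. */
      return static_cast<int>(behaviors.items.size());
    }
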

CODE.BLENDER.ORG
New Brush Thumbnails

Since the start of 2023, when the Brush Asset project went into full force, the goal was also to overhaul the brush thumbnails. A lot of thought went into the new design to make it future-proof and fit into the current UI. This ended up as an active community effort to find a coherent and clear visual language. A big thanks to everyone who gave feedback and helped shape the thumbnails that will now be part of Blender 4.3!

Style Guides & Example Files

To make the process fully transparent and easy, a detailed style guide can be found in the developer documentation. Even though no elaborate setups are needed to create authentic looking thumbnails, it also links to the repository where the thumbnails were created.

A short snippet of the style guide page

An Open & Future Proof Style

For about 15 years, since Blender 2.5, the previous brush thumbnails have been added to and built upon. Unfortunately each new addition and iteration created more inconsistencies.

A collage of previous brush thumbnails from Blender 2.5 to 4.2

A primary goal was to create a recognizable and consistent design language for all Blender brushes, for all modes and object types. The thumbnails had to seamlessly fit into the themes of the UI and reuse similar accent colors.

With the addition of Brush Assets it's easier than ever to create huge brush libraries. This exposed a big issue: previously it was quite difficult to expand the set of brush thumbnails and icons. The files were not accessible to recreate the original thumbnails or create new ones, and the process was opaque. Because of this, many brushes that were added over the years were lacking a thumbnail or reused existing ones. Even the process of creating new toolbar icons had a limit to how much variation is possible.

That's why the creation of Blender 4.3's new thumbnails had to be easy to reproduce, so that new thumbnails seamlessly fit in with all other brushes. The built-in set of Essentials brushes was expanded quite a bit with useful presets, all with new recognizable thumbnails. Users and asset authors should find it just as easy to expand it further.

Various early concepts and ideas

We also explored the idea of automatically generated brush previews during the development of Blender 2.8. But covering all possible 2D and 3D brush types and stroke effects is too complex for a procedural system. Instead the creation should be in the hands of the user and as straightforward as possible.

Node asset thumbnails for the new hair curves were also created at the same time and the look was directly affected by this. Ideally all official Essentials assets should fit into a similarly coherent look.

Iteration Towards Ease of Creation

Over the past two years the style of the thumbnails kept being shifted and refined. Many aspects were simplified or dropped in favor of making the creation and visuals simpler.

In the original design the thumbnails were supposed to make use of a set of unique icons in the corner to communicate an otherwise obscure meaning or behavior of the brush types.
This idea slowly evolved into the flat colored arrows and lines on most of the thumbnails, which are much easier to create and be creative with. Colors also stayed a very secondary element for identifying brushes, to keep the thumbnails color-blind friendly. All thumbnails were also supposed to utilize colors, but to keep them clear and focused, eventually any regular draw brushes were left without unnecessary colors or strokes.

An example of iteration over the Draw and Snake Hook brushes from start to final result.

There was also testing of different shaders and lighting effects, but the final look always came back to the idea that anybody should be able to create a perfect brush thumbnail on the fly. Some thumbnails are a bit more specific and involved, but the key look of Blender thumbnails should be accessible. Simple use of Matcap/flat shading is all you need.

To put a direct comparison to the old thumbnails above, here is the collage of the final thumbnail selection that was used as a base reference to create all remaining thumbnails. Many more new brushes and existing brushes with missing thumbnails have been added since then.

A focused selection of key brushes from every mode and object type

Try it Out!

More features can be added in future releases to make the creation of custom thumbnails much faster, for example by making screenshots directly within Blender to assign asset thumbnails, and by adding the exact same Matcap as part of the default selection.

We look forward to how the community will be able to expand the brush selection far more than ever before and share distinct looking brushes. Download the Blender 4.3 Beta now to test it out. For feedback and contributing to the Essentials brushes, visit the Call for Content: Default Brushes.


WWW.BLENDER.ORG
Blender Conference 2024 Recap
November 1st, 2024 | Press Releases | Francesco Siddi

Blender Conference 2024 wrapped one week ago; hopefully we all made it past the post-bcon blues!

As usual, you can enjoy all the recorded presentations on Blender's YouTube channel, and on PeerTube. Don't forget to check out the Photo Gallery!

Feedback

Overall, feedback was positive. Compared to previous years, food and venue ratings went up, while overall satisfaction with the event remains very positive. When it comes to the program, satisfaction moved from extremely high to high, due to the average quality of a few sessions. This is something we will definitely focus on improving for next year! We will also explore additional ways to encourage attendees to engage with one another, and we aim to make the venue even more welcoming and comfortable.

Thank you!

The event was made possible thanks to the contribution of many people and made memorable thanks to all attendees and speakers. Special thanks to Amerpodia and the Felix Meritis staff, to Faber audiovisuals, and especially to the Blender HQ and remote teams for making this an amazing experience.

See you next year!
Francesco


CODE.BLENDER.ORG
EEVEE Next Generation in Blender 4.2 LTS

After over 2 years of development, the EEVEE Next project has produced a major evolution of the Blender viewport system, set to be a key feature of the upcoming 4.2 LTS release.

Project goals

The EEVEE Next project was aimed at modernizing viewport rendering, addressing technical debt, and overcoming existing design limitations. The main target of the project was to improve the performance and visual quality of viewport rendering, leading to a better user experience while building, animating and rendering 3D scenes.

What changed

The viewport system is now more predictable and supports a wider range of lighting and shading corner cases. A new shadow system was introduced, providing more stable and higher quality shadows while remaining memory efficient. Other noteworthy features are a new global illumination system, unlimited lights in a scene, improved volume rendering, motion blur and depth of field.

More detailed technical documentation is available in the Release Notes, while the functionality is described in the User Manual. Here are some overviews from the community.

Compatibility issues

Significant efforts have been made to avoid breaking compatibility with previous versions, but given the magnitude of the change, and the number of workarounds used by artists in production, some issues are expected. A migration guide is available to mitigate these issues. If you are encountering undocumented behavior, please do share your findings and help to improve the guide!

Next steps

This is the foundation of a new interactive lighting pipeline. In the next releases we can expect improved performance, both for rendering and shader compilation, and hardware-accelerated ray tracing.

Congratulations to Clément, Miguel and Jeroen on this milestone. Special thanks to the community of contributors for working on stabilizing and integrating it on several platforms.

Thumbnail image by Hamza N. Meo.


CODE.BLENDER.ORG
Online Assets Workshop Report

During July, a workshop was held at the Blender HQ about online asset libraries. The design discussions are the first step to scope the project, which is planned to follow the Brush Assets project.

Online asset libraries

To enhance its out-of-the-box experience, Blender will come pre-configured with a remote asset library called Online Essentials, containing a selection of CC-0 assets. These assets include crucial resources like base meshes, materials and brushes, readily available to all users.

To integrate the asset system with the internet, a new type of Asset Library will be supported: Remote Asset Library. This will work similarly to the existing (Local) Asset Library system, but with a few differences:

- The library content can be downloaded on-demand and cached locally for reusability.
- Each blend-file ...

Additionally, specialized asset libraries, such as content from Blender's open movies, may be accessible on the extensions platform.

Extensions and asset libraries

How do asset libraries fit within Blender's extension framework? Both extensions and asset libraries are pivotal in expanding Blender's functionality, and conceptually asset libraries can be seen as a type of extension. However, for simplicity's sake, they will continue to be configured independently. Adding a remote asset library will be done on the Asset Library tab (to be added), while the Get Extensions tab will be reserved for Add-ons and Themes.

They will still share the same internet access policy and will also be available from the Blender Extensions Platform. The main difference there is that assets are more strictly curated, while the other extensions are a purely community-organized project. This can still be a community project though.

Variants, representations and versions

An important topic for assets is variants, representations and versions. As a recap, these are examples of them:

Variants:
- New, old, damaged.
- Dry, wet.
- Red, blue.

Representations:
- LOD 0, LOD 1, 1K, 2K, 4K, 8K.

Versions are the state of variants and representations (v1.0.0, v1.0.1, etc).

For a simple example, imagine an asset with different color variants, and different geometry resolution for different levels of detail. Note that not all representations are required on all the variants.

A more realistic example is a production character like Sintel:

- Asset: Sintel.
- Metadata: Name, Author, License, Tags, Catalog.
- Data-blocks: Damaged LOD 0, Damaged LOD 1, New LOD 0, New LOD 1.

Sintel LOD-0 representation
Sintel LOD-1 representation

The existing asset system is limited to one variant/representation/version per asset. For the online project, it will be important to revisit this, particularly with regard to representations: users shouldn't have to download an 8K HDRI when a simple 2K JPG panorama is sufficient. This has profound implications for asset integration in Blender, even affecting its definition.

What is an asset?

Assets were originally defined as: "An asset is a data-block with meaning."

It's been 4 years since then, and it's time to revisit this definition with variants in mind: "An asset is a data-block with meaning, combined with its variants and representations."

In this context, "meaning" refers to metadata (such as name, author, etc.), which is common to all the different variants and representations available for use. To support this, Blender needs a way to connect different data-blocks under the same ID metadata.
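
One way to picture the data model this implies is a single block of shared metadata plus a table of data-blocks keyed by variant and representation. The sketch below is purely illustrative (made-up types, names and license values, not the actual asset system design), using the Sintel example from above:

    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    struct AssetMetadata { /* Shared by every variant and representation. */
      std::string name, author, license, catalog;
      std::vector<std::string> tags;
    };

    /* (variant, representation) -> a concrete data-block that can be fetched
     * on demand from a remote library. */
    using VariantKey = std::pair<std::string, std::string>;

    struct Asset {
      AssetMetadata metadata;
      std::map<VariantKey, std::string> data_blocks; /* Value: data-block reference. */
    };

    int main()
    {
      Asset sintel;
      sintel.metadata = {"Sintel", "Blender Studio", "CC-BY", "Characters", {"character", "film"}};
      sintel.data_blocks[{"Damaged", "LOD 0"}] = "sintel_damaged_lod0";
      sintel.data_blocks[{"Damaged", "LOD 1"}] = "sintel_damaged_lod1";
      sintel.data_blocks[{"New", "LOD 0"}] = "sintel_new_lod0";
      sintel.data_blocks[{"New", "LOD 1"}] = "sintel_new_lod1";
      /* Not every representation has to exist for every variant. */
      return static_cast<int>(sintel.data_blocks.size());
    }

With a mapping like this, a remote library would only need to download the specific entry a user asks for, rather than every representation of the asset.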
Organizing Assets

Features such as tagging, advanced search, smart catalogs, and the ability to mark favorites are essential tools that can help artists navigate and use their assets more efficiently:

- It should be simple to change the catalog, or create or remove tags, for any asset, whether the asset ships with Blender, comes from a remote asset library, or belongs to the current blend-file.
- Advanced search should be supported, with a syntax that allows for AND, OR, specifying tags, and using wildcards.
- Search results should be saveable as smart catalogs for quick access in the Asset Shelf and Asset Browser.
- Any asset can be marked as a favorite and will be accessible in a new Favorites catalog, which will always be displayed in the Asset Shelf.

While not directly related to online assets, effective organization is a pressing need that will only become more critical with the addition of online asset libraries. This is a good place for the community to help, as each of these topics has its own set of good-to-have tasks, which may have to be postponed in order to prioritize the online features.

Deliverables

To keep track of progress and encourage community involvement, the proposed deliverables are outlined on the issue tracker. They are also listed here; please note that this list may change.

Improving asset management and experience:
- Asset de-duplication (append & reuse) (#115660)
- Metadata customization, including catalogs
- Favorites (#125432)
- Tag system improvements (#125430)
- Advanced search syntax (#125434)
- Asset publishing (local) (#125437)
- Finish asset system refactor (#122439)

Online asset libraries:
- Remote Asset Library (simple) (#125597)
- Remote Asset Library (advanced) (#125600)
- External thumbnails (#125598)
- Online essentials asset library content creation
- Authorized access (for commercial/private remote asset libraries)

Targets that still need a better definition:
- Asset representations
- Asset variations
- Python hooks
- Online publishing
- Non-data-block assets

Next Steps

As we move forward, the focus will be on refining the design, breaking down the technical requirements, and planning. As mentioned, I hope we can also count on the community for the more approachable targets, such as the asset experience deliverables. If you want to help, be sure to reach out in the #asset-project chat channel.


CODE.BLENDER.ORG
Results from Google Summer of Code 2024

This summer, Blender had the privilege of welcoming six talented contributors as part of the Google Summer of Code program. They worked on exciting projects, enhancing Blender's nodes, unwrapping, user interface, rendering and video editing features. This blog post highlights their work and presents an excerpt of the results. You can find further details in the contributors' final reports, linked in each section.

UV Stitching Improvements

Final report by Anish Bharadwaj. The project focused on enhancing UV mapping workflows by introducing a user-friendly weld-seams utility for efficient UV island merging, drawing inspiration from community suggestions.

"The Seam Weld feature allows users to select two non-cyclic edge loops on two separate UV islands and merge those islands together along that seam. This deliverable took the most time to implement of the three that I worked on due to various misunderstandings I had about how the UV system works in Blender. One major challenge was grasping the concept of loops and understanding that the UV islands are not actually connected to one another but are instead multiple loops sharing locations in 2D space."
Anish Bharadwaj

Sample Sound Node

Final report by Lleu Yang. This project aimed to provide the ability to retrieve sounds from files in Geometry Nodes, generate amplitude/frequency response information based on several customizable parameters, and be written in native C++ with caching/proxy operations to speed up execution.

"The video below used the Sample Sound node and simulation zone together, forming a 3-D waterfall plot. The project file is attached in the description of my working pull request. The audio is made by myself and licensed under CC0."
Lleu Yang

Geometry Nodes: File Import Nodes

Final report by Devashish Lal. This project aimed to reduce disk usage by externalizing data and also enable data visualization workflows in Blender. Devashish introduced OBJ, STL, PLY, and CSV import nodes, currently behind an experimental feature flag in the alpha version.

"Blender doesn't have the capability to import CSV files so the first thing was to implement the importer, and we didn't wanna use existing CSV libraries cuz it's just simple string handling, right? Well, the importer is still pretty simple but there were a lot of edge cases, although the most time I spent was in designing the importer, figuring out which individual functions should be created and utility classes. [...] I spent a week looking at the existing OBJ and PLY importers to get inspired into writing the CSV importer and ended up with an implementation that I am quite proud of."
Devashish Lal

Sprucing Up the Sequencer

Final report by John Swenson. This project aimed to improve snapping support in the sequencer by adding the option to snap strips to markers, add a link property to all audio/video strips for a given video file, and add the ability to select active channels.

"After meeting with [my mentor] Aras at the beginning of the summer, we decided that my first order of business should be to get snapping working in the VSE Preview: in theory it's simple, wouldn't require as much back-and-forth to determine what the UI should look like or how it should work, etc."
John Swenson

Improvements to the Blender macOS User Interface Experience

Final report by Jonas Holzman.
The project aimed at adding native macOS menubar support and inline titlebar decoration to Blender.

"Over this summer, I got to work on improving the Blender macOS User Interface Experience. While the initial goal of the project was to add native macOS menubar support and inline titlebar decoration to Blender, the end goal ended up shifting towards more general macOS and general UI research, while still working on client-side window decorations, as outlined in this kickoff meeting note by my mentor Julian. As such, this project ended up focusing on a new cross-platform API for client-side window decorations, combined with a practical colored titlebar macOS decoration implementation. I also worked on general macOS user experience and interface improvements, as well as backend refactors, while also experimenting with additional general UI enhancements."
Jonas Holzman

Improve Distributed Rendering & Task Execution

Final report by David Zhang. The main objective of this project was to enhance the distributed rendering and task execution capabilities within Flamenco through several key improvements.

"I have implemented the ability for users to pause jobs in Flamenco, introducing a new paused state with relevant status transition logic. The frontend has been updated to allow users to pause a job, and comprehensive unit tests have been added to ensure the new functionality works correctly with existing systems. This feature has been fully implemented, reviewed, and the pull request has been merged into the main codebase."
David Zhang

That's it! Make sure to check the contributors' full reports for further details. I want to thank all mentors, contributors and of course Google for accepting us into the GSoC program again. It's been a pleasure to participate in the 25th anniversary edition this year and we plan to apply next year again.


WWW.BLENDER.ORG
Blender at SIGGRAPH 2024
July 27th, 2024 | Events | Francesco Siddi

SIGGRAPH (the premier conference & exhibition on computer graphics and interactive techniques) will take place in Denver (CO) at the Denver Convention Center, from Sunday 28th July until Thursday 1st August.

Besides the unusual location, there are a few highlights for this trip. The Blender team delegation is minimal, yet at the same time Blender will be visible everywhere:

- DigiPro 2024: Creating Tools for Stylized Design Workflows. Saturday 27th, 12:10 PM to 12:30 PM, Denver Center for the Performing Arts.
- Open Source Days: Blender Adoption in the Industry. Sunday 28th, 2:50 PM, Hyatt Regency Denver at Colorado Convention Center.
- Birds of a Feather: Blender Foundation Community meeting. Tuesday 30th, 4:00 PM to 5:30 PM, Room 710.

Next to that, there will be a screening of Wing It! at the Electronic Theatre as part of the computer animation festival, and Blender will be visible at a demo pod at the Dell booth in the trade show.

If you are at SIGGRAPH and want to connect, feel free to reach out directly via mail (francesco at blender org) or on social media!


WWW.BLENDER.ORG
Blender at Annecy 2024 Recap
June 20th, 2024 | Events, Press Releases | Francesco Siddi

Let's recap yet another successful Annecy experience for the Blender team. This year we had a corner booth at the film market, a short film selected in the festival, and a studio meet-up event in the city center.

The film market went well. At this point everyone is well aware of Blender's existence, and there is a growing number of professional users successfully tackling more and more complex challenges with the software. I had informal conversations with several high-budget film directors and story artists using and praising Blender for the story development process. Adoption into the production pipeline is still slow for large organizations, while in smaller, younger and more agile studios, using Blender at the core is simply the norm. From those studios, there is an active interest in the Blender Studio Tools: a set of scripts, add-ons and documentation aimed at facilitating collaborative work in a Blender-centric environment.

There has been a growing trend in the past few years, where individuals and teams come by to share the work in progress on their production, or to let us know that the production was successfully completed. This year, several productions were selected and screened at the festival, also winning awards. Congratulations to FLOW (feature film, 4 awards), The car that came back from the sea (short film, 2 awards), Pictoplasma opener 2023 (1 award), and The Worlds Divide (feature film).

The Blender for Breakfast event was a great opportunity to create connections between professionals and get a glimpse of the growing Blender-based studio landscape. We will definitely repeat it next year.

Finally, special thanks to Dell for supporting us with a demo workstation during the event, so we could present high quality demos of the brushstroke technology from the Gold project, and more.

Blender is truly making it possible for filmmakers to create and share excellent work. See you next time!


WWW.BLENDER.ORG
Blender 4.2 LTS Release
July 16th, 2024 | Press Releases | Pablo Vazquez

Blender Foundation and the online developers community are proud to present Blender 4.2 LTS!

What's New

As usual with LTS (long-term support) releases, Blender 4.2 LTS comes packed with features, improvements, and fixes ready to power your projects for the next two years. Some highlights:

- New render engine: EEVEE got a complete rewrite, bringing global illumination, displacement, better SSS, volumetric lighting and shadows, viewport motion blur, and so much more.
- Cycles: Ray Portals, Thin Film Interference, better volume rendering, blue noise, and more.
- The Compositor: Can now utilize GPU acceleration for offline renders, it's faster on CPU, and more nodes are supported in the 3D Viewport.
- Extensions: A new way to share, install, and update add-ons and themes online within Blender.
- Portable installation.
- Khronos PBR Neutral Tone Mapper.
- New sculpt tools.
- Video Sequencer: Strips redesign, including highlighting missing media, text strip enhancements, and better performance.
- Collection exporters.
- USD, Alembic, glTF, OBJ, PLY, and STL enhancements.
- Undo up to 5x faster.
- Animation editors and tools upgrades.
- And so much more!

This also marks the launch of the online Extensions Platform: a community website to share free and open source add-ons and themes.

Watch the video summary on Blender's YouTube channel. And many more features and fixes await you; explore the release notes for an in-depth look at what's new!

Long-term Commitment

Blender 4.2 is the fifth LTS release, each covering two years of fixes, once again solidifying the Blender Foundation's commitment to making Blender a reliable tool for use in production and educational environments.

- Blender 2.83 LTS: 2020-2022 (20 releases)
- Blender 2.93 LTS: 2021-2023 (18 releases)
- Blender 3.3 LTS: 2022-2024 (21 releases so far)
- Blender 3.6 LTS: 2023-2025 (14 releases so far)
- Blender 4.2 LTS: 2024-2026

Thank you!

This work is made possible thanks to the outstanding contributions of the Blender community, and the support of the over 4,400 individuals and 38 organizations contributing to the Blender Development Fund.

Happy Blending!

The Blender Team
July 16th, 2024