• As AI faces court challenges from Disney and Universal, legal battles are shaping the industry's future | Opinion

    Silicon advances and design innovations do still push us forward – but the future landscape of the industry is also being sculpted in courtrooms and parliaments

    Image credit: Disney / Epic Games

    Opinion

    by Rob Fahey
    Contributing Editor

    Published on June 13, 2025

    In some regards, the past couple of weeks have felt rather reassuring.
We've just seen a hugely successful launch for a new Nintendo console, replete with long queues for midnight sales events. Over the past few days, the various summer events and showcases that have sprouted amongst the scattered bones of E3 have generated waves of interest and hype for a host of new games.
    It all feels like old times. It's enough to make you imagine that while change is the only constant, at least we're facing change that's fairly well understood, change in the form of faster, cheaper silicon, or bigger, more ambitious games.
    If only the winds that blow through this industry all came from such well-defined points on the compass. Nestled in amongst the week's headlines, though, was something that's likely to have profound but much harder to understand impacts on this industry and many others over the coming years – a lawsuit being brought by Disney and NBC Universal against Midjourney, operators of the eponymous generative AI image creation tool.
In some regards, the lawsuit looks fairly straightforward; the arguments made and considered in reaching its outcome, though, may have a profound impact on both the ability of creatives and media companies (including game studios and publishers) to protect their IP rights from a very new kind of threat, and the ways in which a promising but highly controversial and risky new set of development and creative tools can be used commercially.
    A more likely tack on Midjourney's side will be the argument that they are not responsible for what their customers create with the tool
    I say the lawsuit looks straightforward from some angles, but honestly overall it looks fairly open and shut – the media giants accuse Midjourney of replicating their copyrighted characters and material, and of essentially building a machine for churning out limitless copyright violations.
    The evidence submitted includes screenshot after screenshot of Midjourney generating pages of images of famous copyrighted and trademarked characters ranging from Yoda to Homer Simpson, so "no we didn't" isn't going to be much of a defence strategy here.
    A more likely tack on Midjourney's side will be the argument that they are not responsible for what their customers create with the tool – you don't sue the manufacturers of oil paints or canvases when artists use them to paint something copyright-infringing, nor does Microsoft get sued when someone writes something libellous in Word, and Midjourney may try to argue that their software belongs in that tool category, with users alone being ultimately responsible for how they use them.

If that argument prevails and survives appeals and challenges, it would be a major triumph for the nascent generative AI industry and a hugely damaging blow to IP holders and creatives, since it would seriously undermine their argument that AI companies shouldn't be able to include copyrighted material in training data sets without licensing or compensation.
The reason Disney and NBCU are going after Midjourney specifically seems to be partially down to Midjourney being especially reticent to negotiate with them about licensing fees and prompt restrictions; other generative AI firms have started talking, at least, about paying for content licenses for training data, and have imposed various limitations on their software to prevent the most egregious and obvious forms of copyright violation (at least for famous characters belonging to rich companies; if you're an individual or a smaller company, it's entirely the Wild West out there as regards your IP rights).
    In the process, though, they're essentially risking a court showdown over a set of not-quite-clear legal questions at the heart of this dispute, and if Midjourney were to prevail in that argument, other AI companies would likely back off from engaging with IP holders on this topic.
    To be clear, though, it seems highly unlikely that Midjourney will win that argument, at least not in the medium to long term. Yet depending on how this case moves forward, losing the argument could have equally dramatic consequences – especially if the courts find themselves compelled to consider the question of how, exactly, a generative AI system reproduces a copyrighted character with such precision without storing copyright-infringing data in some manner.
    The 2020s are turning out to be the decade in which many key regulatory issues come to a head all at once
    AI advocates have been trying to handwave around this notion from the outset, but at some point a court is going to have to sit down and confront the fact that the precision with which these systems can replicate copyrighted characters, scenes, and other materials requires that they must have stored that infringing material in some form.
    That it's stored as a scattered mesh of probabilities across the vertices of a high-dimensional vector array, rather than a straightforward, monolithic media file, is clearly important but may ultimately be considered moot. If the data is in the system and can be replicated on request, how that differs from Napster or The Pirate Bay is arguably just a matter of technical obfuscation.
    Not having to defend that technical argument in court thus far has been a huge boon to the generative AI field; if it is knocked over in that venue, it will have knock-on effects on every company in the sector and on every business that uses their products.
    Nobody can be quite sure which of the various rocks and pebbles being kicked on this slope is going to set off the landslide, but there seems to be an increasing consensus that a legal and regulatory reckoning is coming for generative AI.
    Consequently, a lot of what's happening in that market right now has the feel of companies desperately trying to establish products and lock in revenue streams before that happens, because it'll be harder to regulate a technology that's genuinely integrated into the world's economic systems than it is to impose limits on one that's currently only clocking up relatively paltry sales and revenues.

    Keeping an eye on this is crucial for any industry that's started experimenting with AI in its workflows – none more than a creative industry like video games, where various forms of AI usage have been posited, although the enthusiasm and buzz so far massively outweighs any tangible benefits from the technology.
    Regardless of what happens in legal and regulatory contexts, AI is already a double-edged sword for any creative industry.
Used judiciously, it might help to speed up development processes and reduce overheads. Applied in a slapdash or thoughtless manner, it can and will end up wreaking havoc on development timelines, filling up storefronts with endless waves of vaguely-copyright-infringing slop, and potentially turning creative firms, from the industry's biggest companies to its smallest indie developers, into victims of impossibly large-scale copyright infringement rather than beneficiaries of a new wave of technology-fuelled productivity.
The legal threat now hanging over the sector isn't new, merely amplified. We've known for a long time that AI-generated artwork, code, and text has significant problems from the perspective of intellectual property rights (you can infringe someone else's copyright with it, but generally can't impose your own copyright on its creations – opening careless companies up to a risk of having key assets in their game being technically public domain and impossible to protect).
Even if you're not using AI yourself – even if you're vehemently opposed to it on moral and ethical grounds (which is entirely valid given the highly dubious land-grab these companies have done for their training data) – the Midjourney judgement and its fallout may well impact the creative work you produce yourself and how it ends up being used and abused by these products in future.
    This all has huge ramifications for the games business and will shape everything from how games are created to how IP can be protected for many years to come – a wind of change that's very different and vastly more unpredictable than those we're accustomed to. It's a reminder of just how much of the industry's future is currently being shaped not in development studios and semiconductor labs, but rather in courtrooms and parliamentary committees.
    The ways in which generative AI can be used and how copyright can persist in the face of it will be fundamentally shaped in courts and parliaments, but it's far from the only crucially important topic being hashed out in those venues.
    The ongoing legal turmoil over the opening up of mobile app ecosystems, too, will have huge impacts on the games industry. Meanwhile, the debates over loot boxes, gambling, and various consumer protection aspects related to free-to-play models continue to rumble on in the background.
Because the industry moves fast while governments move slowly, it's easy to forget that these are still active topics as far as governments are concerned, and hammers may come down at any time.
    Regulation by governments, whether through the passage of new legislation or the interpretation of existing laws in the courts, has always loomed in the background of any major industry, especially one with strong cultural relevance. The games industry is no stranger to that being part of the background heartbeat of the business.
    The 2020s, however, are turning out to be the decade in which many key regulatory issues come to a head all at once, whether it's AI and copyright, app stores and walled gardens, or loot boxes and IAP-based business models.
    Rulings on those topics in various different global markets will create a complex new landscape that will shape the winds that blow through the business, and how things look in the 2030s and beyond will be fundamentally impacted by those decisions.
  • Overlapping vertices?

    Author

Hi Gamedev! :D So I'm using some existing models from other games for a PRIVATE mod I'm working on (so no redistributing; I don't want to rip off talented artists, and I'm using existing meshes from games due to cost). But when I import them into Blender or 3ds Max, the modeling software tells me it's got overlapping vertices. Is this normal with game models, or is every vertex supposed to be welded? Kind regards!

Maybe. They might not be duplicates; it could be that there was additional information which was lost, such as two points that had different normal information or texture coordinates even though they're at the same position.

    It could be normal for that project, but no, in general duplicate verts, overlapping verts, degenerate triangles, and similar can cause rendering issues and are often flagged by tools. If it is something you extracted, it might be the result of processing that took place rather than coming from the original, like a script that ends up stripping the non-duplicate information or that ends up traversing a mesh more than once.

    Most likely your warning is exactly the same one the artists on the game would receive, and they just need to be welded, fused, or otherwise processed back into place.
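    To make that distinction concrete, here is a minimal, hypothetical Python sketch (not tied to any particular importer; the attribute lists are assumptions) that counts how many of the reported overlaps are true duplicates versus split vertices that merely share a position:

```python
# Hypothetical helper: given parallel per-vertex lists pulled from an imported
# mesh, distinguish true duplicates (identical in every attribute) from split
# vertices that only share a position but differ in normal or UV data.

def classify_overlaps(positions, normals, uvs):
    """Each argument is a list of tuples, one entry per vertex."""
    unique_positions = set(positions)
    unique_vertices = set(zip(positions, normals, uvs))

    true_duplicates = len(positions) - len(unique_vertices)        # safe to weld
    split_vertices = len(unique_vertices) - len(unique_positions)  # split on purpose
    return true_duplicates, split_vertices

# A hard-edged unit cube exported for a game engine typically has 24 vertices
# (4 per face) for only 8 unique positions, because each face needs its own
# normal -- those "overlapping" vertices are intentional, not duplicates.
```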


It's normal. Reasons to split a mesh edge between geometrically adjacent triangles are:

    - Differing materials / textures / UV coords
    - The edge should show a discontinuity in lighting (e.g. a cube) instead of smooth shading (e.g. a sphere)
    - Dividing the mesh into smaller pieces for fine-grained culling

    Thus, splitting models and duplicating vertices is a post-process necessary to use them in game engines, while artists keep the original models to make changes and for archiving. Turning such assets back into editable models requires welding with a tolerance of zero, or possibly a very small number. Issues might still remain.
    Other things, e.g. the original cage of a subdivision model, or NURBS control points, can't be reconstructed that easily.

    Author

Hi guys, so I usually use this tutorial if I get overlapping:
    The reason I'm asking this is because: does it matter if faces are welded or not if I convert them to meshlets like Nvidia's Asteroids demo? Or should they still be welded then? Does it matter how small/large the mesh is when welding by distance?
    Kind regards!

That is another “it depends on the details” question. There might be visual artifacts or not, depending on the details. There can be performance differences, depending on the details. There are reasons to do it that were already covered: a vertex can have far more than just position data, which would make two vertices different despite both being at the same location. There are details and choices beyond just the vertex positions overlapping.

Newgamemodder said: Does it matter if faces are welded or not if I convert them to meshlets like Nvidia's Asteroids demo?

    Usually no. You need to regenerate the meshlets anyway after editing a model. It's done by a preprocessing tool, and the usual asset pipeline is: model from artist → automated tool to split edges where needed to get one mesh per material, compute meshlet clusters, quantization for compression, reorder vertices for cache efficiency, etc. → save as asset to ship with the game.

    So meshlets do not add to the risks from welding vertices which you already have (e.g. accidental corruption of UV coordinates or merging of material groups). Artwork is not affected by meshlets in general. However, this applies to games production, not to modding. Things like Nanite and meshlets of course make it even harder to mod existing assets, since modders don't have those automated preprocessing tools if devs don't provide them.

    Newgamemodder said: Does it matter how small/large the mesh is when welding by distance?

    Yes. Usually you give a distance threshold for the welding, so the scale of the model matters.
My advice is to use the smallest threshold possible and to keep an eye on the UVs, which should not change from the welding operation.
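    For the practical side of that advice, here is a minimal Blender Python sketch of a merge-by-distance pass; it assumes the imported model is the active object, and the 1e-5 threshold is only an illustrative starting value to be tuned against the model's scale while checking that the UVs don't move:

```python
# Sketch: weld overlapping vertices in Blender with a small distance threshold.
# Run from Blender's scripting tab with the imported model selected/active.
import bpy
import bmesh

obj = bpy.context.active_object
mesh = obj.data

bm = bmesh.new()
bm.from_mesh(mesh)

# remove_doubles merges vertices closer together than `dist`. A tolerance of
# zero only collapses exact duplicates; a tiny non-zero value also closes
# seams, but too large a value will eat real detail and distort UVs.
bmesh.ops.remove_doubles(bm, verts=bm.verts, dist=1e-5)

bm.to_mesh(mesh)
bm.free()
mesh.update()
```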
  • Upgrade Your Blender Workflow With RetopoFlow 4 Beta

RetopoFlow is a suite of intuitive, sketch-based retopology tools for Blender developed by Orange Turbine. It allows you to draw directly onto the surface of high-poly models, automatically generating geometry that conforms to their shape. The toolset has recently been updated with the release of RetopoFlow 4, a complete rewrite that introduces powerful new features. While still in beta, RetopoFlow 4 is available at a discounted price.

    In RetopoFlow 4, the PolyPen tool offers precise control for building complex topology one vertex at a time, while the PolyStrips tool makes it easy to map out and refine key face loops on complex models. The Strokes tool is highly versatile, ideal for quickly sketching out quad patches or filling gaps in topology, and the Contours tool gives you a quick and easy way to retopologize cylindrical forms. For interactive adjustments, the Tweak Brush lets you reposition vertices directly on the source mesh, and the Relax Brush helps smooth out vertex positions while keeping them constrained to the mesh surface.

    While RetopoFlow 3 included a symmetry system that essentially relied on Blender's Mirror Modifier behind the scenes, RetopoFlow 4 will give you direct access to the Mirror Modifier itself. Please note that RetopoFlow 3 will not receive new features going forward, but it will continue to be supported for at least two years following the official release of version 4.

    If you previously purchased RetopoFlow 3 through Superhive (Blender Market), you can upgrade to version 4 at an additional 25% discount, on top of the current beta pricing, by using the coupon code retopoflow-og. You can also sign up to test the beta for free here. Purchase RetopoFlow 4 by clicking this link and join our 80 Level Talent platform and our new Discord server, follow us on Instagram, Twitter, LinkedIn, Telegram, TikTok, and Threads, where we share breakdowns, the latest news, awesome artworks, and more.
  • Can Terrain-Based Color Grading Really Reflect Real-World Altitude Perception Accurately?

    Author

I recently got intrigued by how certain online tools render terrain using dynamic color gradients to show depth or elevation changes, especially when visualizing geographical data or landscape layers on a 2D canvas. What caught my attention was how a color transition, say from green to brown to white, can subtly convey a mountain’s progression — and how much this alone can shape how we perceive space, depth, and realism without using any lighting or shadows. I’d love to dive deeper into the logic and techniques behind this and how it’s approached from a GPU programming perspective.

    One thing I started questioning is how effective and precise color-based elevation rendering is, especially when it comes to shader implementation. For instance, I observed that some tools use a simple gradient approach linked to altitude values, which works fine visually but might not reflect real-world depth unless tuned carefully. I tried assigning color ramps in fragment shaders, interpolated from DEM (digital elevation model) values, but it wasn’t quite as expressive as I expected — especially when transitioning over large terrain with small elevation variance.

    To simulate some form of perceptual realism, I began blending color ramps using noise functions to introduce a more organic transition, but I’m not confident this is the best way to approach it. I also played around with multi-step gradients, assigning different hue families per range (e.g., green under 500m, brown 500–1500m, grey and white above that), but it raises the question of universality — is there a standard or accepted practice for terrain color logic in shader design? Or should we just lean into stylized rendering if it communicates the structure effectively?

    Elevation itself refers to the height of a specific point on the Earth's surface relative to sea level. It’s a key component in any terrain rendering logic and often forms the foundation for visual differentiation of the landscape. When using an online elevation tool, the elevation values are typically mapped to colors or heightmaps to produce a more tangible view of the land’s shape. This numerical-to-visual translation plays a central role in how users interpret spatial data. I was genuinely inspired by this idea because it proves that even raw altitude numbers can create an intuitive and informative visual experience.

    What I couldn’t figure out clearly is how people deal with the in-between areas — those subtle transitions where terrain rises or drops slowly — without making the result look blocky or washed out. I’ve attempted linear color interpolation based on normalized height values directly inside the fragment shader, and I’ve also experimented with stepping through fixed color zones. Both methods gave me somewhat predictable results, but neither satisfied the realism I was aiming for when zooming closer to the terrain.

    I also wonder about the performance side of this. If I rely on fragment shader-based rendering with multiple condition checks and interpolations, will that scale well on larger canvases or with more detailed elevation data? Or would pushing color values per-vertex and interpolating across fragments give a better balance of performance and detail? It’s not immediately clear to me which path is more commonly used or recommended.

    Another question I’ve been mulling over is whether a lookup table (LUT) would make more sense for GPU-side elevation rendering. If I store predefined biome and elevation color data in a LUT, is it practical to access and apply that in real-time shader logic? And if so, what’s the cleanest way to structure and query this LUT in a WebGL or GLSL environment?

    I’m looking to understand how others have approached this type of rendering, specifically when color is used to express terrain form based solely on elevation values. I’m especially curious about shader structure, transition smoothing methods, and how to avoid that “posterized” look when mapping heights to colors over wide areas.
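    As an illustration of the transition question, here is a small Python sketch of a piecewise-linear elevation-to-color ramp; the stop elevations and colors are made-up values echoing the green/brown/white bands mentioned above. Interpolating between stops rather than snapping to fixed zones is what avoids the posterized look, and the same logic can be run in a fragment shader or baked into a 1D texture:

```python
# Hypothetical sketch: piecewise-linear elevation-to-color ramp.
# The stop elevations and RGB values are illustrative, not from any real tool.

RAMP = [  # (elevation in metres, RGB in 0..1)
    (0.0,    (0.15, 0.45, 0.20)),  # lowlands: green
    (500.0,  (0.40, 0.32, 0.18)),  # foothills: brown
    (1500.0, (0.55, 0.52, 0.50)),  # rock: grey
    (2500.0, (0.95, 0.95, 0.97)),  # peaks: white
]

def elevation_to_color(h):
    """Linearly interpolate between ramp stops instead of snapping to bands."""
    if h <= RAMP[0][0]:
        return RAMP[0][1]
    for (h0, c0), (h1, c1) in zip(RAMP, RAMP[1:]):
        if h <= h1:
            t = (h - h0) / (h1 - h0)
            return tuple(a + (b - a) * t for a, b in zip(c0, c1))
    return RAMP[-1][1]

print(elevation_to_color(800.0))  # somewhere between brown and grey
```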

If you want to apply colors in the shader based on elevation, the standard approach would be to use a 1D texture as a lookup table. You then map elevation to a texture coordinate in [0, 1] and use that to sample the texture. You can do this per-vertex if your vertices are dense enough. This allows you to use arbitrarily complex gradients.

    However, elevation-based coloring is not very flexible. It works for some situations but otherwise is not ideal. For more complicated and realistic colors there are two other options:

    - Add layers – e.g. you can have another texture for your terrain which alters color based on other properties like water depth or temperature, etc. This can be combined with the elevation-based coloring. It can be done in the shader, but more layers result in slower rendering.
    - Vertex colors – compute a color per vertex on the CPU. This can use any approach to assign the colors. You pay a bit more memory but get faster rendering. You may need more vertices to have fine details or if the terrain is steep.

    To make colors more diverse you can use other terrain attributes to affect the color:

    - Elevation
    - Slope (evaluated at a certain scale)
    - Water depth
    - Climate / biome
    - Fractal noise

    I would have a 1D texture or gradient for each attribute and then blend them in some way. Use fractal noise to “dither” the results and break up banding artifacts.

    You can also combine colored terrain with texture variation. In my terrain system each vertex has a texture index from a texture array. I manually interpolate the textures from the 3 vertices of a triangle in the shader. Per-vertex texturing gives great flexibility, as I can have as many textures as slots in the texture array. To fully use such a system you need a way to assign textures based on material type. Slope-based texturing is common, but I use a much more complicated material system based on rock layers and erosion. I had a blog here but all the images got deleted :/
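    To tie the 1D lookup table and the noise "dither" suggestion together, here is a CPU-side NumPy sketch (the gradient stops, dither amplitude, and function names are assumptions for illustration); the shader equivalent would be sampling a 1D texture using the normalized elevation, nudged by a little fractal noise, as the coordinate:

```python
# CPU-side sketch of the 1D lookup-table approach plus a small dither to break
# up banding. In GLSL the equivalent is a 1D texture sample with the normalized
# elevation as the coordinate; all constants here are illustrative assumptions.
import numpy as np

def build_lut(stops, size=256):
    """Bake a gradient (list of (t, (r, g, b)) with t in [0, 1]) into a size x 3 LUT."""
    ts = np.array([t for t, _ in stops])
    cols = np.array([c for _, c in stops], dtype=float)
    xs = np.linspace(0.0, 1.0, size)
    return np.stack([np.interp(xs, ts, cols[:, i]) for i in range(3)], axis=1)

def shade(elevation, lut, e_min, e_max, noise=None, dither=0.02):
    """Map elevation to a LUT coordinate, optionally jittered by noise in [-1, 1]."""
    t = (np.asarray(elevation, dtype=float) - e_min) / (e_max - e_min)
    if noise is not None:
        t = t + dither * np.asarray(noise)   # fractal/blue noise hides the banding
    t = np.clip(t, 0.0, 1.0)
    idx = (t * (len(lut) - 1)).astype(int)
    return lut[idx]

# Usage with made-up data: four sample heights and uniform random "noise".
lut = build_lut([(0.0, (0.15, 0.45, 0.20)), (0.5, (0.40, 0.32, 0.18)), (1.0, (1.0, 1.0, 1.0))])
heights = np.array([120.0, 800.0, 1500.0, 2400.0])
print(shade(heights, lut, 0.0, 2500.0, noise=np.random.uniform(-1, 1, size=heights.shape)))
```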
    #can #terrainbased #color #grading #really
    Can Terrain-Based Color Grading Really Reflect Real-World Altitude Perception Accurately?
    Author I recently got intrigued by how certain online tools render terrain using dynamic color gradients to show depth or elevation changes, especially when visualizing geographical data or landscape layers on a 2D canvas. What caught my attention was how a color transition, say from green to brown to white, can subtly convey a mountain’s progression — and how much this alone can shape how we perceive space, depth, and realism without using any lighting or shadows. I’d love to dive deeper into the logic and techniques behind this and how it’s approached from a GPU programming perspective.One thing I started questioning is how effective and precise color-based elevation rendering is, especially when it comes to shader implementation. For instance, I observed that some tools use a simple gradient approach linked to altitude values, which works fine visually but might not reflect real-world depth unless tuned carefully. I tried assigning color ramps in fragment shaders, interpolated from DEMvalues, but it wasn’t quite as expressive as I expected — especially when transitioning over large terrain with small elevation variance.To simulate some form of perceptual realism, I began blending color ramps using noise functions to introduce a more organic transition, but I’m not confident this is the best way to approach it. I also played around with multi-step gradients, assigning different hue families per range, but it raises the question of universality — is there a standard or accepted practice for terrain color logic in shader design? Or should we just lean into stylized rendering if it communicates the structure effectively?Elevation itself refers to the height of a specific point on the Earth's surface relative to sea level. It’s a key component in any terrain rendering logic and often forms the foundation for visual differentiation of the landscape. When using an online elevation tool, the elevation values are typically mapped to colors or heightmaps to produce a more tangible view of the land’s shape. This numerical-to-visual translation plays a central role in how users interpret spatial data. I inspired from this idea positively because it proves that even raw altitude numbers can create an intuitive and informative visual experience.What I couldn’t figure out clearly is how people deal with the in-between areas — those subtle transitions where terrain rises or drops slowly — without making the result look blocky or washed out. I’ve attempted linear color interpolation based on normalized height values directly inside the fragment shader, and I’ve also experimented with stepping through fixed color zones. Both methods gave me somewhat predictable results, but neither satisfied the realism I was aiming for when zooming closer to the terrain.I also wonder about the performance side of this. If I rely on fragment shader-based rendering with multiple condition checks and interpolations, will that scale well on larger canvases or with more detailed elevation data? Or would pushing color values per-vertex and interpolating across fragments give a better balance of performance and detail? It’s not immediately clear to me which path is more commonly used or recommended.Another question I’ve been mulling over is whether a lookup tablewould make more sense for GPU-side elevation rendering. If I store predefined biome and elevation color data in a LUT, is it practical to access and apply that in real-time shader logic? 
    If you want to apply colors in the shader based on elevation, the standard approach would be to use a 1D texture as a lookup table. You then map elevation to a texture coordinate in [0,1] and use that to sample the texture (which should use linear interpolation). You can do this per-vertex if your vertices are dense enough. This allows you to use arbitrarily complex gradients.

    However, elevation-based coloring is not very flexible. It works for some situations but otherwise is not ideal. For more complicated and realistic colors there are two other options:

    - Add layers: e.g. you can have another texture for your terrain which alters color based on other properties like water depth or temperature. This can be combined with the elevation-based coloring. This can be done in the shader, but more layers result in slower rendering.
    - Vertex colors: compute a color per-vertex on the CPU. This can use any approach to assign the colors. You pay a bit more memory but get faster rendering. You may need more vertices for fine details or if the terrain is steep.

    To make colors more diverse you can use other terrain attributes to affect the color:

    - Elevation
    - Slope (gradient magnitude), evaluated at a certain scale
    - Water depth
    - Climate / biome
    - Fractal noise

    I would have a 1D texture or gradient for each attribute and then blend them in some way. Use fractal noise to “dither” the results and break up banding artifacts.

    You can also combine colored terrain with texture variation. In my terrain system each vertex has a texture index from a texture array. I manually interpolate the textures from the 3 vertices of a triangle in the shader. Per-vertex texturing gives great flexibility, as I can have as many textures as slots in the texture array. To fully use such a system you need a way to assign textures based on material type (rock, dirt, grass, etc.). Slope-based texturing (e.g. slope 0-0.2 is grass, 0.2-0.4 is dirt, >0.4 is rock) is common, but I use a much more complicated material system based on rock layers and erosion. I had a blog here but all the images got deleted: https://gamedev.net/blogs/entry/2284060-rock-layers-for-real-time-erosion-simulation/
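    To make the lookup-table suggestion above concrete, here is a minimal WebGL-flavoured sketch. All names, the 256-texel width and the hash-based dither are assumptions for illustration, not code from the reply; note that WebGL has no true 1D textures, so a 256x1 2D texture with linear filtering is the usual stand-in.

    // Sketch only: elevation-to-color lookup via a 256x1 ramp texture in WebGL.
    function createRampTexture(gl: WebGLRenderingContext, stops: [number, number, number][]): WebGLTexture {
      const width = 256;
      const pixels = new Uint8Array(width * 4);
      for (let i = 0; i < width; i++) {
        // Piecewise-linear interpolation between the supplied color stops (0..1 RGB).
        const t = (i / (width - 1)) * (stops.length - 1);
        const a = stops[Math.floor(t)];
        const b = stops[Math.min(Math.ceil(t), stops.length - 1)];
        const f = t - Math.floor(t);
        for (let c = 0; c < 3; c++) pixels[i * 4 + c] = Math.round((a[c] + (b[c] - a[c]) * f) * 255);
        pixels[i * 4 + 3] = 255;
      }
      const tex = gl.createTexture()!;
      gl.bindTexture(gl.TEXTURE_2D, tex);
      gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);  // smooth gradient lookup
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
      gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
      return tex;
    }

    // Fragment shader: map elevation to [0,1], add a tiny hash-based dither to break
    // up banding, then sample the ramp. uMinElev/uMaxElev are illustrative uniforms.
    const rampFragmentShader = /* glsl */ `
    precision mediump float;
    uniform sampler2D uRamp;
    uniform float uMinElev, uMaxElev;
    varying float vElevation;

    float hash(vec2 p) {                      // cheap screen-space noise for dithering
      return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
    }

    void main() {
      float t = clamp((vElevation - uMinElev) / (uMaxElev - uMinElev), 0.0, 1.0);
      t += (hash(gl_FragCoord.xy) - 0.5) / 256.0;       // +- half a LUT texel of dither
      gl_FragColor = texture2D(uRamp, vec2(clamp(t, 0.0, 1.0), 0.5));
    }`;

    A green-brown-white ramp could then be built with createRampTexture(gl, [[0.2, 0.45, 0.15], [0.45, 0.35, 0.25], [0.95, 0.95, 0.97]]) and bound like any other texture; the same [0,1] coordinate can just as well be computed per-vertex when the mesh is dense enough.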
  • Spaceframe: procedurally generating terrain on a planetary scale

    Author

    Hello devs. Apologies in advance if this is not the correct forum for this. I've recently been spending a lot of my free time working on an old project of mine. It originated in 2013, and after several years-long hiatuses and complete rewrites, it's now in a state that I think is worth sharing.
    The goal is to simulate spheroids with real-time, procedurally generated terrain on a planetary scale and beyond. My inspiration for this project is old voxel games like Comanche. I wanted to do something like that, but instead of the finite play area being enforced through artificial boundaries, have it be a natural consequence of the world's geometry.
    Here's a short demo I recorded: https://www.youtube.com/watch?v=8S3MPTzymX8
    The planet in this demo has a radius of 2^23 meters, which makes it about 2.28x the size of Earth by volume. The surface can be traversed freely without loading areas as it's all sampled in real time. Admittedly, the detail in this video is not that great. I've only recently begun working on the terrain sampler, so this is in a very early stage of development.
    To speak briefly on the geometry: the world is an icosahedron formed by 20 recursively subdivided tetrahedra. The tetrahedra subdivide into an oct-tet truss: a lattice of octahedra and tetrahedra. This structure is also called a spaceframe, and it's the cause for the shape of the terrain.
    If this sort of thing intrigues you, please let me know.

    I have been working on generating realistic planet-scale terrain for the last 2.5 years. You can read about it on my blog (start at the bottom).
    I decided against the icosahedron subdivision for my planet engine because it makes lots of things more difficult. I use the cube-sphere subdivision instead, because it leads to a quadtree subdivision of each cube face. This makes it much easier to do things like determine which terrain tiles are near a given location, or to implement erosion. Everything exists on a rectilinear grid, which makes hydraulic erosion much easier.
    It seems like you should increase the distance at which you subdivide the terrain, to reduce the LOD popping. As it is, terrain features materialize too near to the camera. Ideally, you should implement a system so that the terrain is subdivided until a certain screen-space resolution is achieved (e.g. the size of a triangle in pixels).
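    To make that screen-space criterion concrete, here is a generic sketch (it is not taken from either poster's engine; the node layout, the flat split and the 4-pixel target are illustrative assumptions): a tile is refined while its projected triangle size exceeds the target, and collapsed once it no longer does.

    // Sketch: refine a quadtree terrain tile until its triangles reach a target
    // screen-space size, and collapse it again when they shrink below it.
    interface TerrainNode {
      center: [number, number, number]; // world-space centre of the tile
      edgeLength: number;               // world-space edge length of the tile
      trianglesPerEdge: number;         // mesh resolution of one tile
      children: TerrainNode[] | null;
    }

    function distance(a: [number, number, number], b: [number, number, number]): number {
      return Math.hypot(a[0] - b[0], a[1] - b[1], a[2] - b[2]);
    }

    // Projected size in pixels of a world-space length seen at a given distance,
    // for a symmetric perspective projection with vertical field of view fovY.
    function projectedPixels(worldSize: number, dist: number, viewportHeight: number, fovY: number): number {
      return (worldSize / dist) * (viewportHeight / (2 * Math.tan(fovY / 2)));
    }

    // Simplified split on a flat x/z tile; a real cube-sphere or icosahedral scheme
    // would also re-project the child centres onto the sphere.
    function split(node: TerrainNode): TerrainNode[] {
      const h = node.edgeLength / 4; // offset from parent centre to child centres
      const [x, y, z] = node.center;
      return [[-h, -h], [-h, h], [h, -h], [h, h]].map(([dx, dz]) => ({
        center: [x + dx, y, z + dz] as [number, number, number],
        edgeLength: node.edgeLength / 2,
        trianglesPerEdge: node.trianglesPerEdge,
        children: null,
      }));
    }

    function updateLod(node: TerrainNode, cameraPos: [number, number, number],
                       viewportHeight: number, fovY: number, targetTrianglePx = 4): void {
      const dist = Math.max(distance(node.center, cameraPos), 1e-3);
      const trianglePx = projectedPixels(node.edgeLength / node.trianglesPerEdge, dist, viewportHeight, fovY);
      if (trianglePx > targetTrianglePx) {
        const children = node.children ?? (node.children = split(node)); // refine
        for (const c of children) updateLod(c, cameraPos, viewportHeight, fovY, targetTrianglePx);
      } else {
        node.children = null; // coarsen: this tile is already fine enough on screen
      }
    }

    The same test works whether tiles come from a cube-sphere quadtree or a subdivided icosahedron; only split() changes.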


    Author

    Your project looks very nice, but we have chosen very different paths in our implementations of planetary terrain. There is no grid or mesh in my spaceframe; the surface map is a direct result of sampling. The space is recursively sampled with a binary function and vertices are emitted based on the boundary resolved between filled and empty space. The detail popping is a consequence of the size difference between shapes in finer and coarser LODs. In theory this could be mitigated with higher LODs, but that is not a practical option given how CPU-intensive real-time, full-planet binary space partitioning and boundary resolution is. Maybe one day I'll write a GPU implementation, though.

    What I'm saying is that your approach, while interesting, cannot use various optimizations which can be applied to rectilinear coordinates, which may explain why it is slow and can't increase the detail as much. My approach, running entirely on a 12-year-old 4-core CPU, can in real time generate triangles down to 5mm resolution at an angular LOD of 4 pixels per triangle, covering an earth-sized planet (and much bigger is not a problem), using around 4GB RAM. Furthermore, I'm not just generating terrain by fractal noise, I'm also applying various erosion processes which use the majority of the compute. This is only possible by a careful choice of spatial representation of the terrain, asynchronous multithreaded generation, and by highly optimized code (nearly all uses SIMD). Since the terrain is on a 2D rectangular grid, all operations aside from final mesh generation are essentially image processing, and make good use of CPU SIMD capabilities, which provides up to 8x speedup with AVX. Unless your data is laid out in a similar compact way you will have a hard time making it fast on modern CPUs. I don't think the “space frame” is a good fit for modern CPU architectures.
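    To illustrate the data-layout argument with a generic sketch (not the poster's code): keeping a terrain tile as one flat, contiguous array means every pass walks memory linearly, exactly like an image filter, which is the access pattern that SIMD and auto-vectorising compilers reward in native code. The smoothing pass below stands in for the kind of per-cell erosion step being described.

    // Sketch: a heightfield tile stored as a flat, contiguous array and processed
    // like an image. The 3x3 average is a stand-in for a per-cell erosion or
    // relaxation step; in native code this layout maps directly onto SIMD lanes.
    class HeightTile {
      constructor(readonly size: number,
                  readonly height = new Float32Array(size * size)) {}

      at(x: number, y: number): number {
        // Clamp at the borders; a tiled system would read from neighbouring tiles instead.
        const cx = Math.min(Math.max(x, 0), this.size - 1);
        const cy = Math.min(Math.max(y, 0), this.size - 1);
        return this.height[cy * this.size + cx];
      }

      // One "image processing" pass over the whole tile.
      smoothed(): HeightTile {
        const out = new HeightTile(this.size);
        for (let y = 0; y < this.size; y++) {
          for (let x = 0; x < this.size; x++) {
            let sum = 0;
            for (let dy = -1; dy <= 1; dy++)
              for (let dx = -1; dx <= 1; dx++) sum += this.at(x + dx, y + dy);
            out.height[y * this.size + x] = sum / 9;
          }
        }
        return out;
      }
    }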
  • Scene Group releases Cavalry 2.4




    Originally posted on 6 February 2024. Scroll down for news of the Cavalry 2.4 update.
    Scene Group has begun the next big series of releases for Cavalry, its motion design software.
    Cavalry 2.0 adds animatable scene cameras, making it possible to create 2.5D effects, plus an experimental new particle system, and increases scene playback speed around 200%.
    A next-gen 2D motion graphics tool inspired by 3D software

    Originally released in 2020, Cavalry is a procedural animation app “combining the power and flexibility of 3D with the ease of use of 2D”. Although currently a pure 2D animation tool, it supports workflows that will be familiar to 3D animators, including keyframing, curve editing, deformation, rigging, scattering and instancing.
    Scene Group’s background is also in 3D motion graphics: the firm is a spin-off from Mainframe North, which developed MASH, Maya’s motion graphics toolset.
    Once created, images may be exported in a range of file formats, including as JPEG, PNG or SVG sequences, as animated PNGs, as WEBM or QuickTime movies, or in Lottie format.


    Add a Camera to a scene to create 2.5D animations

    Major changes in Cavalry 2.0 include support for Cameras, making it possible to create 2.5D effects like the one above. Users can create Freeform or Look At cameras, with the option to offset the position of the camera and look-at target to create secondary motion, and to set view distance limits for layers.
    Experimental new particle system creates 2D particle effects

    Cavalry 2.0 also introduces an experimental new particle system, for creating particle effects. It’s still a tech preview, but it already includes a range of standard basic features, including settings for particle shape, and a range of emitter types and modifiers.
    Particles can be emitted from points, shapes, paths or Distributions; and it is possible to direct particle motion with paths, goals, forces or turbulence.
    Other new features and performance improvements

    Other new features in Cavalry 2.0 include a new Auto-Animate behavior for animating Shapes with fewer keyframes, and support for tapered strokes along Shapes. Workflow improvements include the option to set up overrides for Pre-Comps, making it easier to create variants for a composition.
    Users can also now group Layers into simplified custom containers called Components, controlling which Attributes are exposed in the UI.
    Performance improvements include boosts of 10-600% in playback speed: the improvement is greater in complex scenes, but Scene Group says that the average is around 200%.
    Cavalry also now supports background rendering, making it possible to continue to work while a scene is rendering.


    Updated 23 May 2024: Scene Group has released Cavalry 2.1.
    The update focuses on the audio tools, adding support for multi-track audio playback, and the option to export audio from Cavalry.
    Audio projects can be exported as AAC files, or in MP4, QuickTime or WebM files.
    It is also possible to import audio files in more formats, now including AAC, MP3 and CAF.

    Updated 11 November 2024: Scene Group has released Cavalry 2.2.
    The biggest change in the update is support for OpenType fonts in the Text Shape, with the option to control OpenType features like ligatures and superscript procedurally.
    It is also possible to create color gradients along Strokes, and to add multiple Strokes to paths.
    Other changes include the option to fill closed paths with Stitches, new Sweep and Shape Falloff patterns, a new Quick Mask mode, and proportional easing when scaling keyframes.
    Users of the paid Pro edition also get a new Knot behavior, which automatically adds gaps to paths where they self-intersect, and a new Stroke Duplicator feature.

    Updated 11 December 2024: Scene Group has released Cavalry 2.3.
    Workflow improvements include the option to save presets for Layers, Compositions and Render Queue Items, and to ‘seed’ random values for Attributes.
    Users of the Pro edition get the option to open multiple viewports, and to convert images or shape layers to contours, and to convert contours within shapes to sub-meshes.

    Updated 15 May 2025: Scene Group has released Cavalry 2.4.
    It’s a sizeable update, adding the option to write custom image-manipulation filters in SkSL, and save them as third-party plugins.
    SkSL, the shading language used by open-source 2D graphics library Skia, is a variant of GLSL, so code from sites like Shadertoy should work with “minor modifications”.
    The update also adds a new SLA shader for creating a range of animatable noise types.
    Text styling and .xlsx import

    Other changes include new styling options in the Text Shape, making it possible to apply effects like underline, strikethrough and superscript to text. It is also now possible to import Excel .xlsx files.

    Pro edition: use 2D meshes to deform images and shaders

    Users of the paid Pro edition also get the option to create 2D meshes to deform images or shaders by placing control vertices. When creating 2.5D animation, it is now possible to use a Camera Guide – a representation of the region of the scene visible to the camera – to drive its animation.
    For managing complex projects, a new Dependency Graph window provides an editable schematic view of a composition, replacing the old Flow Graph.
    Price and system requirements

    Cavalry 2.4 is available for Windows 10+ and macOS 12.0+. The full software is available rental-only, with Pro subscriptions costing £192/year (around $255/year). The free edition caps renders at full HD resolution, and lacks the advanced features in this table.
    Read a full list of new features in Cavalry in the online release notes
    Read an overview of the original Cavalry 2.0 update on Scene Group’s blog
    Have your say on this story by following CG Channel on Facebook, Instagram and X (formerly Twitter). As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • Advanced tips for character art production in Unity

    In this guest post, Sakura Rabbit (@Sakura_Rabbiter) shares how she approaches art production and provides tips for creating a realistic character in Unity.
    I finally got some free time as of late and it got me thinking… How about I write something about character creation? I’ve just finished creating several characters in a row, and I’m quite familiar with the entire creation process.
    I’m not referring to things like the art design of worldviews, character backgrounds, or character implementation techniques.
    There are already plenty of articles that elaborate on those topics, so I won’t touch on them here. What else, then? After giving it some thought, I’ve decided to prepare an article about producing realistic characters in the Unity Editor. You might be thinking, “What brings Sakura Rabbit to this topic?” Alas, it’s all because I’ve gone through an uphill journey learning the skill from scratch.
    I’m writing this so you can learn from my mistakes and reduce errors in your work. Now, let’s get started!
    Generally speaking, the implementation process of a character model involves the following steps:
    1. Three-view drawing
    2. Prototype model
    3. High-precision model
    4. Low-polygon topology
    5. UV splitting
    6. Baking normal map
    7. Mapping
    8. Skin rigging
    9. Skeletal and vertex animation
    10. Shader in the engine
    11. Rendering in the engine
    12. Real-time physics in the engine
    13. Animation application and animator
    14. Character controller/AI implementation
    15. Special effects, voice, sound effects, etc.
    There are 15 steps in total.
    The process might seem complicated, but from a character design standpoint, all these factors and details will influence how your character will ultimately be displayed in your game engine.
    Therefore, these numerous steps are necessary for the final product to achieve the desired effect.
    The entire process takes a long time, and all the steps must be done in a specific sequence – every step is crucial.
    If one isn’t done properly or if you try to cut corners, the final product will be directly affected. Let’s start by looking at the preliminary preparation work of art production.
    The 15 steps previously mentioned can be summarized into four main phases: original drawing → modeling → animation → rendering. Isn’t this much simpler? Now, let’s get straight to the point.
    Through my hands-on experience, I’ve learned some things – hopefully you find them useful in your own project! First of all, you should set up some checkpoints before you start.
    I’m going to skip the usual ones, such as the vertex counts, the size of the map, the number of bones, etc.
    Instead, I’m going to focus on the following: I’d like this character to have a human skeleton, since this will affect the subsequent AI implementation.
    The human skeleton is advantageous because it enables you to use the motion capture device or interval animation library to quickly create a set of high-quality animations that can be used on the controller or AI. In addition, you also need to plan ahead on the material effects you want for your character.
    To produce the desired effects, preliminary steps such as the UV, edge distribution, and mapping are indispensable.
    If you only think about them after completing the model and animation, you will most likely end up reworking your design.
    It’s best to think about effects ahead of time to avoid doing more work later. For some physics effects of the character, physical processing is required for certain components and must be done independently.
    This is another criterion you need to consider beforehand. With these checkpoints in place, the next step is implementation.
    Here’s how to get started. To ensure your character creation process runs smoothly, it’s important that the first step, namely the original drawing, is done carefully.
    Failure to do this properly beforehand may affect the structure or effects in the subsequent steps.
    Keep the following in mind when drawing to facilitate what you need to do next.
    Model: You need to make the drawing suitable for modeling. For example, will the structure of what you draw be difficult to implement during modeling? Will it be challenging to distribute the edges for certain structures when making low-polygon topology?
    Animation: Likewise, you need to make the drawing suitable for animation. For example, will rigging be difficult for certain parts of the animation? Which structure does not conform to the human skeleton?
    Shader: Next, you need to take into account shader implementation. Ask yourself: Will the shader of the material effect I draw be difficult to implement? How about the performance? How about the classification of materials? Does it come with special effects? Can it be implemented using one pass or multiple passes?
    Physics: Which structure requires simulated computation? How is the motion executed?
    By keeping all these in mind when drawing, you can streamline your work in the subsequent steps.
    Tip: When drawing a human body, you can use 3D modeling software to assist you with the process. Not only will this improve your efficiency, but it will also ensure structural and perspectival relationships are correct.
    For modeling, the same rules apply – that is, take into consideration the steps that follow.
    Modeling must be done properly, and factors such as UV mapping, edge distribution, and material classifications must also be planned in advance.
    Modeling is the most critical part of the process since it needs to go through the animation process before it gets to rendering.
    If there is an issue in rendering, then the modeling and animation processes must be reworked.
    Mapping: You need to make the model suitable for mapping as well. Which structures can share the UV? Can you maximize the use of pixels of the map? Which components require Alpha?
    Animation: You need to consider how facial expressions are created in blend shape and how the model should be divided for UV. Also, you need to identify the body structures that require animation and determine how the edges should be distributed to make the rigging of the model more natural.
    Shader: Now it’s time to think about how the UV should be arranged so that it can deliver special effects for the implementation of shaders, as well as identifying which materials need to be separated when classifying modeling materials.
    Physics: Similarly, you need to distribute the edges properly to make the simulated effects appear more natural.
    When creating a model, the best way to avoid reworking is to take into account the subsequent steps and make plans in advance.
    Tip: When drawing high-polygon models in ZBrush or other software, it isn’t necessary to include minor detailed textures.
    Due to the resolution limit, the effect of the details will be very poor after being made into a map through direct baking.
    These details should be separated using Mask ID in the shader and added through Detail Map.
    Remember not to include them in the main map! Adding details in the shader directly is the way to go.
    During model rigging, it’s good practice to export files one by one in .obj format and then import them into the animation software to preserve your model’s authenticity.
    Then, check the normal orientations of the model, the layers of the file, and the allocation of the shader to see if there are any issues.
    If everything is good, you can proceed with model rigging. Bone positions play a key role in model rigging since they will decide whether the movement at the joints is natural.
    Let me say this again: it is extremely important! You will find yourself in trouble if the skin weight was fine, but the bone positions were wrong.
    Tip: Let’s use the hip bone, which is located in the middle of the rear, as an example.
    If you want the movement to look natural, the positioning of the bone must be accurate.
    Otherwise, the animation will be deformed when using the motion capture device or when applying it to other animations.
    At this stage, you’re very close to the final step of your work.
    Still, you can’t afford to take things lightly.
    There are several issues you should consider during the creation process:
    Model: Check the model again to make sure the orientation of the normals is aligned properly, the soft and hard edges are fine, the classification of the model components and materials is done correctly, the components that require blend shape are combined, and the materials and naming are handled.
    Animation: Determine whether the current bone structure meets the humanoid requirement in the engine.
    Shader: Check again whether the structures that require the effect are split.
    Physics: Identify the parts of the simulation that use bones and the ones that use vertices.
    Now, you have completed all the preliminary work before using the engine. Next, we need to import the entire set of the model map into Unity and merge all our preliminary work.
    Tip: When working on the skin weight, you can switch between the skinning software and the engine to test the effect.
    When the character is animated, it’s easier to identify problems.
    See the image below as an example.
    When the character is moving, you can see there’s a glitch when her scapula reaches a certain angle.
    This is due to the vertex weight not being smooth enough.
    Thanks to the checkpoints you set previously, the implementation process should be a walk in the park. For the shader, all you need to do is set or create the material for the separated components independently, as you will have already classified the materials during the model-making process.
    For animation adaptation, you can use the humanoid of Unity directly since you will have set the human skeleton standard beforehand.
    This way, you can save a lot of time on the animation work. In addition, you can also apply motion capture to further reduce your workload.
    If the blend shape you have made fulfills ARKit naming conventions, you can directly perform a facial motion capture to produce the animation of the facial blend shape.
    Tip: If you use Advanced Skeleton to do your rigging, the alignment of the character's scapula and shoulder nodes will most likely be incorrect when imported into Unity. To solve this, adjust it manually on the humanoid interface.
    Well, that’s it! In summary, throughout the character creation process, from original drawing to modeling, animation to rendering, I recommend a results-oriented approach and determining the steps that you should take to achieve the result you want. Furthermore, you should also have a thorough understanding of the entire production process so that you always know what to do next and what to take note of in the current step.
    Please share my post if you found it helpful!
    (^_^) Sakura Rabbit 樱花兔
    Sakura Rabbit’s character art was featured on the cover of our e-book, The definitive guide to creating advanced visual effects in Unity, which you can access for free here.
    See more from Sakura Rabbit on Twitter, Instagram, YouTube, and her FanBox page, where this article was originally published.
    Check out more blogs from Made with Unity developers here.

    Source: https://unity.com/blog/games/advanced-tips-for-character-art-production-in-unity

    #Advanced #tips #for #character #art #production #Unity
    Advanced tips for character art production in Unity
    In this guest post, Sakura Rabbit (@Sakura_Rabbiter) shares how she approaches art production and provides tips for creating a realistic character in Unity.I finally got some free time as of late and it got me thinking… How about I write something about character creation? I’ve just finished creating several characters in a row, and I’m quite familiar with the entire creation process. I’m not referring to things like the art design of worldviews, character backgrounds, or character implementation techniques. There are already plenty of articles that elaborate on those topics, so I won’t touch on them here.What else, then? After giving it some thought, I’ve decided to prepare an article about producing realistic characters in the Unity Editor.You might be thinking, “What brings Sakura Rabbit to this topic?” Alas, it’s all because I’ve gone through an uphill journey learning the skill from scratch. I’m writing this so you can learn from my mistakes and reduce errors in your work.Now, let’s get started!Generally speaking, the implementation process of a character model involves the following steps:1. Three-view drawing → 2. Prototype model → 3. High-precision model → 4. Low-polygon topology → 5. UV splitting → 6. Baking normal map → 7. Mapping → 8. Skin rigging → 9. Skeletal and vertex animation → 10,. Shader in the engine → 11. Rendering in the engine → 12. Real-time physics in the engine → 13. Animation application and animator → 14. Character controller/AI implementation → 15. Special effects, voice, sound effects, etc.There are 15 steps in total. The process might seem complicated, but from a character design standpoint, all these factors and details will influence how your character will ultimately be displayed in your game engine. Therefore, these numerous steps are necessary for the final product to achieve the desired effect. The entire process takes a long time, and all the steps must be done in a specific sequence – every step is crucial. If one isn’t done properly or if you try to cut corners, the final product will be directly affected.Let’s start by looking at the preliminary preparation work of art production. The 15 steps previously mentioned can be summarized into four main phases:Original drawing → modeling → animation → renderingIsn’t this much simpler? Now, let’s get straight to the point. Through my hands-on experience, I’ve learned some things – hopefully you find them useful in your own project!First of all, you should set up some checkpoints before you start. I’m going to skip the usual ones, such as the vertex counts, the size of the map, the number of bones, etc. Instead, I’m going to focus on the following:I’d like this character to have a human skeleton, since this will affect the subsequent AI implementation. The human skeleton is advantageous because it enables you to use the motion capture device or interval animation library to quickly create a set of high-quality animations that can be used on the controller or AI.In addition, you also need to plan ahead on the material effects you want for your character. To produce the desired effects, preliminary steps such as the UV, edge distribution, and mapping are indispensable. If you only think about them after completing the model and animation, you will most likely end up reworking your design. It’s best to think about effects ahead of time to avoid doing more work later.For some physics effects of the character, physical processing is required for certain components and must be done independently. 
This is another criterion you need to consider beforehand.With these checkpoints in place, the next step is implementation. Here’s how to get started.To ensure your character creation process runs smoothly, it’s important that the first step, namely the original drawing, is done carefully. Failure to do this properly beforehand may affect the structure or effects in the subsequent steps. Keep the following in mind when drawing to facilitate what you need to do next.Model: You need to make the drawing suitable for modeling. For example, will the structure of what you draw be difficult to implement during modeling? Will it be challenging to distribute the edges for certain structures when making low-polygon topology?Animation: Likewise, you need to make the drawing suitable for animation. For example, will rigging be difficult for certain parts of the animation? Which structure does not conform to the human skeleton?Shader: Next, you need to take into account shader implementation. Ask yourself: Will the shader of the material effect I draw be difficult to implement? How about the performance? How about the classification of materials? Does it come with special effects? Can it be implemented using one pass or multiple passes?Physics: Which structure requires simulated computation? How is the motion executed?By keeping all these in mind when drawing, you can streamline your work in the subsequent steps.Tip: When drawing a human body, you can use a 3D modeling software to assist you with the process. Not only will this improve your efficiency, but also ensure structural and perspectival relationships are correct.For modeling, the same rules apply – that is, take into consideration the steps that follow. Modeling must be done properly, and factors such as UV mapping, edge distribution, and material classifications must also be planned in advance. Modeling is the most critical part of the process since it needs to go through the animation process before it gets to rendering. If there is an issue in rendering, then the modeling and animation processes must be reworked.Mapping: You need to make the model suitable for mapping as well. Which structures can share the UV? Can you maximize the use of pixels of the map? Which components require Alpha?Animation: You need to consider how facial expressions are created in blend shape and how the model should be divided for UV. Also, you need to identify the body structures that require animation and determine how the edges should be distributed to make the rigging of the model more natural.Shader: Now it’s time to think about how the UV should be arranged so that it can deliver special effects for the implementation of shaders, as well as identifying which materials need to be separated when classifying modeling materials.Physics: Similarly, you need to distribute the edges properly to make the simulated effects appear more natural.When creating a model, the best way to avoid reworking is to take into account the subsequent steps and make plans in advance.Tip: When drawing high-polygon models in ZBrush or other software, it isn’t necessary to include minor detailed textures. Due to the resolution limit, the effect of the details will be very poor after being made into a map through direct baking. These details should be separated using Mask ID in the shader and added through Detail Map. 
    During model rigging, it's good practice to export the files one by one in .obj format and then import them into the animation software, so the model is preserved exactly as authored. Then check the model's normal orientations, the file's layers, and the shader assignments for any issues. If everything looks good, you can proceed with rigging.

    Bone positions play a key role in rigging, since they decide whether movement at the joints looks natural. Let me say this again: they are extremely important! You will find yourself in trouble if the skin weights are fine but the bone positions are wrong.

    Tip: Take the hip bone, located in the middle of the rear, as an example. If you want the movement to look natural, the bone must be positioned accurately. Otherwise, the animation will deform when you use a motion capture device or apply other animations to the rig.

    At this stage you're very close to the final step, but you still can't afford to take things lightly. There are several issues to check during this part of the process.

    Model: Check the model again to make sure the normals are oriented properly, the soft and hard edges are fine, the model components and materials are classified correctly, the components that require blend shapes are combined, and the materials and naming are in order.

    Animation: Determine whether the current bone structure meets the engine's Humanoid requirements.

    Shader: Check again that the structures which require special effects have been split off.

    Physics: Identify which parts of the simulation use bones and which use vertices.

    With that, all the preliminary work before using the engine is complete. Next, import the entire set of models and maps into Unity and bring all of that preparation together.

    Tip: When working on the skin weights, you can switch between the skinning software and the engine to test the result. Problems are easier to spot when the character is animated. In the example shown in the original post, there is a visible glitch when the character's scapula reaches a certain angle as she moves; this is caused by vertex weights that aren't smooth enough.

    Thanks to the checkpoints you set earlier, the in-engine implementation should be a walk in the park. For the shader, all you need to do is assign or create materials for the separated components independently, since you will have already classified the materials while making the model. For animation, you can use Unity's Humanoid rig directly, since you set the human skeleton standard beforehand – this saves a lot of time on animation work.

    In addition, you can apply motion capture to further reduce the workload. If the blend shapes you created follow the ARKit naming conventions, you can drive them directly with facial motion capture to produce the facial animation (a small scripting sketch for driving blend shapes follows below).

    Tip: If you use Advanced Skeleton for rigging, the alignment of the character's scapula and shoulder nodes will most likely be incorrect when the rig is imported into Unity. To fix this, adjust it manually in the Humanoid configuration interface.
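    As a small illustration of what driving ARKit-named blend shapes from capture data can look like on the Unity side – the capture source and the 0..1 input range here are my assumptions, not details from the original post – the sketch below pushes per-frame name/weight pairs onto a skinned mesh.

        // Sketch only (not from the original post): applying per-frame facial capture
        // values to a skinned mesh whose blend shapes follow ARKit naming
        // (e.g. "jawOpen", "eyeBlinkLeft"). Where the values come from (ARKit, a
        // recorded take, etc.) is up to your pipeline; here they arrive as
        // name/weight pairs in the 0..1 range.
        using System.Collections.Generic;
        using UnityEngine;

        public class FacialBlendShapeDriver : MonoBehaviour
        {
            public SkinnedMeshRenderer face;   // face mesh with ARKit-named blend shapes
            private readonly Dictionary<string, int> indexCache = new Dictionary<string, int>();

            public void ApplyFrame(IReadOnlyDictionary<string, float> capturedWeights)
            {
                foreach (var pair in capturedWeights)
                {
                    if (!indexCache.TryGetValue(pair.Key, out int index))
                    {
                        // Mesh.GetBlendShapeIndex returns -1 when the name is missing.
                        index = face.sharedMesh.GetBlendShapeIndex(pair.Key);
                        indexCache[pair.Key] = index;
                    }
                    if (index < 0) continue;

                    // Unity blend shape weights are 0..100; capture values are 0..1.
                    face.SetBlendShapeWeight(index, Mathf.Clamp01(pair.Value) * 100f);
                }
            }
        }

    Because the mapping is purely by name, keeping the ARKit naming consistent between the DCC blend shapes and the capture source is what makes this kind of direct retargeting possible.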
    Well, that's it! In summary, throughout the character creation process – from original drawing to modeling, and from animation to rendering – I recommend taking a results-oriented approach and determining the steps you need in order to achieve the result you want. You should also have a thorough understanding of the entire production process, so that you always know what comes next and what to pay attention to in the current step.

    Please share my post if you found it helpful! (^_^) Sakura Rabbit 樱花兔

    Sakura Rabbit's character art was featured on the cover of Unity's e-book, The definitive guide to creating advanced visual effects in Unity, which you can access for free here. See more from Sakura Rabbit on Twitter, Instagram, YouTube, and her FanBox page, where this article was originally published. Check out more blogs from Made with Unity developers here. Source: https://unity.com/blog/games/advanced-tips-for-character-art-production-in-unity
  • The Set Mesh Normal node has recently been added to Geometry Nodes in Blender, allowing users to add or modify custom normals directly within the node system. For example, if you're looking to perform the kind of mesh welding shown in the video below – that is, smoothly blending or "fusing" parts of a mesh together – you can achieve this using a combination of modifiers.

    Please note that for the Vertex Weight Proximity modifier to work properly (as noted by 3D Artist Slinc_HD), you need to select all vertices in Weight Paint mode and assign them 100% weight (1.0) before applying the modifier.

    This video by FR3NKD shows the setup in action. Original post: https://lnkd.in/gu-CKSXu
  • 3D Paths in Substance 3D Painter

    The Adobe Substance 3D team always amazes me with how they keep shipping innovative features in each new update. This time, they've released a tool that lets you paint paths and curves directly in the 3D viewport. It works like a regular brush on paint layers and effects, but the stroke is built by placing points or vertices, so you can go back and modify any part of your path.

    A game changer in texturing, period.
  • Take a look at this crazy hair rig in Blender

    Animator Banno Yuki amazes me once again. The artist shared a look at an incredible hair rig set up for an animation project. Everything is hand-made; the rig doesn't use a large number of vertices or any image textures, and the creator noted that the whole thing didn't take long to render.