CG Channel
Daily news and inspiration for anyone interested in CG,
game art or VFX. Part of the Gnomon group.
  • 212 people like this
  • 739 Posts
  • 3 Photos
  • 9 Videos
  • 0 Previews
  • News
Search
Recently Updated
  • Adobe releases After Effects 25.3

    Check out the new features in the compositing software for motion graphics and VFX, like the option to zoom in smoothly on compositions.
    2 Comments 0 Shares
  • Download Unreal Engine 2D animation plugin Odyssey for free

    Epic Games has made Odyssey, Praxinos’s 2D animation plugin for Unreal Engine, available for free through Fab, its online marketplace. The software – which can be used for storyboarding or texturing 3D models as well as creating 2D animation – is available for free indefinitely, and will continue to be updated.
    A serious professional 2D animation tool created by former TVPaint staff

    Created by a team that includes former developers of standalone 2D animation software TVPaint, Odyssey has been in development since 2019. Part of that work was also funded by Epic Games, with Praxinos receiving an Epic MegaGrant for two of Odyssey’s precursors: painting plugin Iliad and storyboard and layout plugin Epos.
    Odyssey itself was released last year after beta testing at French animation studios including Ellipse Animation, and originally cost €1,200 for a perpetual license.

    Create 2D animation, storyboards, or textures for 3D models

    Although Odyssey’s main function is to create 2D animation – for movie and broadcast projects, motion graphics, or even games – the plugin adds a wider 2D toolset to Unreal Engine. Other use cases include storyboarding – you can import image sequences and turn them into storyboards – and texturing, either by painting 2D texture maps, or painting onto 3D meshes.
    It supports both 2D and 3D workflows, with the 2D editors – which include a flipbook editor as well as the 2D texture and animation editors – complemented by a 3D viewport.
    The bitmap painting toolset makes use of Unreal Engine’s Blueprint system, making it possible for users to create new painting brushes using a node-based workflow, and supports pressure sensitivity on graphics tablets.
    There is also a vector toolset for creating hard-edged shapes.
    Animation features include onion skinning, Toon Boom-style shift and trace, and automatic inbetweening.
    The plugin supports standard 2D and 3D file formats, including PSD, FBX and USD.
    Available for free indefinitely, but future updates planned

    Epic Games regularly makes Unreal Engine assets available for free through Fab, but usually only for a limited period of time. Odyssey is different, in that it is available for free indefinitely.
    However, it will continue to get updates: according to Epic Games’ blog post, Praxinos “plans to work in close collaboration with Epic Games and continue to enhance Odyssey”.
    As well as Odyssey itself, Praxinos offers custom tools development and training, which will hopefully also help to support future development.
    System requirements and availability

    Odyssey is compatible with Unreal Engine 5.6 on Windows and macOS. It is available for free under a Fab Standard License, including for commercial use.
    Read more about Odyssey on Praxinos’s website
    Find more detailed information in Odyssey’s online manual
    Download Unreal Engine 2D animation plugin Odyssey for free

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
    0 Comments 0 Shares
  • Tutorial: Practical Lighting for Production

    Saturday, June 14th, 2025
    Posted by Jim Thacker
    Tutorial: Practical Lighting for Production

    The Gnomon Workshop has released Practical Lighting for Production, a guide to VFX and cinematics workflows recorded by former Blizzard lighting lead Graham Cunningham.
    The intermediate-level workshop provides four hours of training in Maya, Arnold and Nuke.
    Discover professional workflows for lighting a CG shot to match a movie reference
    In the workshop, Cunningham sets out the complete process of lighting and compositing a shot to match a movie reference, using industry-standard software.
    He begins by setting up a basic look development light rig in Maya, importing a 3D character, assigning materials and shading components, and creating a turntable setup.
    Next, he creates a shot camera and set dresses the environment using kitbash assets.
    Cunningham also discusses strategies for lighting a character, including how to use dome lights and area lights to provide key, fill and rim lighting, and how to use HDRI maps.
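
    The workshop itself is delivered as video, but for readers who want to block out a comparable rig by script, here is a minimal, hedged sketch using Maya’s Python API (maya.cmds). It is not taken from Cunningham’s scene files; the light names, angles and intensities are illustrative assumptions.

    ```python
    # Illustrative sketch only: a basic key/fill/rim rig built with maya.cmds.
    # Names, angles and intensities are assumptions, not values from the workshop.
    import maya.cmds as cmds

    def build_three_point_rig(prefix="charLight"):
        """Create simple key, fill and rim lights around the world origin."""
        # Key: a bright directional light angled down onto the character.
        key_shape = cmds.directionalLight(name=prefix + "_key", intensity=1.5)
        key_tf = cmds.listRelatives(key_shape, parent=True)[0]
        cmds.xform(key_tf, rotation=(-35, 40, 0))

        # Fill: a large, dim area light opposite the key to soften shadows.
        fill = cmds.shadingNode("areaLight", asLight=True, name=prefix + "_fill")
        # shadingNode may return the transform or the shape depending on version;
        # resolve the transform either way.
        fill_tf = fill if cmds.nodeType(fill) == "transform" else cmds.listRelatives(fill, parent=True)[0]
        cmds.xform(fill_tf, translation=(-6, 4, 6), rotation=(-15, -45, 0), scale=(4, 4, 4))
        fill_shape = cmds.listRelatives(fill_tf, shapes=True)[0]
        cmds.setAttr(fill_shape + ".intensity", 0.3)

        # Rim: a directional light behind the character to separate it from the background.
        rim_shape = cmds.directionalLight(name=prefix + "_rim", intensity=2.0)
        rim_tf = cmds.listRelatives(rim_shape, parent=True)[0]
        cmds.xform(rim_tf, rotation=(-20, 160, 0))

        return key_tf, fill_tf, rim_tf

    build_three_point_rig()
    ```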
    From there, he moves to rendering using Arnold, discussing render settings, depth of field, and how to create render passes.
    Cunningham then assembles the render passes in Nuke, splits out the light AOVs, and sets out how to adjust light colors and intensities.
    He also reveals how to add atmosphere, how to use cryptomattes to fine tune the results, how to add post effects, and how to apply a final color grade to match a chosen movie reference.
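
    As a rough illustration of the Nuke side of that workflow – splitting per-light AOVs out of a multi-channel EXR and recombining them additively so each light can be graded on its own – here is a hedged Python sketch. The file path and AOV layer names are assumptions, not the names used in the workshop.

    ```python
    # Illustrative sketch only: split per-light AOVs out of a rendered EXR in Nuke
    # and merge them back together. Path and layer names are hypothetical.
    import nuke

    read = nuke.nodes.Read(file="/path/to/beauty_with_aovs.####.exr")

    aov_layers = ["RGBA_key", "RGBA_fill", "RGBA_rim"]  # hypothetical per-light AOVs

    shuffles = []
    for layer in aov_layers:
        # Shuffle each light AOV into the node's RGBA channels.
        shuffles.append(nuke.nodes.Shuffle(inputs=[read], label=layer, **{"in": layer}))

    # Recombine the per-light passes additively to rebuild the beauty render.
    merged = shuffles[0]
    for shuffle in shuffles[1:]:
        merged = nuke.nodes.Merge2(inputs=[merged, shuffle], operation="plus")
    ```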
    As well as the tutorial videos, viewers of the workshop can download one of Cunningham’s Maya files.
    The workshop uses 3D Scan Store’s commercial Female Explorer Game Character, and KitBash3D’s Wreckage Kit, plus assets from KitBash3D’s Cargo.
    About the artist
    Graham Cunningham is a Senior Lighting, Compositing and Lookdev Artist, beginning his career as a generalist working in VFX for film and TV before moving to Blizzard Entertainment.
    At Blizzard, he contributed to cinematics for Diablo IV, Diablo Immortal, Starcraft II, Heroes of the Storm, World of Warcraft, Overwatch, and Overwatch 2, many of them as a lead lighting artist.
    Pricing and availability
    Practical Lighting for Production is available via a subscription to The Gnomon Workshop, which provides access to over 300 tutorials.
    Subscriptions cost $57/month or $519/year. Free trials are available.
    Read more about Practical Lighting for Production on The Gnomon Workshop’s website

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
    Full disclosure: CG Channel is owned by Gnomon.

    0 Comments 0 Shares
  • Autodesk adds AI animation tool MotionMaker to Maya 2026.1

    A still from a demo shot created using MotionMaker, the new generative AI toolset introduced in Maya 2026.1 for roughing out movement animations.

    Autodesk has released Maya 2026.1, the latest version of its 3D modeling and animation software for visual effects, games and motion graphics work. The release adds MotionMaker, a new AI-based system for generating movement animations for biped and quadruped characters, especially for previs and layout work.
    Other changes include a new modular character rigging framework inside Bifrost for Maya, plus updates to liquid simulation, OpenPBR support and USD workflows.
    Autodesk has also released Maya Creative 2026.1, the corresponding update to the cut-down edition of Maya for smaller studios.

    MotionMaker: new generative AI tool roughs out movement animations

    The headline feature in Maya 2026.1 is MotionMaker: a new generative animation system. It lets users “create natural character movements in minutes instead of hours”, using a workflow more “like giving stage directions to a digital actor” than traditional animation.
    Users set keys for a character’s start and end positions, or create a guide path in the viewport, and MotionMaker automatically generates the motion in between.
    At the minute, that mainly means locomotion cycles, for both bipeds and quadrupeds, plus a few other movements, like jumping or sitting.
    Although MotionMaker is designed for “anyone in the animation pipeline”, the main initial use cases seem to be layout and previs rather than hero animation.
    Its output is also intended to be refined manually – Autodesk’s promotional material describes it as getting users “80% of the way there” for “certain types of shots”.
    Accordingly, MotionMaker comes with its own Editor window, which provides access to standard Maya animation editing tools.
    Users can layer in animation from other sources, including motion capture or keyframe animation retargeted from other characters: to add upper body movements, for example.
    There are a few more MotionMaker-specific controls: the video above shows speed ramping, to control the time it takes the character to travel between two points.
    There is also a Character Scale setting, which determines how a character’s size and weight is expressed through the animation generated.
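
    MotionMaker itself is driven through its Editor window rather than a public scripting API, but the blocking step it automates – keying a character’s start and end positions – can be illustrated with a short maya.cmds sketch. Everything in it (the stand-in root control, frame range and distances) is an assumption for illustration only.

    ```python
    # Illustrative sketch only: blocking the start and end keys that a tool like
    # MotionMaker fills in with generated locomotion. This is NOT the MotionMaker API;
    # the node, frames and positions are hypothetical.
    import maya.cmds as cmds

    # Stand-in for a rig's root control.
    root = cmds.spaceLocator(name="character_root_ctrl")[0]

    # Start pose at frame 1, at the origin.
    cmds.currentTime(1)
    cmds.setAttr(root + ".translate", 0, 0, 0, type="double3")
    cmds.setKeyframe(root, attribute="translate", time=1)

    # End pose at frame 120, ten metres down the +Z axis (scene units: cm).
    cmds.currentTime(120)
    cmds.setAttr(root + ".translate", 0, 0, 1000, type="double3")
    cmds.setKeyframe(root, attribute="translate", time=120)
    ```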
    You can read more about the design and aims of MotionMaker in a Q&A with Autodesk Senior Principal Research Scientist Evan Atherton on Autodesk’s blog.
    According to Atherton, the AI models were trained using motion capture data “specifically collected for this tool”.
    That includes source data from male and female human performers, plus wolf-style dogs, although the system is “designed to support additional [motion] styles” in future.

    Bifrost: new modular character rigging framework

    Character artists and animators also get a new modular rigging framework in Bifrost. Autodesk has been teasing new character rigging capabilities in the node-based framework for building effects since Maya 2025.1, but this seems to be its official launch.
    The release is compatibility-breaking, and does not work with earlier versions of the toolset.
    The new Rigging Module Framework is described as a “modular, compound-based system for building … production-ready rigs”, and is “fully integrated with Maya”.
    Animators can “interact with module inputs and outputs directly from the Maya scene”, and rigs created with Bifrost can be converted into native Maya controls, joints and attributes.

    Bifrost: improvements to liquid simulation and workflow
    Bifrost 2.14 for Maya also features improvements to Bifrost’s existing functionality, particularly liquid simulation.
    The properties of collider objects, like bounciness, stickiness and roughness, can now influence liquid behavior in the same way they do particle behavior and other collisions.
    In addition, a new parameter controls air drag on foam and spray thrown out by a liquid.
    Workflow improvements include the option to convert Bifrost curves to Maya scene curves, and batch execution, to write out cache files “without the risk of accidentally overwriting them”.

    LookdevX: support for OpenPBR in FBX files
    LookdevX, Maya’s plugin for creating USD shading graphs, has also been updated.
    Autodesk introduced support for OpenPBR, the open material standard intended as a unified successor to the Autodesk Standard Surface and Adobe Standard Material, in 2024.
    To that, the latest update adds support for OpenPBR materials in FBX files, making it possible to import or export them from other applications that support OpenPBR: at the minute, 3ds Max plus some third-party renderers.
    LookdevX 1.8 also features a number of workflow improvements, particularly on macOS.
    USD for Maya: workflow improvements

    USD for Maya, the software’s USD plugin, also gets workflow improvements, with USD for Maya 0.32 adding support for animation curves for camera attributes in exports. Other changes include support for MaterialX documents and better representation of USD lights in the viewport.
    Arnold for Maya: performance improvements

    Maya’s integration plugin for Autodesk’s Arnold renderer has also been updated, with MtoA 5.5.2 supporting the changes in Arnold 7.4.2. They’re primarily performance improvements, especially to scene initialization times when rendering on machines with high numbers of CPU cores.
    Maya Creative 2026.1 also released

    Autodesk has also released Maya Creative 2026.1, the corresponding update to the cut-down edition of Maya aimed at smaller studios, and available on a pay-as-you-go basis. It includes most of the new features from Maya 2026.1, including MotionMaker, but does not include Bifrost for Maya.
    Price and system requirements

    Maya 2026.1 is available for Windows 10+, RHEL and Rocky Linux 8.10/9.3/9.5, and macOS 13.0+. The software is rental-only. Subscriptions cost $255/month or $2,010/year, up a further $10/month or $65/year since the release of Maya 2026.
    In many countries, artists earning under $100,000/year and working on projects valued at under $100,000/year qualify for Maya Indie subscriptions, now priced at $330/year.
    Maya Creative is available pay-as-you-go, with prices starting at $3/day, and a minimum spend of $300/year.
    Read a full list of new features in Maya 2026.1 in the online documentation

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
    0 Comments 0 Shares
  • Autodesk releases 3ds Max 2026.1

    Autodesk has released 3ds Max 2026.1, the latest version of its 3D modelling and animation software for architectural visualization, motion graphics and VFX. The release adds a new Attribute Transfer modifier, for transferring UVs or vertex data between 3D objects, and updates the Push and XForm modifiers.
    3D modeling: new Attribute Transfer modifier

    The new feature in 3ds Max 2026.1 is the Attribute Transfer modifier, shown in the video above. It transfers attributes – vertex positions, normals and colors, plus up to two sets of UVs – from one model to another.
    According to Autodesk, it provides a “new, non-destructive modifier workflow” for tasks that were typically done through scripting.


    3D modeling and animation: new controls for the Push modifier

    There are also updates to a couple of the existing modifiers, including the Push modifier, used to inflate or deflate 3D meshes. It gets a number of new options, including the option to use other objects in a scene to limit the result of the push operation.
    When the result collides with a control object, it stops moving in that direction in real time.
    A new Relax Iterations setting smooths the mesh in a similar way to the Relax modifier, helping to prevent issues with self-intersection.
    It is also now possible to limit push operations to specific axes.


    Layout and animation: new options for the XForm modifier

    The XForm modifier, used to apply transformations non-destructively to objects, gets a choice of four transformation modes. As well as the previous default behavior, users can now transform objects in local space, world space, or relative to another object in the scene.
    There is also a self-explanatory new Preserve Normals checkbox.
    USD for 3ds Max: convert USD geometry to native 3ds Max objects

    3ds Max’s Universal Scene Description plugin has been updated, with USD for 3ds Max 0.11 adding a new Promote to 3ds Max Object option. It promotes USD geometry to a 3ds Max object, making it possible to work on it using native 3ds Max tools.
    In addition, the USD Exporter now supports the OpenPBR material, which has been 3ds Max’s default material since 3ds Max 2026.
    There are also a number of workflow improvements, and new Python functions for manipulating USD layers.
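
    Autodesk has not published the names of those 3ds Max-specific functions here, but the general pattern of scripted layer manipulation can be sketched with Pixar’s standard USD Python bindings (the pxr module), which is what the hedged example below uses; file names and prim paths are illustrative assumptions.

    ```python
    # Illustrative sketch only, using Pixar's standard USD Python bindings (pxr),
    # not the new 3ds Max-specific functions. File names and paths are hypothetical.
    from pxr import Usd, Sdf

    # Create a stage and add a new sublayer to hold scene edits.
    stage = Usd.Stage.CreateNew("shot_layout.usda")
    edit_layer = Sdf.Layer.CreateNew("max_edits.usda")
    root_layer = stage.GetRootLayer()
    root_layer.subLayerPaths.append(edit_layer.identifier)

    # Target the new sublayer so subsequent authoring goes into it.
    stage.SetEditTarget(Usd.EditTarget(edit_layer))
    stage.DefinePrim("/World/Props/Crate", "Xform")

    edit_layer.Save()
    root_layer.Save()
    ```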
    Arnold for 3ds Max: performance improvements

    3ds Max’s integration plugin for Autodesk’s Arnold renderer has also been updated, with MAXtoA 5.8.2 supporting the changes in Arnold 7.4.2. They’re primarily performance improvements, especially to scene initialization times when rendering on machines with high numbers of CPU cores.
    Price and system requirements

    3ds Max 2026.1 is compatible with Windows 10+. It is rental-only. Subscriptions cost $255/month, up a further $10/month since 3ds Max 2026, or $2,010/year, up $65/year. In many countries, artists earning under $100,000/year and working on projects valued at under $100,000/year qualify for Indie subscriptions, which now cost $330/year.
    Read a full list of new features in 3ds Max 2026.1 in the online documentation

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • Epic Games to rebrand RealityCapture as RealityScan 2.0


Epic Games is rebranding RealityCapture, its professional desktop photogrammetry software, as RealityScan.
RealityScan 2.0, due in the “coming weeks”, will unify the desktop application with the existing RealityScan: Epic Games’ free 3D scanning app for iOS and Android devices.
    The update will also introduce new features including AI-based mask generation, support for aerial Lidar data, and new visual tools for troubleshooting scan quality.
    A desktop photogrammetry tool for games, VFX, visualization and urban planning

First released in 2016, RealityCapture generates accurate triangle-based meshes of real-world objects, from people and props to environments.
Its core photogrammetry toolset, for generating 3D meshes from sets of source images, is augmented by support for laser scan data.
    The software includes features aimed at aerial surveying and urban planning, but is also used in the entertainment industry to generate assets for use in games and VFX.
RealityCapture was acquired by Epic Games in 2021, and last year Epic made the software free to artists and studios with revenue under $1 million/year.
    Now rebranded as RealityScan to unify it with the existing mobile app

RealityCapture 2.0 – or rather, RealityScan 2.0 – is a change of branding, with the desktop application taking its new name and logo from Epic Games’ existing mobile scanning app.
First released in 2022, RealityScan was originally pitched as a way to make RealityCapture’s functionality accessible to hobbyists as well as pros.
    It’s a pure photogrammetry tool, turning photos captured on a mobile phone or tablet into textured 3D models for use in AR, game development or general 3D work.
    RealityScan 2.0: AI masking, new Quality Analysis Tool, and support for aerial Lidar data

New features in RealityCapture 2.0 will include AI-powered masking, with the software automatically identifying and masking out the background of the source images.
The change should remove the need to generate masks manually, either in RealityCapture itself or an external DCC app.
    In addition, the default settings have been updated to improve alignment of source images, particularly when scanning objects with smooth surfaces and few surface features.
    To help troubleshoot scans, a new Quality Analysis Tool displays heatmaps showing parts of the scan where more images may be needed to reconstruct the source object accurately.
    The update will also introduce support for aerial Lidar data, which may be used alongside aerial photography and terrestrial data to reconstruct environments more accurately.
    No information yet on how the new features break down between desktop and mobile

It isn’t clear which of those new features will be included in the mobile app, although it will presumably also be updated to version 2.0 at the same time, since Epic Games’ blog post announcing the changes describes its aim as to “unify the desktop and mobile versions”.
We’ve contacted Epic for more information, and will update if we hear back.
    Price, system requirements and release date

RealityScan 2.0 is due in the “coming weeks”. Epic Games hasn’t announced an exact release date, or any changes to price or system requirements.
The current version of the desktop software, RealityCapture 1.5, is available for Windows 7+ and Windows Server 2008+. It’s CUDA-based, so you need a CUDA 3.0-capable NVIDIA GPU.
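If you’re not sure whether your GPU meets that requirement, one quick check is to query the driver for its compute capability. A minimal sketch, assuming a reasonably recent NVIDIA driver that exposes the compute_cap query field in nvidia-smi:

```python
# Minimal sketch: check whether the installed NVIDIA GPU(s) meet RealityCapture's
# CUDA 3.0+ requirement. Assumes a driver recent enough to support 'compute_cap'.
import subprocess

output = subprocess.check_output(
    ["nvidia-smi", "--query-gpu=name,compute_cap", "--format=csv,noheader"],
    text=True,
)
for line in output.strip().splitlines():
    name, cap = (field.strip() for field in line.split(","))
    verdict = "OK" if float(cap) >= 3.0 else "below CUDA 3.0"
    print(f"{name}: compute capability {cap} ({verdict})")
```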
The desktop software is free to artists and studios with revenue under $1 million/year. For larger studios, subscriptions cost $1,250/seat/year.
    The current version of the mobile app, RealityScan 1.6, is compatible with Android 7.0+, iOS 16.0+ and iPadOS 16.0+. It’s free, including for commercial use.
    By default, its EULA gives Epic Games the right to use your scan data to train products and services, but you can opt out in the in-app settings.
    Read Epic Games’ blog post announcing that it is rebranding RealityCapture as RealityScan
Read more about RealityCapture and RealityScan on the product website

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • You can now sell MetaHumans, or use them in Unity or Godot


    The MetaHuman client reel. Epic Games’ framework for generating realistic 3D characters for games is out of early access, and can now be used with any DCC app or game engine.

Epic Games has officially launched MetaHuman, its framework for generating realistic 3D characters for games, animation and VFX work, after four years in early access.
The core applications, MetaHuman Creator, Mesh to MetaHuman and MetaHuman Animator, are now integrated into Unreal Engine 5.6, the latest version of the game engine.
    In addition, Epic has updated the licensing for MetaHuman characters, making it possible to use them in any game engine or DCC application, including in commercial projects.
    There are also two new free plugins, MetaHuman for Maya and MetaHuman for Houdini, intended to streamline the process of editing MetaHumans in Maya and Houdini.
    A suite of tools for generating and animating realistic real-time 3D characters

First launched in early access in 2021, MetaHuman is a framework of tools for generating realistic 3D characters for next-gen games, animation, virtual production and VFX.
The first component, MetaHuman Creator, enables users to design realistic digital humans.
    Users can generate new characters by blending between presets, then adjusting the proportions of the face by hand, and customising readymade hairstyles and clothing.
    The second component, Mesh to MetaHuman, makes it possible to create MetaHumans matching 3D scans or facial models created in other DCC apps.
    The final component, MetaHuman Animator, streamlines the process of transferring the facial performance of an actor from video footage to a MetaHuman character.
    MetaHuman Creator was originally a cloud-based tool, while Mesh to MetaHuman and MetaHuman Animator were available via the old MetaHuman plugin for Unreal Engine.
    Now integrated directly into Unreal Engine 5.6

That changes with the end of early access, with MetaHuman Creator, Mesh to MetaHuman and MetaHuman Animator all now integrated directly into Unreal Engine itself.
Integration – available in Unreal Engine 5.6, the latest version of the engine – is intended to simplify character creation and asset management workflows.
    Studios also get access to the MetaHuman source code, since Unreal Engine itself comes with full C++ source code access.
    However, the tools still cannot be run entirely locally: according to Epic, in-editor workflow is “enhanced by cloud services that deliver autorigging and texture synthesis”.


    Users can now adjust MetaHumans’ bodies, with a new unified Outfit Asset making it possible to create 3D clothing that adjusts automatically to bodily proportions.

    Updates to both MetaHuman Creator and MetaHuman Animator

In addition, the official release introduces new features, with MetaHuman Creator’s parametric system for creating faces now extended to body shapes.
Users can now adjust proportions like height, chest and waist measurements, and leg length, rather than simply selecting preset body types.
    Similarly, a new unified Outfit Asset makes it possible to author custom 3D clothing, rather than selecting readymade presets, with garments resizing to characters’ body shapes.
    MetaHuman Animator – which previously required footage from stereo head-mounted cameras or iPhones – now supports footage from mono cameras like webcams.
    The toolset can also now generate facial animation – both lip sync and head movement – solely from audio recordings, as well as from video footage.
    You can find fuller descriptions of the new features in Epic Games’ blog post.
    Use MetaHumans in Unity or Godot games, or sell them on online marketplaces

Equally significantly, Epic has changed the licensing for MetaHumans.
The MetaHuman toolset is now covered by the standard Unreal Engine EULA, meaning that it can be used for free by any artist or studio with under $1 million/year in revenue.
    MetaHuman characters and clothing can also now be sold on online marketplaces, or used in commercial projects created with other DCC apps or game engines.
    The only exception is for AI: you can use MetaHumans in “workflows that incorporate artificial intelligence technology”, but not to train or enhance the AI models themselves.
Studios earning more than $1 million/year from projects that use MetaHuman characters need Unreal Engine seat licenses, which currently cost $1,850/year.
    However, since MetaHuman characters and animations are classed as ‘non-engine products’, they can be used in games created in other engines, like Unity or Godot, without incurring the 5% cut of the revenue that Epic takes from Unreal Engine games.
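To make that licensing difference concrete, here is a rough cost sketch using the figures quoted in this story; the assumption that Epic’s 5% royalty only applies to per-product revenue above a $1 million waiver comes from Unreal Engine’s standard terms, not from this announcement.

```python
# Rough sketch comparing the two cost models described above.
# Assumption (not from this story): the 5% Unreal royalty applies only to
# per-product gross revenue above a $1 million waiver.
SEAT_PRICE = 1_850           # USD/year per Unreal Engine seat licence
ROYALTY_RATE = 0.05          # Epic's cut of Unreal Engine game revenue
ROYALTY_WAIVER = 1_000_000   # assumed per-product revenue waiver

def unreal_game_cost(gross_revenue: float, seats: int) -> float:
    """Seat licences plus the 5% royalty on revenue above the assumed waiver."""
    return seats * SEAT_PRICE + ROYALTY_RATE * max(0.0, gross_revenue - ROYALTY_WAIVER)

def other_engine_cost(seats: int) -> float:
    """MetaHumans in a Unity or Godot game: seat licences only, no Epic royalty."""
    return seats * SEAT_PRICE

print(unreal_game_cost(5_000_000, 10))  # 218500.0
print(other_engine_cost(10))            # 18500.0
```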

    The free MetaHuman for Maya plugin lets you edit MetaHumans with Maya’s native tools.

    New plugins streamline editing MetaHumans in Maya and Houdini

Last but not least, Epic Games has released new free add-ons intended to streamline the process of editing MetaHumans in other DCC software.
The MetaHuman for Maya plugin makes it possible to manipulate the MetaHuman mesh directly with Maya’s standard mesh-editing and sculpting tools.
    Users can also create MetaHuman-compatible hair grooms using Maya’s XGen toolset, and export them in Alembic format.
    The MetaHuman for Houdini plugin seems to be confined to grooming, with users able to create hairstyles using Houdini’s native tools, and export them in Alembic format.
    The plugins themselves are supplemented by MetaHuman Groom Starter Kits for Maya and Houdini, which provide readymade sample files for generating grooms.
    Price, licensing and system requirements

MetaHuman Creator and MetaHuman Animator are integrated into Unreal Engine 5.6. The Unreal Editor is compatible with Windows 10+, macOS 14.0+ and RHEL/Rocky Linux 8+.
The MetaHuman plugin for Maya is compatible with Maya 2022-2025. The MetaHuman for Houdini plugin is compatible with Houdini 20.5 with SideFX Labs installed.
All of the software is free to use, including for commercial projects, if you earn under $1 million/year. You can find more information on licensing in the story above.
    Read an overview of the changes to the MetaHuman software on Epic Games’ blog
    Download the free MetaHuman for Maya and Houdini plugins and starter kits
    Read Epic Games’ FAQs about the changes to licensing for MetaHumans

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • Blackmagic Design releases DaVinci Resolve 20.0


A recording of Blackmagic Design’s livestream of its announcements for NAB 2025. You can see the new features in DaVinci Resolve 20.0 at 2:13:20 in the video.

    Originally posted on 6 April 2025 for the beta, and updated for the stable release.
Blackmagic Design has updated DaVinci Resolve, its free colour grading, editing and post-production software, and DaVinci Resolve Studio, its $295 commercial edition.
    DaVinci Resolve Studio 20.0 is a major release, adding over 100 tools, including a set of new AI-powered features for video editing and audio production, and is free to all users.
    Below, we’ve rounded up the key features for colorists and effects artists, including a new Chroma Warp tool, a new deep compositing toolset, and full support for multi-layer EXRs.

    The new Chroma Warp tool makes it possible to create looks with intuitive gesture controls.

    Color page: New Chroma Warp, plus updates to the Resolve FX Warper and Magic Mask

For grading, the Color page’s Color Warper gets a new Chroma Warp tool.
It is designed to create looks intuitively, with users selecting a color in the viewer, and dragging to adjust its hue and saturation simultaneously.
    Among the existing tools, the Resolve FX Warper effect gets a new Curves Warp mode, which creates a custom polygon with spline points for finer control when warping images.
    Magic Mask, DaVinci Resolve’s AI-based feature for generating mattes, has been updated, and now operates in a single mode for both people and objects.
Workflow is also more precise: users place points to make selections, then use the paint tools to include or exclude surrounding regions of the image.
    Another key AI-based feature, the Resolve FX Depth Map effect, which automatically generates depth mattes, has been updated to improve speed and accuracy.
    For color management across a pipeline, the software has been updated to ACES 2.0, and OpenColorIO is supported as Resolve FX.
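For pipelines that already manage colour with OpenColorIO, the same kind of transform can be applied in scripts outside Resolve. A minimal sketch using the PyOpenColorIO bindings; the config path and colour space names are illustrative assumptions, not values taken from Resolve:

```python
# Minimal OpenColorIO sketch: convert one RGB value between two colour spaces
# defined in a studio config. Config path and space names are placeholders.
import PyOpenColorIO as ocio

config = ocio.Config.CreateFromFile("/path/to/studio_config.ocio")
processor = config.getProcessor("ACEScg", "sRGB")   # assumed colour space names
cpu = processor.getDefaultCPUProcessor()

pixel = [0.18, 0.18, 0.18]      # scene-linear mid grey
print(cpu.applyRGB(pixel))      # the same value encoded for the display space
```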

    Effects artists get deep compositing support in the integrated Fusion compositing toolset.

    Fusion: new deep compositing toolset

For compositing and effects work, the Fusion page gets support for deep compositing.
Deep compositing, long supported in more VFX-focused apps like Nuke, makes use of depth data encoded in image formats like OpenEXR to control object visibility.
    It simplifies the process of generating and managing holdouts, and generates fewer visual artifacts, particularly when working with motion blur or environment fog.
    Deep images can now be viewed in the Fusion viewer or the 3D view, and there is a new set of nodes to merge, transform, resize, crop, recolor and generate holdouts.
    It is also possible to render deep images from the 3D environment, and export them as deep EXRs via the Fusion saver node.
    Fusion: new vector warping toolset, plus support for 180 VR and multi-layer workflows

Other new features in the Fusion page include a new optical-flow-based vector warping toolset, for image patching and cleanup, and for effects like digital makeup.
There is also a new 360° Dome Light for environment lighting, and support for 180 VR, with a number of key tools updated to support 180° workflows.
    Pipeline improvements include full multi-layer workflows, with all of Fusion’s nodes now able to access each layer within multi-layer EXR or PSD files.
    Fusion also now natively supports Cryptomatte ID matte data in EXR files.
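For context on the multi-layer support mentioned above: in an EXR, a ‘layer’ is simply a dotted prefix on the channel names. A minimal sketch using the OpenEXR Python bindings to list the layers in a file (the file name is a placeholder):

```python
# Minimal sketch: list the layers in a multi-layer EXR by grouping channel names
# on their prefix, e.g. "diffuse.R" and "diffuse.G" both belong to "diffuse".
import OpenEXR

exr = OpenEXR.InputFile("render_multilayer.exr")     # placeholder file name
channel_names = exr.header()["channels"].keys()

layers = sorted({name.rsplit(".", 1)[0] for name in channel_names if "." in name})
print("Layers:", layers)
print("Unprefixed channels:", sorted(n for n in channel_names if "." not in n))
```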
    You can read about the new features on the Fusion Page in our story on Fusion Studio 20.0, the latest version of Blackmagic Design’s standalone compositing app, in which they also feature.

    IntelliScript automatically generates an edit timeline matching a user-provided script.

    Other toolsets: lots of new AI features for video editing and audio production

DaVinci Resolve Studio 20.0 also includes a lot of new AI features powered by the software’s Neural Engine, although primarily in the video editing and audio production toolsets.
The Cut and Edit pages get new AI tools for automatically creating edit timelines matching a user-provided script; generating animated subtitles; editing or extending music to match clip length; and matching tone, level and reverberance for dialogue.
    There are also new tools for recording new voiceovers during editing to match an edit.
    Workflow improvements include a dedicated curve view for keyframe editing; plus a new MultiText tool and updates to the Text+ tool for better control of the layout of on-screen text.
    For audio post work, the Fairlight page gets new AI features for removing silences from raw footage, and automatically balancing an audio mix.
    We don’t cover video or audio editing on CG Channel, but you can find a complete list of changes via the links at the foot of the story.
    Codec and format support

Other key changes include native support for ProRes encoding on Windows and Linux systems as well as macOS.
MV-HEVC encoding is now supported on systems with NVIDIA GPUs, and H.265 4:2:2 encoding and decoding are GPU-accelerated on NVIDIA’s new Blackwell GPUs.
    Again, you can find a full list of changes to codec and platform support via the links at the foot of the story.

    Upcoming features include generative AI-based background extension.

    Future updates: new toolsets for immersive video and AI background generation

Blackmagic Design also announced two upcoming features not present in the initial release.
Artists creating mixed reality content will get a new toolset for ingesting, editing and delivering immersive video for Apple’s Vision Pro headset.
    There will also be a new generative AI feature, Resolve FX AI Set Extender, available via Blackmagic Cloud.
    More details will be announced later this year, but Blackmagic says that it will enable users to generate new backgrounds for shots by entering simple text prompts.
    The video above shows a range of use cases, including extending or adding objects to existing footage, and generating a complete new background behind a foreground object.
    Price, system requirements and release date

DaVinci Resolve 20.0 and DaVinci Resolve Studio 20.0 are available for Windows 10+, Rocky Linux 8.6, and macOS 14.0+. The updates are free to existing users.
New perpetual licenses of the base edition are also free.
The Studio edition, which adds AI features, stereoscopic 3D tools, and collaboration features, costs $295, following a temporary price increase in the US after the introduction of new tariffs.
    Read a full list of new features in DaVinci Resolve 20.0 and DaVinci Resolve Studio 20.0

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • Desktop edition of sculpting app Nomad enters free beta


    A creature created with Nomad by Glen Southern. The new desktop edition of the formerly mobile-only digital sculpting app is now available in free public beta.

Hexanomad – aka developer Stéphane Ginier – has released the new desktop edition of Nomad, its popular digital sculpting app for iPads and Android tablets, in free public beta.
Beta builds are available for Windows and macOS, although they currently only include a limited range of tools from the mobile edition.
    A rounded set of digital sculpting, 3D painting and remeshing features

First released in 2020, Nomad – also often known as Nomad Sculpt – is a popular digital sculpting app for iPads and Android tablets.
It has a familiar set of sculpting brushes, including Clay, Crease, Move, Flatten and Smooth, with support for falloff, alphas and masking.
    A dynamic tessellation system, similar to those of desktop tools like ZBrush, automatically changes the resolution of the part of the mesh being sculpted to accommodate new details.
    Users can also perform a voxel remesh of the sculpt to generate a uniform level of detail, or switch manually between different levels of resolution.
    Nomad features a PBR vertex paint system, making it possible to rough out surface colours; and built-in lighting and post-processing options for viewing models in context.
    Both sculpting and painting are layer-based, making it possible to work non-destructively.
    Completed sculpts can be exported in FBX, OBJ, glTF/GLB, PLY and STL format.
    New desktop edition still early in development, but evolving fast

Nomad already has a web demo version, which makes it possible to test the app inside a web browser, but the new beta answers long-standing user requests for a native desktop version.
It’s still very early in development, so it only features a limited range of tools from the mobile edition – the initial release was limited to the Clay and Move tools – and has known issues with graphics tablets, but new builds are being released regularly.
    Ginier has stated that his aim is to make the desktop edition “identical to the mobile versions”.
    The desktop version should also support Quad Remesher, Exoside’s auto retopology system, which is available as an in-app purchase inside the iPad edition.
    You can follow development in the -beta-desktop channel of the Nomad Sculpt Discord server.
    Price, release date and system requirements

The desktop edition of Nomad is currently in free public beta for Windows 10+ and macOS 12.0+. Beta builds do not expire. Stéphane Ginier hasn’t announced a final release date or price yet.
The mobile edition of Nomad is available for iOS/iPadOS 15.0+ and Android 6.0+. It costs $19.99.

Read more about Nomad on the product website
    Follow the progress of the desktop edition on the Discord server
    Download the latest beta builds of the desktop edition of Nomad

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • Itoosoft releases RailClone 7

    html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" ";

Itoosoft has released RailClone 7, the latest version of its 3ds Max parametric modelling plugin.
The update introduces a new set of Spline Operators for manipulating splines in a wide range of ways, comprising 10 new nodes with 19 separate features.
    Users of the paid Pro edition get RailClone Systems, a new set of readymade procedural assets for generating common architectural structures like windows, curtain walls, and cabling.
    A popular parametric modelling tool for architectural visualisation work

First released in 2010, RailClone makes it possible to generate complex 3D models by defining procedural construction rules using a node-based workflow.
Users can create complex 3D models by repeating simple base meshes, or ‘Segments’, along splines, using Generators to arrange them into arrays, and Operators to control their properties.
    Although the workflow applies to visual effects or motion graphics, the plugin is most commonly used to generate buildings and street furniture for architectural visualisation projects.
    It is compatible with a range of third-party renderers, including Arnold, Corona, FStorm, OctaneRender, Redshift and V-Ray.

    RailClone 7: new multi-purpose Spline Operators

RailClone 7 adds a new category of Spline Operators to the software’s graph editor.
The 10 new nodes include Basic Ops, a new ‘multi-tool’ for performing common operations on splines, like transforming, breaking, combining, flattening or chamfering splines.
    A new Boolean node performs standard Boolean operations on regions bounded by splines.
    Other new nodes include Offset, for creating repeating clones of splines; Catenary, for creating the catenary curves generated by cables hanging under their own weight; and Conform, for projecting splines onto terrain.
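For background, a catenary is the curve that an idealised, uniformly heavy cable settles into when fixed only at its two ends. In its simplest form it can be written as the equation below (general maths, not a statement about how RailClone implements the node):

y = a \cosh\!\left(\frac{x}{a}\right) = \frac{a}{2}\left(e^{x/a} + e^{-x/a}\right)

Here the single parameter a controls how tightly the cable sags, which is why one value is enough to drive the droop of wires or chains between supports.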
    The images in Itoosoft’s blog post show potential use cases ranging from creating road networks to structures like wiring, railings and gantries.
    In addition, a new Draw Splines mode makes it possible to preview the result of spline operations directly in the viewport.
    New version-independent portable file format, and updates to point clouds

Other new features include the Itoosoft Portable file format, making it possible to save RailClone objects in a file format independent of the version of 3ds Max used to create them.
The point cloud display mode has been updated, with each RailClone object now using a fixed number of points, rather than point density being dependent on distance from the camera.
    According to Itoosoft, the new mode is optimized for modern GPUs and versions of 3ds Max.
    There are also a number of smaller workflow and feature updates, especially to macros, array generation, and handling of V-Ray Proxies when rendering with V-Ray GPU or Vantage.

    Pro edition: new RailClone Systems procedural assets

Users of the paid Pro edition also get RailClone Systems, a new set of customizable readymade procedural assets for creating common architectural elements like windows, suspended ceilings, curtain walls, boardwalks, and cabling.
You can see the new assets in the online preview of RailClone’s asset library.
    Price and system requirements

RailClone 7.0 is available for 3ds Max 2022+. Feature support varies between the compatible renderers. New licences start at $275, including one year’s maintenance. There is also a free, feature-limited Lite edition of the plugin.
    Read an overview of the new features in RailClone 7 on iToo Software’s blog
    Read a full list of new features in RailClone in the online release notes.
Visit the RailClone product website (Includes a download link for RailClone Lite at the foot of the page)

Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • Boris FX releases Silhouette 2025

    html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" ";

Boris FX has begun its 2025 updates to Silhouette, its roto and paint software.
Silhouette 2025 adds new AI tools for refining mattes, generating depth maps and fixing glitches in video, and a new 3D Scene node for importing scenes with tracked 3D cameras.
    A VFX-industry standard tool for rotoscoping and roto paint work

First released 20 years ago, and acquired by Boris FX in 2019, Silhouette is a rotoscoping and paint tool.
The software is widely used in production for movie and broadcast visual effects, winning both a Scientific and Technical Academy Award and Engineering Emmy Award in 2019.
    As well as the original standalone edition, Silhouette is available as a plugin, making the toolset available inside Adobe software and OFX-compatible apps like Nuke and DaVinci Resolve.
    New AI tools for refining mattes, generating depth maps, and fixing glitches

Silhouette 2025 introduces new AI-based features for automating common tasks.
The 2024 releases added an AI-based matte workflow, with the Mask ML node automatically generating a mask for a significant object – like a person or animal – in a frame of video, and Matte Assist ML propagating it throughout the rest of the footage.
    They are now joined by Matte Refine ML, a new node for processing hard-edge mattes into “natural, detailed selections”, creating better results when isolating hair or fur.
    In addition, new Depth Map ML and Frame Fixer ML tools generate depth maps from footage, and semi-automatically fix artifacts like scratches, camera flashes, or dropped frames.
    You can read more about them in our story on Continuum 2025.5, Silhouette’s sibling tool.
    New 3D Scene node lets users work with tracked 3D cameras

Other new features in Silhouette 2025 include the new 3D environment.
The 3D Scene node makes it possible to load a scene with a tracked 3D camera in FBX or Alembic format, or to perform a 3D track using Mocha Pro or SynthEyes.
    It is then possible to place cards in 3D space and paint directly on them in the viewer, while a new Unproject/Reproject node allows for fuller composites.
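As general background on what unprojecting and reprojecting involve (standard pinhole-camera maths, not a description of Silhouette’s internals): with a tracked camera of intrinsics K and pose [R | t], a world point X projects to a pixel, and a pixel \tilde{x} can be pushed back out onto a card a distance d along its viewing ray:

x \simeq K\,[R \mid t]\,X, \qquad X \approx C + d\,\frac{R^{\top}K^{-1}\tilde{x}}{\lVert R^{\top}K^{-1}\tilde{x}\rVert}

where C = -R^{\top}t is the camera centre. Paint stored on the card at those unprojected positions can then be reprojected through the same camera on every frame of the tracked shot.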
    Other new features

When using a PowerMesh from Silhouette’s Mocha module to track deforming organic surfaces, it is now possible to paint on undistorted frames using a new PowerMesh Morph node.
In addition, it is now possible to merge custom node setups into a single Compound node, which can be reused between projects or shared with collaborators.
    Prices up since the previous release

The price of the software has also risen since Silhouette 2024.5, although the increases aren’t as large as with some of Boris FX’s other recent product updates.
For the standalone edition, the price of perpetual licenses rises by $200, to $2,195. Subscriptions rise by $15/month, to $165/month, or by $80/year, to $875/year.
For the plugin edition, the price of perpetual licenses rises by $100, to $1,195. Subscriptions rise by $3/month, to $103/month, or by $50/year, to $545/year.
    Price and system requirements

Silhouette 2025 is available as a standalone tool for Windows 10+, Linux and macOS 12.0+, and as a plugin for Adobe software and OFX-compatible tools like Nuke.
Perpetual licences of the standalone cost $2,195; the plugin costs $1,195. Rental costs $165/month or $875/year for the standalone; $103/month or $545/year for the plugin.
    Read a list of new features in Silhouette 2025 on Boris FX’s blog

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • Baga River Generator lets you draw rivers into Blender scenes

    html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" ";

Architectural visualization artist and tools developer Antoine Bagattini has released Baga River Generator, a new tool for adding rivers to 3D scenes in Blender.
The add-on, co-created with artist Laura Mercadal, lets users create detailed rivers simply by drawing freehand paths in the viewport.
    Add good-looking 3D rivers to Blender scenes by drawing them in freehand

To judge from the promo video embedded above, workflow in Baga River Generator is pretty much as simple as selecting a preset, then drawing a path in the viewport.
The add-on then generates a detailed 3D river along the course of the path, complete with water, banks, and surrounding terrain, populated with rocks and plants.
    There are four presets available – a desert environment, and three with surrounding vegetation – and the add-on comes with over 30 different readymade environment assets.
    If you want to create your own looks, the terrain is fully parametric, making it possible to adjust the width and depth of the river channel and height of the surrounding land.
    You can also create or edit scatter layers, to control how the environment assets are distributed.
    There are some limitations in the initial release – the vegetation isn’t animated, and the water doesn’t have any foam or turbulence – but the results in the video look pretty good, and the output can be rendered with both the Cycles and Eevee render engines.
    Price and system requirements

Baga River Generator is compatible with Blender 4.2+. It costs $12.
To install it, you need the GeoPack system in BagaPie, Bagattini’s free Blender modifier, which is now available as an Extension directly inside Blender.
Read more about Baga River Generator on the plugin’s Superhive page

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • Discover how to create Lightning & Electricity Effects

    html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" ";

The Gnomon Workshop has released Lightning & Electricity Effects, a guide to generalist FX workflows in Houdini, recorded by VFX Artist Josh Harrison.
The workshop provides five hours of video training in Houdini, Maya and Nuke.
    Create energy effects using a robust CG generalist approach

In the workshop, Harrison sets out his entire workflow for creating electricity and lightning effects using Houdini, following a professional CG generalist approach.
The training takes viewers through the process step by step, from project organization to final-frame rendering, integrating a custom character into a Houdini effects setup.
    Harrison begins by setting out how to prepare character models and animation for import into Houdini, and provides simple workarounds for rigging characters in Maya.
    Moving to Houdini, he then demonstrates how to set up materials and shaders for the Mantra render engine, then focuses on creating the lightning effects.
Having set out fundamental concepts and approaches to creating electricity, Harrison moves on to more advanced techniques for creating secondary and tertiary effects elements, discussing how to use particle and Pyro simulations to add detail to the scene.
    The final chapters of the workshop cover how to convert meshes into renderable light geometry, how to create custom AOVs in Mantra, and how to use those render passes to composite the final shots in Nuke.
    As well as the training videos, viewers of the workshop can download Harrison’s Alembic animation cache. The workshop also uses Ryan Reos’s Spartan Hoplite character and Truong’s inexpensive Mike Freeman rig.
    About the artist

Josh Harrison is a freelance Senior FX Artist and Senior 3D Generalist. He began his career in film at Luma Pictures, working on movies including Godzilla vs. Kong.
He then moved into TV and commercials at MPC and The Mill, working on series including American Horror Story and House of the Dragon.
    Pricing and availability

Lightning & Electricity Effects is available via a subscription to The Gnomon Workshop, which provides access to over 300 tutorials.
Subscriptions cost $57/month or $519/year. Free trials are available.
Read more about Lightning & Electricity Effects on The Gnomon Workshop’s website

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
    Full disclosure: CG Channel is owned by Gnomon.
  • Boris FX adds new AI tools to Continuum 2025.5

    html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" ";

Boris FX has released Continuum 2025.5, the latest version of its set of effects plugins for compositing and editing software.
The update adds an AI-based masking and tracking system, new AI tools for generating depth and ST maps and fixing bad frames, and updates Particle Illusion, Title Studio and the FX Editor.
    The prices of both perpetual licenses and subscriptions to Continuum have also risen since the previous release, particularly for OFX host applications like DaVinci Resolve and Nuke.
    More new AI-based tools for common editing and compositing tasks

Boris FX began to add AI features to Continuum in its 2024 releases, beginning with machine learning-based filters for denoising, upresing and retiming video.
In Continuum 2025.0, they were joined by machine-learning-based filters for generating motion blur, and obscuring car license plates in footage.

    New AI-based mask-generation and tracking system

To that, Continuum 2025.5 adds the new AI-based object masking and tracking toolset from Mocha, Boris FX’s planar tracking software, the core tech from which is integrated into Sapphire.
The Object Brush ML feature makes it possible to isolate objects in a single frame of footage simply by clicking inside them, with Continuum automatically generating a corresponding mask.
    The selection can be refined by selecting extra parts of the image to include or exclude, or by painting areas in or out manually.
    The Matte Assist ML feature automatically propagates a mask throughout a shot, tracking the object selected and generating animated mattes matching its changing outline.

    New AI tools for generating depth and ST maps, and fixing glitches in footage

The release also adds three other new AI tools: BCC+ Depth Map ML, BCC+ ST Map, and BCC+ Frame Fixer ML.
BCC+ Depth Map ML automatically generates depth maps from video footage, based on objects’ distance from the camera, making it easier to add fog and haze, or fake rack focus.
    Users have a choice of two ML models, one tuned for performance, and one for detail.
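To make the fog-and-haze use case concrete, here is a minimal sketch of how a depth pass is typically used in compositing, assuming NumPy float images normalised to the 0–1 range; the function and parameter names are illustrative, not Continuum controls:

```python
# Hedged sketch: blend footage toward a fog colour with distance,
# driven by a per-pixel depth pass (0 = near camera, 1 = far plane).
import numpy as np

def add_haze(frame: np.ndarray, depth: np.ndarray,
             fog_color=(0.8, 0.85, 0.9), density: float = 1.5) -> np.ndarray:
    """Apply exponential fog to an (H, W, 3) frame using an (H, W) depth map."""
    fog_amount = 1.0 - np.exp(-density * depth)   # per-pixel fog weight
    fog_amount = fog_amount[..., None]            # broadcast over RGB channels
    return frame * (1.0 - fog_amount) + np.asarray(fog_color) * fog_amount
```

Rack-focus fakes work along similar lines, with the depth value driving a per-pixel blur radius instead of a colour blend.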
    BCC+ ST Map generates ST Maps, often used to encode lens distortion data, making it easier to distort or undistort video footage, or to convert it from rectilinear to 360° formats.
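An ST map is simply an image whose red and green channels store, for every output pixel, the normalised source coordinate to sample, so applying one is a per-pixel lookup. The NumPy sketch below illustrates the idea under those assumptions; the names are illustrative, not Continuum API, and tools differ on whether the T coordinate runs bottom-up:

```python
# Hedged sketch: warp a frame using an ST map (nearest-neighbour lookup).
import numpy as np

def apply_st_map(frame: np.ndarray, st_map: np.ndarray) -> np.ndarray:
    """Remap an (H, W, 3) frame using an (H, W, >=2) ST map whose R/G
    channels hold normalised (s, t) source coordinates per destination pixel."""
    h, w = frame.shape[:2]
    # Convert normalised coordinates to clamped integer pixel indices.
    xs = np.clip(np.rint(st_map[..., 0] * (w - 1)).astype(int), 0, w - 1)
    ys = np.clip(np.rint(st_map[..., 1] * (h - 1)).astype(int), 0, h - 1)
    return frame[ys, xs]
```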
    BCC+ Frame Fixer ML automatically fixes ‘bad’ frames in footage: for example, those with scratches or tape degradation, or with flash photography changing the lighting of the subject.
    You have to identify the bad frames manually, but processing is automatic.
    According to Boris FX, the filter can also be used to fix dropped frames, and even to hide jump cuts.
    Updates to Witness Protection ML and Retimer ML

Updates to existing AI tools include a Smoothing Level control for the BCC+ Witness Protection ML filter, used to obscure people’s faces automatically in source footage.
BCC+ Retimer ML now supports retiming footage with variable frame rates, although only in Adobe host apps.
    The AI models used by the tools also now load faster on first use in a work session.
Improvements to the FX Editor, Particle Illusion and Title Studio

    Of the non-AI features, the FX Editor, for editing effects and presets, gets performance and workflow improvements, with “2x speed improvements” when playing back 4K footage. Particle generator Particle Illusion gets workflow improvements including the option to save camera animation presets, and an updated emitter library.
    The 8,000 x 8,000px limit in 3D titling plugin Title Studio has been removed, making it possible to work up to the maximum resolution supported by the host application.
    There are also over 100 new presets for filters, and support for a “full set” of blend modes in BCC+ Vignette.
    Prices up, particularly for OFX licenses

Boris FX has also raised the price of the plugins since the release of Continuum 2025.0.
The software is priced according to host application, so the exact figures vary, but for After Effects and Premiere Pro, the cost of a perpetual license rises by $100, to $1,095. For OFX applications, including DaVinci Resolve and Nuke, the cost of a perpetual license rises by $400, also to $1,095. For all available hosts, the price rises by $200, to $2,195.
The price of subscriptions is also up, with After Effects and Premiere Pro subscriptions rising by $11/month, to $48/month; and by $30/year, to $325/year.
OFX subscriptions rise by $23/month, to $48/month; and by $130/year, to $325/year. For all hosts, subscriptions rise by $25/month, to $112/month, and by $70/year, to $765/year.
    Pricing and system requirements

Continuum 2025.5 is compatible with a range of compositing and editing software, including After Effects, DaVinci Resolve and Nuke, on Windows 10+ or macOS 10.15+.
It is priced according to host application, with new perpetual licences costing from $365 to $2,195. Subscriptions cost from $215/year to $765/year.
    Read an overview of the new features in Continuum 2025.5 on Boris FX’s website
Read a full list of new features in Continuum 2025.5 in the release notes (Adobe edition)

Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
    #boris #adds #new #tools #continuum
    Boris FX adds new AI tools to Continuum 2025.5
    html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "; Boris FX has released Continuum 2025.5, the latest version of its set of effects plugins for compositing and editing software.The update adds an AI-based masking and tracking system, new AI tools for generating depth and ST maps and fixing bad frames, and updates Particle Illusion, Title Studio and the FX Editor. The prices of both perpetual licenses and subscriptions to Continuum have also risen since the previous release, particularly for OFX host applications like DaVinci Resolve and Nuke. More new AI-based tools for common editing and compositing tasks Boris FX began to add AI features to Continuum in its 2024 releases, beginning with machine learning-based filters for denoising, upresing and retiming video.In Continuum 2025.0, they were joined by machine-learning-based filters for generating motion blur, and obscuring car license plates in footage. New AI-based mask-generation and tracking system To that, Continuum 2025.5 adds the new AI-based object masking and tracking toolset from Mocha, Boris FX’s planar tracking software, the core tech from which is integrated into Sapphire.The Object Brush ML feature makes it possible to isolate objects in a single frame of footage simply by clicking inside them, with Continuum automatically generating a corresponding mask. The selection can be refined by selecting extra parts of the image to include or exclude, or by painting areas in or out manually. The Matte Assist ML feature automatically propagates a mask throughout a shot, tracking the object selected and generating animated mattes matching its changing outline. New AI tools for generating depth and ST maps, and fixing glitches in footage The release also adds three other new AI tools: BCC+ Depth Map ML, BCC+ ST Map, and BCC+ Frame Fixer ML.BCC+ Depth Map ML automatically generates depth maps from video footage, based on objects’ distance from the camera, making it easier to add fog and haze, or fake rack focus. Users have a choice of two ML models, one tuned for performance, and one for detail. BCC+ ST Map generates ST Maps, often used to encode lens distortion data, making it easier to distort or undistort video footage, or to convert it from rectilinear to 360° formats. BCC+ Frame Fixer ML automatically fixes ‘bad’ frames in footage: for example, those with scratches or tape degradation, or with flash photography changing the lighting of the subject. You have to identify the bad frames manually, but processing is automatic. According to Boris FX, the filter can also be used to fix dropped frames, and even to hide jump cuts. Updates to Witness Protection ML and Retimer ML Updates to existing AI tools include a Smoothing Level control for the BCC+ Witness Protection ML filter, used to obscure people’s faces automatically in source footage.BCC+ Retimer ML now supports retiming footage with variable frame rates, although only in Adobe host apps. The AI models used by the tools also now load faster on first use in a work session. Improvements to the FX Editor, Particle Illusion and Title Generator Of the non-AI features, the FX Editor, for editing effects and presets, gets performance and workflow improvements, with “2x speed improvements” when playing back 4K footage. Particle generator Particle Illusion gets workflow improvements including the option to save camera animation presets, and an updated emitter library. 
The 8,000 x 8,000px limit in 3D titling plugin Title Studio has been removed, making it possible to work up to the maximum resolution supported by the host application. There are also over 100 new presets for filters, and support for a “full set” of blend modes in BCC+ Vignette. Prices up, particularly for OFX licenses Boris FX has also raised the price of the plugins since the release of Continuum 2025.0.The software is priced according to host application, so the exact figures vary, but for After Effects and Premiere Pro, the cost of a perpetual license rises by to For OFX applications, including DaVinci Resolve and Nuke, the cost of a perpetual license rises by also to For all available hosts, the price rises by to The price of subscriptions is also up, with After Effects and Premiere Pro subscriptions rising by /month, to /month; and by /year, to /year. OFX subscriptions rise by /month, to /month; and by /year, to /year. For all hosts, subscriptions rise by /month, to /month, and by /year, to /year. Pricing and system requirements Continuum 2025.5 is compatible with a range of compositing and editing software, including After Effects, DaVinci Resolve and Nuke, on Windows 10+ or macOS 10.15+.It is priced according to host application, with new perpetual licences costing from to Subscriptions cost from /year to /year. Read an overview of the new features in Continuum 2025.5 on Boris FX’s website Read a full list of new features in Continuum 2025.5 in the release notesHave your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects. #boris #adds #new #tools #continuum
    Boris FX adds new AI tools to Continuum 2025.5
Boris FX has released Continuum 2025.5, the latest version of its set of effects plugins for compositing and editing software. The update adds an AI-based masking and tracking system, new AI tools for generating depth and ST maps and fixing bad frames, and updates Particle Illusion, Title Studio and the FX Editor.
The prices of both perpetual licenses and subscriptions to Continuum have also risen since the previous release, particularly for OFX host applications like DaVinci Resolve and Nuke.
More new AI-based tools for common editing and compositing tasks

Boris FX began to add AI features to Continuum in its 2024 releases, beginning with machine learning-based filters for denoising, upresing and retiming video. In Continuum 2025.0, they were joined by machine learning-based filters for generating motion blur, and obscuring car license plates in footage.
New AI-based mask-generation and tracking system

To that, Continuum 2025.5 adds the new AI-based object masking and tracking toolset from Mocha, Boris FX’s planar tracking software, the core tech from which is integrated into Sapphire.
The Object Brush ML feature makes it possible to isolate objects in a single frame of footage simply by clicking inside them, with Continuum automatically generating a corresponding mask. The selection can be refined by selecting extra parts of the image to include or exclude, or by painting areas in or out manually.
The Matte Assist ML feature automatically propagates a mask throughout a shot, tracking the object selected and generating animated mattes matching its changing outline.
New AI tools for generating depth and ST maps, and fixing glitches in footage

The release also adds three other new AI tools: BCC+ Depth Map ML, BCC+ ST Map, and BCC+ Frame Fixer ML.
BCC+ Depth Map ML automatically generates depth maps from video footage, based on objects’ distance from the camera, making it easier to add fog and haze, or fake rack focus. Users have a choice of two ML models, one tuned for performance, and one for detail.
BCC+ ST Map generates ST maps, often used to encode lens distortion data, making it easier to distort or undistort video footage, or to convert it from rectilinear to 360° formats.
BCC+ Frame Fixer ML automatically fixes ‘bad’ frames in footage: for example, those with scratches or tape degradation, or with flash photography changing the lighting of the subject. You have to identify the bad frames manually, but processing is automatic. According to Boris FX, the filter can also be used to fix dropped frames, and even to hide jump cuts.
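To unpack the BCC+ ST Map description above: an ST map is an image whose red and green channels store, for every output pixel, the normalized (U, V) position to sample from in the source frame, so applying one is a per-pixel remap. The sketch below is not Boris FX code; it is a minimal Python illustration using OpenCV and NumPy, with a synthetic map standing in for one exported by a tool like BCC+ ST Map.

    import cv2
    import numpy as np

    # Stand-in for a video frame (H x W x 3, float32).
    h, w = 480, 640
    frame = np.random.rand(h, w, 3).astype(np.float32)

    # An ST map stores, per output pixel, the normalized (U, V) coordinate to
    # sample from in the source image: U in the red channel, V in the green.
    u, v = np.meshgrid(np.linspace(0.0, 1.0, w), np.linspace(0.0, 1.0, h))

    # Bake a simple horizontal sine wobble into the map, standing in for
    # lens distortion data exported from a compositing plugin.
    u = np.clip(u + 0.01 * np.sin(v * 20.0), 0.0, 1.0)

    # Convert normalized coordinates to pixel coordinates and remap.
    map_x = (u * (w - 1)).astype(np.float32)
    map_y = (v * (h - 1)).astype(np.float32)
    warped = cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)

Undistorting footage works the same way, just with a map that encodes the inverse transform.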
Updates to Witness Protection ML and Retimer ML

Updates to existing AI tools include a Smoothing Level control for the BCC+ Witness Protection ML filter, used to obscure people’s faces automatically in source footage. BCC+ Retimer ML now supports retiming footage with variable frame rates, although only in Adobe host apps. The AI models used by the tools also now load faster on first use in a work session.
Improvements to the FX Editor, Particle Illusion and Title Studio

Of the non-AI features, the FX Editor, for editing effects and presets, gets performance and workflow improvements, with “2x speed improvements” when playing back 4K footage.
Particle generator Particle Illusion gets workflow improvements including the option to save camera animation presets, and an updated emitter library.
The 8,000 x 8,000px limit in 3D titling plugin Title Studio has been removed, making it possible to work up to the maximum resolution supported by the host application.
There are also over 100 new presets for filters, and support for a “full set” of blend modes in BCC+ Vignette.
Prices up, particularly for OFX licenses

Boris FX has also raised the price of the plugins since the release of Continuum 2025.0. The software is priced according to host application, so the exact figures vary, but for After Effects and Premiere Pro, the cost of a perpetual license rises by $100, to $1,095. For OFX applications, including DaVinci Resolve and Nuke, the cost of a perpetual license rises by $400, also to $1,095. For all available hosts, the price rises by $200, to $2,195.
The price of subscriptions is also up, with After Effects and Premiere Pro subscriptions rising by $11/month, to $48/month; and by $30/year, to $325/year. OFX subscriptions rise by $23/month, to $48/month; and by $130/year, to $325/year. For all hosts, subscriptions rise by $25/month, to $112/month, and by $70/year, to $765/year.
Pricing and system requirements

Continuum 2025.5 is compatible with a range of compositing and editing software, including After Effects, DaVinci Resolve and Nuke, on Windows 10+ or macOS 10.15+. It is priced according to host application, with new perpetual licences costing from $365 to $2,195. Subscriptions cost from $215/year to $765/year.
Read an overview of the new features in Continuum 2025.5 on Boris FX’s website
Read a full list of new features in Continuum 2025.5 in the release notes (Adobe edition)

Have your say on this story by following CG Channel on Facebook, Instagram and X (formerly Twitter). As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • Get 50+ free assets for building a city scene in Unreal Engine


Epic Games has made a set of over 50 assets for creating a gritty urban alley environment available to download for free from Fab, its new online marketplace. Games environment artist Emran Bayarti’s modular Downtown Alley asset pack is part of the latest set of time-limited free content released on Fab, and will be free until 3 June 2025.
    Over 50 modular 3D assets for building a city alley environment

The Downtown Alley pack is a set of assets for building a back alley from a contemporary city – it could be anywhere from the 1980s onwards, we think – with English-language neon signs. It comprises over 50 meshes, with LODs, of up to 33,000 polygons, with textures up to 4,096 x 4,096px in resolution.
The assets include modular wall parts, ground parts and fire escapes, plus objects for set dressing like trash, dumpsters and smaller trash cans.
    System requirements and availability

The Downtown Alley is available as an asset package for Unreal Engine 4.19+ and 5.0+ under a Fab Standard license. The terms of the Fab Standard license permit content to be used in “any engine or tool”, including for commercial projects, subject to the restrictions set out in the Fab EULA.
Get Emran Bayarti’s Downtown Alley modular asset kit from Fab (Available free until 3 June 2025)

Have your say on this story by following CG Channel on Facebook, Instagram and X (formerly Twitter). As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • LuxCoreRender and BlendLuxCore 2.10 have been released


    A recent render created using LuxCoreRender shared on the open-source renderer’s Instagram account.

The LuxCoreRender team has released version 2.10 of the open-source physically based renderer and BlendLuxCore, its Blender integration plugin. The update – the first stable release in over three years – puts development “back on track”, adding support for the Blender 4 releases, and for Apple Silicon Macs.
    A hybrid CPU/GPU unbiased render engine, formerly known as LuxRender

    Formerly known as LuxRender, and rebooted in 2018, LuxCoreRender is an alternative to Blender’s native Cycles renderer, particularly for product and architectural visualization. It’s a physically based render engine with a range of production features and, as of LuxCoreRender 2.0, supports hybrid rendering on CPUs and GPUs.
    Now compatible with Blender 4.x, and available for Apple Silicon Macs

LuxCoreRender 2.10 is the first stable version of the software in over three years: while there have been some experimental updates, the last stable release was LuxCoreRender 2.6. Development then stalled after several of the original key developers left the project.
    According to the release announcement, the 2.10 release is mainly intended to put development “back on track”.
    While it doesn’t introduce major new features, it makes the software compatible with the current Blender 4.x releases, and makes it “ready for new development work”.
LuxCore Python bindings are now available as wheels on PyPI, making it easier for third-party developers to integrate the renderer into their software.
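Assuming the wheel keeps the module layout of earlier binary releases of the LuxCore API – the package name pyluxcore and the top-level Init() and Version() calls below come from those releases, not from the 2.10 announcement itself – a minimal smoke test might look like this:

    # pip install pyluxcore   (package name assumed; check the release announcement)
    import pyluxcore

    # The library must be initialized before any other API call.
    pyluxcore.Init()
    print("LuxCore version:", pyluxcore.Version())

Previously, developers had to ship the compiled bindings themselves; installing them from PyPI removes that step.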
The update also makes LuxCoreRender available for a greater range of platforms: as well as Windows, Linux and Intel Macs, it now runs on current Macs with Apple Silicon processors.
    For GPU acceleration, the software still uses CUDA on NVIDIA hardware, and OpenCL elsewhere: it doesn’t currently use Apple’s Metal API when running on macOS.
    License and system requirements

    LuxCoreRender 2.10 is available under an Apache 2.0 licence for Windows, Linux and macOS. BlendLuxCore 2.10 is compatible with Blender 4.2 and 4.3.
    The experimental 3ds Max integration plugin, MaxToLux, has not been updated, and is no longer available on the downloads page of the LuxCoreRender website.
Read more about the new features in LuxCoreRender 2.10 in the release announcement
    Download LuxCoreRender and BlendLuxCore

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • Get free Maya and UE5 rigging tools mGear 5.0 and ueGear 1.0


The mGear development team has released mGear 5.0, the latest version of the open-source character rigging framework for Maya, and ueGear 1.0, its new bridge to Unreal Engine. ueGear provides an easy way to translate character rigs created in Maya using mGear to Unreal Engine 5, automatically regenerating them inside the game engine.

    A versatile framework for creating modular character rigs

First released in 2015, mGear was originally based on Gear, Blur Studio technical animation supervisor Jeremie Passerin’s Softimage rigging framework, although it isn’t an exact copy. It provides artists with a library of customisable rig components for different body parts, and is intended to generate “an infinite variety of rig combinations … without programming knowledge”.
As well as Shifter, the rigging framework, mGear includes a customizable Anim Picker interface, an RBF Manager, and shot-sculpting tool Crank.
    The toolset has been used by a range of VFX, animation and game development studios: the user reel includes projects by El Ranchito, Mac Guff, Unseen, and Pendulo Studios.
    mGear is free and open-source, but lead developer Miquel Campos’s company, mcsGear, provides commercial support and rigging services, which fund development.
    New tool ueGear automatically recreates mGear Maya rigs inside Unreal Engine

The main new feature in the mGear 5.0 release is a separate tool, ueGear, a new “bridge connecting Maya and Unreal Engine pipelines”. First released in beta last year, ueGear enables artists to build animation rigs in Maya using mGear, then have ueGear regenerate them inside Unreal Engine, using the native Control Rig.
    The tool is designed for both games and offline animation workflows, and makes it possible to transfer animations, cameras, and sequences.
    However, the relationship between the Maya and Unreal rigs is not yet perfectly one-to-one, and the workflow does not yet support the Modular Control Rig introduced in Unreal Engine 5.4.
    PyMEL replaced by new custom PyMaya wrapper

In mGear itself, the main changes are under the hood, removing the core framework’s dependency on PyMEL. An open-source library, PyMEL provides a bridge between Python scripting and Maya’s native MEL scripting language, and was widely used by rigging and animation tools.
However, it is not developed or maintained by Autodesk itself, and is no longer installed by default with Maya, which creates installation and support issues for studios.
mGear 5.0 replaces PyMEL with the team’s own custom wrapper, PyMaya.
    It isn’t a full replacement for PyMEL, and objects created with PyMaya are not compatible with PyMEL, but it simplifies dependency management, and should maintain compatibility across versions of Maya.
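To make the change concrete, the sketch below contrasts the two scripting styles affected: plain maya.cmds, which returns node names as strings, and PyMEL, which wraps them in Python objects. It is generic Maya scripting rather than mGear or PyMaya code, and needs to be run inside Maya’s Python interpreter.

    # Run inside Maya's Script Editor or mayapy.
    import maya.cmds as cmds

    # maya.cmds returns plain strings naming the created nodes.
    result = cmds.polySphere(radius=2.0)   # e.g. ['pSphere1', 'polySphere1']
    cmds.setAttr(result[0] + ".translateY", 5.0)

    # PyMEL wraps the same command in node objects with attribute access.
    import pymel.core as pm

    sphere = pm.polySphere(radius=2.0)[0]  # a PyNode, not a string
    sphere.translateY.set(5.0)

Because PyMEL no longer ships with Maya by default, tools that depend on it have to install it separately – the dependency-management issue PyMaya is intended to remove.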
    License and system requirements

mGear is compatible with Maya 2025 and 2026, running on Windows, macOS and Linux. As of mGear 5.07, some functionality is only available on Windows when using Maya 2026. ueGear 1.0 is compatible with Unreal Engine 5.3+.
    The source code for both applications is available under an open-source MIT licence, and the compiled binaries are free downloads.
Read a list of new features in mGear in the online changelog (Not updated for mGear 5.0 at the time of writing)
Watch video tutorials on the mGear YouTube channel
    Download mGear from GitHub

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • Maxon discontinues ZBrushCore and ZBrushCoreMini


Maxon is discontinuing ZBrushCore and ZBrushCoreMini, the commercial and free cut-down editions of ZBrush, its digital sculpting software. ZBrushCoreMini downloads will be removed on 30 May 2025, and sales of new ZBrushCore subscriptions will be ended, although existing subs can be renewed until 30 September 2025.
    In its online FAQs, Maxon also teases a new “freemium” version of ZBrush to “align the desktop and iPad versions”.
    ZBrushCore and ZBrushCoreMini: the old cut-down desktop editions of ZBrush

First released in 2016, ZBrushCore is a cut-down edition of the software aimed at “users who are new to 3D, illustrators, students and 3D printing enthusiasts”. ZBrushCoreMini, a free non-commercial edition, followed it in 2020.
    You can see a feature comparison table for ZBrushCoreMini and ZBrushCore on Maxon’s website, and find more details on the ZBrushCore and ZBrushCoreMini product pages.
    Neither has seen many updates since Maxon acquired original ZBrush developer Pixologic in 2022: the most recent version of ZBrushCore is the 2021.6 release.
    Updates and downloads to stop on 30 May 2025; subscription renewals on 30 September

Both editions will now enter ‘limited maintenance mode’ on 30 May 2025, so neither will receive any updates or bugfixes, and ZBrushCoreMini will no longer be available for download. It will also no longer be possible to take out a new ZBrushCore subscription, although existing subscribers will be able to renew monthly subscriptions until 30 September 2025, and will receive active support until that date.
    New ‘freemium’ edition of ZBrush coming soon

Maxon doesn’t give a reason for discontinuing ZBrushCore and ZBrushCoreMini in its FAQs, but it does mention the new iPad edition of ZBrush, which fulfils the role of a less expensive, less fully featured alternative to the desktop version of the software, with a free base edition. In the FAQs, Maxon also notes that as part of its “continued efforts to align the Desktop and iPad versions, a new Freemium version of ZBrush Desktop is on its way”.
    Price, system requirements and dates

Maxon doesn’t list system requirements or prices for ZBrushCore on its website, but on release, ZBrushCore 2021 was compatible with Windows 7+ and Mac OS X 10.10+, and cost $9.95/month. ZBrushCoreMini is available free until 30 May 2025. On the release of ZBrushCoreMini 2021, it was compatible with 64-bit Windows 7+ and Mac OS X 10.11+.
ZBrush for iPad is compatible with iPadOS 17.0+. It requires an iPad with an A12 Bionic chip or later. The base app is free, but is export-disabled. Access to the full feature set requires a paid subscription, which costs $9.99/month or $89.99/year.
The desktop edition of ZBrush is compatible with Windows 10+ and macOS 11.5+. It is rental-only, with subscriptions costing $49/month or $399/year, also including the iPad edition. Maxon hasn’t announced a release date for the new freemium edition.
    Read Maxon’s online FAQs about discontinuing ZBrushCore and ZBrushCoreMini
Download ZBrushCoreMini for free until 30 May 2025 (Requires a free Maxon account)

Have your say on this story by following CG Channel on Facebook, Instagram and X (formerly Twitter). As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
  • Cubebrush bans AI content from its online marketplace


Online CG asset marketplace Cubebrush has changed its upload policy to require creators to identify any products that have been created using AI tools. AI-generated content is still permitted on the site, if correctly labeled, but will automatically be hidden in searches and on sellers’ online stores, making it “essentially invisible”.
    In a blog post explaining the change, Cubebrush describes its ultimate aim as “to have an AI-free marketplace”.
    Why has Cubebrush changed its stance on AI-generated content?

Founded in 2014, Cubebrush is now one of the most popular online marketplaces for stock content for CG work, including 3D models, materials, tools and 2D images. At the time of writing, there are just under 150,000 assets for sale.
    Although most are created in the conventional way, by hand, an “increasing number” of new products offered for sale have been created using AI tools.
According to Cubebrush, while it has “consistently denied new creators with [AI-generated] portfolios applying to open new stores … a small number of existing creators on our platform have started uploading A.I. content”.
    How has Cubebrush changed its handling of AI-generated content?

Previously, Cubebrush staff manually hid AI-generated assets being offered for sale on the marketplace, so that they did not appear in search results or in creators’ stores. The site has now changed its upload policy to require sellers to tag assets that have been created using AI.
    Any tagged assets will be hidden automatically, and any assets found to have been mislabeled will now be deleted.
    Stores that “consistently fail to properly identify their A.I. content” will face permanent bans from the site.
    Is AI-generated content forbidden on Cubebrush?

The policy is interesting because, although Cubebrush’s blog post states directly that “A.I. content is NOT ACCEPTED on the marketplace”, it is still permitted on the site itself. Creators can upload AI-generated content, and if labeled correctly, it will not be deleted, but it will only be accessible through a direct link.
    While sellers could promote those direct links themselves, doing so would remove one of the key benefits of selling through an online marketplace: that they receive more traffic than most sellers’ own websites or social media profiles.
    According to Cubebrush, AI-generated content will become “essentially invisible”.
    How does Cubebrush’s approach compare to other online marketplaces?

Other online asset marketplaces have responded to the growth of AI-generated content, and artists’ differing responses to it, in different ways. Some view generative AI as a potentially useful tool, or a money-making opportunity; others see it as devaluing the role of the artist, or even as an existential threat.
    Cubebrush’s new policy places it towards the latter end of the spectrum.
    Of the other popular marketplaces, Flipped Normals‘ policy is straightforward: AI-generated content is not permitted, even if offered for free, and will be removed.
    Fab and ArtStation Marketplace, both owned by Epic Games, permit AI-generated content, if it is tagged ‘CreatedWithAI’, but do not license existing assets for use training AI models.
    TurboSquid, owned by Shutterstock/Getty Images, does not permit sale of AI-generated content, but it does license assets on the site for use training AI models, under an opt-out policy, and is also launching its own generative AI tool.
    CGTrader‘s terms of use do not explicitly forbid sale of AI-generated content, although sellers are required to warrant that assets are their own “original work”. CGTrader licenses assets on the site for training AI models, under an opt-out policy; advertises assets from the site as part of a licensable AI-training dataset; and operates its own generative AI tool.
    Read Cubebrush’s blog post announcing its current policy on AI-generated content

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
    Full disclosure: CG Channel is owned by Gnomon.
  • Master Character Creation for Production


The Gnomon Workshop has released Character Creation for Production, a detailed guide to creating 3D characters recorded by Character Artist Antonio Mossucca. The intermediate-level workshop provides over eight and a half hours of video training in ZBrush, Maya, Marvelous Designer and Substance 3D Painter.
    Improve your character design and look dev skills with this intermediate workshop

In the workshop, Mossucca reveals his workflow for creating a VFX or animation production-quality character, from initial 2D concept to final 3D render. He begins by explaining his approach to ideation and gathering reference material, before sculpting the character – an anthropomorphic frog – in ZBrush.
Mossucca then covers how to create 3D clothing in Marvelous Designer, texture the character in Substance 3D Painter, and create portfolio renders using Maya and Arnold.
    As well as the tutorial videos, viewers of the workshop can download project files including Mossucca’s ZBrush user interface and shortcuts, his ACEScg LUT files to use in Photoshop, plus scripts for use in XGen and mGear for character rigging.
    About the artist

Antonio Mossucca is a Character Artist who has worked for VFX and animation studios including Cinesite, Framestore, Scanline VFX, Jellyfish Pictures and Axis Studios. His credits include Planet Dinosaur, Avengers: Endgame and Infinity War, The Commuter, The Adam Project, Black Adam, and Aquaman and the Lost Kingdom.
    Pricing and availability

Character Creation for Production is available via a subscription to The Gnomon Workshop, which provides access to over 300 tutorials. Subscriptions cost $57/month or $519/year. Free trials are available.
Read more about Character Creation for Production on The Gnomon Workshop’s website

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
    Full disclosure: CG Channel is owned by Gnomon.