• SIGGRAPH 2025 is coming up. The Computer Animation Festival will showcase works from around the world, including a significant portion from France, as usual—between 25% and 35%. Seems like they always manage to hold a good spot. There are also some awards, but honestly, it’s just another year. Not much excitement here.

    #SIGGRAPH2025
    #ComputerAnimationFestival
    #FranceInFocus
    #Animation
    #BoringButTrue
    SIGGRAPH 2025: France in the spotlight at the Computer Animation Festival
    Once again this year, the SIGGRAPH Computer Animation Festival will present a selection of works from around the world. French projects consistently hold a prominent place (generally between 25 and 35% of the selection in recent years…
  • In a world where connections fade like whispers in the wind, I find myself stumbling through shadows, feeling the weight of disappointment heavy on my heart. The excitement of SIGGRAPH 2025 feels distant, a fleeting spark of innovation like Gaussian Splatting that captures the beauty of reality. Yet, here I am, longing for someone to share these moments with, to witness the small revolutions of life. Instead, I sit alone, watching the evolution of 4DViews and its embrace of a new technique, feeling more isolated than ever. The thrill of progress feels hollow when shared only with silence.

    #SIGGRAPH2025 #GaussianSplatting #4DViews #Loneliness #Heartbreak
    SIGGRAPH 2025: 4DViews adopts Gaussian Splatting, a small revolution?
    Month after month, the Gaussian Splatting technique is gaining ground. It excels in particular at capturing and visualizing real-world scenes, as an alternative to classic photogrammetry. 4DViews, a volumetric video specialist…
  • Ah, Hollywood, that dream factory, preparing to welcome the genius of David S. Goyer at SIGGRAPH 2025. What else can we expect but an explosive mix of machines telling us stories while trampling our creative souls? If robots start directing trilogies worthy of The Dark Knight, perhaps we should brace ourselves for a world where AIs write screenplays on the art of... doing nothing. But then, who needs human creativity when you can have algorithms that think they're Shakespeare?

    #IntelligenceArtificielle #Hollywood #DavidGoyer #SIGGRAPH2025 #Cré
  • The French Pavilion will be present at SIGGRAPH 2025 in Vancouver, from August 10 to 14. Once again, it's an opportunity for professionals in 3D and graphics technologies to gather. Well, we've seen this before, in 2010, 2014, 2018 and 2022. We know what to expect. With everything going on in the world, it feels a bit... uninteresting. Still, if you're in the area, it might be worth a look.

    #SIGGRAPH2025
    #Vancouver
    #3D
    #TechnologiesGraphiques
    #Animation
    The French Pavilion at SIGGRAPH 2025: heading for Vancouver
    The 2025 edition of SIGGRAPH takes us back to Canada, from August 10 to 14, in the city of Vancouver, at the heart of the 3D industry. After the 2010, 2014, 2018 and 2022 editions, the Canadian metropolis is once again…
  • There are days when loneliness weighs so heavily on the heart that it feels impossible to escape. Today, sitting here, lost in my thoughts, I can't help thinking back to that day of conferences on digital twins and the ICC. The outside world seems so vibrant, so full of life, while I feel like a spectator frozen in a film I'm no longer part of.

    Digital twins, those promising virtual representations, are a bit like me: they exist, but without any real connection. People talk about immersive projects, virtual tours that could bring us closer, but deep down, isn't that just a simulacrum of what we're really looking for? Technology advances, ideas multiply, but sometimes I wonder whether these advances can truly fill the void we feel inside.

    Every conversation during that day of conferences, every smile exchanged, only sharpens my own loneliness. I see people around me sharing passions and dreams, and I just stand there, like a hologram without emotion, without connection. Architecture and heritage can be digitized, but what about our hearts? Can we really form a connection through a screen, or is that an illusory dream?

    The promise of technology is seductive, but it cannot replace the warmth of a knowing glance or the comfort of an embrace. I am tired of navigating this virtual world where everything seems within reach, yet I always feel distant. Every project, every initiative, like the one organized by the AD'OCC agency, PUSH START and Montpellier ACM Siggraph, reminds me of what I cannot reach.

    As I take in the words exchanged, I wonder whether I will ever find my place in this world. Whether, one day, I can be more than just a number, a lifeless digital image. Perhaps the real challenge is not to innovate, but to reconnect with what makes us human.

    And even though I am here, surrounded by people, I feel like a ghost, wandering through a world that does not understand me. Melancholy settles in, sweet and bitter, like a distant echo of a happiness I no longer know.

    #Solitude #JumeauxNumériques #Conférences #Technologie #Émotions
    Digital twins & ICC: a day of conferences
    While the concept of the digital twin has already proven itself in industry, its use in fields such as architecture, tourism and heritage still has room to grow. Immersive projects and virtual tours are among the…
  • SIGGRAPH 2025, French Pavilion, Cap Digital, CNC, Vancouver, summer events, tech conferences, French companies, digital creativity, animation and graphics

    ## Introduction

    As summer approaches, excitement builds for one of the most anticipated events in the tech and digital creative industries: SIGGRAPH 2025! This year, from August 10 to 14, Vancouver will transform into a vibrant hub of innovation and creativity, bringing together professionals from around the globe. Here’s where the magic h...
    SIGGRAPH 2025: This Summer, Showcase at the French Pavilion!
  • Dive into the mesmerizing world of VFX with the epic drama series "Shōgun"!

    Have you ever wondered how stunning visual effects can transform a story and make history come alive? Well, the groundbreaking series "Shōgun," set in the enchanting Japan of the 1600s, is a perfect example that showcases the magic of storytelling through incredible visuals!

    I recently watched an inspiring interview with the talented team from Important Looking Pirates (ILP VFX) during SIGGRAPH Asia, and I can’t help but feel excited about the future of visual effects in the entertainment industry! Philip Engström and Niklas Jacobson shared their insights on how they brought the captivating scenes of "Shōgun" to life, blending artistry with technology to create breathtaking moments that enthrall audiences.

    What struck me the most was their passion for their craft! It’s a beautiful reminder that when we pour our hearts into what we love, we can create something extraordinary. Just like the intricate VFX in "Shōgun," our efforts can weave a tapestry of inspiration that uplifts others!

    So, whether you're an aspiring VFX artist, a storyteller, or simply someone with a dream, let this be a call to action! Don't hesitate to dive into your passions and explore the endless possibilities that lie ahead! Use your creativity to transform your visions into reality and inspire those around you. Remember, every masterpiece starts with a single step!

    Let’s celebrate the power of visual storytelling and the incredible work of studios like Important Looking Pirates! Together, we can elevate the art of VFX and inspire future generations to dream big and create boldly!

    Keep shining and let your creativity flow!

    #Shogun #VFX #ImportantLookingPirates #VisualEffects #Inspiration
    Dive into the VFX of Shōgun with our video interview with Important Looking Pirates!
    The historical drama series Shōgun, set in the Japan of 1600, made a lasting impression. Beyond a polished script, it relies on accomplished visual effects. At SIGGRAPH Asia, the VFX studio Important Looking Pirates…
  • Why does the world of animation, particularly at events like the SIGGRAPH Electronic Theater, continue to suffer from mediocrity? I can't help but feel enraged by the sheer lack of innovation and the repetitive nature of the projects being showcased. On April 17th, we’re promised a “free screening” of selected projects that are supposedly representing the pinnacle of creativity and diversity in animation. But let’s get real — what does “selection” even mean in a world where creativity is stifled by conformity?

    Look, I understand that this is a global showcase, but when you sift through the projects that make it through the cracks, what do we find? Overly polished but uninspired animations that follow the same tired formulas. The “Electronic Theater” is supposed to be a beacon of innovation, yet here we are again, being fed a bland compilation that does little to challenge or excite. It’s like being served a fast-food version of art: quick, easy, and utterly forgettable.

    The call for diversity is also a double-edged sword. Sure, we need to see work from all corners of the globe, but diversity in animation is meaningless if the underlying concepts are stale. It’s not enough to tick boxes and say, “Look how diverse we are!” when the actual content fails to push boundaries. Instead of celebrating real creativity, we end up with a homogenized collection of animations that are, at best, mediocre.

    And let’s talk about the timing of this event. April 17th? Are we really thinking this through? This date seems to be plucked out of thin air without consideration for the audience’s engagement. Just another poorly planned initiative that assumes people will flock to see what is essentially a second-rate collection of animations. Is this really the best you can do, Montpellier ACM SIGGRAPH? Where is the excitement? Where is the passion?

    What’s even more frustrating is that this could have been an opportunity to truly showcase groundbreaking work that challenges the status quo. Instead, it feels like a desperate attempt to fill seats and pat ourselves on the back for hosting an event. Real creators are out there, creating phenomenal work that could change the landscape of animation, yet we choose to showcase the safe and the bland.

    It’s time to demand more from events like SIGGRAPH. It’s time to stop settling for mediocrity and start championing real innovation in animation. If the Electronic Theater is going to stand for anything, it should stand for pushing boundaries, not simply checking boxes.

    Let’s not allow ourselves to be content with what we’re served. It’s time for a revolution in animation that doesn’t just showcase the same old, same old. We deserve better, and the art community deserves better.

    #AnimationRevolution
    #SIGGRAPH2024
    #CreativityMatters
    #DiversityInAnimation
    #ChallengeTheNorm
    Free screening: the SIGGRAPH Electronic Theater, April 17!
    Weren't at SIGGRAPH last summer? Montpellier ACM SIGGRAPH has you covered, and is organizing a free screening this Thursday, April 17, of the projects selected for the Electronic Theater 2024, SIGGRAPH's animation festival…
  • At Unite 2022: Machine learning research, persistent worlds, and celebrating creators

    On November 1, more than 19,000 members of the Unity community joined us from around the world, both virtually and in person, for a full day of gamedev inspiration, education, and connection. Following the keynote, attendees were able to experience fellow creators’ projects, participate in expert-led sessions, network with peers, and even attend a first-of-its-kind, multiplatform virtual concert. With over 20 streamed sessions throughout the day and five unique local experiences, here is a roundup of notable highlights from Unite 2022.

    Senior Machine Learning Developer Florent Bocquelet expanded on a tool and Real-Time Live! Audience Choice Award-winning project that first debuted this summer at SIGGRAPH 2022. The session “Authoring character poses with AI” walked attendees through how the technology, which is not yet available, is being designed to work in the Editor to enable easier creation of natural-looking poses.

    Benoit Gagnon, a senior software developer, modeled ways for users to handle persistent data in a multiplayer context during the session “Persistent worlds: Managing player and world state.” The technical deep dive also covered PlayerPrefs, CloudSave, and general-purpose DBs, and offered a glimpse at what’s next from Unity Gaming Services.

    Of the more than 20 virtual sessions, nine featured leading minds from creators like you who use Unity day in and day out to optimize your projects, including:

    Renaud Forestié, director and Unity Asset Store publisher at More Mountains
    Nic Gomez, senior games designer at Alta
    Freya Holmér, studio founder
    Ben Hopkins, expert graphics engineer at Owlchemy Labs
    Rohan Jadav, platform engineer at SuperGaming
    Brandon Jahner, CTO at Maloka
    Manesh Mistry, lead programmer at ustwo Games
    Erick Passos, SDK lead developer at Photon Engine

    We also caught up with creators from Triangle Factory, Vinci Games, and Obsidian Entertainment during the keynote session. Get the inside scoop on the Unite 2022 experiences hosted in five unique locations at Unity offices around the world.

    After networking and breakfast, Senior Vice President and General Manager of Create Solutions Marc Whitten welcomed the Austin crowd before the global keynote stream. For the rest of the day, attendees had the chance to check out exclusive in-person sessions, global virtual streams, and panels, and chat live with experts at the “Ask the Experts” booth. The day concluded with a fireside chat between Marc Whitten and Jeff Hanks, director of marketing for industries.

    Kicking off with breakfast burritos, Unity Senior Vice President Peter Moore welcomed a packed crowd at the Brighton office for the Unite 2022 keynote stream. The day continued with enthusiasm as attendees filled rooms to watch session streams, live panels, and roundtable discussions. Topics exclusive to Brighton ranged from understanding your audience from a scientific perspective to how Unity identifies and fixes bugs. Brighton also featured a very popular iiRcade machine and four “Studio Spotlights” featuring local studios who talked about their latest games and how Unity helped bring each to life.

    At yet another Unity office, Copenhagen guests were also welcomed with breakfast and an introduction from Senior Director of Product Management Andrew Bowell. In addition to its own iiRcade console and chances to check out the Made with Unity games featured in the keynote (Cult of the Lamb, Turbo Golf Racing, and Hyper Dash), attendees were treated to exclusive panel discussions as well as a fireside chat between Head of Marketing Strategy, Analytics, and Insights Deborah-Anna Reznek and Senior Vice President of AI Danny Lange.

    Despite a rainy day, Montreal welcomed a solid mix of students and teams from mid- to large-sized studios. Luc Barthelet, senior vice president of technology, kicked off the day. Following the keynote stream, participants had their choice of roundtables, panels, and presentations to attend. The Montreal office also hosted 24 Unity Insiders from around the world, including Ireland, the Netherlands, Brighton, Portugal, Vancouver, and Toronto. This group participated in a VIP experience that featured exclusive tracks catered to their areas of interest.

    The San Francisco experience had a great turnout and offered a choice of three different tracks for attendees, which included breakout focus groups, roundtables with Unity experts, and panel discussions. One such session featured Clive Downie, senior vice president and general manager for Consumer, as he moderated an interactive discussion with Ingrid Lestiyo, senior vice president and general manager for Operate Solutions, and the creators of Ramen VR amid a packed room. Another standout session was CEO John Riccitiello’s fireside chat with indie game developer Thomas Brush. To cap off the memorable day, attendees continued the festivities with a happy hour.

    It was so great to connect with our Unity community at a Unite event again. Please continue to join us on our journey toward making the world a better place with more creators: connect with us through the forums, Twitter, Facebook, LinkedIn, Instagram, YouTube, or Twitch. And keep your eyes peeled in the coming months for on-demand session recordings so you can check out anything you missed.
    #unite #machine #learning #research #persistent
    At Unite 2022: Machine learning research, persistent worlds, and celebrating creators
At Unite 2022: Machine learning research, persistent worlds, and celebrating creators

On November 1, more than 19,000 members of the Unity community joined us from around the world, both virtually and in person, for a full day of gamedev inspiration, education, and connection. Following the keynote, attendees were able to experience fellow creators’ projects, participate in expert-led sessions, network with peers, and even attend a first-of-its-kind, multiplatform virtual concert.

With over 20 streamed sessions throughout the day and five unique local experiences, here is a roundup of notable highlights from Unite 2022.

Senior Machine Learning Developer Florent Bocquelet expanded on a tool and Real-Time Live! Audience Choice Award-winning project that first debuted this summer at SIGGRAPH 2022. The session “Authoring character poses with AI” walked attendees through how the technology – which is not yet available – is being designed to work in the Editor to enable easier creation of natural-looking poses.

Benoit Gagnon, a senior software developer, modeled ways for users to handle persistent data in a multiplayer context during the session “Persistent worlds: Managing player and world state.” The technical deep dive also covered PlayerPrefs, CloudSave, and general-purpose DBs, and offered a glimpse at what’s next from Unity Gaming Services.

Of the more than 20 virtual sessions, nine featured leading minds from creators like you who use Unity day in and day out to optimize your projects, including:

- Renaud Forestié, director and Unity Asset Store publisher at More Mountains
- Nic Gomez, senior games designer at Alta
- Freya Holmér, studio founder
- Ben Hopkins, expert graphics engineer at Owlchemy Labs
- Rohan Jadav, platform engineer at SuperGaming
- Brandon Jahner, CTO at Maloka
- Manesh Mistry, lead programmer at ustwo Games
- Erick Passos, SDK lead developer at Photon Engine

We also caught up with creators from Triangle Factory, Vinci Games, and Obsidian Entertainment during the keynote session.

Get the inside scoop on the Unite 2022 experiences hosted in five unique locations at Unity offices around the world.

After networking and breakfast, Senior Vice President and General Manager of Create Solutions Marc Whitten welcomed the Austin crowd before the global keynote stream. For the rest of the day, attendees had the chance to check out exclusive in-person sessions, global virtual streams, and panels, and chat live with experts at the “Ask the Experts” booth. The day concluded with a fireside chat between Marc Whitten and Jeff Hanks, director of marketing for industries.

Kicking off with breakfast burritos, Unity Senior Vice President Peter Moore welcomed a packed crowd at the Brighton office for the Unite 2022 keynote stream. The day continued with enthusiasm as attendees filled rooms to watch session streams, live panels, and roundtable discussions. Topics exclusive to Brighton ranged from understanding your audience from a scientific perspective to how Unity identifies and fixes bugs. Brighton also featured a very popular iiRcade machine and four “Studio Spotlights” in which local studios talked about their latest games and how Unity helped bring each to life.

At yet another Unity office, Copenhagen guests were also welcomed with breakfast and an introduction from Senior Director of Product Management Andrew Bowell. In addition to its own iiRcade console and chances to check out the Made with Unity games featured in the keynote – Cult of the Lamb, Turbo Golf Racing, and Hyper Dash – attendees were treated to exclusive panel discussions, as well as a fireside chat between Head of Marketing Strategy, Analytics, and Insights Deborah-Anna Reznek and Senior Vice President of AI Danny Lange.

Despite a rainy day, Montreal welcomed a solid mix of students and teams from mid- to large-sized studios. Luc Barthelet, senior vice president of technology, kicked off the day. Following the keynote stream, participants had their choice of roundtables, panels, and presentations to attend. The Montreal office also hosted 24 Unity Insiders from around the world, including Ireland, the Netherlands, Brighton, Portugal, Vancouver, and Toronto. This group participated in a VIP experience that featured exclusive tracks catered to their areas of interest.

The San Francisco experience had a great turnout and offered a choice of three different tracks for attendees, which included breakout focus groups, roundtables with Unity experts, and panel discussions. One such session featured Clive Downie, senior vice president and general manager for Consumer, as he moderated an interactive discussion with Ingrid Lestiyo, senior vice president and general manager for Operate Solutions, and the creators of Ramen VR amid a packed room. Another standout session was CEO John Riccitiello’s fireside chat with indie game developer Thomas Brush. To cap off the memorable day, attendees continued the festivities with a happy hour.

It was so great to connect with our Unity community at a Unite event again, for the first time since 2020. Please continue to join us on our journey toward making the world a better place with more creators: connect with us through the forums, Twitter, Facebook, LinkedIn, Instagram, YouTube, or Twitch. And keep your eyes peeled in the coming months for on-demand session recordings so you can check out anything you missed.
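The "Persistent worlds" session above draws a line between small per-player values stored locally (the role Unity's PlayerPrefs plays) and shared world state kept in a cloud save service or a general-purpose database. The local half of that split can be sketched in a few lines; this is an illustrative stand-in in Python, not Unity API, and the class and file names are hypothetical:

```python
import json
import os
import tempfile

class LocalPrefs:
    """A tiny PlayerPrefs-style key/value store backed by a JSON file (illustrative)."""

    def __init__(self, path):
        self.path = path
        self.data = {}
        if os.path.exists(path):
            with open(path) as f:
                self.data = json.load(f)

    def set_int(self, key, value):
        self.data[key] = int(value)

    def get_int(self, key, default=0):
        return int(self.data.get(key, default))

    def save(self):
        # Explicit save, mirroring PlayerPrefs.Save(): writes are not
        # durable until flushed to disk.
        with open(self.path, "w") as f:
            json.dump(self.data, f)

prefs_path = os.path.join(tempfile.gettempdir(), "player_prefs_demo.json")
prefs = LocalPrefs(prefs_path)
prefs.set_int("high_score", 4200)
prefs.save()

# A fresh instance reads the value back from disk.
print(LocalPrefs(prefs_path).get_int("high_score"))  # 4200
```

State that must be consistent across players (inventory trades, world changes) does not fit this pattern and is why the session also covered CloudSave and general-purpose databases.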
  • The making of Enemies: The evolution of digital humans continues with Ziva

From The Heretic’s Gawain to Louise in Enemies, our Demo team continues to create real-time cinematics that push the boundaries of Unity’s capabilities for high-fidelity productions, with a special focus on digital humans.

The pursuit of ever more realistic digital characters is endless. And since the launch of Enemies at GDC 2022, we have continued our research and development into solutions for better and more believable digital human creation, in collaboration with Unity’s Graphics Engineering team and commercially available service providers specializing in that area.

At SIGGRAPH 2022, we announced our next step: replacing the heavy 4D data playback of the protagonist’s performance with a lightweight Ziva puppet. This recent iteration sees the integration of Ziva animation technology with the latest in Unity’s graphics advancements, including the High Definition Render Pipeline (HDRP) – all with the aim of further developing an end-to-end pipeline for character asset creation, animation, and authoring.

Along with the launch of a new strand-based Hair Solution and an updated Digital Human package, the Enemies real-time demo is now available to download. You can run it in real-time and experience it for yourself, just as it was shown at Unite 2022. While the cinematic may not appear too different from the original, its final rendered version shows how the integration of Ziva technology has brought a new dimension to our protagonist.

Ziva brings decades of experience and pioneering research from the VFX industry to enable greater animation quality for games, linear content production, and real-time projects. Its machine learning (ML)-based technology helps achieve extraordinary realism in facial animation, as well as in body and muscle deformations.

To achieve the level of realism in Enemies, Ziva used machine learning and 4D data capture, which goes beyond the traditional process of capturing actors in static 3D scans. The static, uneditable 4D-captured facial performance has now been transformed into a real-time puppet with a facial rig that can be animated and adjusted at any time – all while maintaining high fidelity.

Our team built on that 4D capture data and trained a machine-learned model that could be animated to create any performance. The end result is a 50 MB facial rig that has all the detail of the 4D-captured performance, without having to carry its original 3.7 GB of weight. This technology means that you can replicate the results with a fraction of the animation data, creating real-time results in a way that 4D does not typically allow. To achieve this, Unity’s Demo team focused on the following.

Creating the puppet

To create this new version of Louise, we worked with the Ziva team. They handled the machine learning workflow using a preexisting 4D data library. Additional 4D data was collected from a new performance by the original Enemies actor (we only needed to collect a few additional expressions). This is one of the unique advantages of our machine learning approach.

With this combined dataset, we trained a Ziva puppet to accurately reproduce the original performance. We could alter this performance in any way, ranging from tweaking minute details to changing the entire expression. Using the 4D data capture through machine learning, we could enable any future performance to run on any 3D head by showing a single performance applied to multiple faces of varying proportions. This makes it easier to expand the range of performances to multiple actors and real-time digital humans for any future editions.

The puppet’s control scheme

Once the machine learning was completed, we had 200–300 parameters that, when used in combination and at different weights, could recreate everything we had seen in the 4D data with incredible accuracy. We didn’t have to worry about a hand-animated performance looking different when used by a group of different animators. The persona and idiosyncrasies of the original actor would come through no matter how we chose to animate the face.

As Ziva is based on deformations and not an underlying facial rig, we could manipulate even the smallest detail, because the trained face uses a control scheme that was developed to take advantage of the fidelity of the machine-learned parameters and data. At this point, creating a rig is a relatively flexible process, as we can simply tap into those machine-learned parameters – this, in turn, deforms the face. There are no joints in a Ziva puppet, besides the basic logical face and neck joints.

So what does this all mean?

There are many advantages to this new workflow. First and foremost, we now have the ability to dynamically interact with the performance of the digital human in Enemies. This allows us to change the character’s performance after it has already been delivered. Digital Louise can now say the same lines as before, but with very different facial expressions. For example, she can be friendlier or angrier, or convey any other emotion that the director envisions.

We are also able to manually author new performances with the puppet – facial expressions and reactions that the original actress never performed. If we wanted to develop the story into an interactive experience, it would be important to expand what the digital character can react to, such as a player’s chess moves, with nuances of approval or disapproval.

For the highest level of fidelity, the Ziva team can even create a new puppet with its own 4D dataset. Ziva also recently released a beta version of Face Trainer, a product built on a comprehensive library of 4D data and ML algorithms. It can be used to train any face mesh to perform the most complex expressions in real-time, without any new 4D capture.

Additionally, it is possible to create new lines of dialogue at a fraction of the time and cost that the creation of the first line required. We can do this either by getting the original actress to perform additional lines with a head-mounted camera (HMC) and then using the HMC data to drive the puppet, or by getting another performer to deliver the new lines and retargeting their HMC data to the existing puppet.

At SIGGRAPH Real-Time Live! we demonstrated how to apply the original performance from Enemies to the puppet of another actress – ultimately replacing the protagonist of the story with a different person, without changing anything else. This performance was then shown at Unite 2022 during the keynote (segment 01:03:00), where Enemies ran on an Xbox Series X, with DX12 and real-time ray tracing.

To further enhance the visual quality of Enemies, a number of HDRP systems were leveraged. These include Shader Graph motion vectors, Adaptive Probe Volumes (APV), and, of course, hair shading. Enemies also makes use of real-time ray tracing in HDRP and Unity’s native support for NVIDIA DLSS 2.0 (Deep Learning Super Sampling), which enable it to run at 4K image quality, comparable to native resolution. All of these updated Unity features are now available in Unity 2022 LTS.

The brand-new strand-based Hair Solution, developed during the creation of the Enemies demo, can simulate individual hairs in real-time. This technology is now available as an experimental package via GitHub (requires Unity 2020.2.0f1 or newer), along with a tutorial to get started. By integrating a complete pipeline for authoring, simulation, shading, and rendering hair in Unity, this solution is applicable to digital humans and creatures, in both realistic and stylized projects. The development work continues with a more performant solution for hair rendering enabled by the upcoming Software Rasterizer in HDRP. We are also diversifying the authoring options available by adopting and integrating the Wētā Wig tool for more complex grooms, as showcased in the Lion demo.

Expanding on the technological innovations from The Heretic, the updated Digital Human package provides a realistic shading model for the characters rendered in Unity. Such updates include:

- A better 4D pipeline
- A more performant Skin Attachment system on the GPU for high-density meshes
- More realistic eyes with caustics on the iris (available in HDRP as of Unity 2022.2)
- A new skin shader, built with the available Editor technology
- Tension tech for blood flow simulation and wrinkle maps, eliminating the need for a facial rig

And as always, there is more to come. Discover how Ziva can help bring your next project to life. Register your interest to receive updates or get early access to future Ziva beta programs. If you’d like to learn more, you can contact us here.
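The 200–300 learned parameters described above explain the size reduction: instead of storing a position for every vertex on every frame (4D playback), each frame stores only a small weight vector that blends learned deformation modes. A minimal NumPy sketch of that idea (not Ziva's actual implementation; all sizes are made up for the example):

```python
import numpy as np

NUM_VERTS = 5_000    # vertices in the face mesh (hypothetical)
NUM_PARAMS = 250     # learned parameters, per the "200-300" in the text

rng = np.random.default_rng(0)
neutral = rng.standard_normal((NUM_VERTS, 3))            # rest pose
modes = rng.standard_normal((NUM_PARAMS, NUM_VERTS, 3))  # learned deformation basis

def evaluate_face(weights):
    """Deform the neutral mesh by a weighted sum of the learned modes."""
    return neutral + np.tensordot(weights, modes, axes=1)

# Zero weights reproduce the rest pose exactly.
assert np.allclose(evaluate_face(np.zeros(NUM_PARAMS)), neutral)

# Per-frame animation data shrinks from one position per vertex
# (NUM_VERTS * 3 floats) to just NUM_PARAMS weights.
per_frame_raw = NUM_VERTS * 3  # 15,000 floats per frame of raw 4D playback
per_frame_rig = NUM_PARAMS     # 250 floats per frame of puppet animation
print(per_frame_raw // per_frame_rig)  # 60
```

The same framing shows why the puppet stays editable where 4D playback does not: a director's note ("friendlier", "angrier") becomes an adjustment to the weight vector, while the learned modes keep the result on the manifold of expressions the actor's 4D data actually contained.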