• Mini Review: Pine: A Story Of Loss (Switch) - An Evocative But Oddly Anti-Immersive Tale
    www.nintendolife.com
    'Game' is a funny word. On its face, it describes lighthearted, fun, frivolous things. Even a very serious game of snooker, in which people wear waistcoats, is 'just a game' on some level. But of course video games can be very different, and Pine is one such game. The subtitle gives it away, A Story of Loss, and it does exactly what it says on the tin.
    You might be inclined to imagine something like Arise: A Simple Story, where some light platforming guides you through allegorical landscapes in between cutscenes. However, Pine could be politely described as 'gameplay-light'. The protagonist is an unnamed man living alone in a forest clearing. He fells trees for firewood, grows vegetables in a small allotment to feed himself, and does little else besides carving little statuettes of a woman he loves but has lost.
    Clearly prioritising touchscreen controls, the game begins with plenty of swooping your finger to pre-empt the swing of an axe into wood, lifting and placing to simulate the management of the vegetable patch, or tapping to eat food from a plate, chomp by chomp. Playing with a controller really trivialises this, as the interactions are essentially reduced to pressing down a few times languidly, pressing 'A' a few times lethargically, and so on. That said, even the touch interactions are pretty bog standard. The idea of replicating on-screen actions with similar gestures is one that's been well and truly done, and painting guidelines on the screen so you can put your hand in the way of the lovely artwork is actually kind of anti-immersive. Brief puzzle interludes likewise risk interrupting the narrative flow.
    While we appreciate this is not painting a thrilling picture of Pine, there is something going on here that is worth a look. The artwork is attractive, and the sounds are evocative of the simplicity of the work our man is doing to keep on going with life in the face of having lost his true love.
    The music breathes in and out, swelling and fading, driving the sad persistence of the story.
    And persisting is all the protagonist is really doing. It's a portrait of depression and grieving, so be ready for that if you are going to give Pine a shot. The game only lasts a couple of hours, but it felt longer, sometimes as if we could see the paint of the artwork drying in front of our eyes. That's the point, though, as it leans into the monotony and bleakness of half-heartedly pressing on. The later stages lighten the interaction even further and are more like watching an animated film with only occasional button inputs. It becomes 'Press A to Continue Existing'. Whether you find the resolution of the tale relatable is of course very much a personal matter, but we didn't quite click with it, interpreting a message that loss is to be forgotten more than digested.
    Pine, then, is part of the video game world, but it's far from 'just a game'. With appealing visuals and a haunting atmosphere, it demands patience and introspection. For those eager to explore its ideas of loss and moving on, it's worth a look; for others, it might feel like the world's saddest gardening simulator.
  • Dbrand Seems To Have Shared Images Of Switch 2 Inside Its New Case
    www.nintendolife.com
    It's not switching it up. Canadian accessory manufacturer Dbrand, known for its controversial social media stunts, has shared images of what it claims is a case for the upcoming successor to the Nintendo Switch.
    As VGC reports, the company shared a teaser for the case on Thursday on X (formerly Twitter), along with the statement "We will not be answering any questions at this time."
    Read the full article on nintendolife.com
  • OpenAI whistleblower found dead in San Francisco apartment
    techcrunch.com
    A former OpenAI employee, Suchir Balaji, was recently found dead in his San Francisco apartment, according to the San Francisco Office of the Chief Medical Examiner. In October, the 26-year-old AI researcher raised concerns about OpenAI breaking copyright law when he was interviewed by The New York Times.
    "The Office of the Chief Medical Examiner (OCME) has identified the decedent as Suchir Balaji, 26, of San Francisco. The manner of death has been determined to be suicide," said a spokesperson in a statement to TechCrunch. "The OCME has notified the next-of-kin and has no further comment or reports for publication at this time."
    After nearly four years working at OpenAI, Balaji quit the company when he realized the technology would bring more harm than good to society, he told The New York Times. Balaji's main concern was the way OpenAI allegedly used copyright data, and he believed its practices were damaging to the internet.
    "We are devastated to learn of this incredibly sad news today and our hearts go out to Suchir's loved ones during this difficult time," said an OpenAI spokesperson in an email to TechCrunch.
    Balaji was found dead in his Buchanan Street apartment on November 26, according to the San Jose Mercury News. Police were reportedly called to his residence in the city's Lower Haight district to perform a wellness check on the former OpenAI researcher.
    "I was at OpenAI for nearly 4 years and worked on ChatGPT for the last 1.5 of them," said Balaji in a tweet from October. "I initially didn't know much about copyright, fair use, etc. but became curious after seeing all the lawsuits filed against GenAI companies."
    "When I tried to understand the issue better, I eventually came to the conclusion that fair use seems like a pretty implausible defense for a lot of generative AI products, for the basic reason that they can create substitutes that compete with the data they're trained on."
    OpenAI and Microsoft are currently involved in several ongoing lawsuits from newspapers and media publishers, including The New York Times, which claim the generative AI startup has broken copyright law. Before working at OpenAI, the 26-year-old researcher studied computer science at the University of California, Berkeley. During college, he interned at OpenAI and Scale AI, the former of which he would go on to work for. Balaji worked on WebGPT during his early days at OpenAI, a fine-tuned version of GPT-3 that could search the web. It was an early version of SearchGPT, which OpenAI released earlier this year. Later on, Balaji worked on the pretraining team for GPT-4, the reasoning team for o1, and post-training for ChatGPT, according to his LinkedIn.
    The San Francisco Police Department did not immediately respond to TechCrunch's request for comment.
  • AI helps Telegram remove 15 million suspect groups and channels in 2024
    techcrunch.com
    In Brief · Posted: 3:51 PM PST, December 13, 2024 · Image Credits: Chris Ratcliffe/Bloomberg via Getty Images
    Telegram has been under unprecedented pressure to clean up its platform this year, after its founder Pavel Durov was arrested in France and faces charges over the alleged harmful content shared on his messaging app.
    After announcing a crackdown in September, Telegram now says it has removed 15.4 million groups and channels related to harmful content like fraud and terrorism in 2024, noting this effort was enhanced with cutting-edge AI moderation tools.
    The announcement is part of a newly launched moderation page Telegram has created to better communicate its moderation efforts to the public, according to a post from Durov's Telegram channel. According to Telegram's moderation page, there's a noticeable increase in enforcement after Durov's arrest in August (chart: Telegram).
    Durov's French case is still pending, but he is currently out on €5 million bail.
  • Sam Richardson, Teyonah Parris Join Matchbox Cast
    www.awn.com
    John Cena also stars in the live-action film from Skydance, Apple Original Films, and Mattel Films, inspired by Mattel's real-world die-cast toy vehicle line, invented in 1953 by Jack Odell.
  • 3rd Annual Childrens & Family Emmy Awards Nominees Announced
    www.awn.com
    The National Academy of Television Arts & Sciences (NATAS) has announced its nominees for the 3rd Annual Children's & Family Emmy Awards. The awards ceremonies are scheduled to take place in Los Angeles on March 15, 2025. Now in its third year, this standalone competition spotlights the pinnacle of creativity and innovation in children's entertainment.
    "This year's nominations are particularly meaningful as much of the work being honored was delayed due to last year's strikes," said Rachel Schwartz, Head of the Children's & Family Emmy Awards. "It is a privilege to have been named Head of a competition that is not only honoring exceptional talent but is also shaping the way children and their families interact with the world. Congratulations to all."
    Two winners and an Honorable Mention in the Juried category, Public Service Initiative, were also announced. Winners in the Juried Individual Achievement in Animation categories will be announced in early 2025. The Lifetime Achievement honoree and ceremony host and presenters will be announced in early 2025.
    Here are highlights of the Children's and Family Emmy Awards nominees in animation and visual effects; the complete list can be seen here.
    Preschool Series
    Blue's Clues & You! - Nickelodeon
    Donkey Hodie - PBS Kids
    Lovely Little Farm - Apple TV+
    Sesame Street - MAX
    Fiction Special
    Monster High 2 - Nickelodeon
    The Naughty Nine - Disney Channel
    The Slumber Party - Disney Channel
    The Velveteen Rabbit - Apple TV+
    World's Best - Disney+
    Preschool Animated Series
    Frog and Toad - Apple TV+
    Interrupting Chicken - Apple TV+
    Rosie's Rules - PBS Kids
    StoryBots: Answer Time - Netflix
    The Tiny Chef Show - Nickelodeon
    Children's or Young Teen Animated Series
    CURSES! - Apple TV+
    Hilda - Netflix
    Iwájú - Disney+
    Kiff - Disney Channel
    Marvel's Moon Girl and Devil Dinosaur - Disney+
    Summer Camp Island - Cartoon Network
    Animated Special
    Merry Little Batman - Prime Video
    Orion and the Dark - Netflix
    Peter and the Wolf - Max
    Snoopy Presents: One-of-a-Kind Marcie - Apple TV+
    The Tiger's Apprentice - Paramount+
    Tiny Chef's Marvelous Mish Mesh Special - Nickelodeon
    Short Form Animated Program
    How Not to Draw - Disney Channel
    I Am Groot - Disney+
    Once Upon a Studio - Disney+
    Take Care with Peanuts - Snoopy - Official Channel
    The Wonderful World of Mickey Mouse: Steamboat Silly - Disney+
    Interactive Media
    Cousin Hodie Playdate - Donkey Hodie - PBS Kids
    Cyberchase: Cyber Sound Quest - PBS Kids
    Daniel Tiger's Neighborhood - PBS Kids
    Molly of Denali - PBS Kids
    Stu's Super Stunts! - PBS Kids
    Voice Performer in a Preschool Program
    Bobby Moynihan as Bobby Boots: Pupstruction - Disney Junior
    Cree Summer as Lizard & DeeDee: Spirit Rangers - Netflix
    Fred Tatasciore as Bang, BlimBlam the Barbarian, King Hydrogen, Alabama Smith & The Lone Drifter: StoryBots: Answer Time - Netflix
    Kari Wahlgren as Granny Caterina, Ms. Poochytail & Magda: Superkitties - Disney Junior
    Voice Performer in a Children's or Young Teen Program
    Ben Feldman as Tylor Tuskmon: Monsters at Work - Disney+
    Bob Bergen as Porky Pig: Looney Tunes Cartoons - Max
    Eric Bauza as Daffy Duck & Bugs Bunny: Teen Titans Go! - Cartoon Network
    Paul Walter Hauser as Dark: Orion and the Dark - Netflix
    William Shatner as Keldor: Masters of the Universe: Revolution - Netflix
    Younger Voice Performer in a Preschool, Children's, or Young Teen Program
    Arianna McDonald as Marcie: Snoopy Presents: One-of-a-Kind Marcie - Apple TV+
    Jacob Tremblay as Orion: Orion and the Dark - Netflix
    Lucia Cunningham as Jessica Williams: Jessica's Big Little World - Cartoon Network
    Simisola Gbadamosi as Tola Martins: Iwájú - Disney+
    Terrence Little Gardenhigh as Pat: Fright Krewe - Hulu | Peacock
    Writing for a Preschool Animated Series
    Jessica's Big Little World: Glow Toy - Cartoon Network
    Xavier Riddle and the Secret Museum: I am Grandmaster Flash - PBS Kids
    Molly of Denali: Not a Mascot - PBS Kids
    StoryBots: Answer Time: Taxes - Netflix
    StoryBots: Answer Time: Tornado - Netflix
    Writing for a Children's or Young Teen Animated Series
    My Dad the Bounty Hunter: Abducted - Netflix
    Marvel's Moon Girl and Devil Dinosaur: Dancing With Myself - Disney+
    Craig of the Creek: Heart of the Forest - Cartoon Network
    Hailey's On It!: I Wanna Dance With My Buddy - Disney Channel
    Directing for a Preschool Animated Series
    StoryBots: Answer Time: "Fractions" - Netflix
    Ghee Happy: "Ganga" - Ghee Happy Studio
    StoryBots: Answer Time: "Glass" - Netflix
    Xavier Riddle and the Secret Museum: "I am Grandmaster Flash" - PBS Kids
    Spirit Rangers: "Xutash Harvest" - Netflix
    Directing for an Animated Series
    Iwájú: Kole - Disney+
    Kizazi Moto: Generation Fire: Moremi - Disney+
    Hilda: The Fairy Isle - Netflix
    Monsters at Work: Descent Into Fear - Disney+
    Marvel's Moon Girl and Devil Dinosaur: The Molecular Level - Disney+
    Voice Directing for an Animated Series
    Daniel Tiger's Neighborhood - PBS Kids
    Marvel's Moon Girl and Devil Dinosaur - Disney+
    Monsters at Work - Disney+
    Star Wars: Young Jedi Adventures - Disney+
    Young Love - HBO | Max
    Music Direction and Composition for an Animated Program
    Frog and Toad - Apple TV+
    Gremlins: Secrets of The Mogwai - Max
    Looney Tunes Cartoons - Max
    Orion and the Dark - Netflix
    Star Wars: Young Jedi Adventures - Disney+
    Original Song for a Preschool Program
    StoryBots: Answer Time: Find the Area - Netflix
    Baby Shark's Big Movie!: Keep Swimming Through - Nickelodeon
    Alice's Wonderland Bakery: Let Your Wish Carry You Away - Disney Junior
    Sesame Street: That's Why We Love Nature - MAX
    StoryBots: Answer Time: The Tornado Song - Netflix
    Original Song for a Children's or Young Teen Program
    Hailey's On It!: Kiss Your Friend - Disney Channel
    One Piece: My Sails are Set - Netflix
    Fraggle Rock: Back to the Rock: Radishes vs. Strawberries - Apple TV+
    High School Musical: The Musical: The Series: Speak Out - Disney+
    Kiff: Things - Disney Channel
    Show Open
    The Fairly OddParents: A New Wish - Nickelodeon
    Hilda - Netflix
    Percy Jackson and the Olympians - Disney+
    Peter and the Wolf - Max
    The Spiderwick Chronicles - Roku Channel
    Editing for a Preschool Animated Program
    Frog and Toad - Apple TV+
    Star Wars: Young Jedi Adventures - Disney+
    StoryBots: Answer Time - Netflix
    StoryBots: Super Silly Stories with Bo - Netflix
    The Tiny Chef Show - Nickelodeon
    Editing for an Animated Program
    Hilda - Netflix
    Marvel's Moon Girl and Devil Dinosaur - Disney+
    Merry Little Batman - Prime Video
    Orion and the Dark - Netflix
    Snoopy Presents: Welcome Home, Franklin - Apple TV+
    The Wonderful World of Mickey Mouse: Steamboat Silly - Disney+
    Sound Mixing and Sound Editing for a Preschool Animated Program
    Baby Shark's Big Movie! - Paramount+
    Santiago of the Seas - Nick Jr.
    Star Wars: Young Jedi Adventures - Disney+
    The Winter Blues - Apple TV+
    Xavier Riddle and the Secret Museum - PBS Kids
    Sound Mixing and Sound Editing for an Animated Program
    I Am Groot - Disney+
    Jurassic World: Chaos Theory - Netflix
    Mech Cadets - Netflix
    Monsters at Work - Disney+
    Orion and the Dark - Netflix
    Transformers: EarthSpark - Nickelodeon
    Visual Effects for a Live-Action Program
    Fraggle Rock: Back to the Rock - Apple TV+
    Goosebumps - Disney+
    The Naughty Nine - Disney Channel
    One Piece - Netflix
    Percy Jackson and the Olympians - Disney+
    The Spiderwick Chronicles - Roku Channel
    Casting for an Animated Program
    Gremlins: Secrets of The Mogwai - Max
    Jurassic World: Chaos Theory - Netflix
    Monsters at Work - Disney+
    Orion and the Dark - Netflix
    Rock Paper Scissors - Nickelodeon
    Spirit Rangers - Netflix
    Supa Team 4 - Netflix
    Source: The National Academy of Television Arts & Sciences
    Debbie Diamond Sarto is news editor at Animation World Network.
  • Wolfs: Huw Evans VFX Supervisor beloFX
    www.artofvfx.com
    Interviews
    Wolfs: Huw Evans, VFX Supervisor, beloFX
    By Vincent Frei - 13/12/2024
    In 2023, Huw Evans shared insights into beloFX's visual effects work on Fast X. He later contributed to The Wheel of Time. Today, he discusses the subtle visual effects behind Wolfs.
    How did you and beloFX get involved on this show?
    I had my first meeting with Janek Sirrs (Production VFX Supervisor) and Mitchell Ferm (Production VFX Producer) in April 2023 while they were filming in Los Angeles. It was my first time working with them, though Janek had previously worked with some members of the beloFX founding team, so he was already familiar with who we were and what we could deliver. The script was fantastic, and the visual effects work looked compelling, so we were excited to join the project.
    How was the collaboration with Director Jon Watts and VFX Supervisor Janek Sirrs?
    It was my first time working with both Jon and Janek, but I was already aware of their recent work on the latest Spider-Man movies (of which I'm a huuuge fan!), so I was excited to meet them both. We worked closely with Janek throughout the project, with regular calls to discuss the vision and approach. He was a great collaborator, providing us with clear direction and references but also giving us the freedom to bring our own creative solutions to the table and explore alternative ideas where we thought we could help. Personally, I love this kind of collaboration in VFX; it's an industry driven by artists, after all! Embracing that spark of creativity is often where the real magic begins to emerge. We truly appreciated the collaborative spirit that both Janek and Jon brought to the process.
    How did you organize the work with your VFX Producer?
    This was my second show working with the wonderful Jan Meade, our beloFX VFX Producer. She's incredibly level-headed and exceptional at her job, making it an absolute pleasure to work alongside her on this project.
    The work was distributed across several of our sites, with the London hub leading the charge on the generalist-type work while most of the FX-related work was handled by our Canadian office. Balancing the skill sets of our available artists with delivering the best value for our clients is always a priority, alongside making some amazing visuals, obviously!
    What are the sequences made by beloFX?
    We were responsible for the majority of the VFX work in the second half of the movie, focusing primarily on digital snow and environment work, among other bits and pieces. We worked on a large amount of the film's wide establishing shots, and also a large number of the car interior shots. There were also exterior shots outside June's apartment, around Club Ice, and outside the warehouse, and we worked on the shoot-out under the underpass too, among other sections.
    Can you explain the role of invisible visual effects in Wolfs and why they're essential for the story?
    The action unfolds over one snowy night in New York and, as with any production, nobody can control the weather. So, our work was to create a photorealistic snowfall that starts out light and subtle, but progressively builds as the story continues.
    Our goal from the start was to keep the work invisible. It wasn't about spectacle but about achieving realism and subtlety. Ideally, a production would capture this in-camera using practical effects like fake snow. However, in this case, shooting permits had specific limitations; for instance, crushed ice could only be placed on pavements, not roads. Add to that the unpredictability of wind, which can make SFX foam towers unreliable, and the need for VFX support became clear to achieve the desired look.
    In the end, nearly every shot featuring snow was either entirely created with VFX or at least augmented with it.
    So there was quite the challenge in ensuring that it looked convincing and consistent throughout the entire film.
    Could you walk us through the process of adding snow effects in a realistic way that blends seamlessly with live-action shots?
    Our process would generally begin by looking at the practical SFX snowfall captured in-camera, if any was used for a given shot. In some cases, the practical effects worked well; the behaviour looked natural and only needed to be augmented or thickened to achieve the desired density. These cases provided an excellent reference for the FX team to match the behaviour of our digital snow, which was really helpful.
    In other cases, the practical flakes might prove too distracting or behave unpredictably; maybe they'd catch too much wind due to being made of lighter foam, for example. For these scenarios, our process would be to remove the practical snow, either procedurally if it was particularly heavy or manually if it was lighter. From there, we'd create a full digital replacement to achieve the desired behaviour.
    In some instances, there was no snow visible in the plate photography at all, so we'd go fully digital, ensuring we were matching the surrounding shots in the sequence.
    For the ground-based snow, we'd take the lidar data of the environment where the shot was set, and use it to procedurally generate a blanket of snow that gathered and reacted to the underlying geometry. Naturally, this required some manual clean-up and adjustments, but it provided an excellent starting point and gave us full control over the snow levels for different moments throughout the movie's timeline.
    In terms of environmental effects, were there any particular elements of New York's winter ambiance you found challenging to replicate?
    There were a few key elements we had to work especially hard on to get just right. In the film's earlier sequences, while the snow was light, we needed to digitally wet down the streets.
    This involved adding digital puddles reflecting the multicoloured shop lights and passing cars, and even creating a wet sheen on some of the surrounding buildings to capture that snow-just-starting-to-settle feel.
    We also took great care to add small details and subtleties, like snow settling on the branches of a tree. This required a full paint-out and removal of the tree from the plate, then matching back to the photography with a fully CG tree covered in a dusting of snow, which worked great with the moving camera in the shot. Things like that took a bit of time to get just right.
    Then, as the snow grew heavier, the roads needed to be populated with a mix of compressed ice, softer slush that reacted to car movements, and a lighter top layer of snow dusting, all working together to give the right feel. We also had to keep track of the hero car, ensuring it accumulated snow as the story progressed. This involved a combination of static geometry pieces and animated, FX-driven snow, all working together with the practical moving car.
    Some of the more subtle environmental effects included the unique depth hazing that occurs when light layers of snow stack up, creating an interesting mist-type look. We played with this a lot, especially in our wider establishing shots, which helped sell the overall ambiance.
    How do you measure success in invisible effects when the goal is for audiences not to notice them?
    I think that's a rather interesting question, actually. I guess the answer is exactly that: when audiences don't notice our work! We all love creating the big VFX spectacles; something unimaginable and fantastical has its own joys and rewards, but there's always something super challenging and hugely rewarding about the invisible effects and knowing that people watching wouldn't suspect a thing.
    I always enjoy watching VFX breakdowns and seeing the befores and afters; it's like looking behind the curtain and seeing a magician reveal their secrets, as dramatic (and cheesy) as that might sound!
    Were there any real-world references that you relied on heavily to ensure the authenticity of the New York winter look?
    It's always, always, always beneficial to draw from references, and in this case, Jon had a few key photographs that captured the look and feel he wanted to create, so we continually referred back to those images. We also had specific falling-snow references that Janek sent over, which were invaluable in helping us figure out the right speed and density needed for this movie. As with any VFX work, working from real-world references always leads to better results. Whether we're creating something fantastical or something grounded in reality, we can always find little clues within references that help guide the way forward.
    Looking back, is there a specific invisible VFX shot or environment in Wolfs that you're particularly proud of?
    One of my favourite shots is a wide view of the street outside the apartment, where we added a complete road wet-down and integrated two differently lit practical photography plates to create the final look. We also added falling FX snow, illuminated by streetlamps. This single shot really contained a lot of our working methodology, and it ended up looking extremely similar to some of the reference photography Jon had guided us toward, so it felt great to get that just right. Plus, it's the kind of shot where people wouldn't necessarily know anything had been done to it, and those shots are always quite fun!
    Which sequence or shot was the most challenging?
    I'd actually say some of our car windscreen wiper shots.
    Perhaps not the most technically challenging in terms of what we were doing (or the most exciting ones to shout about!), but adding the windscreen wipers to the hero car and ensuring the smear on the glass looked correct, along with the way the light dusting of snow settled on the glass and then got wiped off, was a surprising challenge! Hopefully it works as one of those totally natural, invisible effects mentioned earlier.
    Is there something specific that gives you some really short nights?
    As is usually the case, it was probably just the race to the finish line! Even when you think you're ahead on certain aspects of the work, it's managing those curveballs and new shots that come in at the last minute and rolling with them. I guess that's part of what keeps it exciting working in VFX, right? Maybe? Who needs sleep anyway!
    How long have you worked on this show?
    I started on the show back in April 2023 and we delivered in April 2024, so pretty much a year in total.
    What's the VFX shots count?
    We delivered just under 300 shots; 298 to be exact!
    What is your next project?
    I'm currently working on an unannounced TV series for Prime Video. Stay tuned!
    A big thanks for your time.
    WANT TO KNOW MORE?
    beloFX: Dedicated page about Wolfs on the beloFX website.
    © Vincent Frei - The Art of VFX - 2024
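    Evans describes generating ground snow procedurally from lidar scans of the set, with a global snow-level control tied to the story's timeline. A minimal toy sketch of that idea, purely illustrative and not beloFX's actual pipeline: accumulate snow on upward-facing surfaces and shed it from steep ones, with the names, threshold angle, and depth values all invented for this example.

```python
import math

# Assumed threshold: surfaces steeper than ~45 degrees shed their snow.
SHED_ANGLE = math.cos(math.radians(45.0))

def snow_depth(normal, snow_level, max_depth=0.3):
    """Toy snow-accumulation model over scanned geometry.

    normal     -- unit surface normal (x, y, z), y-up
    snow_level -- 0..1 global dial (e.g. tied to the story timeline)
    max_depth  -- depth in metres on a flat surface at full snow_level
    """
    up_facing = max(0.0, normal[1])  # cosine of the angle to straight up
    # Fade accumulation to zero as the surface approaches the shed angle.
    shed = max(0.0, (up_facing - SHED_ANGLE) / (1.0 - SHED_ANGLE))
    return max_depth * snow_level * shed

# A flat road gets the full blanket; a vertical wall gets none.
print(snow_depth((0.0, 1.0, 0.0), snow_level=1.0))  # 0.3
print(snow_depth((1.0, 0.0, 0.0), snow_level=1.0))  # 0.0
```

    Evaluating this per point on the lidar-derived mesh would yield the kind of controllable snow blanket the interview describes, with the real pipeline presumably layering FX simulation and manual clean-up on top.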
  • The Witcher IV: Cinematic Reveal Trailer by Platige Image
    www.artofvfx.com
    In the shadows of destiny, Ciri rises once more. Witness her legendary return in The Witcher IV's breathtaking CGI trailer, forged by the masters at Platige Image. Fate still weaves its threads, but this time the hunt is hers!
    WANT TO KNOW MORE?
    Platige Image: Dedicated page about The Witcher IV on the Platige Image website.
    © Vincent Frei - The Art of VFX - 2024
  • Siggraph Asia 2024
    lesterbanks.com
    Welcome to our coverage of SIGGRAPH Asia 2024, where the brightest minds in computer graphics gather to share their latest discoveries and innovations. The must-attend event brings together experts in animation and technology to discuss ideas, the latest trends, and innovations. In this article, we'll explore the key takeaways from the conference, featuring interviews with six key players who discuss topics from computer vision to AI to the state of CG in Japan and worldwide. We'll also delve into the beating heart of the Tokyo industry scene, where the conference's after-hours events and networking opportunities come alive.
    Trends and Innovations in Computer Graphics
    Prof. Takeo Igarashi - Conference Chair
    Dylan Sisson - Featured Exhibitor
    Ken Anjyo - Featured Sessions Chair
    Prof. Ariel Shamir - Technical Papers Chair
    Prof. Ruizhen Hu - Technical Communications Chair
    Prof. Yuki Ban - Emerging Technologies Chair
    What are your thoughts on this year's conference, and what are you most excited about?
    Takeo Igarashi: As everybody knows, generative AI changes everything, and is rapidly moving. At SIGGRAPH Asia 24, I see lots of interesting new results. That's an important thing for me, in terms of content.
    Ken Anjyo: From my view, the charming point of this conference is collaboration. There is a new stream of Japanese content such as Shogun or Ultraman: Rising. Ultraman's concept, for example, is from Japan, but Ultraman: Rising is quite different from the original. It's a new interpretation. And Godzilla, on the other hand, was made by only one Japanese company, Shirogumi. I invited the CG Director to give a featured talk. Shirogumi is a very well-established company that started in models and miniatures-making.
    Dylan Sisson: My first SIGGRAPH was 1997. It's not as big as it used to be, but it's interesting to come to SIGGRAPH because you can read the temperature of where the entire industry is going as a whole.
    For this year, a lot of what we do is focused on what we're developing for RenderMan, talking with vendors, partners, and collaborators, and figuring out where the right direction is to go for the next five years. It's hard to figure out what anything's going to be like in five years right now, but if we can figure it out, well, this seems like the right path to get there.
    Ariel Shamir: I can start by saying that SIGGRAPH has always excited me, since I began at Reichman University. What I like about the environment is the interdisciplinary nature. They have, of course, science, which is where I fit in. But they also have a lot of art. And this mixture of people is what draws me to these types of conferences. If you ask me what is special about this year, what is really nice is the way that AI is entering the field. And there's an adjacent field called vision, where it basically took over; everyone does AI now in vision. You can also see more classic works that don't involve any AI, which is also nice, again, because it's more diverse, both outside and inside the technical papers.
    Ruizhen Hu: I think it's definitely the trend of combining computer graphics with robotics. There are several sessions like that; traditional professors in computer graphics who mainly work on animation are now switching to robotics and trying to apply those computer graphics technologies to help boost the intelligence of robots. In China, researchers at SIGGRAPH sometimes cannot get big funds, because people think it's all games and video, not very important for society. Now we are starting to use the technology to help improve machine intelligence, to really help those products in industry and also improve people's lifestyles in a more intelligent way.
    So currently I think it's a good trend, both in China and also in this international research community.
    Yuki Ban: This year, one of the standout characteristics is the remarkable diversity of topics being covered. While VR has consistently been a dominant theme at e-Tech each year, this time the scope has expanded significantly beyond just VR to include robotics, AR, AI, haptics, development using HMDs, and a wide range of applications, as well as foundational developments and technologies. This breadth of coverage is a defining feature of this year's event.
    Among these, what stands out as particularly new and emerging is the inclusion of experiences incorporating generative AI, which has recently become a very hot topic. This seems to be a key trend at the moment.
    Additionally, when it comes to VR, technologies related to avatars, already a prominent focus in previous years, are again represented by a diverse array of innovative offerings, which I feel is another hallmark of this year's event.
    Challenges and Advice from Industry Experts
    In your journeys from researcher to professor to prominent chairs of SIGGRAPH, what memorable challenges have you faced? What advice would you give to young people getting into the field?
    Ruizhen Hu: I think the biggest challenge I faced was maybe when I decided to combine computer graphics with robotics. I'm more into geometric modeling, but I knew nothing about robotics at the time. One of my undergraduates and I spent a lot of time together searching for problems that fit our expertise. Eventually we found it: transport and packing, a very complicated optimization problem, so I knew how to utilize my expertise to help. Researchers in robotics used heuristics a lot; we started looking at those papers in 2018, and we published our first paper on the topic at SIGGRAPH Asia in 2020. That was a very important experience for me.
And another big challenge: sometimes you're just not very confident about what you're doing, because it's an unknown area and you don't have any guidance. You need to explore. The advice I give is that it's always good to step out of your comfort zone, maintain your persistence, and really try to find something that can use your expertise. Find something you are really good at and try to solve it in an elegant way.

Takeo Igarashi: Before becoming a professor, it was easy, just doing whatever I wanted. I was lucky. But after becoming a professor, my role changed, right? Now I'm more like a manager. And managing is different. Maybe not the right answer for you, but that's my experience.

Yuki Ban: What I often tell my students is to value the small discoveries and insights they encounter in their daily lives. For example, with the concept of perceiving strong stimuli [see below], I began by pondering the afterimages or residual effects of bright light, how those lingering impressions affect our perception. I encourage students to constantly think about how the various stimuli they encounter in everyday life can be incorporated into their ideas. As for my research, I focus on using everyday discoveries to make small but meaningful improvements in the world. I strongly believe in the importance of cherishing and utilizing these daily moments of awareness.

KEN ANJYO

Your research spans visual computing and the application of differential geometry in graphics. Can you explain this in simple terms?

KA: It is the merging of 3D models naturally with the hand-drawn. Everything can essentially be drawn by hand, but sometimes we add 3D; big monsters, for example, are very hard to draw by hand. The same goes for shading or rendering a cartoon look: we often need much more sophisticated tools to control light and shade so that the result looks almost hand-drawn. So we develop new techniques.

I was reading that you enjoy teaching mathematics to young learners, including your own grandchildren.
How do you approach making complex concepts engaging to young minds?

KA: I show the effect, the end result of the ideas, rather than explaining the exact meaning of differential geometry. I try to express that we can deal with all kinds of things with mathematics, with equations you will learn in the near future, that kind of way.

As the Featured Sessions Chair for SIGGRAPH Asia 24, what are you most excited to showcase?

KA: SIGGRAPH is always high speed, bringing new techniques into our field. And this year we have many AI- or deep-learning-based ideas, as you'll see in the Featured Sessions. They show how to integrate generative AI into production work, such as in the session by Director Shinji Aramaki of SOLA DIGITAL ARTS. I'm also looking forward to NVIDIA's presentation on digital human techniques. In my own 1995 SIGGRAPH paper about character walking, we extracted the essence of walk briskness and controlled its degree by parameter. At the time, it was built somewhat heuristically. Now we use AI.

I was reading that you were a drummer as well. Do you still play?

KA: No, no, no, no. My last performance was maybe in 2018. I chaired that conference, but at a midnight show the president of Polygon Pictures, Shuzo John Shiota, who is basically a rock singer, invited me to join his band, and I played that one. I hope I can find the next chance to play.

What was your dream project?

KA: In Pixar's old film Luxo Jr., they needed two or three years to make five minutes of animation. At that time there was no distinction between artist and engineer; everyone worked together to reach the goal. I'm old enough to remember such things. Anyway, before joining OLM Digital I worked with a Japanese broadcasting company making titles for TV. It was around that time I visited Studio Ghibli; they were making Princess Mononoke, and they asked me to automate the background animation. 3D camera moves were hard to create by hand, very hard, almost impossible.
So we built a unique interface (GUI) to address this problem. It was presented at SIGGRAPH 97. We were given a still image, gave it a rough 3D structure, and split apart the foreground and background areas. At that time we used Photoshop to fill the missing sections of the background, but now that is done by AI. So new things have improved our original idea. And so, back to the original question: I'm not an engineer, I'm an artist. But, of course, heavily from the engineering side. It's always fun to make such things.

Ken Anjyo is a renowned figure in the fields of computer graphics and animation, with a career spanning decades in academia and industry. He has been active in SIGGRAPH programs for years as an author, organizer and committee member. He chaired SIGGRAPH Asia 2018 and co-founded the ACM Digital Production Symposium (DigiPro) in 2012.

DYLAN SISSON

RenderMan has been a cornerstone of visual effects for decades. How has it influenced the storytelling capabilities of filmmakers?

DS: Early on it was really interesting to see the relationship between what was possible and how far the filmmakers could push. If you look at Toy Story, there's a reason why we made a movie about plastic toys: everything looked like plastic at the time. Jurassic Park: it was easier to make a movie about scaly creatures than furry creatures. Those films worked within the limitations of the technology of the time in very interesting ways, but maximized the new opportunities we had with the new micropolygon renderer architecture, you know, the REYES algorithm. We were able to render massive amounts of geometry we couldn't do before, with motion blur, anti-aliased and composited into live action. And at the time, the first time somebody rendered a dinosaur, or the first time somebody rendered a plastic toy, was a historic moment.
Now we've gotten to a place where renderers can render almost anything you want physically; they can simulate light transport. And the challenge really is providing direct ability for the creatives to do new things. You see at Pixar a trend towards, well, let's make characters not out of polygonal meshes, but out of volumes and things that we wouldn't have been able to render ten years ago. So all that kind of stuff is new territory for us. It's maybe historic and groundbreaking in some ways, but our focus is still on that frontier of what's next. What's next as far as how we can tell a story? If you look at Pete Sohn's Elemental, his story about characters that are fire and liquid, that's something we couldn't really have rendered ten years ago, or even five years ago.

Tell me about the REYES algorithm.

DS: It's named after Point Reyes, which is in California, and that's where Loren Carpenter came up with the concept of micropolygons and bucket rendering.

As he was standing in the water, right?

DS: Yeah. And for that moment he was the only person on earth who knew this! To make digital VFX feasible, the original architects of RenderMan invented all sorts of new technologies, including the alpha channel.

With the rise of real-time rendering engines like Unreal, what role do you believe RenderMan will play in the future of visual effects?

DS: I think RenderMan has always focused on that race towards the top: the most complex images with the most creative flexibility. In order to achieve real time right now, we still have to do a lot of pre-compute. So there's a lot of caching and a lot of compromises with the geometry, and it's getting better and better. But at the same time, it's not full-on path tracing. So there are some limitations you have when you're doing real-time rendering that you don't have when you're doing offline rendering.
So to get that final 10% or 5%, you need a full-blown path tracer. And it turns out people don't necessarily need to watch movies in real time. It's great if the artists and the people that are making it can work in interactive time. So we're really focused on providing fast feedback for artists, but real-time rendering is not our primary goal.

I think what's interesting in the very near future is giving control over light, shadow, volumes and lenses to the animator.

Dylan's Dragon Opener limited-edition art piece combines functionality with whimsy, featuring a dragon's silhouette with legs designed to resemble a prancing motion.

DS: And layout. Sometimes there are certain shots that are composed in a reflection. That type of shot is much easier to work with if layout, animation, and lighting can all see the final ray-traced reflection. In The Incredibles, there's a scene where Mr. Incredible is in bed and he has his phone, and that's the primary illumination of the scene. For the animation to work correctly, the animator had to have the light illuminating him in the right way to make it look good. So having that kind of control, seeing the final pixels throughout the process, is something we're working on. That's one of our primary goals. We want to bring it all together so the lighters can work with the animators on shots that maybe we couldn't have done before, because they would have been too hard to set up.

Anything that puts more control into the hands of the animators and the artists.

DS: Yeah, that's maybe more of a focus for us now than actually adding new features to the renderer. Because we've got so many features now, we can do a lot of different stuff and make a lot of different images. If somebody wants to render something that looks X, Y, and Z, we can do that. It's just about delivering the best experience.
So the creatives have the biggest opportunity to retain the creative flow while they're working.

That's what's exciting when it comes to technology driving the art forward: no longer having to worry about simulating hair, water...

DS: When the animator can see the simulation of hair, with rain, or snow, and all these things together, all of a sudden you can compose shots and do things that you couldn't do before. Being able to bring all the stakeholders into the room, have everyone contribute, and have the director there saying, oh, can you try this thing or that thing, all of a sudden you can get someplace better, faster than you could have gotten before.

Do you have a personal memory of a time when the teapot appeared unexpectedly in a project, or as an Easter egg?

DS: Yeah, for me, when I was starting out, Martin Newell's Utah Teapot was in every software package, so I could just bring in the teapot and light and shade it. It was one of the first models I would practice shading on: creating a plastic shader or a metal shader, I would do it on the teapot, just because it has concave and convex surfaces and it was just perfect. But there was one day when I found out that, just as there are geometric primitives in RenderMan, like an RI point or an RI sphere, we also had an RI teapot: you could just call an RI teapot and it would bring up Martin Newell's teapot, the same RIB. I think it was after I'd worked with RenderMan for maybe five years that I learned about this hidden primitive inside RenderMan, and I thought it was really, really cool.

That's cool.

DS: Yeah.

That it was on the same level as Plato's Platonic forms.

Dylan has been a key figure at Pixar Animation Studios since 1999, significantly contributing to the evolution of RenderMan. As an active artist, he explores emerging media in VR, AI, 3D printing, and stylized rendering. His experience in 3D spans over two decades, with a background in traditional art and illustration.
Additionally, Dylan is the creator and designer of Pixar's RenderMan Walking Teapot.

ARIEL SHAMIR

It must be great to get access to what everyone is currently working on.

AS: The problem now is that SIGGRAPH is gaining popularity, similar to other venues and conferences in the area. We had a record number of submissions and also a record number of papers. That means the fast-forward session keeps getting longer, and it's sometimes maybe a bit too long. But still, it's very nice and a lot of fun to see all the content of the conference in basically one long session. You can then say, oh, I think this is something I would like to see, or that's the paper I would like to read.

Your talk, Prompt-Aligned Personalization of Text-to-Image Models, is about creating graphics more easily and quickly. Explain it to someone who's just getting into art and video creation.

AS: My paper is part of a large group of papers that create tools to make generative AI more accessible to everyone. Everyone has probably heard about DALL-E or other models where you write text and then create an image based on this textual prompt. This is a huge trend now in both graphics and vision, and there are many papers that try to deal with various aspects of these models, because obviously they're not perfect. Our paper is really about trying to solve two things. The first is that if you want to create an image of, let's say, a dog on the beach, you'll usually get what you want. But if you want to be more specific, like a dog wearing a red hat on the beach with a cat running around with a ball, you have a much more complex prompt, and the AI models that create those images have difficulty following all of these instructions together. On the other hand, there is a trend called personalization, which basically says you want to create an image of specific things, let's say your pet, or even yourself or your friends.
The problem is that solving both of these problems together is difficult. My paper was about how to create images with very complex prompts that are also personalized to specific content.

You've been to Japan many times; in 2022, you were a visiting professor at The University of Tokyo. Has your experience in Japan shaped your vision of computer science?

AS: I have to say that I really appreciate Japan and I really like Japanese culture, and I would say even more Japanese aesthetics. I'm not sure I can answer whether Japan and Japanese culture influenced my computer science side, but many of my works involve things that are more about aesthetics, about art, about abstraction. So I can certainly say that, even way back in my childhood, I was drawn to Japanese aesthetics and Japanese art. Visiting The University of Tokyo was a really good chance for me to walk around, see all the museums, and live and experience Japan as it is.

I was visiting the lab of Professor Takeo Igarashi, who was my host, and who is also, incidentally, the chair of this conference. We both have a lot of common interests in our field and had wanted to collaborate for many years. He has an amazing lab there with many bright students. That was really a good experience. We built a relationship, and we now have an ongoing collaboration between my students, his students, and our labs.

You hold 17 patents in areas like driving performance, classification, and 3D modeling. What inspires you to branch out into all of these different areas?

AS: I am really curious. This is one of the things that has drawn me to SIGGRAPH in general and SIGGRAPH Asia specifically, and I think it is also part of why I'm a professor at a university. Because being a professor, what you do most of the time, or at least what you try to do most of the time, is ask questions and then get answers to those questions.
And I think that most professors you see at a university have this level of curiosity, maybe about specific areas, maybe about broad areas. I'm more the second type: I have an interest in almost everything. Of course, I'm not an expert in everything, so I try to focus on things that are more computer-based, or computer science. But I have been involved in industry as a consultant for many years at many companies, both large companies like Disney and Google and small startup companies in Israel. In Israel, we call ourselves the startup nation. When I agree to consult for a company, first of all, it has to interest me. And second, I have to feel that I bring some value to the company; otherwise it's useless. That's why you can find various types of patents: many times, when you consult for companies, they want to patent the technology.

Story Albums combines advanced computational techniques with storytelling. Tell us a bit about it.

AS: Story Albums was a work I did at Disney. I've always been interested in stories and storytelling in general, and at Disney, for sure, they can tell you that in their movies, story is king; everything else comes after. I totally believe in that. One of my research directions has always been to try to tell stories, personal stories, using your own materials, your own images, your own experiences. Story Albums was trying to do exactly that. The idea was that you take your day at the beach, or you go with your kids to Disneyland, and you take many, many pictures. Sometimes people look at those pictures, but usually you have so many that you look at them just once, even back then. Today, I think there are statistics saying that most images that are captured are watched zero times. So one of the motivations was really to bring them up and make you look at those images in a different way.
What we were thinking was to create a fictional story that sort of happened while you were traveling, or while you were in Disneyland, something like that. That way you'd be able to get some memorabilia, either a movie or a book, that tells a story which, okay, for us adults, we know didn't really happen. But for your kids, you can basically tell a story where your kid was looking for Pluto in Disneyland, talking to many characters and searching for that character, and so on. And that could then become your own specific story. Now, the method we were using back then was, I would even say, primitive compared to what we have today, because it was before the era of large language models and generative AI. We solved it with some graph-based algorithms and some heuristics. Today there are probably sites on the Internet where you can just click a button, upload an image of your child, and get a whole storybook of your child doing something.

Professor Ariel "Arik" Shamir is the former Dean of the Efi Arazi School of Computer Science at Reichman University in Israel. He has more than 150 publications in journals and holds 17 patents. He has broad commercial experience working with, and consulting for, high-tech companies and institutes in the US and Israel. His main research field lies at the intersection of graphics and vision with machine learning.

RUIZHEN HU

Let's talk about your new robot planning method, PC Planner.

RH: Yeah, so for PC Planner, the key idea is to solve an equation. For a path planner, we have a start point and an end point, and the goal is to find the path with the shortest travel time from start to end. The key idea is that if we have a time field which records the shortest time from the start to each point, then we can just follow the gradient direction of that field, which guarantees the shortest path.
That means we need to define what we call the time field. But how do we get it? This is the key of the Eikonal equation. The Eikonal equation builds a connection between this time field and the speed of the robot. Conceptually: here is the goal point, here is the start point. If you have a high speed, you can get to the point faster, so the travel time is lower, right? This is common sense. But the Eikonal equation goes deeper. It says that if the speed is faster, then a change in that speed will have less effect on the shortest time to the goal point. So once we define a speed field, where a speed is defined for every configuration, we can solve this equation to compute the time field. From the time field we can compute a vector field and follow the gradient direction until we reach the goal. The Eikonal equation says that if we follow this direction, the path is guaranteed to be the shortest.

What is the Eikonal equation, and what is the curse of dimensionality?

RH: The traditional method is sample-based: sample a lot of paths, then pick the optimal one. If we do path planning in only two dimensions, x and y, it's simple. But for a robot arm there is a large configuration space, for example 20 dimensions, and then it's 20 times 20 times 20, so it blows up. This is what we call the curse of dimensionality. What we do now is not sample paths and then pick the best one. We compute the gradient direction from the time field, follow that direction, and get the next point. This way, we don't need to explore too much, which makes the planning much more efficient. That goes back to how to get the time field: what we do in this paper is find a way to solve this equation. Once we define a speed field, we can compute the time field accordingly.
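Hu's explanation can be made concrete. In continuous form the Eikonal equation reads |∇T(x)| = 1/v(x): where the speed field v is high, the travel-time field T grows slowly. The sketch below is only an illustration of that idea, not PC Planner itself (which works in high-dimensional configuration spaces and avoids this kind of grid): it solves a discrete 2D version of the equation with a Dijkstra-style sweep, then recovers the shortest-time path by walking the time field downhill. The grid, the 4-neighbour stencil, and the function names are all invented for this example.

```python
import heapq

def time_field(speed, start):
    """Approximate the Eikonal time field T on a 2D grid with a
    Dijkstra-style sweep. speed[i][j] is the robot's speed at cell
    (i, j); cells with speed 0 act as obstacles. T[i][j] approximates
    the shortest travel time from `start` to (i, j)."""
    n, m = len(speed), len(speed[0])
    T = [[float("inf")] * m for _ in range(n)]
    T[start[0]][start[1]] = 0.0
    pq = [(0.0, start)]
    while pq:
        t, (i, j) = heapq.heappop(pq)
        if t > T[i][j]:
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < m and speed[a][b] > 0:
                # step cost = distance (1 cell) / average local speed
                nt = t + 2.0 / (speed[i][j] + speed[a][b])
                if nt < T[a][b]:
                    T[a][b] = nt
                    heapq.heappush(pq, (nt, (a, b)))
    return T

def shortest_path(T, goal):
    """Follow the discrete negative gradient of T from the goal back
    to the start; the Eikonal property guarantees this descent always
    finds a strictly smaller neighbour until T reaches 0."""
    path = [goal]
    i, j = goal
    while T[i][j] > 0:
        i, j = min(
            ((i + di, j + dj)
             for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
             if 0 <= i + di < len(T) and 0 <= j + dj < len(T[0])),
            key=lambda c: T[c[0]][c[1]],
        )
        path.append((i, j))
    return path[::-1]
```

With a uniform speed field this reduces to ordinary grid shortest paths; making the speed drop toward zero near obstacles is how, as Hu notes, the obstacle geometry gets "encoded in the speed field" rather than handled by explicit collision checks.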
In this way we do not need to compute the shape anymore, because it is already encoded in the speed field.

If the environment itself is not static, how do you account for that?

RH: Yeah, that's a very good question. Currently, we assume the environment is static.

Your work on LLMs and the enhancement of household rearrangement is fascinating. How do you picture this evolving?

RH: The first paper, PC Planner, is focused more on low-level motion planning: you already know the start point and the goal point, and you want to find the shortest path there in an efficient way. In the other paper, our key idea is high-level planning, high-level decisions. The goal is to rearrange a room: what actions do you need to take, which object do you need to move from where to where? That is a high-level decision. What I envision is that eventually it should all be integrated together: after the high-level decision, we need to make sure it is executable. Also, the current high-level planning paper focuses on one very specific task, rearranging the room. We want to be more open: you could give me all kinds of tasks, and we can learn to divide those tasks into steps, make sure we can really execute all of them, and really help accomplish different kinds of goals.

Using LLMs to determine why an object should or should not be where it is really sparks the imagination regarding autonomous robots roaming around the home.

RH: Yeah, yeah. This is also one of our key ideas. For the rearrangement task, different people like different kinds of arrangement. So we build a scene graph for the current scene, analyze the context and the object arrangement in it, and try to infer the personal preference for where those objects should go.
And then, based on this, we determine which object is displaced and where the right place is to move it to.

How might a robot navigate Tokyo, known for bustling train stations and massive crowds?

RH: I think there are many directions we could go, like whether we make use of transportation, right? Because robots can be very intelligent: they can go into the station, take the bus, go anywhere else. If we take this into consideration, it becomes very interesting, because it also involves scheduling. If you want to accomplish a task, it is no longer a static environment, and the path planning is also dynamic. The scheduling then becomes very important, like which time to take which line. This can also help us make decisions when we want to go here and there. In our current problem settings, we assume the environment is basically static, because if it changes dynamically, everything changes: we need to recompute and reanalyze. That makes things more complicated. Whether we focus on path planning or high-level planning, as the environment becomes dynamic, the problem becomes very challenging. I know there is a recent trend to deal with this kind of problem setting: an unknown environment with dynamic change. How to deal with this? How to do local planning based on changes in what you see? How to avoid those moving objects? This is what I think is quite important, especially in a place like Tokyo, very crowded and with a lot of moving people. If you want to do autonomous navigation, you have to be able to capture those dynamics, give quick feedback, and adjust your strategy.

Ruizhen Hu is a distinguished professor at Shenzhen University, China. Her research interests are in shape analysis, geometry processing, and robotics.
Her recent focus is on applying machine learning to advance the understanding and generative modeling of visual data.

TAKEO IGARASHI

How do you think the creative world is going to change due to generative AI?

TI: I'm sure it will change: a very different way of making content, and a very different way of consuming content. How people perceive the results is very different. There are many possible directions to go, but for me, the most interesting thing is that the entire landscape changes. Previously, experts spent hours to generate something, but now it's very easy for everybody. How we perceive content, how we manage it, how we enjoy it: I think the value system is changing. That's the most interesting part for me.

Do you think that being here at SIGGRAPH gives you a bird's-eye view of the current state of the art?

TI: Definitely, in terms of the computer graphics area. But AI is now much more diverse. Computer vision people do lots of interesting things, and AI conferences are doing interesting things in industry. In a sense, this conference gives a very good bird's-eye view, but I'm also aware that it's not everything.

Your work with TEDDY, the sketch-based modeler...

TI: Yes, that's right. I'm ashamed of it.

Well, many say that it revolutionized how people interact with 3D design. How do you see TEDDY evolving in the future?

TI: When I worked on TEDDY, 3D content generation was very, very difficult; you had a very complicated user interface. But now you can turn prompts into 3D, or 2D images into 3D. So in a sense, simple, easy content creation, either 3D or 2D, is kind of done. As researchers, we always have to see beyond that, right? That's my observation. I'm not directly thinking about the future of TEDDY right now; we are looking in different directions.

How do you picture the future of human-computer interaction?

TI: From my point of view, there are two different directions.
One direction is the disappearing user interface, interaction that empowers people. Augmented human is a big thing in Japan: the computer augments the human as if it were part of the body or brain, so the user just does more, like a superhuman, and the computer becomes invisible. That's one direction. The other direction is that the computer stays outside of us: a tool, or an agent, an additional person. Advocates of each approach say theirs is the best way, but I think it's interesting to see which one wins, and I think both will coexist in the future.

If you could design a new interface that lets people interact with the world in a different way, what would that look like?

TI: One possible answer people might give is a brain interface, going directly to something, but I'm kind of skeptical. I don't think the fundamental change happens at that kind of superficial level. Text is good for text. Sketching is good for sketching. Direct manipulation is good for some things. Maybe the brain is good for something else. So it's more important to think about how these existing tools change over time. I don't have a right answer.

If you could bring one piece of technology, from your research or otherwise, to life in the real world for everyone to use tomorrow, what would that be?

TI: So regardless of what my group is actually doing, you're talking about pure imagination? My research interest has always been content creation, so something I want to develop is technology that empowers people to create content. And today, as I said, prompt-to-3D, sketch-to-3D, and image-to-3D are already there. But you have to come up with the prompt, or you have to come up with the sketch. So we are moving our research interest to the earlier stage, to help people actually conceptualize what they want. Helping people with the early stage of creation is something I want to do.

Takeo Igarashi is a professor in the Computer Science Department at The University of Tokyo.
He has made significant contributions to human-computer interaction and design, earning accolades such as the SIGGRAPH Significant New Researcher Award and induction into the CHI Academy.

YUKI BAN

You're known for your research in illusions. Tell us about the inception of your work.

YB: My specialty is cross-modal displays: combinations of the five senses, for example vision and haptics, or vision and hearing, combined into a new perception. I usually use Japanese kakigori (shaved ice) as an example of a cross-modal effect. With different colors of kakigori, the taste may be all the same, but the experience is different: the strawberry, the melon, that difference depends on the smell, and the visual color changes our perception of the taste. It's a cross-modal effect, a famous illusion of the five senses. Using cross-modal effects, we can display various sensations with very simple hardware. But this time, I used a different illusion: displaying different images to the left eye and the right eye. Did you try it?

Yes.

YB: It's a famous illusion in the field of psychology, but one that had not been used in engineering for displays. So we are trying to use this traditional illusion as a display technique.

Your display uses three different simulated environments: a dragon scene, fireworks, and a horror game. What do you think would be another exciting use of this technology?

"I have a deep sense of gratitude toward SIGGRAPH Asia; that's why I accepted the role of Emerging Technologies Chair. I'm honored to be serving in this capacity."

YB: This technology is primarily being used in the entertainment field; there are developments aimed at creating new applications such as magic effects, or elements for horror experiences that evoke a sense of the extraordinary. In the entertainment domain, the technology itself is quite simple.
However, identifying the most effective combinations is something that still needs to be clarified. Currently, guidelines are being developed to address this, and once they are in place, the technology can be readily applied to various areas like VR, AR, or glasses-free 3D panels. I believe it has the potential to make a significant impact in the entertainment industry.

Tell us about your research for next year.

YB: We are conducting various studies using illusions, and one of our new approaches involves the thermal grill technique. This method uses illusions to make individuals perceive extremely strong stimuli. For example, one study we presented last year demonstrated how to make users perceive bright light in VR. In the real world, when you look at an intense light and then look away, you experience an aftereffect or lingering image. By replicating this phenomenon in VR, we create the illusion that users have been exposed to a bright light, altering their perception of brightness within the virtual environment. Our research focuses on replicating bodily responses to intense stimuli in the real world and feeding those back into VR. This triggers illusions that make people feel as though they are experiencing strong stimuli. We have been working on several related projects and aim to present one of them next year.

Earlier, I mentioned the goal of making the world better through incremental improvements. As part of addressing this, we developed a cushion that mimics the abdominal movements associated with deep breathing. When users hold the device, over time they find their breathing synchronizes with the rhythm of the cushion. Deep, slow breathing is essential for relaxation, but maintaining this rhythm can be challenging without prior training, such as in yoga or meditation. This cushion enables users to unconsciously adapt their breathing pace simply by holding it. This approach leverages a kind of bodily illusion.
We collaborated with a company to bring the device to market in 2018, and it became my first research project to reach full production. The product, now called Fufuly, officially launched last September.

YB: Personally, I've found it particularly useful in the context of the growing number of online meetings and interviews. For instance, I keep the cushion nearby, much like having a calming indoor plant in view. Holding it before an online interview helps me settle my nerves and feel more at ease, which has been incredibly beneficial.

I can picture this at a larger scale, like a breathing Totoro bed.

YB: I'd love to create something like a Totoro-inspired bed someday.

Yuki Ban is a project lecturer with the Department of Frontier Sciences at The University of Tokyo. His current research interests include cross-modal interfaces and biological measurement.

SIGGRAPH After Hours: A Glimpse Behind the Scenes

While the sessions and keynotes offer a glimpse into the future of computer graphics, like the advancements in generative AI that Takeo Igarashi highlighted or the innovative applications of robotics discussed by Ruizhen Hu, the real magic often happens after hours. This year, the bustling evening events brought together a diverse crowd, mirroring the interdisciplinary spirit often mentioned by Ariel Shamir and fostering a unique melting pot of ideas and camaraderie. And perhaps, as Dylan Sisson suggested, these chance encounters could even spark the next big thing in the ever-evolving world of computer graphics.

At the first reception party, the lines for food stretched long, especially near the coveted meat station, but the atmosphere buzzed with excitement. Speakeasy was packed to the brim with attendees. Among them, Sergey Morozov of MARZA met an interesting Canadian developer, originally from Ukraine, who leads projects for the BC Attorney General's Ministry. His impressive photography and creative ventures added a fascinating dimension to the evening.

For Kasagi Natsumi, also from MARZA, it was her first time navigating the party circuit, where she focused on networking. At Speakeasy, she connected with CGWire's Frank Rousseau, whose Kitsu project-management tool, popular among studios like Blender, is making waves in the industry. Kasagi also explored the Blender booth, meeting Gaku Tada, a talented Japanese developer whose Deep Paint plugin and visual artistry have earned him credits at Digital Domain, Weta Digital, and ILM.

Sergey and Kasagi had a serendipitous encounter with Masha Ellsworth, a Lead TD from Pixar and another Ukrainian expat, whose seminar on animation careers provided valuable insights. She shared that hiring at studios like Pixar often hinges on project timing rather than individual merit, offering hope to those waiting on replies to applications. The TD and her daughter are fans of Japanese animation, particularly Chi's Sweet Home, and took a liking to MARZA's Samurai Frog Golf.

While the sessions and keynotes represent the bleeding edge of computer graphics, the hidden treasures lie in the connections forged, the new friends made, and the stories exchanged after hours.

SIGGRAPH 2025 takes place in Vancouver, and SIGGRAPH Asia 2025 in Hong Kong.

Interviews and photos by Brent Forrest, Chris Huang, Sergey Morozov and Natsumi Kasagi. The authors and LesterBanks.com have no commercial affiliations with any of the companies, products, or projects mentioned in this article.
  • Hazelight and EA Originals Reveal Split Fiction, an Action-Packed Co-op Buddy Adventure That Jumps Between Sci-Fi and Fantasy Worlds, Coming March 6
    news.ea.com
    December 12, 2024

    The co-op masterminds behind 2021 Game of the Year winner It Takes Two are back with another genre-defying adventure featuring ever-changing gameplay. Watch the Reveal Trailer HERE.

    REDWOOD CITY, Calif.--(BUSINESS WIRE)-- Today, Electronic Arts Inc. (NASDAQ: EA) and Hazelight Studios revealed Split Fiction, an action-adventure game that pushes the boundaries of the co-op genre further than ever. From Hazelight Studios, the imaginative minds behind the 2021 Game of the Year winner It Takes Two, which has sold more than 20 million units worldwide, comes a unique split-screen co-op action-adventure in which players jump between sci-fi and fantasy worlds. With a wide range of ever-changing abilities, players take on the roles of Mio and Zoe as they work together to take back their creative legacies and discover the power of an unexpected friendship.

    Split Fiction (Graphic: Business Wire)

    "At Hazelight, we've been building co-op games for 10 years, and with every game we push beyond what players expect from action-adventure co-op games. I'm so proud of what we have built with Split Fiction. Let me tell you guys, it's going to blow your mind," said Josef Fares, Founder of Hazelight Studios. "Because Mio and Zoe jump back and forth between sci-fi and fantasy worlds, we've been able to do some really wild things with gameplay and storytelling. This is definitely our most epic co-op adventure yet."

    In Split Fiction, players will discover a variety of sci-fi and fantasy mechanics and abilities. Escape a sun that's going supernova, challenge a monkey to a dance battle, try out some cool hoverboard tricks, fight an evil kitty, and ride everything from gravity bikes to a sandshark. With worlds that are entirely different from each other, surprising challenges await players at every turn.
    Mio and Zoe are contrasting writers, one of sci-fi and the other of fantasy, who become trapped in their own stories after being hooked up to a machine designed to steal their creative ideas. Jumping back and forth between worlds, they'll have to work together and master a variety of abilities in order to break free with their memories intact.

    "Friendships and great memories are made through playing amazing co-op games together, and no one does it better than Hazelight," said Jeff Gamon, General Manager of EA Partners. "We're excited to continue our long-term partnership with Josef and his talented team to bring another innovative, collaborative adventure to life; one that continues to push the boundaries and redefine what players can experience together on and off the screen."

    A Hazelight Studios staple feature, Friends Pass, which allows one player who owns the game to invite a friend to play for free, is back and pushed even further, with crossplay enabled across PlayStation, Xbox and PC via Steam.

    Split Fiction releases on March 6, 2025 on PlayStation 5, Xbox Series X|S and PC via Steam, the Epic Games Store and the EA app for $49.99. Watch the Reveal Trailer here. For more information and to stay up to date on Split Fiction, visit: https://www.ea.com/games/split-fiction/split-fiction. PRESS ASSETS ARE AVAILABLE AT EAPressPortal.com

    About Hazelight Studios
    Creators of the 2021 Game of the Year, It Takes Two, Hazelight is a multiple award-winning independent game development studio based in Stockholm, Sweden. It was founded in 2014 by Josef Fares, film director and creator of the critically acclaimed game Brothers: A Tale of Two Sons. Hazelight is committed to pushing the creative boundaries of what is possible in games. In 2018, Hazelight released A Way Out, the first-ever co-op-only third-person action-adventure, as part of the EA Originals program.
    About EA Originals
    EA Originals celebrates those who dare to explore. These studios forge new ways to play by bringing together developers with bold visions. Here, these developers use their artistic freedom to reach players who will treasure the new experiences they've created.

    Cristian Delgado, Sr. PR Manager, crdelgado@ea.com
    Source: Electronic Arts Inc.