• Tired of your camera overheating and shutting down at the worst moments? Say goodbye to those frustrating interruptions with the Ulanzi Cooler!

    In my latest video, I put this upgraded cooling fan to the test on popular models like the X-S20 and A6700. Does it really work? Can it save your shoot? You won’t want to miss my honest review and practical tips!

    Join me on this journey to keep your gear cool and your creativity flowing. Your support means the world, and it helps me keep creating valuable content like this!

    Check out the video here:
    https://www.youtube.com/watch?v=js6x52tp6Oo

    #CameraGear #Ulanzi #Filmmaking #Photography #CoolingSolutions
  • Have you ever gazed at a landscape in a game and thought, "Wow, how do they make that grass look so real?" Creating high-quality foliage is like crafting a living painting—it requires more than just placing some textures. The secret often lies in the details, like carefully layering materials and using masks to add depth.

    I’ve started experimenting with various methods, and let me tell you, the results can be stunning! It’s all about achieving that organic feel—like the grass has a story to tell. What are your favorite techniques for making virtual landscapes feel alive? Let’s share some tips!
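To make that concrete, here's the kind of height-blended mask layering I've been experimenting with, sketched outside the engine in plain Python/NumPy with made-up placeholder textures — a toy illustration of the math, not Unreal's material system:

```python
import numpy as np

def height_blend(base, layer, mask, base_h, layer_h, sharpness=8.0):
    """Blend two material layers using a paint mask plus per-layer height maps.

    Instead of a plain lerp, the layer whose height map is taller near the
    blend boundary "wins", giving the crisp, organic transitions you see in
    good terrain materials. Heights/mask are HxW; colors are HxWx3.
    """
    # Bias the height comparison by the painted mask, then sharpen the edge.
    h = (layer_h + mask) - (base_h + (1.0 - mask))
    w = np.clip(0.5 + h * sharpness, 0.0, 1.0)   # sharpened blend weight
    return base * (1.0 - w[..., None]) + layer * w[..., None]

# Placeholder 4x4 "textures" just to show the call shape.
rng = np.random.default_rng(0)
grass, dirt = rng.random((4, 4, 3)), rng.random((4, 4, 3))
grass_h, dirt_h = rng.random((4, 4)), rng.random((4, 4))
mask = rng.random((4, 4))                        # painted blend mask
result = height_blend(dirt, grass, mask, dirt_h, grass_h)
print(result.shape)  # (4, 4, 3)
```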

    #GameDev #UnrealEngine #Foliage #LandscapeArt #VisualStorytelling
  • Ever think about how magical kids' shows can be? With tools like Unreal Engine 5, the possibilities are practically endless. It's like giving a kid a giant box of crayons and saying, "Go wild!" But here’s my real question: how do we balance the creativity with making sure it’s also educational?

    While it’s tempting to simply focus on flashy animations and fun characters, I believe we should aim to create content that sparks curiosity and inspires imagination. What’s your take? Can we keep it entertaining without losing the valuable lessons?

    Let’s chat! I’d love to hear your thoughts on mixing fun with education in kids' programming.

    #KidsShows #UnrealEngine5 #Animation #CreativeContent #EducationalEntertainment
  • Ready to roll out your yoga mat? Check out this awesome short film, "Morning Yoga," where a yoga class takes an unexpected twist with some fresh techniques from a new instructor!

    Directed by Darrin Lile and featuring a fantastic cast, this film was even selected for the MKE International Short Film Festival. It’s a delightful mix of comedy and creativity that’ll make you see your morning routine in a whole new light!

    I loved the vibes and the unique take on yoga—it’ll definitely brighten your day! Give it a watch, and let me know what you think!

    Watch it here: https://www.youtube.com/watch?v=iVst7tq_3hI

    #YogaFilm #MorningVibes #ShortFilm #MKEFestival #GoodVibesOnly
  • Ever wondered what happens when artificial intelligence gets behind the camera? It's a blockbuster waiting to happen!

    In our latest video, "We started an AI Movie Studio," we dive into the revolutionary world of AI filmmaking. The AI Film Company, part of the BAI-LEY Creative Group, is here to shake up the industry with cutting-edge AI film production, mind-blowing VFX, and expert consultancy for filmmakers ready to take the plunge into the future of cinema. Who knew robots could have such good taste in movies?

Don't miss out on this wild ride into the future of filmmaking!

    Watch it here: https://www.youtube.com/watch?v=25k-vtCrixM

    #AIFilm #FutureOfCinema #Filmmaking #AI #MovieMagic
  • Mario Tennis Fever Preorders Are Live - First Mario Sports Game For Switch 2
    www.gamespot.com
Mario Tennis Fever for Nintendo Switch 2: $70 | Releases February 12, 2026

    Mario Tennis Fever is available to preorder for $70 at multiple major retailers ahead of its release next year on February 12, 2026. The latest Mario sports game is the first for Nintendo Switch 2 and will be exclusive to the new hardware. And like most Nintendo-published titles for Switch 2, Mario Tennis Fever retails for $70.

    Mario Tennis Fever was unveiled on September 12 during the Super Mario 40th Anniversary section of the latest Nintendo Direct. The livestream also revealed Super Mario Galaxy + Super Mario Galaxy 2, which releases much sooner (October 2) and is available for Switch 2 and original Switch.

    Mario Tennis Fever's physical edition is up for preorder at Amazon, Walmart, Target, Best Buy, and GameStop. No preorder bonuses have been revealed, but retailers often have exclusive bonuses for Nintendo games. We'll update this story if that happens with Mario Tennis Fever.

    The full game is stored on the Switch 2 game card. Nintendo estimates a 10GB download for the digital edition of Mario Tennis Fever, which is available to preorder on the eShop for the same price.

    Technically, Mario Tennis Fever is the second Mario sports game to release exclusively on Switch 2. Nintendo recently added Super Mario Strikers to the GameCube catalog for Switch Online + Expansion Pack members. But Mario Tennis Fever is the first all-new Mario sports game for Switch 2.

    Continue Reading at GameSpot
  • Free Dice Dreams Rolls (Updated Regularly)
    gamerant.com
From attacking opponents and collecting resources to upgrading your kingdom, Dice Dreams is a great option for everyone who enjoys board games with a competitive twist. As the name implies, the core mechanic of Dice Dreams revolves around rolling dice to determine movement and actions on the board.
  • Reaching Across the Isles: UK-LLM Brings AI to UK Languages With NVIDIA Nemotron
    blogs.nvidia.com
Celtic languages including Cornish, Irish, Scottish Gaelic and Welsh are the U.K.'s oldest living languages. To empower their speakers, the UK-LLM sovereign AI initiative is building an AI model based on NVIDIA Nemotron that can reason in both English and Welsh, a language spoken by about 850,000 people in Wales today. Enabling high-quality AI reasoning in Welsh will support the delivery of public services including healthcare, education and legal resources in the language.

    "I want every corner of the U.K. to be able to harness the benefits of artificial intelligence. By enabling AI to reason in Welsh, we're making sure that public services from healthcare to education are accessible to everyone, in the language they live by," said U.K. Prime Minister Keir Starmer. "This is a powerful example of how the latest AI technology, trained on the U.K.'s most advanced AI supercomputer in Bristol, can serve the public good, protect cultural heritage and unlock opportunity across the country."

    The UK-LLM project, established in 2023 as BritLLM and led by University College London, has previously released two models for U.K. languages. Its new model for Welsh, developed in collaboration with Wales' Bangor University and NVIDIA, aligns with Welsh government efforts to boost the active use of the language, with the goal of achieving a million speakers by 2050, an initiative known as Cymraeg 2050.

    U.K.-based AI cloud provider Nscale will make the new model available to developers through its application programming interface.

    "The aim is to ensure that Welsh remains a living, breathing language that continues to develop with the times," said Gruffudd Prys, senior terminologist and head of the Language Technologies Unit at Canolfan Bedwyr, the university's center for Welsh language services, research and technology. "AI shows enormous potential to help with second-language acquisition of Welsh as well as for enabling native speakers to improve their language skills."

    This new model could also boost the accessibility of Welsh resources by enabling public institutions and businesses operating in Wales to translate content or provide bilingual chatbot services. This can help groups including healthcare providers, educators, broadcasters, retailers and restaurant owners ensure their written content is as readily available in Welsh as it is in English.

    Beyond Welsh, the UK-LLM team aims to apply the same methodology used for its new model to develop AI models for other languages spoken across the U.K., such as Cornish, Irish, Scots and Scottish Gaelic, as well as work with international collaborators to build models for languages from Africa and Southeast Asia.

    "This collaboration with NVIDIA and Bangor University enabled us to create new training data and train a new model in record time, accelerating our goal to build the best-ever language model for Welsh," said Pontus Stenetorp, professor of natural language processing and deputy director for the Centre of Artificial Intelligence at University College London. "Our aim is to take the insights gained from the Welsh model and apply them to other minority languages, in the U.K. and across the globe."

    Tapping Sovereign AI Infrastructure for Model Development

    The new model for Welsh is based on NVIDIA Nemotron, a family of open-source models that features open weights, datasets and recipes.
The UK-LLM development team has tapped the 49-billion-parameter Llama Nemotron Super model and 9-billion-parameter Nemotron Nano model, post-training them on Welsh-language data.

    Compared with languages like English or Spanish, there's less available source data in Welsh for AI training. So to create a sufficiently large Welsh training dataset, the team used NVIDIA NIM microservices for gpt-oss-120b and DeepSeek-R1 to translate NVIDIA Nemotron open datasets with over 30 million entries from English to Welsh.

    They used a GPU cluster through the NVIDIA DGX Cloud Lepton platform and are harnessing hundreds of NVIDIA GH200 Grace Hopper Superchips on Isambard-AI, the U.K.'s most powerful supercomputer, backed by £225 million in government investment and based at the University of Bristol, to accelerate their translation and training workloads.

    This new dataset supplements existing Welsh data from the team's previous efforts.

    Capturing Linguistic Nuances With Careful Evaluation

    Bangor University, located in Gwynedd, the county with the highest percentage of Welsh speakers, is supporting the new model's development with linguistic and cultural expertise.

    Prys, from the university's Welsh-language center, brings to the collaboration about two decades of experience with language technology for Welsh. He and his team are helping to verify the accuracy of machine-translated training data and manually translated evaluation data, as well as assess how the model handles nuances of Welsh that AI typically struggles with, such as the way consonants at the beginning of Welsh words change based on neighboring words.

    The model, as well as the Welsh training and evaluation datasets, is expected to be made available for enterprise and public sector use, supporting additional research, model training and application development.

    "It's one thing to have this AI capability exist in Welsh, but it's another to make it open and accessible for everyone," Prys said. "That subtle distinction can be the difference between this technology being used or not being used."

    Deploy Sovereign AI Models With NVIDIA Nemotron, NIM Microservices

    The framework used to develop UK-LLM's model for Welsh can serve as a foundation for multilingual AI development around the world.

    Benchmark-topping Nemotron models, data and recipes are publicly available for developers to build reasoning models tailored to virtually any language, domain and workflow. Packaged as NVIDIA NIM microservices, Nemotron models are optimized for cost-effective compute and run anywhere, from laptop to cloud.

    Europe's enterprises will be able to run open, sovereign models on the Perplexity AI-powered search engine.

    Get started with NVIDIA Nemotron.
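For readers curious what that dataset-translation step could look like in code, here is a minimal sketch. NIM endpoints expose an OpenAI-compatible chat API, so a translation loop stays short; the base URL, model ID and prompt below are illustrative assumptions, not details published by the UK-LLM team.

```python
# Hypothetical sketch of translating dataset entries via an OpenAI-compatible
# NIM endpoint. The base URL, model ID, and prompt are illustrative guesses,
# not the UK-LLM team's actual pipeline.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed NIM endpoint
    api_key="YOUR_NVIDIA_API_KEY",
)

def translate_to_welsh(text: str) -> str:
    resp = client.chat.completions.create(
        model="openai/gpt-oss-120b",  # assumed model ID on the API catalog
        messages=[
            {"role": "system",
             "content": "Translate the user's text from English to Welsh. "
                        "Return only the translation."},
            {"role": "user", "content": text},
        ],
        temperature=0.2,
    )
    return resp.choices[0].message.content.strip()

entries = ["The aim is to keep Welsh a living, breathing language."]
welsh = [translate_to_welsh(e) for e in entries]  # batched over 30M+ entries in practice
```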
  • DIGITAL DOMAIN TAKES MAJOR LEAP SHARING THE FANTASTIC FOUR: FIRST STEPS
    www.vfxvoice.com
By TREVOR HOGG. Images courtesy of Marvel Studios.

    Getting a reboot is the franchise in which an encounter with cosmic radiation causes four astronauts to gain the ability to stretch, turn invisible, self-ignite and be transformed into a rock being. Set in a retro-futuristic 1960s, The Fantastic Four: First Steps was directed by Matt Shakman and features the visual effects expertise of Scott Stokdyk, along with a significant contribution of 400 shots by Digital Domain, which managed the character development and animation of the Thing (Ebon Moss-Bachrach), Baby Franklin, Sue Storm/Invisible Woman (Vanessa Kirby), Johnny Storm/Human Torch (Joseph Quinn) and H.E.R.B.I.E. (voiced by Matthew Wood). At the center of the work was the in-house facial capture system known as Masquerade 3, which was upgraded to handle markerless capture, process hours of data overnight and share that data with other vendors.

    Through eye motion and pantomime, H.E.R.B.I.E. was able to convey whether he was happy or sad.

    "We were brought on early to identify the Thing and how to best tackle that," states Jan Philip Cramer, Visual Effects Supervisor at Digital Domain. "We tested a bunch of different options for Scott Stokdyk and ended up talking to all of the vendors. It was important to utilize something that everybody can use. We proposed Masquerade and to use a markerless system, which was the first step for us to see if it was going to work, and are we going to be able to provide data to everybody? This was something we had never done before, and in the industry, it's not common to have these facial systems shared."

    Digital Domain, Framestore, ILM and Sony Pictures Imageworks were the main vendors. "All the visual effects supervisors would get together while we were designing the Thing and later to figure out the FACS shapes and base package that everybody can live with," Cramer explains. "I was tasked with that, so I met with each vendor separately. Our idea was to solve everything to the same sets of shapes, and these shapes would be provided to everybody. This will provide the base level to get the Thing going, and because so many shots had to be worked on in parallel, it would bring some continuity to the character." On top of that, they were hopeful that we could do the same for The Third Floor, where they got the full FACS face with a complete solve on a per-shot level.

    The Thing (Ebon Moss-Bachrach) was given a stylish wardrobe with the rock body subtly indicated underneath the fabric.

    Continues Cramer, "A great thing about Masquerade is that we could batch-solve overnight everything captured the previous day."
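To make 'solving to the same sets of shapes' concrete: a captured frame is typically approximated as a non-negative combination of a shared FACS blendshape basis, and it is those per-frame weights, rather than raw geometry, that can travel between vendors. A toy sketch of that idea with fabricated arrays — not Masquerade itself:

```python
import numpy as np
from scipy.optimize import nnls

# Toy stand-ins: a neutral face and a shared FACS basis of delta shapes.
# Real rigs have thousands of vertices and ~100+ shapes; 12 verts / 4 shapes here.
rng = np.random.default_rng(1)
n_verts, n_shapes = 12, 4
neutral = rng.random(n_verts * 3)
basis = rng.random((n_verts * 3, n_shapes))      # columns = FACS delta shapes

# A captured frame = neutral + basis @ true_weights (plus noise in reality).
true_w = np.array([0.7, 0.0, 0.3, 0.1])
frame = neutral + basis @ true_w

# Solve the frame back into non-negative FACS weights; these weights are
# what a vendor-neutral pipeline would hand off, shot by shot.
weights, residual = nnls(basis, frame - neutral)
print(np.round(weights, 3))   # ~[0.7, 0.0, 0.3, 0.1]
```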
Cramer says, "We would get a folder delivered to us that would get processed blindly, and then the next morning we would spot-check ranges of that. It was so much that you can't even check it in a reasonable way, because they would shoot hours every day. My initial concern with sending blindly-solved stuff to other vendors was that it might not be good enough or there would be inconsistencies from shot to shot, such as different lighting conditions on the face. We had to boil it down to the essence. It was good that we started with the Thing because it's an abstraction of the actor. It's not a one-to-one, like with She-Hulk or Thanos. The rock face is quite different from Ebon Moss-Bachrach. That enabled us to push the system to see if it worked for the various vendors. We then ended up doing the Silver Surfer and Galactus as well, even though Digital Domain didn't have a single shot. We would process face data of these actors and supply the FACS shapes to ILM, Framestore and Sony Pictures Imageworks."

    It was important for the digital augmentation to retain the aesthetic of the optical effects utilized during the 1960s.

    Another factor that had to be taken into consideration was not using facial markers. "We shot everything markerless and then the additional photography came about," Cramer recalls. "Ebon had a beard because he's on the TV show The Bear. We needed a solution that would work with a random amount of facial hair, and we were able to accommodate this with the system. It worked without having to re-train." A certain rock was chosen by Matt Shakman for the Thing. "Normally, they bring the balls and charts, but we always had this rock, so everybody understood what the color of this orange was in that lighting condition. That helped a huge amount. Then, we had this prosthetic guy walking through in the costume; that didn't help so much for the rock. On the facial side, we initially wanted to simulate all the rocks on the skin, but due to the sheer volume, that wasn't the solution. During the shape-crafting period, there was a simulation to ensure that every rock was separated properly, and the rocks were baked down to FACS shapes that had a lot of correctives in them; that also became the base FACS shape list for the other vendors to integrate into their own systems."

    LED vests were on set to provide interactive lighting on the face of Johnny Storm aka Human Torch (Joseph Quinn).

    There was a lot of per-shot tweaking to make sure that cracks between the rocks on the face of the Thing were not distracting. "It was hard to maintain something like the nasolabial folds [smile lines], but we would normally try to specifically angle and rotate the rocks so you would still get desired lines coming through, and we would have shadow enhancements in those areas as well," Cramer remarks. "We would drive that with masks on the face." Rigidity had to be balanced with bendability to properly convey emotion. "Initially, we had two long rocks along the jawline. We would break those to make sure they stayed straight. In our facial rig we would ensure that the rocks didn't bend too much. The rocks had a threshold for how much they could deform. Any rock that you notice that still has a bend to it, we would stiffen that up." The cracks were more of a blessing than a curse. Cramer explains, "By modulating the cracks, you could redefine it. It forced a lot of per-shot tweaks that are more specific to a lighting condition. The problem was that any shot would generate a random number of issues regarding how the face reads."
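That per-rock stiffening amounts to a rigidity constraint: fit the best rigid transform to each rock's deformed vertices, then pull the vertices back toward that rigid pose once bending passes a threshold. A minimal sketch of the general technique (Kabsch fit via SVD, invented data; not Digital Domain's rig code):

```python
import numpy as np

def stiffen_rock(rest, deformed, stiffness=0.8):
    """Pull a rock's deformed vertices back toward its best rigid fit.

    rest, deformed: (N, 3) vertex positions. Fits a rotation+translation
    (Kabsch, via SVD) from rest to deformed, then blends each vertex toward
    that rigid pose so the rock reads as stone rather than rubber.
    """
    rc, dc = rest.mean(0), deformed.mean(0)
    H = (rest - rc).T @ (deformed - dc)              # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    rigid = (rest - rc) @ R.T + dc                   # rock moved rigidly
    return deformed + stiffness * (rigid - deformed) # blend toward rigid

rock_rest = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
rock_bent = rock_rest + np.array([[0., 0, 0], [0, .3, 0], [0, 0, 0], [0, 0, 0]])
print(stiffen_rock(rock_rest, rock_bent, stiffness=1.0))
```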
"The first time you put it through lighting versus animation, the difference was quite a bit," he notes. "In the end, this became part of the Thing language. Right away, when you would go into lighting, you would reduce contrast and focus everything on the main performance, then do little tweaks to the rocks on the in-shot model to get the expression to come through better."

    A family member was recruited as reference for Baby Franklin. "I had a baby two and a half years ago," Cramer reveals. "When we were at the beginning stages of planning The Fantastic Four, we realized that they needed a baby to test with, and my baby was exactly in the age range we were looking for. This was a year before the official production started. We went to Digital Domain in Los Angeles and with Matt Shakman shot some test scenes with my wife there and my son. We went to ICT [USC Institute for Creative Technologies], and he was the youngest kid ever to be scanned by them. We did some initial tests with my son to see how to best tackle a baby. They would have a lot of standard babies to swap out, so we needed to bring consistency to that in some form. Obviously, the main baby does most of the hero performances, but there would be many others. This was step one. For step two, I went to London to meet with production and became in charge of the baby unit they had. There were 14 different babies, and we whittled it down to two babies who were scanned for two weeks. Then we picked one baby deemed to be the cutest and with the best data. That's what we went into the shoot with."

    No facial markers were used when capturing the on-set performance of Ebon Moss-Bachrach as the Thing.

    Not every Baby Franklin shot belonged to Digital Domain. "We did everything once the Fantastic Four arrive on Earth and Sue is carrying the baby," Cramer states. "When you see it now, the baby fits in my hand, but on set, the baby had limbs hanging down both sides because of being double the scale of what's in the film. In those cases, we would use a CG blanket and paint out the limbs, replace the head and shrink the whole body down. It was often a per-shot problem. The highest priority was to make sure that the baby can only do what it's supposed to do. Matt Shakman had an amazing idea. He had Sue, or the stand-in version for her, on multiple days throughout the weeks put the baby on her and do what he called circumstantial acting. We filmed take after take to see whether the baby does something that unintentionally looks like it's a choice. This was done until we had enough randomness and good performances come out of that. That was fun from the get-go." The performances did not vary all that much with the stand-in babies. Cramer says, "As long as the baby is not looking into the camera and appearing as a baby, you're good! We matched the different babies' performances that weren't on set, in the right situation, except for a few priority shots where we would pick from a library of performances of the real baby, similar to an HMC [Head-Mounted Camera] select, which we'd match and animate."

    A major breakthrough for Digital Domain was being able to process facial data captured by Masquerade 3 overnight, which was then shared with other vendors.

    Johnny Storm as the Human Torch was originally developed by Digital Domain. "We mainly did the bigger flying shots, so luckily, we didn't have to deal so much with his face while performing," Cramer remarks. "It was principally body shots. A number of times the face was kept clear, which helped a lot."
LED vests were on set to provide interactive lighting on the face. "The core idea was that oxygen is leaking out of his skin, so there would be this hint of blue flame that catches and turns into the red flame. The hope was to have that together with these leaking flames so it feels like it's emanating from inside of him rather than being just on the surface. We did not do some of these dialogue shots when he's on fire. They used different techniques for that." The initial idea was to have the hands and feet ignite first. Cramer notes, "We also played with ideas where it [a flame] came from the chest; having some offset helped a lot. They trigger initially and have a big flame rippling up fast. I found it wasn't as much of a challenge. The hardest thing with the look was how he appears when fully engaged in flames: what does his face turn into? He would have this strong underlying core that had a deep lava quality. We were not the driver of this. There were other vendors who took over the development and finished it."

    Shadows had a major role to play in getting the face of the Thing to convey the proper emotion.

    Element shoots were conducted on set. "What happened with Scott [Stokdyk, Visual Effects Supervisor] early on was we filmed a bunch of fire tests on set with different flames and chemicals to get various colors and behaviours," Cramer explains. "They would have different wind speeds running at it. That became the base of understanding what fire they wanted. Our biggest scene with that was when they're inside the Baxter Building and [Human Torch] has a dialogue with Mr. Fantastic [Pedro Pascal]. The flames there are realistic. Our goal was to have a gassy fire." Tests were done with the stunt performers for the flying shots. "The stunt performers were pulled on wires for the different flying options, which became the initial starting point. We would take those shots and work with that. We played with ideas such as hovering based on the subtleties of hand thrusters that Johnny can use to balance, but the main thrust comes from his feet."

    A fascinating shot to achieve was of Baby Franklin in the womb of Sue Storm, which incorporated a lens flare associated with the invisibility effect.

    Simplicity was key for effects such as invisibility for the character of Sue Storm. "The whole film is meant to feel like older effects that are part of the period," Cramer states. "We tried having a glass body and all sorts of refractions through her body. In the end, we went relatively simple. We did some of the refraction, but it was mainly a 2D approach for her going fully invisible in the shots. We did subtle distortion on the inside edges, and the client shot with different flares that looked nice; this is where the idea of the splitting camera came from, of showing this RGB effect; she is pulling the different colors apart, and that became part of her energy and force field. Whenever she does anything, you normally see some sort of RGB flare going by her. It's grounded in some form.
"Because of doing the baby, we did this scanning inside where there's an x-ray of her belly at one point. Those shots were fascinating to do. It was a partial invisibility of the body to do medical things. We tried to use the flares to help integrate it, and we always had the same base ideas that it's outlining something that hums a little bit. We would take edges and separate them out to get an RGB look. For us, it started appearing more retro as an effect. It worked quite well. That became a language, and all of the vendors had a similar look running for this character, which was awesome."

    A stand-in rock nicknamed Jennifer assisted in getting the right lighting for the Thing.

    Assisting the Fantastic Four is a robotic character known as H.E.R.B.I.E. (Humanoid Experimental Robot, B-Type Integrated Electronics), originally conceived for the animated series. "Right now, there is this search to ground things in reality," Cramer observes. "There was an on-set puppeteered robot that helped a great deal. There is one shot where we used that one-to-one; in all the others, it's a CG takeover, but we were always able to use the performance cues from that. We got to design that character together with Marvel Studios from the get-go, and we did the majority of his shots, like when he baby-proofs the whole building. We worked out how he would hover and how his arms could move. We were always thinking that H.E.R.B.I.E. is meant to look not too magical, but that he could actually exist." The eyes consist of a tape machine. Cramer observes, "We had different performance bits that were saved for the animators, and those were recycled in different shots. It was mainly with his eye rotation. He was so expressive with his little body motions. It was more like pantomime animation with him. It was obvious when he was happy or sad. There isn't so much nuance with him; it's nice to have a character who is direct. It was fun for the animators and me because if the animation works then you know the shot is going to be finished."

    The facial capture for the Thing established the process for the other characters.

    Digital Domain focused on the character development for the Thing, Invisible Woman and Human Torch.

    Blankets came in handy when Sue Storm was holding Baby Franklin in her arms.

    The overarching breakthrough on this show for Digital Domain was providing other vendors with facial data. "To funnel it all through us and then go to everybody helped a lot. It was something different for us to do as a vendor," Cramer states. "That's something I'm proud of." The ability to share with other vendors will have ramifications for Masquerade 3. "That should be a general move forward, especially with how the industry has changed over the years. Everybody has proprietary stuff, but normally now we share everything. You go on a Marvel Studios show and know you're going to get and give characters to other vendors. In the past, you would have Thanos, and it would be Wētā FX and us. But now four or five vendors work on that, so you have five times the inconsistencies getting introduced by having different interpretations of their various systems. It is helpful to funnel it early on and assemble scenes, then hand it out to everybody. It speeds up everybody and gets the same level of look."
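The 2D channel-split look described above can be approximated by offsetting the red and blue channels of an image in opposite directions so that edges pick up color fringes. A toy version in NumPy with a fabricated test image — a sketch of the general trick, not the production comp:

```python
import numpy as np

def rgb_split(img, shift=3):
    """Crude 2D channel-split: offset R and B in opposite directions.

    img: (H, W, 3) float array. Rolling the red and blue channels apart
    leaves color fringes on edges, the retro "splitting camera" look.
    A real comp would mask this to the character's silhouette edges.
    """
    out = img.copy()
    out[..., 0] = np.roll(img[..., 0], shift, axis=1)   # red pushed right
    out[..., 2] = np.roll(img[..., 2], -shift, axis=1)  # blue pushed left
    return out

# Fabricated test image: a white bar on black, so the fringes are visible.
img = np.zeros((8, 16, 3))
img[:, 6:10, :] = 1.0
split = rgb_split(img, shift=2)
print(split[0, :, 0])  # red edge shifted right relative to green
```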
  • Official PlayStation Podcast Episode 523: Memory Cards
    blog.playstation.com
Email us at PSPodcast@sony.com!

    Subscribe via Apple Podcasts, Spotify, or download here.

    Hey, everybody! Sid, Tim, and Brett are back this week to discuss the release of Borderlands 4, indie hit Hollow Knight: Silksong, and 30 years of PlayStation memories.

    Stuff We Talked About

    Next week's releases:
    Borderlands 4 | PS5 (out today)
    LEGO Voyagers | PS5, PS4
    Frostpunk 2 | PS5
    Towa and the Guardians of the Sacred Tree | PS5
    Dying Light: The Beast | PS5
    Trails in the Sky 1st Chapter | PS5

    Digimon Story: Time Stranger hands-on. New details revealed on the combat system and tropical Abyss Area.
    Hollow Knight: Silksong hands-on. Discover what's new in the anticipated sequel, like mid-air healing, mantling on ledges, more challenging encounters, and more.
    Announcing PlayStation 30th Memories. We're celebrating PlayStation history, and you're invited to be a part of it by sharing your memories. Head to PS Blog for more details.
    PlayStation Family App. This new mobile app gives parents more tools to guide their family's PlayStation experience.

    The Cast
    Sid Shuman, Senior Director of Content Communications, SIE
    Tim Turi, Content Communications Manager, SIE
    Brett Elston, Manager, Content Communications, SIE

    Thanks to Dormilón for our rad theme song and show music.

    [Editor's note: PSN game release dates are subject to change without notice. Game details are gathered from press releases from their individual publishers and/or ESRB rating descriptions.]