• Gotta open ’em all! Because who wouldn't want to dive into the nostalgia of ancient packs in TCG Pocket? It's not like we have actual lives or responsibilities, right? Just toss your savings into the abyss of rare, elusive cards and hope to catch a glimpse of something magical—like a Charizard that’s been hiding since 1999.

    And let’s not forget the thrill of completing your Pokédex, because obviously, nothing screams “successful adult” more than collecting virtual monsters. So grab your digital wallet and let the hunt for those ancient packs commence! After all, what’s more rewarding than the sweet, sweet sound of your bank account crying?

    #GottaCatchEmAll #CardCollectors #TCGPocket #N
    REALITE-VIRTUELLE.COM
    Gotta open ’em all! Even the old packs in TCG Pocket!
    Hunting for rare, elusive cards? Aiming to complete your Pokédex […] The article “Gotta open ’em all! Even the old packs in TCG Pocket!” was published on REALITE-VIRTUELLE.COM.
  • Exciting news for Pokémon fans in Japan! McDonald's is teaming up with Pokémon once again to bring us limited edition TCG cards in their Happy Meals! This collaboration is not just a fun treat for kids, but also an adventure for collectors! Imagine the thrill of hunting down that elusive card and making memories with family on your quest!

    While it may take a few trips through the drive-thru, the joy of collecting together will be worth it! Let's embrace this fun-filled journey and support each other in completing the set! Go, Team Pokémon!

    #PokemonCollab #McDonaldsJapan #HappyMeals #CollectThemAll #PositiveVibes
    KOTAKU.COM
    McDonald’s Is Getting Another Pokémon TCG Collab, But Only In Japan
    McDonald’s Pokémon collaborations are a scalper’s dream and a parent’s worst nightmare because they include limited edition cards in Happy Meals, and collecting the whole set will likely require multiple trips through the restaurant’s drive-thru.
  • Inside Mark Zuckerberg’s AI hiring spree

    AI researchers have recently been asking themselves a version of the question, “Is that really Zuck?” As first reported by Bloomberg, the Meta CEO has been personally asking top AI talent to join his new “superintelligence” AI lab and reboot Llama. His recruiting process typically goes like this: a cold outreach via email or WhatsApp that cites the recruit’s work history and requests a 15-minute chat. Dozens of researchers have gotten these kinds of messages at Google alone. For those who do agree to hear his pitch (amazingly, not all of them do), Zuckerberg highlights the latitude they’ll have to make risky bets, the scale of Meta’s products, and the money he’s prepared to invest in the infrastructure to support them. He makes clear that this new team will be empowered and sit with him at Meta’s headquarters, where I’m told the desks have already been rearranged for the incoming team.

    Most of the headlines so far have focused on the eye-popping compensation packages Zuckerberg is offering, some of which are well into the eight-figure range. As I’ve covered before, hiring the best AI researcher is like hiring a star basketball player: there are very few of them, and you have to pay up. Case in point: Zuckerberg basically just paid 14 Instagrams to hire away Scale AI CEO Alexandr Wang. It’s easily the most expensive hire of all time, dwarfing the billions that Google spent to rehire Noam Shazeer and his core team from Character.AI (a deal Zuckerberg passed on). “Opportunities of this magnitude often come at a cost,” Wang wrote in his note to employees this week. “In this instance, that cost is my departure.”

    Zuckerberg’s recruiting spree is already starting to rattle his competitors. The day before his offer deadline for some senior OpenAI employees, Sam Altman dropped an essay proclaiming that “before anything else, we are a superintelligence research company.” And after Zuckerberg tried to hire DeepMind CTO Koray Kavukcuoglu, he was given a larger SVP title and now reports directly to Google CEO Sundar Pichai. I expect Wang to have the title of “chief AI officer” at Meta when the new lab is announced. Jack Rae, a principal researcher from DeepMind who has signed on, will lead pre-training.

    Meta certainly needs a reset. According to my sources, Llama has fallen so far behind that Meta’s product teams have recently discussed using AI models from other companies (although that is highly unlikely to happen). Meta’s internal coding tool for engineers, however, is already using Claude. While Meta’s existing AI researchers have good reason to be looking over their shoulders, Zuckerberg’s $14.3 billion investment in Scale is making many longtime employees, or Scaliens, quite wealthy. They were popping champagne in the office this morning. Then, Wang held his last all-hands meeting to say goodbye and cried. He didn’t mention what he would be doing at Meta. I expect his new team will be unveiled within the next few weeks after Zuckerberg gets a critical number of members to officially sign on.

    Tim Cook. Getty Images / The Verge

    Apple’s AI problem

    Apple is accustomed to being on top of the tech industry, and for good reason: the company has enjoyed a nearly unrivaled run of dominance. After spending time at Apple HQ this week for WWDC, I’m not sure that its leaders appreciate the meteorite that is heading their way. The hubris they display suggests they don’t understand how AI is fundamentally changing how people use and build software.

    Heading into the keynote on Monday, everyone knew not to expect the revamped Siri that had been promised the previous year. Apple, to its credit, acknowledged that it dropped the ball there, and it sounds like a large language model rebuild of Siri is very much underway and coming in 2026. The AI industry moves much faster than Apple’s release schedule, though. By the time Siri is perhaps good enough to keep pace, it will have to contend with the lock-in that OpenAI and others are building through their memory features. Apple and OpenAI are currently partners, but both companies want to ultimately control the interface for interacting with AI, which puts them on a collision course.

    Apple’s decision to let developers use its own, on-device foundational models for free in their apps sounds strategically smart, but unfortunately, the models look far from leading. Apple ran its own benchmarks, which aren’t impressive, and has confirmed a measly context window of 4,096 tokens. It’s also saying that the models will be updated alongside its operating systems — a snail’s pace compared to how quickly AI companies move. I’d be surprised if any serious developers use these Apple models, although I can see them being helpful to indie devs who are just getting started and don’t want to spend on the leading cloud models. I don’t think most people care about the privacy angle that Apple is claiming as a differentiator; they are already sharing their darkest secrets with ChatGPT and other assistants.

    Some of the new Apple Intelligence features I demoed this week were impressive, such as live language translation for calls. Mostly, I came away with the impression that the company is heavily leaning on its ChatGPT partnership as a stopgap until Apple Intelligence and Siri are both where they need to be.

    AI probably isn’t a near-term risk to Apple’s business. No one has shipped anything close to the contextually aware Siri that was demoed at last year’s WWDC. People will continue to buy Apple hardware for a long time, even after Sam Altman and Jony Ive announce their first AI device for ChatGPT next year. AR glasses aren’t going mainstream anytime soon either, although we can expect to see more eyewear from Meta, Google, and Snap over the coming year. In aggregate, these AI-powered devices could begin to siphon away engagement from the iPhone, but I don’t see people fully replacing their smartphones for a long time. The bigger question after this week is whether Apple has what it takes to rise to the occasion and culturally reset itself for the AI era. I would have loved to hear Tim Cook address this issue directly, but the only interview he did for WWDC was a cover story in Variety about the company’s new F1 movie.

    Elsewhere

    AI agents are coming. I recently caught up with Databricks CEO Ali Ghodsi ahead of his company’s annual developer conference this week in San Francisco. Given Databricks’ position, he has a unique, bird’s-eye view of where things are headed for AI. He doesn’t envision a near-term future where AI agents completely automate real-world tasks, but he does predict a wave of startups over the next year that will come close to completing actions in areas such as travel booking. He thinks humans will need (and want) to approve what an agent does before it goes off and completes a task. “We have most of the airplanes flying automated, and we still want pilots in there.”

    Buyouts are the new normal at Google. That much is clear after this week’s rollout of the “voluntary exit program” in core engineering, the Search organization, and some other divisions. In his internal memo, Search SVP Nick Fox was clear that management thinks buyouts have been successful in other parts of the company that have tried them. In a separate memo I saw, engineering exec Jen Fitzpatrick called the buyouts an “opportunity to create internal mobility and fresh growth opportunities.” Google appears to be attempting a cultural reset, which will be a challenging task for a company of its size. We’ll see if it can pull it off.

    Evan Spiegel wants help with AR glasses. I doubt that his announcement that consumer glasses are coming next year was solely aimed at AR developers. Telegraphing the plan and announcing that Snap has spent $3 billion on hardware to date feels more aimed at potential partners that want to make a bigger glasses play, such as Google. A strategic investment could help insulate Snap from the pain of the stock market. A full acquisition may not be off the table, either. When he was recently asked if he’d be open to a sale, Spiegel didn’t shut it down like he always has, but instead said he’d “consider anything” that helps the company “create the next computing platform.”

    If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting. As always, I welcome your feedback, especially if you’re an AI researcher fielding a juicy job offer. You can respond here or ping me securely on Signal. Thanks for subscribing.
    WWW.THEVERGE.COM
    Inside Mark Zuckerberg’s AI hiring spree
  • The art of two Mickeys

    Classic splitscreens, traditional face replacements and new approaches to machine learning-assisted face swapping allowed for twinning shots in ‘Mickey 17’. An excerpt from issue #32 of befores & afters magazine.
    The art of representing two characters on screen at the same time has become known as ‘twinning’. For Mickey 17 visual effects supervisor Dan Glass, the effect of seeing both Mickey 17 and 18 together was one he looked to achieve with a variety of methodologies. “With a technique like that,” he says, “you always want to use a range of tricks, because you don’t want people to figure it out. You want to keep them like, ‘Oh, wait a minute. How did they…?’”
    “Going back to the way that Director Bong is so prepared and organized,” adds Glass, “it again makes the world of difference with that kind of work, because he thumbnails every shot. Then, some of them are a bit more fleshed out in storyboards. You can look at it and go, ‘Okay, in this situation, this is what the camera’s doing, this is what the actor’s doing,’ which in itself is quite interesting, because he pre-thinks all of this. You’d think that the actors show up and basically just have to follow the steps like robots. It’s not like that. He gives them an environment to work in, but the shots do end up extraordinarily close to what he thumbnails, and it made it a lot simpler to go through.”

    Those different approaches to twinning ranged from simple splitscreens to traditional face replacements, and then, most substantially, a machine learning approach now usually termed ‘face swapping’. What made the twinning work a tougher task than usual, suggests Glass, was the fact that the two Pattinson characters are virtually identical.
    “Normally, when you’re doing some kind of face replacement, you’re comparing it to a memory of the face. But this was right in front of you as two Mickeys looking strikingly similar.”
    Here’s how a typical twinning shot was achieved, as described by Glass. “Because Mickey was mostly dressed the same, with only a slight hair change, we were able to have Robert play both roles and to do them one after another. Sometimes, you have to do these things where hair and makeup or costume has a significant variation, so you’re either waiting a long time, which slows production, or you’re coming back at another time to do the different roles, which always makes the process a lot more complicated to match, but we were able to do that immediately.”

    “Based on the design of the shot,” continues Glass, “I would recommend which of Robert’s parts should be shot first. This was most often determined by which role had more impact on the camera movement. A huge credit goes to Robert for his ability to flip between the roles so effortlessly.”
    In the film, Mickey 17 is more passive and Mickey 18 is more aggressive. Pattinson reflected the distinct characters in his actions, including for a moment in which they fight. This fight, overseen by stunt coordinator Paul Lowe, represented moments of close interaction between the two Mickeys. It was here that a body double was crucial in shooting. The body double was also relied upon for the classic twinning technique of shooting ‘dirty’ over-the-shoulder out-of-focus shots of the double, i.e. 17 looking at 18. However, it was quickly determined that even these would need face replacement work. “Robert’s jawline is so distinct that even those had to be replaced or shot as split screens,” observes Glass.

    When the shot was a moving one, no motion control was employed. “I’ve never been a big advocate for motion control,” states Glass. “To me it’s applicable when you’re doing things like miniatures where you need many matching passes, but I think when performances are involved, it interferes too much. It slows down a production’s speed of movement, but it’s also restrictive. Performance and camera always benefit from more flexibility.”
    “It helped tremendously that Director Bong and DOP Darius Khondji shot quite classically with minimal crane and Steadicam moves,” says Glass. “So, a lot of the moves are pan and dolly. There are some Steadicams in there that we were sometimes able to do splitscreens on. I wasn’t always sure that we could get away with the splitscreen as we shot it, but since we were always shooting the two roles, we had the footage to assess the practicality later. We were always prepared to go down a CG or machine learning route, but where we could use the splitscreen, that was the preference.”
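    The splitscreen idea Glass describes is simple to state: shoot two takes from a matching (ideally locked-off) camera setup, keep one Mickey on each side of an agreed line, then join the plates along a soft seam. As a minimal sketch of that compositing step, here is a toy Python/NumPy example; the function name, the fixed vertical seam, and the feathered matte are illustrative assumptions for this excerpt, not production code from the film or Rising Sun Pictures.
```python
import numpy as np

def splitscreen(plate_a: np.ndarray, plate_b: np.ndarray,
                seam_x: int, feather: int = 32) -> np.ndarray:
    """Join the left of plate_a to the right of plate_b.

    plate_a and plate_b are HxWx3 float images of the same framing,
    shot one after the other. A feathered ramp around seam_x hides
    the join, standing in for a hand-drawn splitscreen matte.
    """
    w = plate_a.shape[1]
    x = np.arange(w, dtype=np.float32)
    # Matte is 0 left of the seam, 1 right of it, ramping over `feather` px.
    matte = np.clip((x - (seam_x - feather / 2.0)) / feather, 0.0, 1.0)
    matte = matte[None, :, None]  # broadcast to H x W x 1
    return plate_a * (1.0 - matte) + plate_b * matte

# Hypothetical usage: take_17 frames Mickey 17 screen-left,
# take_18 frames Mickey 18 screen-right of the same setup.
take_17 = np.random.rand(1080, 1920, 3).astype(np.float32)
take_18 = np.random.rand(1080, 1920, 3).astype(np.float32)
composite = splitscreen(take_17, take_18, seam_x=960)
```
    In practice the seam is rarely a straight line and must avoid anything crossing it, which is exactly why Glass notes he wasn’t always sure a moving shot could “get away with” a splitscreen until the footage was assessed.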
    The Hydralite rig, developed by Volucap. Source: https://volucap.com
    Rising Sun Pictures (visual effects supervisor Guido Wolter) handled the majority of twinning visual effects, completing them as splitscreen composites, 2D face replacements, and most notably via their machine learning toolset REVIZE, which utilized facial and body capture of Pattinson to train a model of his face and torso to swap for the double’s. A custom capture rig, dubbed the ‘Crazy Rig’ and now officially The Hydralite, was devised and configured by Volucap to capture multiple angles of Robert on set in each lighting environment in order to produce the best possible reference for the machine learning algorithm. “For me, it was a completely legitimate use of the technique,” attests Glass of the machine learning approach. “All of the footage that we used to go into that process was captured on our movie for our movie. There’s nothing historic, or going through past libraries of footage, and it was all with Robert’s approval. I think the results were tremendous.”
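    RSP has not published REVIZE’s internals, but the paragraph above describes the general recipe behind most face-swap pipelines: gather matched footage of both performers, train a model that learns each face from a shared representation, then decode the double’s frames as the hero actor. A common way to structure this is a shared encoder with one decoder per identity; the toy PyTorch sketch below illustrates only that generic scheme, and every detail in it (layer sizes, the 64x64 crops, the names, the training loop) is an assumption for illustration, not RSP’s actual toolset.
```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 3x64x64 face crop to a shared latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one specific identity from the shared latent code."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_actor = Decoder()   # trained on the hero actor's reference captures
decoder_double = Decoder()  # trained on the body double's plates

params = (list(encoder.parameters()) + list(decoder_actor.parameters())
          + list(decoder_double.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(actor_faces, double_faces):
    # Each decoder learns to rebuild its own identity from the shared
    # latent space; the encoder is forced to represent both faces.
    opt.zero_grad()
    loss = (loss_fn(decoder_actor(encoder(actor_faces)), actor_faces)
            + loss_fn(decoder_double(encoder(double_faces)), double_faces))
    loss.backward()
    opt.step()
    return loss.item()

# The swap itself: encode a frame of the double, decode it as the actor.
with torch.no_grad():
    double_frame = torch.rand(1, 3, 64, 64)  # stand-in for a plate crop
    swapped = decoder_actor(encoder(double_frame))
```
    The multi-angle, per-lighting-environment capture from the Hydralite maps naturally onto this kind of training set: the more viewpoints and lighting conditions the model sees of the hero actor, the better the swapped face holds up across the finished shots.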
    “It’s staggering to me as I watch the movie that the performances of each character are so flawlessly consistent throughout the film, because I know how much we were jumping around,” notes Glass. “I did encourage that we rehearse scenes ahead. Let’s say 17 was going to be the first role we captured, I’d have them rehearse it the other way around so that the double knew what he was going to do. Therefore, eyelines, movement, pacing and in instances where we were basically replacing the likeness of his head or even torso, we were still able to use the double’s performance and then map to that.”

    Read the full Mickey 17 issue of befores & afters magazine in PRINT from Amazon or as a DIGITAL EDITION on Patreon. Remember, you can also subscribe to the DIGITAL EDITION as a tier on the Patreon and get a new issue every time one is released.
    BEFORESANDAFTERS.COM
    The art of two Mickeys
  • How to Complete Rosaura's Quest in Lies of P Overture

    Just like the base game, the Lies of P Overture DLC continues the soulslike tradition of difficult, complex side quests, and one of the best examples is Rosaura's quest. You must track Rosaura down across several locations, with very little information about where she'll turn up next. With that in mind, this guide walks you through completing Rosaura's side quest in Lies of P Overture.
    GAMERANT.COM
    How to Complete Rosaura's Quest in Lies of P Overture