• EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

    Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers block and reflect satellite signals, leading to location errors of tens of meters. For you and me, that might mean a missed turn. But for an autonomous vehicle or a delivery robot, that level of imprecision is the difference between a successful mission and a costly failure. These machines require pinpoint accuracy to operate safely and efficiently. Addressing this critical challenge, researchers from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have introduced a groundbreaking new method for visual localization, presented at CVPR 2025.
    Their new paper, “FG2: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching,” presents a novel AI model that significantly enhances the ability of a ground-level system, like an autonomous car, to determine its exact position and orientation using only a camera and a corresponding aerial (or satellite) image. The new approach has demonstrated a remarkable 28% reduction in mean localization error compared to the previous state-of-the-art on a challenging public dataset.
    Key Takeaways:

    Superior Accuracy: The FG2 model reduces the average localization error by a significant 28% on the VIGOR cross-area test set, a challenging benchmark for this task.
    Human-like Intuition: Instead of relying on abstract descriptors, the model mimics human reasoning by matching fine-grained, semantically consistent features—like curbs, crosswalks, and buildings—between a ground-level photo and an aerial map.
    Enhanced Interpretability: The method allows researchers to “see” what the AI is “thinking” by visualizing exactly which features in the ground and aerial images are being matched, a major step forward from previous “black box” models.
    Weakly Supervised Learning: Remarkably, the model learns these complex and consistent feature matches without any direct labels for correspondences. It achieves this using only the final camera pose as a supervisory signal.

    Challenge: Seeing the World from Two Different Angles
    The core problem of cross-view localization is the dramatic difference in perspective between a street-level camera and an overhead satellite view. A building facade seen from the ground looks completely different from its rooftop signature in an aerial image. Existing methods have struggled with this. Some create a general “descriptor” for the entire scene, but this is an abstract approach that doesn’t mirror how humans naturally localize themselves by spotting specific landmarks. Other methods transform the ground image into a Bird’s-Eye-View (BEV) but are often limited to the ground plane, ignoring crucial vertical structures like buildings.

    FG2: Matching Fine-Grained Features
    The EPFL team’s FG2 method introduces a more intuitive and effective process. It aligns two sets of points: one generated from the ground-level image and another sampled from the aerial map.

    Here’s a breakdown of their innovative pipeline:

    Mapping to 3D: The process begins by taking the features from the ground-level image and lifting them into a 3D point cloud centered around the camera. This creates a 3D representation of the immediate environment.
    Smart Pooling to BEV: This is where the magic happens. Instead of simply flattening the 3D data, the model learns to intelligently select the most important features along the vertical (height) dimension for each point. It essentially asks, “For this spot on the map, is the ground-level road marking more important, or is the edge of that building’s roof the better landmark?” This selection process is crucial, as it allows the model to correctly associate features like building facades with their corresponding rooftops in the aerial view.
    Feature Matching and Pose Estimation: Once both the ground and aerial views are represented as 2D point planes with rich feature descriptors, the model computes the similarity between them. It then samples a sparse set of the most confident matches and uses a classic geometric algorithm called Procrustes alignment to calculate the precise 3-DoF (x, y, and yaw) pose.
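
    The pooling and alignment steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: pool_heights assumes a dense 3D feature grid and stands in for the learned vertical selection, and procrustes_2d solves the weighted rigid alignment that recovers a 3-DoF pose from matched BEV points. All names and shapes here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pool_heights(features_3d, height_scores):
    """Soft selection along the vertical axis (sketch of the BEV pooling step).

    features_3d:   (H, W, Z, C) ground features lifted into a 3D grid
    height_scores: (H, W, Z)    learned importance of each height cell
    Returns a (H, W, C) BEV feature map.
    """
    w = softmax(height_scores, axis=-1)               # importance over heights
    return (features_3d * w[..., None]).sum(axis=2)   # weighted sum over Z

def procrustes_2d(src, dst, weights=None):
    """Weighted 2D Procrustes alignment (sketch of the pose-estimation step).

    Finds rotation R and translation t minimizing
    sum_i w_i * ||R @ src[i] + t - dst[i]||^2.
    """
    if weights is None:
        weights = np.ones(len(src))
    w = weights / weights.sum()
    mu_s, mu_d = w @ src, w @ dst                     # weighted centroids
    H = (src - mu_s).T @ ((dst - mu_d) * w[:, None])  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy check with synthetic matches: rotate a point set by 30 degrees, translate it,
# then recover the pose from the correspondences.
rng = np.random.default_rng(0)
ground = rng.normal(size=(50, 2))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
aerial = ground @ R_true.T + np.array([2.0, -1.0])

R, t = procrustes_2d(ground, aerial)
yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
print(round(yaw, 1), np.round(t, 2))  # ~ 30.0 and [2, -1]
```

    On noise-free matches the recovery is exact; with real, confidence-weighted matches the same closed-form solution gives the least-squares pose.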

    Unprecedented Performance and Interpretability
    The results speak for themselves. On the challenging VIGOR dataset, which includes images from different cities in its cross-area test, FG2 reduced the mean localization error by 28% compared to the previous best method. It also demonstrated superior generalization capabilities on the KITTI dataset, a staple in autonomous driving research.

    Perhaps more importantly, the FG2 model offers a new level of transparency. By visualizing the matched points, the researchers showed that the model learns semantically consistent correspondences without being explicitly told to. For example, the system correctly matches zebra crossings, road markings, and even building facades in the ground view to their corresponding locations on the aerial map. This interpretability is extremely valuable for building trust in safety-critical autonomous systems.
    “A Clearer Path” for Autonomous Navigation
    The FG2 method represents a significant leap forward in fine-grained visual localization. By developing a model that intelligently selects and matches features in a way that mirrors human intuition, the EPFL researchers have not only shattered previous accuracy records but also made the decision-making process of the AI more interpretable. This work paves the way for more robust and reliable navigation systems for autonomous vehicles, drones, and robots, bringing us one step closer to a future where machines can confidently navigate our world, even when GPS fails them.

    Check out the Paper. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don’t forget to join our 100k+ ML SubReddit and Subscribe to our Newsletter.
    Jean-marc Mommessin is an AI business executive. He leads and accelerates growth for AI-powered solutions and started a computer vision company in 2006. He is a recognized speaker at AI conferences and has an MBA from Stanford.
  • NOSIPHO MAKETO-VAN DEN BRAGT ALTERED HER CAREER PATH TO LAUNCH CHOCOLATE TRIBE

    By TREVOR HOGG

    Images courtesy of Chocolate Tribe.

    Nosipho Maketo-van den Bragt, Owner and CEO, Chocolate Tribe

    After initially pursuing a career as an attorney, Nosipho Maketo-van den Bragt discovered her true calling was to apply her legal knowledge in a more artistic endeavor with her husband, Rob Van den Bragt, who had forged a career as a visual effects supervisor. The couple co-founded Chocolate Tribe, the Johannesburg and Cape Town-based visual effects and animation studio that has done work for Netflix, BBC, Disney and Voltage Pictures.

    “It was following my passion and my passion finding me,” observes Maketo-van den Bragt, Owner and CEO of Chocolate Tribe and Founder of AVIJOZI. “I grew up in Soweto, South Africa, and we had this old-fashioned television. I was always fascinated by how those people got in there to perform and entertain us. Living in the townships, you become the funnel for your parents’ aspirations and dreams. My dad was a judge’s registrar, so he was writing all of the court cases coming up for a judge. My dad would come home and tell us stories of what happened in court. I found this enthralling, funny and sometimes painful because it was about people’s lives. I did law and to some extent still practice it. My legal career and entertainment media careers merged because I fell in love with the storytelling aspect of it all. There are those who say that lawyers are failed actors!”

    Chocolate Tribe hosts what has become the annual AVIJOZI festival with Netflix. AVIJOZI is a two-day, free-access event in Johannesburg focused on Animation/Film, Visual Effects and Interactive Technology. This year’s AVIJOZI is scheduled for September 13-14 in Johannesburg. Photo: Casting Director and Actor Spaces Founder Ayanda Sithebe and friends at AVIJOZI 2024.

    A personal ambition was to find a way to merge married life into a professional partnership. “I never thought that a lawyer and a creative would work together,” admits Maketo-van den Bragt. “However, Rob and I had this great love for watching films together and music; entertainment was the core fabric of our relationship. That was my first gentle schooling into the visual effects and animation content development space. Starting the company was due to both of us being out of work. I had quit my job without any sort of plan B. I actually incorporated Chocolate Tribe as a company without knowing what we would do with it. As time went on, there was a project that we were asked to come to do. The relationship didn’t work out, so Rob and I decided, ‘Okay, it seems like we can do this on our own.’ I’ve read many books about visual effects and animation, and I still do. I attend a lot of festivals. I am connected with a lot of the guys who work in different visual effects spaces because it is all about understanding how it works and, from a business side, how can we leverage all of that information?”

    Chocolate Tribe provided VFX and post-production for Checkers supermarket’s “Planet” ad promoting environmental sustainability. The Chocolate Tribe team pushed photorealism for the ad, creating three fully CG creatures: a polar bear, orangutan and sea turtle.

    With a population of 1.5 billion, there is no shortage of consumers and content creators in Africa. “Nollywood is great because it shows us that even with minimal resources, you can create a whole movement and ecosystem,” Maketo-van den Bragt remarks. “Maybe the question around Nollywood is making sure that the caliber and quality of work is high end and speaks to a global audience. South Africa has the same dynamics. It’s a vibrant traditional film and animation industry that grows in leaps and bounds every year. More and more animation houses are being incorporated or started with CEOs or managing directors in their 20s. There’s also an eagerness to look for different stories which haven’t been told. Africa gives that opportunity to tell stories that ordinary people, for example, in America, have not heard or don’t know about. There’s a huge rise in animation, visual effects and content in general.”

    Rob van den Bragt served as Creative Supervisor and Nosipho Maketo-van den Bragt as Studio Executive for the “Surf Sangoma” episode of the Disney+ series Kizazi Moto: Generation Fire.

    Rob van den Bragt, CCO, and Nosipho Maketo-van den Bragt, CEO, Co-Founders of Chocolate Tribe, in an AVIJOZI planning meeting.

    Stella Gono, Software Developer, working on the Chocolate Tribe website.

    Family photo of the Maketos. Maketo-van de Bragt has two siblings.

    Film tax credits have contributed to The Woman King, Dredd, Safe House, Black Sails and Mission: Impossible – Final Reckoning shooting in South Africa. “People understand principal photography, but there is confusion about animation and visual effects,” Maketo-van den Bragt states. “Rebates pose a challenge because now you have to go above and beyond to explain what you are selling. It’s taken time for the government to realize this is a viable career.” The streamers have had a positive impact. “For the most part, Netflix localizes, and that’s been quite a big hit because it speaks to the demographics and local representation and uplifts talent within those geographical spaces. We did one of the shorts for Disney’s Kizazi Moto: Generation Fire, and there was huge global excitement to that kind of anthology coming from Africa. We’ve worked on a number of collaborations with the U.K., and often that melding of different partners creates a fusion of universality. We need to tell authentic stories, and that authenticity will be dictated by the voices in the writing room.”

    AVIJOZI was established to support the development of local talent in animation, visual effects, film production and gaming. “AVIJOZI stands for Animation Visual Effects Interactive in JOZI,” Maketo-van den Bragt explains. “It is a conference as well as a festival. The conference part is where we have networking sessions, panel discussions and behind-the-scenes presentations to draw the curtain back and show what happens when people create avatars. We want to show the next generation that there is a way to do this magical craft. The festival part is people have film screenings and music as well. We’ve brought in gaming as an integral aspect, which attracts many young people because that’s something they do at an early age. Gaming has become the common sport. AVIJOZI is in its fourth year now. It started when I got irritated by people constantly complaining, ‘Nothing ever happens in Johannesburg in terms of animation and visual effects.’ Nobody wanted to do it. So, I said, ‘I’ll do it.’ I didn’t know what I was getting myself into, and four years later I have lots of gray hair!”

    Rob van den Bragt served as Animation Supervisor/Visual Effects Supervisor and Nosipho Maketo-van den Bragt as an Executive Producer on iNumber Number: Jozi Gold for Netflix.

    Mentorship and internship programs have been established with various academic institutions, and while there are times when specific skills are being sought, like rigging, the field of view tends to be much wider. “What we are finding is that the people who have done other disciplines are much more vibrant,” Maketo-van den Bragt states. “Artists don’t always know how to communicate because it’s all in their heads. Sometimes, somebody with a different background can articulate that vision a bit better because they have those other skills. We also find with those who have gone to art school that the range within their artistry and craftsmanship has become a ‘thing.’ When you have mentally traveled where you have done other things, it allows you to be a more well-rounded artist because you can pull references from different walks of life and engage with different topics without being constrained to one thing. We look for people with a plethora of skills and diverse backgrounds. It’s a lot richer as a Chocolate Tribe. There are multiple flavors.”

    South African director/producer/cinematographer and drone cinematography specialist FC Hamman, Founder of FC Hamman Films, at AVIJOZI 2024.

    There is a particular driving force when it comes to mentoring. “I want to be the mentor I hoped for,” Maketo-van den Bragt remarks. “I have silent mentors in that we didn’t formalize the relationship, but I knew they were my mentors because every time I would encounter an issue, I would be able to call them. One of the people who not only mentored but pushed me into different spaces is Jinko Gotoh, who is part of Women in Animation. She brought me into Women in Animation, and I had never mentored anybody. Here I was, sitting with six women who wanted to know how I was able to build up Chocolate Tribe. I didn’t know how to structure a presentation to tell them about the journey because I had been so focused on the journey. It’s a sense of grit and feeling that I cannot fail because I have a whole community that believes in me. Even when I felt my shoulders sagging, they would be there to say, ‘We need this. Keep it moving.’ This isn’t just about me. I have a whole stream of people who want this to work.”

    Netflix VFX Manager Ben Perry, who oversees Netflix’s VFX strategy across Africa, the Middle East and Europe, at AVIJOZI 2024. Netflix was a partner in AVIJOZI with Chocolate Tribe for three years.

    Zama Mfusi, Founder of IndiLang, and Isabelle Rorke, CEO of Dreamforge Creative and Deputy Chair of Animation SA, at AVIJOZI 2024.

    Numerous unknown factors had to be accounted for, which made predicting how the journey would unfold extremely difficult. “What it looks like and what I expected it to be, you don’t have the full sense of what it would lead to in this situation,” Maketo-van den Bragt states. “I can tell you that there have been moments of absolute joy where I was so excited we got this project or won that award. There are other moments where you feel completely lost and ask yourself, ‘Am I doing the right thing?’ The journey is to have the highs, lows and moments of confusion. I go through it and accept that not every day will be an award-winning day. For the most part, I love this journey. I wanted to be somewhere where there was a purpose. What has been a big highlight is when I’m signing a contract for new employees who are excited about being part of Chocolate Tribe. Also, when you get a new project and it’s exciting, especially from a service or visual effects perspective, we’re constantly looking for that dragon or big creature. It’s about being mesmerizing, epic and awesome.”

    Maketo-van den Bragt has two major career-defining ambitions. “Fostering the next generation of talent and making sure that they are ready to create these amazing stories properly – that is my life work, and relating the African narrative to let the world see the human aspect of who we are because for the longest time we’ve been written out of the stories and narratives.”
    NOSIPHO MAKETO-VAN DEN BRAGT ALTERED HER CAREER PATH TO LAUNCH CHOCOLATE TRIBE
    By TREVOR HOGG. Images courtesy of Chocolate Tribe.
    WWW.VFXVOICE.COM
  • The Lost Bus

    The smoke is rising. The clock is ticking. Watch the first teaser for The Lost Bus, Paul Greengrass’s intense new film based on real events!

    The VFX are made by: beloFX (VFX Supervisor: Russell Bowen), Cinesite, ILM, RISE (VFX Supervisor: Oliver Schulz), Outpost VFX (VFX Supervisor: John McLaren), Vitality Visual Effects (VFX Supervisor: Jiwoong Kim), Mist VFX (VFX Supervisor: Sasi Kumar) and Host VFX.

    The Production VFX Supervisor is Charlie Noble. The Production VFX Producer is Gavin Round.

    Director: Paul Greengrass
    Release Date: Fall 2025 (Apple TV+)

    © Vincent Frei – The Art of VFX – 2025
    The post The Lost Bus appeared first on The Art of VFX.
    WWW.ARTOFVFX.COM
  • FX Drops ‘Alien: Earth’ Official Trailer, Key Art

    If we don’t lock them down, it will be too late. The official trailer and key art have been revealed for Alien: Earth, which hits FX and Hulu August 12.
    In the upcoming series, when the mysterious deep space research vessel USCSS Maginot crash-lands on Earth, Wendy and a ragtag group of tactical soldiers make a fateful discovery that puts them face-to-face with the planet’s greatest threat.
    The series stars Sydney Chandler as Wendy; Timothy Olyphant as Kirsh; Alex Lawther as Hermit; Samuel Blenkin as Boy Kavalier; Babou Ceesay as Morrow;  Adrian Edmondson as Atom Eins; David Rysdahl as Arthur Sylvia; Essie Davis as Dame Sylvia; Lily Newmark as Nibs; Erana James as Curly; Adarsh Gourav as Slightly; Jonathan Ajayi as Smee; Kit Young as Tootles; Diêm Camille as Siberian; Moe Bar-El as Rashidi; and Sandra Yi Sencindiver as Yutani.
    Noah Hawley is creator and executive producer. Ridley Scott, David W. Zucker, Joseph Iberti, Dana Gonzales, and Clayton Krueger also executive produce. FX Productions produces.
    VFX are created by Clear Angle Studios, Fin Design & Effects, MPC, Pixomondo, The Third Floor, Untold Studios, and Zoic Studios, with Jonathan Rothbart acting as visual effects supervisor.
    Check out the official trailer now:

    Source: FX

    Journalist, antique shop owner, aspiring gemologist—L'Wren brings a diverse perspective to animation, where every frame reflects her varied passions.
    WWW.AWN.COM
  • fxpodcast: Landman’s special effects and explosions with Garry Elmendorf

    Garry Elmendorf isn’t just a special effects supervisor; he’s a master of controlled chaos. With over 50 years in the business, from Logan’s Run in the ’70s to the high-octane worlds of Yellowstone, 1883, 1923 and Landman, Elmendorf has shaped the visual DNA of Taylor Sheridan’s TV empire with a mix of old-school craft and jaw-dropping spectacle. In the latest fxpodcast, Garry joins us to break down the physical effects work behind some of the most explosive moments in Landman.
    As regular listeners know, we occasionally interview individuals working in SFX rather than VFX. Garry’s work is not built in post; his approach is grounded in real-world physics, practical fabrication and deeply collaborative on-set discipline. Take the aircraft crash in Landman’s premiere: there was no CGI beyond comp cleanup. It was shot with a Frankenstein plane built from scrap, rigged with trip triggers and detonated in real time.
    Or the massive oil rig explosion, which involved custom pump jacks, 2,000 gallons of burning diesel and gasoline, propane cannons, and tightly timed pyro rigs. The scale is cinematic. Safety, Garry insists, is always his first concern, but what keeps him up at night is timing. One mistimed trigger, one failed ignition, and the shot is ruined.

    In our conversation, Garry shares incredible behind-the-scenes insights into how these sequences are devised, tested, and executed, whether it’s launching a van skyward via an air cannon or walking Billy Bob Thornton within 40 feet of a roaring fireball. There’s a tactile intensity to his work, and a trust among his crew that only comes from decades of working under pressure. From assembling a crashable aircraft out of mismatched parts to rigging oil rig explosions with precise control over flame size, duration, and safety, his work is rooted in mechanical problem-solving and coordination across departments.

    In Landman, whether coordinating multiple fuel types to achieve specific smoke density or calculating safe clearances for actors and crew around high-temperature pyrotechnics, Elmendorf’s contribution reflects a commitment to realism and repeatability on set. The result is a series where the physicality of explosions, crashes, and fire-driven action carries weight, both in terms of production logistics and visual impact.

    Listen to the full interview on the fxpodcast.
    fxpodcast: Landman’s special effects and explosions with Garry Elmendorf
    Garry Elmendorf isn’t just a special effects supervisor, he’s a master of controlled chaos. With over 50 years in the business, from Logan’s Run in the ’70s to the high-octane worlds of Yellowstone, 1883, 1923, and Landman. Elmendorf has shaped the visual DNA of Taylor Sheridan’s TV empire with a mix of old-school craft and jaw-dropping spectacle. In the latest fxpodcast, Garry joins us to break down the physical effects work behind some of the most explosive moments in Landman. As regular listeners know, we occasionally conduct interviews with individuals working in SFX, rather than with VFX. Garry’s work is not the kind of work that’s built in post and his approach is grounded in real-world physics, practical fabrication, and deeply collaborative on-set discipline. Take the aircraft crash in Landman’s premiere: there was no CGI here, other than comp cleanup. It was shot with just a Frankenstein plane built from scrap, rigged with trip triggers and detonated in real time. Or the massive oil rig explosion, which involved custom pump jacks, 2,000 gallons of burning diesel and gasoline, propane cannons, and tightly timed pyro rigs. The scale is cinematic. Safety, Garry insists, is always his first concern, but what keeps him up at night is timing. One mistimed trigger, one failed ignition, and the shot is ruined. In our conversation, Garry shares incredible behind-the-scenes insights into how these sequences are devised, tested, and executed, whether it’s launching a van skyward via an air cannon or walking Billy Bob Thornton within 40 feet of a roaring fireball. There’s a tactile intensity to his work, and a trust among his crew that only comes from decades of working under pressure. From assembling a crashable aircraft out of mismatched parts to rigging oil rig explosions with precise control over flame size, duration, and safety, his work is rooted in mechanical problem-solving and coordination across departments. 
In Landman, whether coordinating multiple fuel types to achieve specific smoke density or calculating safe clearances for actors and crew around high-temperature pyrotechnics, Elmendorf’s contribution reflects a commitment to realism and repeatability on set. The result is a series where the physicality of explosions, crashes, and fire-driven action carries weight, both in terms of production logistics and visual impact. Listen to the full interview on the fxpodcast.
  • Apple TV+ Drops ‘Foundation’ Season 3 Trailer

    It's less than a month away! Apple TV+ has unveiled the trailer for Foundation Season 3. Based on Isaac Asimov’s epic, seminal sci-fi stories and starring Jared Harris, Lee Pace, and Lou Llobell, the upcoming season will debut globally with one episode on July 11 on Apple TV+, followed by episodes every Friday through September 12.
    Season 3, which is set 152 years after the events of Season 2, continues the epic chronicle of a band of exiles on their journey to save humanity and rebuild civilization amid the fall of the Galactic Empire.  We get to see more of how prescient, and important, Hari Seldon’s theories of psychohistory become.
    Newcomers to the franchise include Cherry Jones, Brandon P. Bell, Synnøve Karlsen, Cody Fern, Tómas Lemarquis, Alexander Siddig, Troy Kotsur, and Pilou Asbæk. Returning cast includes Laura Birn, Cassian Bilton, Terrence Mann, and Rowena King.
    Under overall VFX supervisor Chris MacLean, VFX studios include Crafty Apes, Framestore, Outpost VFX, Rodeo FX, SSVFX, and Trend VFX.
    Foundation is produced for Apple by Skydance Television. David S. Goyer executive produces alongside Bill Bost, David Ellison, Dana Goldberg, Matt Thunell, Robyn Asimov, David Kob, Christopher J. Byrne, Leigh Dana Jackson, Jane Espenson and Roxann Dawson.
    Foundation Seasons 1 and 2 are streaming globally on Apple TV+.
    Check out the trailer now:

    Source: Apple TV+

    Journalist, antique shop owner, aspiring gemologist—L'Wren brings a diverse perspective to animation, where every frame reflects her varied passions.
  • Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’

    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One.
    By Jay Stobie
    Visual effects supervisor John Knoll confers with modelmakers Kim Smith and John Goodson over the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact.
    Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact and Rogue One: A Star Wars Story propelled their respective franchises to new heights. While Star Trek Generations welcomed Captain Jean-Luc Picard’s crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk. Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope, it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story, The Mandalorian, Andor, Ahsoka, The Acolyte, and more.
    The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Now ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif.
    A final frame from the Battle of Scarif in Rogue One: A Star Wars Story.
    A Context for Conflict
    In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design.
    On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon their space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. The unsanctioned mission to Scarif with Jyn Erso and Cassian Andor, and the sudden need to take down the planet’s shield gate, propel the Rebel Alliance fleet into rushing to their rescue with everything from their flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival.
    From Physical to Digital
    By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models for its features was gradually giving way to innovative computer graphics models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001.
    Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, the studio still considered filming physical miniatures for certain ship-related shots in the latter film. The feature’s fleets were ultimately created digitally, however, to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com.
    However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.”
    John Knoll confers with Kim Smith and John Goodson over the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact.
    Legendary Lineages
    In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.”
    Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet.
    While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.”
    The U.S.S. Enterprise-E in Star Trek: First Contact.
    Familiar Foes
    To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation and Star Trek: Deep Space Nine, creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin.
    As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. It was as accurate as it was possible to be as a reproduction of the original model.”
    Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back, respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.”
    A final frame from Rogue One: A Star Wars Story.
    Forming Up the Fleets
    In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics.
    Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs, live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples. These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’s personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography…
    While the First Contact team could plan visual effects shots only with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized.
    Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story.
    Tough Little Ships
    The Federation and Rebel Alliance each deployed “tough little ships” in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!
    Exploration and Hope
    The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire.
    The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope?

    Jay Stobie is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
It was as accurate as it was possible to be as a reproduction of the original model.” Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back, respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.” A final frame from Rogue One: A Star Wars Story. Forming Up the Fleets In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. 
Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics. Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs, live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples. These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’spersonal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography… Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized. 
Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story. Tough Little Ships The Federation and Rebel Alliance each deployed “tough little ships”in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001! Exploration and Hope The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire. 
The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope? – Jay Stobieis a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy. #looking #back #two #classics #ilm
    WWW.ILM.COM
    Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’
    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One.
    By Jay Stobie
    Visual effects supervisor John Knoll (right) confers with modelmakers Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).
    Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact (1996) and Rogue One: A Star Wars Story (2016) propelled their respective franchises to new heights. While Star Trek Generations (1994) welcomed Captain Jean-Luc Picard’s (Patrick Stewart) crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk (William Shatner). Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope (1977), it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story (2018), The Mandalorian (2019-23), Andor (2022-25), Ahsoka (2023), The Acolyte (2024), and more.
    The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Currently ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time.
With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif.
A final frame from the Battle of Scarif in Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
A Context for Conflict
In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg send only a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design.
On the surface, the situations could not seem more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon their space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat.
The unsanctioned mission to Scarif with Jyn Erso (Felicity Jones) and Cassian Andor (Diego Luna) and the sudden need to take down the planet’s shield gate propels the Rebel Alliance fleet into rushing to their rescue with everything from their flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival.
From Physical to Digital
By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models (many of which were built by ILM) for its features was gradually giving way to innovative computer graphics (CG) models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001.
Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, the studio still considered filming physical miniatures for certain ship-related shots in Rogue One. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments.
So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com. However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.”
John Knoll (second from left) confers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).
Legendary Lineages
In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces.
“We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.”
Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet.
While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from [equipment vendor] VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.”
The U.S.S. Enterprise-E in Star Trek: First Contact (Credit: Paramount).
Familiar Foes
To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation (1987) and Star Trek: Deep Space Nine (1993), creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin.
As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph.
It was as accurate as it was possible to be as a reproduction of the original model.” Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back (1980), respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.”
A final frame from Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
Forming Up the Fleets
In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid.
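Knoll’s earlier description of rebuilding the Death Star’s texture maps – photos unwrapped into texture space and projected onto a sphere with a trench – can be sketched in miniature. The following is a hedged illustration, not ILM’s pipeline: every function name and camera parameter here is an assumption, and it ignores refinements a production tool would need (view-angle weighting, the trench geometry, lens distortion).

```python
import numpy as np

def sphere_point(u, v):
    """Map equirectangular texture coords (u, v) in [0, 1] to a unit-sphere point."""
    lon = (u - 0.5) * 2.0 * np.pi        # longitude: -pi .. pi
    lat = (v - 0.5) * np.pi              # latitude:  -pi/2 .. pi/2
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def project(point, K, R, t):
    """Pinhole projection of a world-space point into pixel coords; None if behind the camera."""
    cam = R @ point + t                  # world -> camera space
    if cam[2] <= 0.0:
        return None
    px = K @ (cam / cam[2])              # perspective divide, then intrinsics
    return px[:2]

def sample_photo_into_texture(texture, photo, K, R, t):
    """Accumulate one calibrated photo's colors into the shared texture map."""
    th, tw = texture.shape[:2]
    ph, pw = photo.shape[:2]
    for j in range(th):
        for i in range(tw):
            px = project(sphere_point((i + 0.5) / tw, (j + 0.5) / th), K, R, t)
            if px is not None and 0 <= px[0] < pw and 0 <= px[1] < ph:
                texture[j, i] = photo[int(px[1]), int(px[0])]
    return texture
```

Run over enough photographs from enough poses and the texel grid fills in, which is the "pretty complete coverage" Knoll describes using as a template for redrawing high-resolution maps.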
Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics. Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs (the MC75 cruiser Profundity and U-wings), live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples (Nebulon-B frigates, X-wings, Y-wings, and more). These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’s (Carrie Fisher and Ingvild Deila) personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope.
And, while we’re on the subject of intricate starship maneuvers and space-based choreography… Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette.
Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized.
Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
Tough Little Ships
The Federation and Rebel Alliance each deployed “tough little ships” (an endearing description Commander William T. Riker [Jonathan Frakes] bestowed upon the U.S.S. Defiant in First Contact) in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!
Exploration and Hope
The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans.
The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire.
The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope?
– Jay Stobie (he/him) is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
  • The art of two Mickeys

    Classic splitscreens, traditional face replacements and new approaches to machine learning-assisted face swapping allowed for twinning shots in ‘Mickey 17’. An excerpt from issue #32 of befores & afters magazine.
    The art of representing two characters on screen at the same time has become known as ‘twinning’. For Mickey 17 visual effects supervisor Dan Glass, the effect of seeing both Mickey 17 and 18 together was one he looked to achieve with a variety of methodologies. “With a technique like that,” he says, “you always want to use a range of tricks, because you don’t want people to figure it out. You want to keep them like, ‘Oh, wait a minute. How did they…?’”
    “Going back to the way that Director Bong is so prepared and organized,” adds Glass, “it again makes the world of difference with that kind of work, because he thumbnails every shot. Then, some of them are a bit more fleshed out in storyboards. You can look at it and go, ‘Okay, in this situation, this is what the camera’s doing, this is what the actor’s doing,’ which in itself is quite interesting, because he pre-thinks all of this. You’d think that the actors show up and basically just have to follow the steps like robots. It’s not like that. He gives them an environment to work in, but the shots do end up extraordinarily close to what he thumbnails, and it made it a lot simpler to go through.”

    Those different approaches to twinning ranged from simple splitscreens, to traditional face replacements, and then, most substantially, to a machine learning approach now usually termed ‘face swapping’. What made the twinning work a tougher task than usual, suggests Glass, was the fact that the two Pattinson characters are virtually identical.
    “Normally, when you’re doing some kind of face replacement, you’re comparing it to a memory of the face. But this was right in front of you as two Mickeys looking strikingly similar.”
    Here’s how a typical twinning shot was achieved, as described by Glass. “Because Mickey was mostly dressed the same, with only a slight hair change, we were able to have Robert play both roles and to do them one after another. Sometimes, you have to do these things where hair and makeup or costume has a significant variation, so you’re either waiting a long time, which slows production, or you’re coming back at another time to do the different roles, which always makes the process a lot more complicated to match, but we were able to do that immediately.”

    “Based on the design of the shot,” continues Glass, “I would recommend which of Robert’s parts should be shot first. This was most often determined by which role had more impact on the camera movement. A huge credit goes to Robert for his ability to flip between the roles so effortlessly.”
    In the film, Mickey 17 is more passive and Mickey 18 is more aggressive. Pattinson reflected the distinct characters in his actions, including for a moment in which they fight. This fight, overseen by stunt coordinator Paul Lowe, represented moments of close interaction between the two Mickeys. It was here that a body double was crucial in shooting. The body double was also relied upon for the classic twinning technique of shooting ‘dirty’ over-the-shoulder out-of-focus shots of the double—i.e. 17 looking at 18. However, it was quickly determined that even these would need face replacement work. “Robert’s jawline is so distinct that even those had to be replaced or shot as split screens,” observes Glass.

    When the shot was a moving one, no motion control was employed. “I’ve never been a big advocate for motion control,” states Glass. “To me it’s applicable when you’re doing things like miniatures where you need many matching passes, but I think when performances are involved, it interferes too much. It slows down a production’s speed of movement, but it’s also restrictive. Performance and camera always benefit from more flexibility.”
    “It helped tremendously that Director Bong and DOP Darius Khondji shot quite classically with minimal crane and Steadicam moves,” says Glass. “So, a lot of the moves are pan and dolly. There are some Steadicams in there that we were sometimes able to do splitscreens on. I wasn’t always sure that we could get away with the splitscreen as we shot it, but since we were always shooting the two roles, we had the footage to assess the practicality later. We were always prepared to go down a CG or machine learning route, but where we could use the splitscreen, that was the preference.”
    The Hydralite rig, developed by Volucap. Source: https://volucap.com
    Rising Sun Pictures (visual effects supervisor Guido Wolter) handled the majority of the twinning visual effects, completing them as splitscreen composites, 2D face replacements and, most notably, via their machine learning toolset REVIZE, which utilized facial and body capture of Pattinson to train a model of his face and torso to swap in for the double’s. A custom capture rig, dubbed the ‘Crazy Rig’ and now officially the Hydralite, was devised and configured by Volucap to capture multiple angles of Robert on set in each lighting environment in order to produce the best possible reference for the machine learning algorithm. “For me, it was a completely legitimate use of the technique,” attests Glass of the machine learning approach. “All of the footage that we used to go into that process was captured on our movie for our movie. There’s nothing historic, or going through past libraries of footage, and it was all with Robert’s approval. I think the results were tremendous.”
    “It’s staggering to me as I watch the movie that the performances of each character are so flawlessly consistent throughout the film, because I know how much we were jumping around,” notes Glass. “I did encourage that we rehearse scenes ahead. Let’s say 17 was going to be the first role we captured, I’d have them rehearse it the other way around so that the double knew what he was going to do. Therefore, eyelines, movement, pacing and in instances where we were basically replacing the likeness of his head or even torso, we were still able to use the double’s performance and then map to that.”

    Read the full Mickey 17 issue of befores & afters magazine in PRINT from Amazon or as a DIGITAL EDITION on Patreon. Remember, you can also subscribe to the DIGITAL EDITION as a tier on the Patreon and get a new issue every time one is released.
    The post The art of two Mickeys appeared first on befores & afters.