• Fusion and AI: How private sector tech is powering progress at ITER

    In April 2025, at the ITER Private Sector Fusion Workshop in Cadarache, something remarkable unfolded. In a room filled with scientists, engineers and software visionaries, the line between big science and commercial innovation began to blur.  
    Three organisations – Microsoft Research, Arena and Brigantium Engineering – shared how artificial intelligence (AI), already transforming everything from language models to logistics, is now stepping into a new role: helping humanity to unlock the power of nuclear fusion. 
    Each presenter addressed a different part of the puzzle, but the message was the same: AI isn’t just a buzzword anymore. It’s becoming a real tool – practical, powerful and indispensable – for big science and engineering projects, including fusion. 
    “If we think of the agricultural revolution and the industrial revolution, the AI revolution is next – and it’s coming at a pace which is unprecedented,” said Kenji Takeda, director of research incubations at Microsoft Research. 
    Microsoft’s collaboration with ITER is already in motion. Just a month before the workshop, the two teams signed a Memorandum of Understanding to explore how AI can accelerate research and development. This follows ITER’s initial use of Microsoft technology to empower their teams.
    A chatbot built with the Azure OpenAI service helps staff navigate technical knowledge across more than a million ITER documents using natural conversation. GitHub Copilot assists with coding, while AI helps to resolve IT support tickets – those everyday but essential tasks that keep the lights on. 
    But Microsoft’s vision goes deeper. Fusion demands materials that can survive extreme conditions – heat, radiation, pressure – and that’s where AI shows a different kind of potential. MatterGen, a Microsoft Research generative AI model for materials, designs entirely new materials based on specific properties.
    “It’s like ChatGPT,” said Takeda, “but instead of ‘Write me a poem’, we ask it to design a material that can survive as the first wall of a fusion reactor.” 
    The next step? MatterSim – a simulation tool that predicts how these imagined materials will behave in the real world. By combining generation and simulation, Microsoft hopes to uncover materials that don’t yet exist in any catalogue. 
    While Microsoft tackles the atomic scale, Arena is focused on a different challenge: speeding up hardware development. As general manager Michael Frei put it: “Software innovation happens in seconds. In hardware, that loop can take months – or years.” 
    Arena’s answer is Atlas, a multimodal AI platform that acts as an extra set of hands – and eyes – for engineers. It can read data sheets, interpret lab results, analyse circuit diagrams and even interact with lab equipment through software interfaces. “Instead of adjusting an oscilloscope manually,” said Frei, “you can just say, ‘Verify the I2C [inter-integrated circuit] protocol’, and Atlas gets it done.” 
    It doesn’t stop there. Atlas can write and adapt firmware on the fly, responding to real-time conditions. That means tighter feedback loops, faster prototyping and fewer late nights in the lab. Arena aims to make building hardware feel a little more like writing software – fluid, fast and assisted by smart tools. 

    Fusion, of course, isn’t just about atoms and code – it’s also about construction. Gigantic, one-of-a-kind machines don’t build themselves. That’s where Brigantium Engineering comes in.
    Founder Lynton Sutton explained how his team uses “4D planning” – a marriage of 3D CAD models and detailed construction schedules – to visualise how everything comes together over time. “Gantt charts are hard to interpret. 3D models are static. Our job is to bring those together,” he said. 
    The result is a time-lapse-style animation that shows the construction process step by step. It’s proven invaluable for safety reviews and stakeholder meetings. Rather than poring over spreadsheets, teams can simply watch the plan come to life. 
    And there’s more. Brigantium is bringing these models into virtual reality using Unreal Engine – the same one behind many video games. One recent model recreated ITER’s tokamak pit using drone footage and photogrammetry. The experience is fully interactive and can even run in a web browser.
    “We’ve really improved the quality of the visualisation,” said Sutton. “It’s a lot smoother; the textures look a lot better. Eventually, we’ll have this running through a web browser, so anybody on the team can just click on a web link to navigate this 4D model.” 
    Looking forward, Sutton believes AI could help automate the painstaking work of syncing schedules with 3D models. One day, these simulations could reach all the way down to individual bolts and fasteners – offering not just impressive visuals, but critical tools for preventing delays. 
    Despite the different approaches, one theme ran through all three presentations: AI isn’t just a tool for office productivity. It’s becoming a partner in creativity, problem-solving and even scientific discovery. 
    Takeda mentioned that Microsoft is experimenting with “world models” inspired by how video games simulate physics. These models learn about the physical world by watching videos of real phenomena such as plasma behaviour. “Our thesis is that if you showed this AI videos of plasma, it might learn the physics of plasmas,” he said. 
    It sounds futuristic, but the logic holds. The more AI can learn from the world, the more it can help us understand it – and perhaps even master it. At its heart, the message from the workshop was simple: AI isn’t here to replace the scientist, the engineer or the planner; it’s here to help, and to make their work faster, more flexible and maybe a little more fun.
    As Takeda put it: “Those are just a few examples of how AI is starting to be used at ITER. And it’s just the start of that journey.” 
    If these early steps are any indication, that journey won’t just be faster – it might also be more inspired. 
    Source: WWW.COMPUTERWEEKLY.COM
  • 8 Stunning Sunset Color Palettes

    By Zoe Santoro

    There’s something absolutely magical about watching the sun dip below the horizon, painting the sky in breathtaking hues that seem almost too beautiful to be real. As a designer, I find myself constantly inspired by these natural masterpieces that unfold before us every evening. The way warm oranges melt into soft pinks, how deep purples blend seamlessly with golden yellows – it’s like nature’s own masterclass in color theory.
    If you’re looking to infuse your next project with the warmth, romance, and natural beauty of a perfect sunset, you’ve come to the right place. I’ve curated eight of the most captivating sunset color palettes that will bring that golden hour magic directly into your designs.
    The 8 Most Breathtaking Sunset Color Palettes
    1. Golden Hour Glow

    #FFD700 · #FF8C00 · #FF6347 · #CD5C5C

    This palette captures that perfect moment when everything seems to be touched by liquid gold. The warm yellows transition beautifully into rich oranges and soft coral reds, creating a sense of warmth and optimism that’s impossible to ignore. I find this combination works wonderfully for brands that want to evoke feelings of happiness, energy, and positivity.
    2. Tropical Paradise

    #FF69B4 · #FF1493 · #FF8C00 · #FFD700

    Inspired by those incredible sunsets you see in tropical destinations, this vibrant palette combines hot pinks with brilliant oranges and golden yellows. It’s bold, it’s energetic, and it’s perfect for projects that need to make a statement. I love using these colors for summer campaigns or anything that needs to capture that vacation feeling.
    3. Desert Dreams

    #CD853F · #D2691E · #B22222 · #8B0000


    The American Southwest produces some of the most spectacular sunsets on earth, and this palette pays homage to those incredible desert skies. The earthy browns blend into warm oranges before deepening into rich reds and burgundies. This combination brings a sense of grounding and authenticity that works beautifully for rustic or heritage brands.
    4. Pastel Evening

    #FFE4E1 · #FFA07A · #F0E68C · #DDA0DD

    Not every sunset needs to be bold and dramatic. This softer palette captures those gentle, dreamy evenings when the sky looks like it’s been painted with watercolors. The delicate pinks, peaches, and lavenders create a romantic, ethereal feeling that’s perfect for wedding designs, beauty brands, or any project that needs a touch of feminine elegance.
    5. Coastal Sunset

    #fae991 · #FF7F50 · #FF6347 · #4169E1 · #1E90FF

    There’s something special about watching the sun set over the ocean, where warm oranges and corals meet the deep blues of the sea and sky. This palette captures that perfect contrast between warm and cool tones. I find it creates a sense of adventure and wanderlust that’s ideal for travel brands or outdoor companies.
    6. Urban Twilight

    #ffeda3 · #fdad52 · #fc8a6e · #575475 · #111f2a

    As the sun sets behind city skylines, you get these incredible contrasts between deep purples and vibrant oranges. This sophisticated palette brings together the mystery of twilight with the warmth of the setting sun. It’s perfect for creating designs that feel both modern and dramatic.
    7. Autumn Harvest

    #FF4500 · #FF8C00 · #DAA520 · #8B4513

    This palette captures those perfect fall evenings when the sunset seems to echo the changing leaves. The deep oranges and golden yellows create a cozy, inviting feeling that’s perfect for seasonal campaigns or brands that want to evoke comfort and tradition.
    8. Fire Sky

    #652220 · #DC143C · #FF0000 · #FF4500 · #FF8C00

    Sometimes nature puts on a show that’s so intense it takes your breath away. This bold, fiery palette captures those dramatic sunsets that look like the sky is literally on fire. It’s not for the faint of heart, but when you need maximum impact and energy, these colors deliver in spades.
    Why Sunset Colors Never Go Out of Style
    Before we explore how to use these palettes effectively, let’s talk about why sunset colors have such enduring appeal in design. There’s something deeply ingrained in human psychology that responds to these warm, glowing hues. They remind us of endings and beginnings, of peaceful moments and natural beauty.
    From a design perspective, sunset colors offer incredible versatility. They can be bold and energetic or soft and romantic. They work equally well for corporate branding and personal projects. And perhaps most importantly, they’re inherently optimistic – they make people feel good.
    I’ve found that incorporating sunset-inspired colors into modern projects adds an instant sense of warmth and approachability that resonates with audiences across all demographics. Whether you’re working on packaging design, web interfaces, or environmental graphics, these palettes can help create an emotional connection that goes beyond mere aesthetics.
    How to Master Sunset Palettes in Contemporary Design
    Using sunset colors effectively requires more than just picking pretty hues and hoping for the best. Here are some strategies I’ve developed for incorporating these palettes into modern design work:
    Start with Temperature Balance
    One of the most important aspects of working with sunset palettes is understanding color temperature. Most sunset combinations naturally include both warm and cool elements – the warm oranges and yellows of the sun itself, balanced by the cooler purples and blues of the surrounding sky. Maintaining this temperature balance keeps your designs from feeling flat or monotonous.
    Layer for Depth
    Real sunsets have incredible depth and dimension, with colors layering and blending into each other. Try to recreate this in your designs by using gradients, overlays, or layered elements rather than flat blocks of color. This approach creates visual interest and mimics the natural way these colors appear in nature.
    Consider Context and Contrast
    While sunset colors are beautiful, they need to work within the context of your overall design. Pay attention to readability – text needs sufficient contrast against sunset backgrounds. Consider using neutrals like deep charcoal or cream to provide breathing room and ensure your message remains clear.
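    The "sufficient contrast" point can be made concrete: accessibility guidelines such as WCAG define a measurable contrast ratio between a text colour and its background, with 4.5:1 as the usual minimum for body text. Here is a minimal Python sketch of that check (the helper names are my own, not from any library), pairing dark charcoal text with the golden yellow from the first palette:

```python
def hex_to_rgb(hex_color):
    """Convert '#RRGGBB' to an (r, g, b) tuple of 0-255 ints."""
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB colour (0.0 to 1.0)."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two hex colours, from 1:1 to 21:1."""
    l1 = relative_luminance(hex_to_rgb(fg))
    l2 = relative_luminance(hex_to_rgb(bg))
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Dark charcoal on Golden Hour Glow yellow: well above the 4.5:1 AA threshold.
print(round(contrast_ratio("#333333", "#FFD700"), 2))
```

Running the same check with white text (#FFFFFF) on that yellow falls far short of 4.5:1 – a quick way to catch readability problems before they reach the screen.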
    Embrace Gradual Transitions
    The magic of a sunset lies in how colors flow seamlessly from one to another. Incorporate this principle into your designs through smooth gradients, subtle color shifts, or elements that bridge between different hues in your palette.
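    Those gradual transitions are easy to prototype in code. The sketch below (helper names are my own invention) linearly interpolates between two hues from the Golden Hour Glow palette, producing evenly spaced intermediate stops you could feed into a gradient:

```python
def hex_to_rgb(h):
    """Convert '#RRGGBB' to an (r, g, b) tuple of 0-255 ints."""
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex(rgb):
    """Convert an (r, g, b) tuple back to an uppercase '#RRGGBB' string."""
    return "#{:02X}{:02X}{:02X}".format(*rgb)

def gradient(start, end, steps):
    """Return 'steps' colours blending from start to end, endpoints included.

    Assumes steps >= 2; interpolation is linear in RGB space.
    """
    a, b = hex_to_rgb(start), hex_to_rgb(end)
    return [
        rgb_to_hex(tuple(
            round(a[c] + (b[c] - a[c]) * t / (steps - 1)) for c in range(3)
        ))
        for t in range(steps)
    ]

# Five stops from golden yellow to coral red.
print(gradient("#FFD700", "#FF6347", 5))
```

Linear RGB blending is the simplest choice; for very different hues, interpolating in a perceptual space such as HSL or Lab often gives smoother, less muddy midpoints.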
    The Science Behind Our Sunset Obsession
    As someone who’s spent years studying color psychology, I’m fascinated by why sunset colors have such universal appeal. Research suggests that warm colors like those found in sunsets trigger positive emotional responses and can even increase feelings of comfort and security.
    There’s also the association factor – sunsets are linked in our minds with relaxation, beauty, and positive experiences. When we see these colors in design, we unconsciously associate them with those same positive feelings. This makes sunset palettes particularly effective for brands that want to create emotional connections with their audiences.
    The cyclical nature of sunsets also plays a role. They happen every day, marking the transition from activity to rest, from work to leisure. This gives sunset colors a sense of familiarity and comfort that few other color combinations can match.
    Applying Sunset Palettes Across Design Disciplines
    One of the things I love most about sunset color palettes is how adaptable they are across different types of design work:
    Brand Identity Design
    Sunset colors can help brands convey warmth, optimism, and approachability. I’ve used variations of these palettes for everything from artisanal food companies to wellness brands. The key is choosing the right intensity level for your brand’s personality – softer palettes for more refined brands, bolder combinations for companies that want to make a statement.
    Digital Design
    In web and app design, sunset colors can create interfaces that feel warm and inviting rather than cold and clinical. I often use these palettes for backgrounds, accent elements, or call-to-action buttons. The natural flow between colors makes them perfect for creating smooth user experiences that guide the eye naturally through content.
    Print and Packaging
    Sunset palettes really shine in print applications where you can take advantage of rich, saturated colors. They work beautifully for packaging design, particularly for products associated with warmth, comfort, or natural ingredients. The key is ensuring your color reproduction is accurate – sunset colors can look muddy if not handled properly in print.
    Environmental Design
    In spaces, sunset colors can create incredibly welcoming environments. I’ve seen these palettes used effectively in restaurants, retail spaces, and even corporate offices where the goal is to create a sense of warmth and community.
    Seasonal Considerations and Trending Applications
    While sunset colors are timeless, they do have natural seasonal associations that smart designers can leverage. The warmer, more intense sunset palettes work beautifully for fall and winter campaigns, while the softer, more pastel variations are perfect for spring and summer applications.
    I’ve noticed a growing trend toward using sunset palettes in unexpected contexts – tech companies embracing warm gradients, financial services using sunset colors to appear more approachable, and healthcare brands incorporating these hues to create more comforting environments.
    Conclusion: Bringing Natural Beauty Into Modern Design
    As we’ve explored these eight stunning sunset color palettes, I hope you’ve gained new appreciation for the incredible design potential that nature provides us every single day. These colors aren’t just beautiful – they’re powerful tools for creating emotional connections, conveying brand values, and making designs that truly resonate with people.
    The secret to successfully using sunset palettes lies in understanding both their emotional impact and their technical requirements. Don’t be afraid to experiment with different combinations and intensities, but always keep your audience and context in mind.
    Remember, the best sunset colors aren’t just about picking the prettiest hues – they’re about capturing the feeling of those magical moments when day transitions to night. Whether you’re creating a logo that needs to convey warmth and trust, designing a website that should feel welcoming and approachable, or developing packaging that needs to stand out on crowded shelves, these sunset-inspired palettes offer endless possibilities.
    So the next time you catch yourself stopped in your tracks by a particularly stunning sunset, take a moment to really study those colors. Notice how they blend and flow, how they make you feel, and how they change as the light shifts. Then bring that natural magic into your next design project.
    After all, if nature can create such breathtaking color combinations every single day, imagine what we can achieve when we learn from the master. Happy designing!

    Zoe Santoro

    Zoe is an art student and graphic designer with a passion for creativity and adventure. Whether she’s sketching in a cozy café or capturing inspiration from vibrant cityscapes, she finds beauty in every corner of the world. With a love for bold colors, clean design, and storytelling through visuals, Zoe blends her artistic skills with her wanderlust to create stunning, travel-inspired designs. Follow her journey as she explores new places, discovers fresh inspiration, and shares her creative process along the way.

    DESIGNWORKLIFE.COM
  • How To Create & Animate Breakdance-Inspired Streetwear

    Introduction

    Hi, my name is Pankaj Kholiya, and I am a Senior 3D Character Artist. I've been working in the game industry for the past eight years, on titles such as Call of Duty: Black Ops 6, That Christmas, Ghost of Tsushima Director's Cut, Star Wars: Outlaws, Alan Wake 2, Street Fighter 6, and many more. Currently, I'm working as a freelancer for the gaming and cinematics industry.

    Since my last interview, I have made a few personal works, was part of a Netflix movie, That Christmas, and worked with Platige on the Star Wars: Outlaws and Call of Duty: Black Ops 6 cinematics.

    The Breakdancing Clothing Project

    It all started when I witnessed a dance battle that a friend organized – it was like watching Step Up live. That's where I got the inspiration to create a breakdancer. I started by gathering references from the internet, found one particular image on Pinterest, and decided to recreate it in 3D.

    At first, the idea was to create the outfit in a single pose, but along the way I also decided to create a dancing version of the character and explore Unreal Engine. Here is the reference I used for the dancing version:

    Getting Started

    For upcoming talents, I'll try to describe my process in a few points. Even before starting in Marvelous Designer, I made sure to have my base character ready for animation and simulation. This time, I decided to use MetaHuman Creator for the base because of its high-quality textures and materials. My primary focus was on the clothing, so using MetaHuman saved a lot of time.

    After I was satisfied with how my MetaHuman looked, I took it to Mixamo to get some animations. I was really impressed by how well the animations worked on the MetaHuman. Once I had the animations, I brought them into Marvelous Designer and simulated the clothes. For the posed character, I adjusted the rig to match the pose in the reference and used the same method as in this tutorial to pose the character:

    Clothing

    For this particular project, I didn't focus on the topology, as it was just for a single render. I simply packed the UVs in Marvelous Designer, exported the quad mesh, subdivided it a few times, and started on the detailing in ZBrush. For the textures, I used the low-division mesh from the ZBrush file, as it already had the UVs; I baked the normal and other maps onto it and took it into Substance 3D Painter.

    Animation

    There are multiple ways to animate a MetaHuman character. For this project, I used Mixamo: I imported my character, selected the animation I liked, and exported it. After that, I imported it into Marvelous Designer and hit the simulation button. You can check my previous breakdown for the Mixamo pipeline. Once happy with the result, I exported the simulated cloth as an Alembic cache to Unreal Engine. Tutorial for importing clothes into Unreal Engine:

    Lighting & Rendering

    The main goal was to match the lighting closely to the reference. This was my first project in Unreal Engine, so I wanted to explore the lighting and see how far I could push it. Being new to Unreal Engine, I went through a lot of tutorials. Here are the lights I used for the posed version:

    For the dancing version, I created a stage like the reference from the Step Up movie. Some tips I found useful for rendering are in the video below:

    Conclusion

    At first, I had a clear direction for this project and was confident in my skills to tackle the art side of it. But things changed when I dove into Unreal Engine for my presentation: more than half the time on this project went into learning and getting used to the engine. I don't regret a single second I invested in Unreal, as it was a new experience. It took around 15 days to wrap this one up.

    The lesson I learned is that upgrading your knowledge and learning new things will help you grow as an artist in the long run. The way you approach making an artwork has changed a lot since I started in 3D, and adapting to the changing art environment is a good thing. Here are some recommendations if you are interested in learning Unreal Engine.

    Pankaj Kholiya, Senior 3D Character Artist
    Interview conducted by Amber Rutherford
    80.LV
  • Making a killing: The playful 2D terror of Psycasso®

    A serial killer is stalking the streets, and his murders are a work of art. That’s more or less the premise behind Psycasso®, a tongue-in-cheek 2D pixel art game from Omni Digital Technologies that’s debuting a demo at Steam Next Fest this week, with plans to head into Early Access later this year. Playing as the killer, you get a job and build a life by day, then hunt the streets by night to find and torture victims, paint masterpieces with their blood, then sell them to fund operations. I sat down with lead developer Benjamin Lavender and designer and producer Omni to talk about this playfully gory game that gives a classic retro style a fresh (if gruesome) twist.

    Let’s start with a bit of background about the game.

    Omni: We wanted to make something that stands out. We know a lot of indie studios are releasing games and the market is ever growing, so we wanted to make something that’s not just fun to play, but catches people’s attention when others tell them about it. We’ve created an open-world pixel art game about an artist who spends his day getting a job, trying to fit into society. Then at nighttime, things take a more sinister turn and he goes around and makes artwork out of his victims' blood. We didn’t want to make it creepy and gory. We kind of wanted it to be cutesy and fun, just to make it ironic. Making it was a big challenge. We basically had to create an entire city with functioning shops and NPCs who have their own lives, their own hobbies. It was a huge challenge.

    So what does the actual gameplay look like?

    Omni: There’s a day cycle and a night cycle that breaks up the gameplay. During the day, you can get a job, level up skills, buy properties and furniture upgrades. At nighttime, the lighting completely changes, the vibe completely changes, there’s police on the street and the flow of the game shifts. The idea is that you can kidnap NPCs using a whole bunch of different weapons – guns, throwable grenades, little traps and cool stuff that you can capture people with. Once captured on the street, you can either harvest their blood and body parts there, or buy a specialist room to keep them in a cage and put them in various equipment like hanging chains or torture chairs. The player gets better rewards for harvesting blood and body parts this way. On the flip side, there’s a whole other element to the game where the player is given missions each week from galleries around the city. They come up on your phone menu, and you can accept them and do either portrait or landscape paintings, with all of the painting done using only shades of red. We've got some nice drip effects and splat sounds to make it feel like you’re painting with blood. Then you can give your creation a name, submit it to a gallery, and it goes into a fake auction; people bid on the artwork and you get paid a large amount of in-game money, so you can then buy upgrades for the home and upgrade painting tools – bigger paint brushes, more selection tools, stuff like that.

    Ben: There’s definitely nothing like it. And that was the aim – when you’re telling people about it, they’re like, “Oh, okay. Right. We’re not going to forget about this.”

    Let’s dig into the 2D tools you used to create this world.

    Ben: It’s using the 2D Renderer. The Happy Harvest 2D sample project that you guys made was kind of a big starting point, from a lighting perspective – doing the normal maps in 2D and getting the lighting to look nice. Our night system is a very stripped-down, then added-on version of the thing that you guys made. I was particularly interested in its shadows. The buildings’ shadows aren’t actually shadows – it’s a black light. We tried to recreate that with all of our buildings in the entire open world – so it does look beautiful for a 2D game, if I do say so myself.

    Can you say a bit about how you’re using AI or procedural generation in NPCs?

    Ben: I don’t know how many actually made it into the demo, to be fair, number-wise. Every single NPC has a unique identity, as in they all have a place of work that they go to on a regular schedule. They have hobbies, they have spots where they prefer to loiter – a park bench or whatever. So you can get to know everyone’s individual lifestyle. So, the old man that lives in the same building as me might love to go to the casino at nighttime, or go consistently on a Monday and a Friday, that kind of vibe. It uses the A* Pathfinding Project, because we knew we wanted to have a lot of AIs. We’ve locked off most of the city for the demo, but the actual size of the city is huge. The police mechanics are currently turned off, but they’re about 80% done as well. If you punch someone or hurt someone, that’s a crime, and if anyone sees it, they can go and report to the police and then things happen. That’s a feature that’s there but not demo-ready yet.

    How close would you say you are to a full release?

    Omni: We should be on schedule for October for Early Access. By that point we’ll have the stealth mechanics and the policing systems polished and in, and get some of the other upcoming features buttoned up. We’re fairly close.

    Ben: Lots of it’s already done, it’s just turned off for the demo. We don’t want to overwhelm people, because there’s just so much for the player to do.

    Tell me a bit about the paint mechanics – how did you build that?

    Ben: It is custom. We built it ourselves completely from scratch. But I can't take responsibility for that one – someone else did the whole thing – that was their baby. It is really, really cool though.

    Omni: It’s got a variety of masking tools, the ability to change opacity and spacing, and you can undo and redo. It’s a really fantastic feature that gives people the opportunity to express themselves and make some great art.

    Ben: And it's gamified, so it doesn’t feel like you’ve just opened up Paint in Windows.

    Omni: Best of all, when you make a painting, it gets turned into an inventory item, so you physically carry it around with you and can sell it or treasure it.

    What’s the most exciting part of Psycasso for you?

    Omni: Stunning graphics. I think graphically, it looks really pretty.

    Ben: Visually, you could look at it and go, “Oh, that’s Psycasso.”

    Omni: What we’ve done is taken a cozy retro-style game and brought modern design, logic, and technology into it. So you're playing what feels like a nostalgic game, but you're getting the experience of a much newer project.

    Check out the Psycasso demo on Steam, and stay tuned for more Next Fest coverage.
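Ben credits the A* Pathfinding Project (a third-party package) for routing so many NPCs through the city. As a language-neutral illustration of the underlying algorithm itself – not that package's API or Psycasso's code – here is a minimal grid-based A* sketch in Python; the grid, start, and goal values are invented for the example:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid; 0 = walkable, 1 = wall.
    Returns the path from start to goal as (row, col) tuples, or None."""
    def h(a, b):  # Manhattan-distance heuristic (admissible on a 4-grid)
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start, goal), 0, start)]  # (f = g + h, g, node)
    came_from, best_g = {}, {start: 0}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:
            path = [node]
            while node in came_from:  # walk parents back to the start
                node = came_from[node]
                path.append(node)
            return path[::-1]
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry; a cheaper route was found later
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = node
                    heapq.heappush(open_heap, (ng + h((nr, nc), goal), ng, (nr, nc)))
    return None

# A tiny "city block" with a wall down the middle column except the bottom cell.
city = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(astar(city, (0, 0), (0, 2)))  # winds around the wall via the bottom row
```

The heuristic keeps the search pointed at the goal, which is what makes A* cheap enough to run for many agents at once; production packages add graph preprocessing and path smoothing on top of this core loop.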
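The undo/redo Omni mentions in the paint tool is classically built as two stacks, where performing a new action clears the redo branch. A toy Python sketch of that pattern – the class name and stroke representation are invented for illustration, not Psycasso's implementation:

```python
class PaintCanvas:
    """Toy undo/redo: each stroke is a command on an undo stack;
    undo moves it to a redo stack, and a new stroke clears redo history."""

    def __init__(self):
        self.strokes = []  # applied strokes, oldest first
        self._redo = []    # strokes undone and available for redo

    def stroke(self, brush, color):
        self.strokes.append((brush, color))
        self._redo.clear()  # a new action invalidates the redo branch

    def undo(self):
        if self.strokes:
            self._redo.append(self.strokes.pop())

    def redo(self):
        if self._redo:
            self.strokes.append(self._redo.pop())

canvas = PaintCanvas()
canvas.stroke("wide", "crimson")
canvas.stroke("splatter", "scarlet")
canvas.undo()
canvas.redo()
print(canvas.strokes)  # both strokes restored
```

Turning a finished painting into an inventory item, as Omni describes, then just means snapshotting the stroke list (or the rendered texture) into a new game object.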
    UNITY.COM
  • Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’

    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One.
    By Jay Stobie
    Visual effects supervisor John Knoll confers with modelmakers Kim Smith and John Goodson over the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact.
    Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact and Rogue One: A Star Wars Story propelled their respective franchises to new heights. While Star Trek Generations welcomed Captain Jean-Luc Picard’s crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk. Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope, it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story, The Mandalorian, Andor, Ahsoka, The Acolyte, and more.
    The two films also shared a key ILM contributor in John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Now ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided the studio as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif.
    A final frame from the Battle of Scarif in Rogue One: A Star Wars Story.
    A Context for Conflict
    In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design.
    On the surface, the situations could not seem more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon its space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. The unsanctioned mission to Scarif by Jyn Erso and Cassian Andor, and the sudden need to take down the planet’s shield gate, propel the Rebel Alliance fleet into rushing to the rescue with everything from the flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival.
    From Physical to Digital
    By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models (many of which were built by ILM) for its features was gradually giving way to innovative computer graphics (CG) models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001.
    Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, the studio still considered filming physical miniatures for certain ship-related shots in the latter film. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com.
    However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.”
    John Knoll confers with modelmakers Kim Smith and John Goodson over the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact.
    Legendary Lineages
    In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.”
    Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet.
    While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.”
    The U.S.S. Enterprise-E in Star Trek: First Contact.
    Familiar Foes
    To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation and Star Trek: Deep Space Nine, creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin.
    As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. It was as accurate as it was possible to be as a reproduction of the original model.”
    Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back, respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.”
    A final frame from Rogue One: A Star Wars Story.
    Forming Up the Fleets
    In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics.
    Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs, live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples. These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’s personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography…
    Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized.
    Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story.
    Tough Little Ships
    The Federation and Rebel Alliance each deployed “tough little ships” in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!
    Exploration and Hope
    The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire.
    The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope?

    Jay Stobie is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
    #looking #back #two #classics #ilm
    Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’
    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One. By Jay Stobie Visual effects supervisor John Knollconfers with modelmakers Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact. Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contactand Rogue One: A Star Wars Storypropelled their respective franchises to new heights. While Star Trek Generationswelcomed Captain Jean-Luc Picard’screw to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk. Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope, it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story, The Mandalorian, Andor, Ahsoka, The Acolyte, and more. The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Currently, ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. 
With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif. A final frame from the Battle of Scarif in Rogue One: A Star Wars Story. A Context for Conflict In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design. On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon their space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. 
The unsanctioned mission to Scarif with Jyn Ersoand Cassian Andorand the sudden need to take down the planet’s shield gate propels the Rebel Alliance fleet into rushing to their rescue with everything from their flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival. From Physical to Digital By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical modelsfor its features was gradually giving way to innovative computer graphicsmodels, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001. Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, they considered filming physical miniatures for certain ship-related shots in the latter film. ILM considered filming physical miniatures for certain ship-related shots in Rogue One. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com. 
However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.” John Knollconfers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact. Legendary Lineages In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. 
“We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.” Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet. While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got fromVER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.” The U.S.S. Enterprise-E in Star Trek: First Contact. 
Familiar Foes To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generationand Star Trek: Deep Space Nine, creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin. As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. 
It was as accurate as it was possible to be as a reproduction of the original model.” Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back, respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.” A final frame from Rogue One: A Star Wars Story. Forming Up the Fleets In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. 
Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics. Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs, live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples. These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’spersonal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography… Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized. 
Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story. Tough Little Ships The Federation and Rebel Alliance each deployed “tough little ships”in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001! Exploration and Hope The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire. 
The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope? – Jay Stobieis a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy. #looking #back #two #classics #ilm
    WWW.ILM.COM
    Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’
    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One. By Jay Stobie Visual effects supervisor John Knoll (right) confers with modelmakers Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM). Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact (1996) and Rogue One: A Star Wars Story (2016) propelled their respective franchises to new heights. While Star Trek Generations (1994) welcomed Captain Jean-Luc Picard’s (Patrick Stewart) crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk (William Shatner). Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope (1977), it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story (2018), The Mandalorian (2019-23), Andor (2022-25), Ahsoka (2023), The Acolyte (2024), and more. The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Currently, ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. 
With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif.

A final frame from the Battle of Scarif in Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).

A Context for Conflict

In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design. On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon their space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat.
The unsanctioned mission to Scarif with Jyn Erso (Felicity Jones) and Cassian Andor (Diego Luna) and the sudden need to take down the planet’s shield gate propels the Rebel Alliance fleet into rushing to their rescue with everything from their flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival.

From Physical to Digital

By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models (many of which were built by ILM) for its features was gradually giving way to innovative computer graphics (CG) models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001. Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, the studio still considered filming physical miniatures for certain ship-related shots in the latter film. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments.
So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com. However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.”

John Knoll (second from left) confers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).

Legendary Lineages

In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces.
“We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.” Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet. While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from [equipment vendor] VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.”

The U.S.S. Enterprise-E in Star Trek: First Contact (Credit: Paramount).
Familiar Foes

To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation (1987) and Star Trek: Deep Space Nine (1993), creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin. As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph.
It was as accurate as it was possible to be as a reproduction of the original model.” Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back (1980), respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.”

A final frame from Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).

Forming Up the Fleets

In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid.
Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics. Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs (the MC75 cruiser Profundity and U-wings), live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples (Nebulon-B frigates, X-wings, Y-wings, and more). These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’s (Carrie Fisher and Ingvild Deila) personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography… Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. 
Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized.

Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).

Tough Little Ships

The Federation and Rebel Alliance each deployed “tough little ships” (an endearing description Commander William T. Riker [Jonathan Frakes] bestowed upon the U.S.S. Defiant in First Contact) in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!

Exploration and Hope

The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans.
The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire. The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope?

– Jay Stobie (he/him) is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.
  • AN EXPLOSIVE MIX OF SFX AND VFX IGNITES FINAL DESTINATION BLOODLINES

    By CHRIS McGOWAN

    Images courtesy of Warner Bros. Pictures.

    Final Destination Bloodlines, the sixth installment in the graphic horror series, kicks off with the film’s biggest challenge – deploying an elaborate, large-scale set piece involving the 400-foot-high Skyview Tower restaurant. While there in 1968, young Iris Campbell has a premonition about the Skyview burning, cracking, crumbling and collapsing. Then, when she sees these events actually starting to happen around her, she intervenes and causes an evacuation of the tower, thus thwarting death’s design and saving many lives. Years later, her granddaughter, Stefani Reyes, inherits the vision of the destruction that could have occurred and realizes death is still coming for the survivors.

    “I knew we couldn’t put the whole thing on fire, but Tony tried and put as much fire as he could safely, and then we just built off that and added a lot more. Even when it’s just a little bit of real fire, the lighting and interaction can’t be simulated, so I think it was a success in terms of blending that practical with the visual.”
    —Nordin Rahhali, VFX Supervisor

    The film opens with an elaborate, large-scale set piece involving the 400-foot-high Skyview Tower restaurant – and its collapse. Drone footage was digitized to create a 3D asset for the LED wall so the time of day could be changed as needed.

    “The set that the directors wanted was very large,” says Nordin Rahhali, VFX Supervisor. “We had limited space options in stages given the scale and the footprint of the actual restaurant that they wanted. It was the first set piece, the first big thing we shot, so we had to get it all ready and going right off the bat. We built a bigger volume for our needs, including an LED wall that we built the assets for.”

    “We were outside Vancouver at Bridge Studios in Burnaby. The custom-built LED volume was a little over 200 feet in length,” states Christian Sebaldt, ASC, the movie’s DP. The volume was 98 feet in diameter and 24 feet tall. Rahhali explains, “Pixomondo was the vendor that we contracted to come in and build the volume. They also built the asset that went on the LED wall, so they were part of our filming team and production shoot. Subsequently, they were also the main vendor doing post, which was by design. By having them design and take care of the asset during production, we were able to leverage their assets, tools and builds for some of the post VFX.” Rahhali adds, “It was really important to make sure we had days with the volume team and with Christian and his camera team ahead of the shoot so we could dial it in.”

    Built at Bridge Studios in Burnaby outside Vancouver, the custom-built LED volume for events at the Skyview restaurant was over 200 feet long, 98 feet wide and 24 feet tall. Extensive previs with Digital Domain was done to advance key shots.

    Zach Lipovsky and Adam Stein directed Final Destination Bloodlines, a New Line film distributed by Warner Bros., in which chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated death at some point. Pixomondo was the lead VFX vendor, followed by FOLKS VFX. Picture Shop also contributed. There were around 800 VFX shots. Tony Lazarowich was the Special Effects Supervisor.

    “The Skyview restaurant involved building a massive set that was fire retardant, which meant the construction took longer than normal because they had to build it with certain materials and coat it with certain things because, obviously, it serves for the set piece. As it’s falling into chaos, a lot of that fire was practical. I really jived with what Christian and the directors wanted and how Tony likes to work – to augment as much real practical stuff as possible,” Rahhali remarks. “I knew we couldn’t put the whole thing on fire, but Tony tried and put as much fire as he could safely, and then we just built off that and added a lot more. Even when it’s just a little bit of real fire, the lighting and interaction can’t be simulated, so I think it was a success in terms of blending that practical with the visual.”

    The Skyview restaurant required building a massive set that was fire retardant. Construction on the set took longer because it had to be built and coated with special materials. As the Skyview restaurant falls into chaos, much of the fire was practical.

    “We got all the Vancouver skyline so we could rebuild our version of the city, which was based a little on the Vancouver footprint. So, we used all that to build a digital recreation of a city that was in line with what the directors wanted, which was a coastal city somewhere in the States that doesn’t necessarily have to be Vancouver or Seattle, but it looks a little like the Pacific Northwest.”
    —Christian Sebaldt, ASC, Director of Photography

    For drone shots, the team utilized a custom heavy-lift drone with three RED Komodo Digital Cinema cameras “giving us almost 180 degrees with overlap that we would then stitch in post and have a ridiculous amount of resolution off these three cameras,” Sebaldt states. “The other drone we used was a DJI Inspire 3, which was also very good. And we flew these drones up at the height. We flew them at different times of day. We flew full 360s, and we also used them for photogrammetry. We got all the Vancouver skyline so we could rebuild our version of the city, which was based a little on the Vancouver footprint. So, we used all that to build a digital recreation of a city that was in line with what the directors wanted, which was a coastal city somewhere in the States that doesn’t necessarily have to be Vancouver or Seattle, but it looks a little like the Pacific Northwest.” Rahhali adds, “All of this allowed us to figure out what we were going to shoot. We had the stage build, and we had the drone footage that we then digitized and created a 3D asset to go on the wall so we could change the times of day.”

    Pixomondo built the volume and the asset that went on the LED wall for the Skyview sequence. They were also the main vendor during post. FOLKS VFX and Picture Shop contributed.

    “We did extensive previs with Digital Domain,” Rahhali explains. “That was important because we knew the key shots that the directors wanted. With a combination of those key shots, we then kind of reverse-engineered them while we did techvis off the previs and worked with Christian and the art department so we would have proper flexibility with the set to be able to pull off some of these shots. Some of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paul as he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.”

    Some shots required the Skyview’s ceiling to be lifted and partially removed to get a crane to shoot Paul Campbell as he’s about to fall.

    The character Iris lived in a fortified house, isolating herself methodically to avoid the Grim Reaper. Rahhali comments, “That was a beautiful location in the GVRD, very cold. It was a long, hard shoot, because it was all nights. It was just this beautiful pocket out in the middle of the mountains. We in visual effects didn’t do a ton other than a couple of clean-ups of the big establishing shots when you see them pull up to the compound. We had to clean up small roads we wanted to make look like one road and make the road look like dirt.” There were flames involved. Sebaldt says, “The explosion was unbelievably big. We had eight cameras on it at night and shot it at high speed, and we’re all going ‘Whoa.’” Rahhali notes, “There was some clean-up, but the explosion was 100% practical. Our Special Effects Supervisor, Tony, went to town on that. He blew up the whole house, and it looked spectacular.”

    The tattoo shop piercing scene is one of the most talked-about sequences in the movie, where a dangling chain from a ceiling fan attaches itself to the septum nose piercing of Erik Campbell and drags him toward a raging fire. Rahhali observes, “That was very Final Destination and a great Rube Goldberg build-up event. Richard was great. He was tied up on a stunt line for most of it, balancing on top of furniture. All of that was him doing it for real with a stunt line.” Some effects solutions can be surprisingly simple. Rahhali continues, “Our producer, Craig, came up with a great gag for the septum ring.” Richard’s nose was connected with just a nose plug that went inside his nostrils. “All that tugging and everything that you’re seeing was real. For weeks and weeks, we were all trying to figure out how to do it without it being a big visual effects thing. ‘How are we gonna pull his nose for real?’ Craig said, ‘I have these things I use to help me open up my nose and you can’t really see them.’ They built it off of that, and it looked great.”

    Filmmakers spent weeks figuring out how to execute the harrowing tattoo shop scene. A dangling chain from a ceiling fan attaches itself to the septum nose ring of Erik Campbell – with the actor’s nose being tugged by the chain connected to a nose plug that went inside his nostrils.

    “Some of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paul as he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.”
    —Nordin Rahhali, VFX Supervisor

    Most of the fire in the tattoo parlor was practical. “There are some fire bars and stuff that you’re seeing in there from SFX and the big pool of fire on the wide shots.” Sebaldt adds, “That was a lot of fun to shoot because it’s so insane when he’s dancing and balancing on all this stuff – we were laughing and laughing. We were convinced that this was going to be the best scene in the movie up to that moment.” Rahhali says, “They used the scene wholesale for the trailer. It went viral – people were taking out their septum rings.” Erik survives the parlor blaze only to meet his fate in a hospital when he is pulled by a wheelchair into an out-of-control MRI machine at its highest magnetic level. Rahhali comments, “That is a good combination of a bunch of different departments. Our Stunt Coordinator, Simon Burnett, came up with this hard pull-wire line when Erik flies and hits the MRI. That’s a real stunt with a double, and he hit hard. All the other shots are all CG wheelchairs because the directors wanted to art-direct how the crumpling metal was snapping and bending to show pressure on him as his body starts going into the MRI.”

    To augment the believability that comes with reality, the directors aimed to capture as much practically as possible, then VFX Supervisor Nordin Rahhali and his team built on that result.

    A train derailment concludes the film after Stefani and her brother, Charlie, realize they are still on death’s list. A train goes off the tracks, and logs from one of the cars fly through the air and kill them. “That one was special because it’s a hard sequence and was also shot quite late, so we didn’t have a lot of time. We went back to Vancouver and shot the actual street, and we shot our actors performing. They fell onto stunt pads, and the moment they get touched by the logs, it turns into CG as it was the only way to pull that off and the train of course. We had to add all that. The destruction of the houses and everything was done in visual effects.”

    Erik survives the tattoo parlor blaze only to meet his fate in a hospital when he is crushed by a wheelchair while being pulled into an out-of-control MRI machine.

    Erik appears about to be run over by a delivery truck at the corner of 21A Ave. and 132A St., but he’s not – at least not then. The truck is actually on the opposite side of the road, and the person being run over is Howard.

    A rolling penny plays a major part in the catastrophic chain reactions and seems to be a character itself. “The magic penny was a mix from two vendors, Pixomondo and FOLKS; both had penny shots,” Rahhali says. “All the bouncing pennies you see going through the vents and hitting the fan blade are all FOLKS. The bouncing penny at the end as a lady takes it out of her purse, that goes down the ramp and into the rail – that’s FOLKS. The big explosion shots in the Skyview with the penny slowing down after the kid throws it are all Pixomondo shots. It was a mix. We took a little time to find that balance between readability and believability.”

    Approximately 800 VFX shots were required for Final Destination Bloodlines. Chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated Death at some point in the Final Destination films.

    From left: Kaitlyn Santa Juana as Stefani Reyes, director Adam Stein, director Zach Lipovsky and Gabrielle Rose as Iris.

    Rahhali adds, “The film is a great collaboration of departments. Good visual effects are always a good combination of special effects, makeup effects and cinematography; it’s all the planning of all the pieces coming together. For a film of this size, I’m really proud of the work. I think we punched above our weight class, and it looks quite good.”
We had the stage build, and we had the drone footage that we then digitized and created a 3D asset to go on the wallwe could change the times of day” Pixomondo built the volume and the asset that went on the LED wall for the Skyview sequence. They were also the main vendor during post. FOLKS VFX and Picture Shop contributed.“We did extensive previs with Digital Domain,” Rahhali explains. “That was important because we knew the key shots that the directors wanted. With a combination of those key shots, we then kind of reverse-engineeredwhile we did techvis off the previs and worked with Christian and the art department so we would have proper flexibility with the set to be able to pull off some of these shots.some of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paulas he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.” Some shots required the Skyview’s ceiling to be lifted and partially removed to get a crane to shoot Paul Campbellas he’s about to fall. The character Iris lived in a fortified house, isolating herself methodically to avoid the Grim Reaper. Rahhali comments, “That was a beautiful locationGVRD, very cold. It was a long, hard shoot, because it was all nights. It was just this beautiful pocket out in the middle of the mountains. We in visual effects didn’t do a ton other than a couple of clean-ups of the big establishing shots when you see them pull up to the compound. We had to clean up small roads we wanted to make look like one road and make the road look like dirt.” There were flames involved. Sebaldt says, “The explosionwas unbelievably big. 
We had eight cameras on it at night and shot it at high speed, and we’re all going ‘Whoa.’” Rahhali notes, “There was some clean-up, but the explosion was 100% practical. Our Special Effects Supervisor, Tony, went to town on that. He blew up the whole house, and it looked spectacular.” The tattoo shop piercing scene is one of the most talked-about sequences in the movie, where a dangling chain from a ceiling fan attaches itself to the septum nose piercing of Erik Campbelland drags him toward a raging fire. Rahhali observes, “That was very Final Destination and a great Rube Goldberg build-up event. Richard was great. He was tied up on a stunt line for most of it, balancing on top of furniture. All of that was him doing it for real with a stunt line.” Some effects solutions can be surprisingly extremely simple. Rahhali continues, “Our producercame up with a great gagseptum ring.” Richard’s nose was connected with just a nose plug that went inside his nostrils. “All that tugging and everything that you’re seeing was real. For weeks and weeks, we were all trying to figure out how to do it without it being a big visual effects thing. ‘How are we gonna pull his nose for real?’ Craig said, ‘I have these things I use to help me open up my nose and you can’t really see them.’ They built it off of that, and it looked great.” Filmmakers spent weeks figuring out how to execute the harrowing tattoo shop scene. A dangling chain from a ceiling fan attaches itself to the septum nose ring of Erik Campbell– with the actor’s nose being tugged by the chain connected to a nose plug that went inside his nostrils. “ome of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paulas he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. 
We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.” —Nordin Rahhali, VFX Supervisor Most of the fire in the tattoo parlor was practical. “There are some fire bars and stuff that you’re seeing in there from SFX and the big pool of fire on the wide shots.” Sebaldt adds, “That was a lot of fun to shoot because it’s so insane when he’s dancing and balancing on all this stuff – we were laughing and laughing. We were convinced that this was going to be the best scene in the movie up to that moment.” Rahhali says, “They used the scene wholesale for the trailer. It went viral – people were taking out their septum rings.” Erik survives the parlor blaze only to meet his fate in a hospital when he is pulled by a wheelchair into an out-of-control MRI machine at its highest magnetic level. Rahhali comments, “That is a good combination of a bunch of different departments. Our Stunt Coordinator, Simon Burnett, came up with this hard pull-wire linewhen Erik flies and hits the MRI. That’s a real stunt with a double, and he hit hard. All the other shots are all CG wheelchairs because the directors wanted to art-direct how the crumpling metal was snapping and bending to show pressure on him as his body starts going into the MRI.” To augment the believability that comes with reality, the directors aimed to capture as much practically as possible, then VFX Supervisor Nordin Rahhali and his team built on that result.A train derailment concludes the film after Stefani and her brother, Charlie, realize they are still on death’s list. A train goes off the tracks, and logs from one of the cars fly though the air and kills them. “That one was special because it’s a hard sequence and was also shot quite late, so we didn’t have a lot of time. We went back to Vancouver and shot the actual street, and we shot our actors performing. 
They fell onto stunt pads, and the moment they get touched by the logs, it turns into CG as it was the only way to pull that off and the train of course. We had to add all that. The destruction of the houses and everything was done in visual effects.” Erik survives the tattoo parlor blaze only to meet his fate in a hospital when he is crushed by a wheelchair while being pulled into an out-of-control MRI machine. Erikappears about to be run over by a delivery truck at the corner of 21A Ave. and 132A St., but he’s not – at least not then. The truck is actually on the opposite side of the road, and the person being run over is Howard. A rolling penny plays a major part in the catastrophic chain reactions and seems to be a character itself. “The magic penny was a mix from two vendors, Pixomondo and FOLKS; both had penny shots,” Rahhali says. “All the bouncing pennies you see going through the vents and hitting the fan blade are all FOLKS. The bouncing penny at the end as a lady takes it out of her purse, that goes down the ramp and into the rail – that’s FOLKS. The big explosion shots in the Skyview with the penny slowing down after the kid throws itare all Pixomondo shots. It was a mix. We took a little time to find that balance between readability and believability.” Approximately 800 VFX shots were required for Final Destination Bloodlines.Chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated Death at some point in the Final Destination films. From left: Kaitlyn Santa Juana as Stefani Reyes, director Adam Stein, director Zach Lipovsky and Gabrielle Rose as Iris.Rahhali adds, “The film is a great collaboration of departments. Good visual effects are always a good combination of special effects, makeup effects and cinematography; it’s all the planning of all the pieces coming together. For a film of this size, I’m really proud of the work. 
I think we punched above our weight class, and it looks quite good.” #explosive #mix #sfx #vfx #ignites
    WWW.VFXVOICE.COM
    AN EXPLOSIVE MIX OF SFX AND VFX IGNITES FINAL DESTINATION BLOODLINES
    By CHRIS McGOWAN Images courtesy of Warner Bros. Pictures. Final Destination Bloodlines, the sixth installment in the graphic horror series, kicks off with the film’s biggest challenge – deploying an elaborate, large-scale set piece involving the 400-foot-high Skyview Tower restaurant. While there in 1968, young Iris Campbell (Brec Bassinger) has a premonition about the Skyview burning, cracking, crumbling and collapsing. Then, when she sees these events actually starting to happen around her, she intervenes and causes an evacuation of the tower, thus thwarting death’s design and saving many lives. Years later, her granddaughter, Stefani Reyes (Kaitlyn Santa Juana), inherits the vision of the destruction that could have occurred and realizes death is still coming for the survivors. “I knew we couldn’t put the whole [Skyview restaurant] on fire, but Tony [Lazarowich, Special Effects Supervisor] tried and put as much fire as he could safely and then we just built off that [in VFX] and added a lot more. Even when it’s just a little bit of real fire, the lighting and interaction can’t be simulated, so I think it was a success in terms of blending that practical with the visual.” —Nordin Rahhali, VFX Supervisor The film opens with an elaborate, large-scale set piece involving the 400-foot-high Skyview Tower restaurant – and its collapse. Drone footage was digitized to create a 3D asset for the LED wall so the time of day could be changed as needed. “The set that the directors wanted was very large,” says Nordin Rahhali, VFX Supervisor. “We had limited space options in stages given the scale and the footprint of the actual restaurant that they wanted. It was the first set piece, the first big thing we shot, so we had to get it all ready and going right off the bat. We built a bigger volume for our needs, including an LED wall that we built the assets for.” “We were outside Vancouver at Bridge Studios in Burnaby. 
The custom-built LED volume was a little over 200 feet in length,” states Christian Sebaldt, ASC, the movie’s DP. The volume was 98 feet in diameter and 24 feet tall. Rahhali explains, “Pixomondo was the vendor that we contracted to come in and build the volume. They also built the asset that went on the LED wall, so they were part of our filming team and production shoot. Subsequently, they were also the main vendor doing post, which was by design. By having them design and take care of the asset during production, we were able to leverage their assets, tools and builds for some of the post VFX.” Rahhali adds, “It was really important to make sure we had days with the volume team and with Christian and his camera team ahead of the shoot so we could dial it in.” Built at Bridge Studios in Burnaby outside Vancouver, the custom-built LED volume for events at the Skyview restaurant was over 200 feet long, 98 feet wide and 24 feet tall. Extensive previs with Digital Domain was done to advance key shots. (Photo: Eric Milner) Zach Lipovsky and Adam Stein directed Final Destination Bloodlines for New Line Cinema, distributed by Warner Bros., in which chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated death at some point. Pixomondo was the lead VFX vendor, followed by FOLKS VFX. Picture Shop also contributed. There were around 800 VFX shots. Tony Lazarowich was the Special Effects Supervisor. “The Skyview restaurant involved building a massive set [that] was fire retardant, which meant the construction took longer than normal because they had to build it with certain materials and coat it with certain things because, obviously, it serves for the set piece. As it’s falling into chaos, a lot of that fire was practical. I really jived with what Christian and directors wanted and how Tony likes to work – to augment as much real practical stuff as possible,” Rahhali remarks. 
“I knew we couldn’t put the whole thing on fire, but Tony tried and put as much fire as he could safely, and then we just built off that [in VFX] and added a lot more. Even when it’s just a little bit of real fire, the lighting and interaction can’t be simulated, so I think it was a success in terms of blending that practical with the visual.” The Skyview restaurant required building a massive set that was fire retardant. Construction on the set took longer because it had to be built and coated with special materials. As the Skyview restaurant falls into chaos, much of the fire was practical. (Photo: Eric Milner) “We got all the Vancouver skyline [with drones] so we could rebuild our version of the city, which was based a little on the Vancouver footprint. So, we used all that to build a digital recreation of a city that was in line with what the directors wanted, which was a coastal city somewhere in the States that doesn’t necessarily have to be Vancouver or Seattle, but it looks a little like the Pacific Northwest.” —Christian Sebaldt, ASC, Director of Photography For drone shots, the team utilized a custom heavy-lift drone with three RED Komodo Digital Cinema cameras “giving us almost 180 degrees with overlap that we would then stitch in post and have a ridiculous amount of resolution off these three cameras,” Sebaldt states. “The other drone we used was a DJI Inspire 3, which was also very good. And we flew these drones up at the height [we needed]. We flew them at different times of day. We flew full 360s, and we also used them for photogrammetry. We got all the Vancouver skyline so we could rebuild our version of the city, which was based a little on the Vancouver footprint. 
So, we used all that to build a digital recreation of a city that was in line with what the directors wanted, which was a coastal city somewhere in the States that doesn’t necessarily have to be Vancouver or Seattle, but it looks a little like the Pacific Northwest.” Rahhali adds, “All of this allowed us to figure out what we were going to shoot. We had the stage build, and we had the drone footage that we then digitized and created a 3D asset to go on the wall [so] we could change the times of day.” Pixomondo built the volume and the asset that went on the LED wall for the Skyview sequence. They were also the main vendor during post. FOLKS VFX and Picture Shop contributed. (Photo: Eric Milner) “We did extensive previs with Digital Domain,” Rahhali explains. “That was important because we knew the key shots that the directors wanted. With a combination of those key shots, we then kind of reverse-engineered [them] while we did techvis off the previs and worked with Christian and the art department so we would have proper flexibility with the set to be able to pull off some of these shots. [For example,] some of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paul [Max Lloyd-Jones] as he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.” Some shots required the Skyview’s ceiling to be lifted and partially removed to get a crane to shoot Paul Campbell (Max Lloyd-Jones) as he’s about to fall. The character Iris lived in a fortified house, isolating herself methodically to avoid the Grim Reaper. 
Rahhali comments, “That was a beautiful location [in] GVRD [Greater Vancouver], very cold. It was a long, hard shoot, because it was all nights. It was just this beautiful pocket out in the middle of the mountains. We in visual effects didn’t do a ton other than a couple of clean-ups of the big establishing shots when you see them pull up to the compound. We had to clean up small roads we wanted to make look like one road and make the road look like dirt.” There were flames involved. Sebaldt says, “The explosion [of Iris’s home] was unbelievably big. We had eight cameras on it at night and shot it at high speed, and we’re all going ‘Whoa.’” Rahhali notes, “There was some clean-up, but the explosion was 100% practical. Our Special Effects Supervisor, Tony, went to town on that. He blew up the whole house, and it looked spectacular.” The tattoo shop piercing scene is one of the most talked-about sequences in the movie, where a dangling chain from a ceiling fan attaches itself to the septum nose piercing of Erik Campbell (Richard Harmon) and drags him toward a raging fire. Rahhali observes, “That was very Final Destination and a great Rube Goldberg build-up event. Richard was great. He was tied up on a stunt line for most of it, balancing on top of furniture. All of that was him doing it for real with a stunt line.” Some effects solutions can be surprisingly simple. Rahhali continues, “Our producer [Craig Perry] came up with a great gag [for the] septum ring.” Richard’s nose was connected with just a nose plug that went inside his nostrils. “All that tugging and everything that you’re seeing was real. For weeks and weeks, we were all trying to figure out how to do it without it being a big visual effects thing. 
‘How are we gonna pull his nose for real?’ Craig said, ‘I have these things I use to help me open up my nose and you can’t really see them.’ They built it off of that, and it looked great.” Filmmakers spent weeks figuring out how to execute the harrowing tattoo shop scene. A dangling chain from a ceiling fan attaches itself to the septum nose ring of Erik Campbell (Richard Harmon) – with the actor’s nose being tugged by the chain connected to a nose plug that went inside his nostrils. “[S]ome of these shots required the Skyview restaurant ceiling to be lifted and partially removed for us to get a crane to shoot Paul [Campbell] as he’s about to fall and the camera’s going through a roof, that we then digitally had to recreate. Had we not done the previs to know those shots in advance, we would not have been able to build that in time to accomplish the look. We had many other shots that were driven off the previs that allowed the art department, construction and camera teams to work out how they would get those shots.” —Nordin Rahhali, VFX Supervisor Most of the fire in the tattoo parlor was practical. “There are some fire bars and stuff that you’re seeing in there from SFX and the big pool of fire on the wide shots.” Sebaldt adds, “That was a lot of fun to shoot because it’s so insane when he’s dancing and balancing on all this stuff – we were laughing and laughing. We were convinced that this was going to be the best scene in the movie up to that moment.” Rahhali says, “They used the scene wholesale for the trailer. It went viral – people were taking out their septum rings.” Erik survives the parlor blaze only to meet his fate in a hospital when he is pulled by a wheelchair into an out-of-control MRI machine at its highest magnetic level. Rahhali comments, “That is a good combination of a bunch of different departments. Our Stunt Coordinator, Simon Burnett, came up with this hard pull-wire line [for] when Erik flies and hits the MRI. 
That’s a real stunt with a double, and he hit hard. All the other shots are all CG wheelchairs because the directors wanted to art-direct how the crumpling metal was snapping and bending to show pressure on him as his body starts going into the MRI.” To augment the believability that comes with reality, the directors aimed to capture as much practically as possible, then VFX Supervisor Nordin Rahhali and his team built on that result. (Photo: Eric Milner) A train derailment concludes the film after Stefani and her brother, Charlie, realize they are still on death’s list. A train goes off the tracks, and logs from one of the cars fly through the air and kill them. “That one was special because it’s a hard sequence and was also shot quite late, so we didn’t have a lot of time. We went back to Vancouver and shot the actual street, and we shot our actors performing. They fell onto stunt pads, and the moment they get touched by the logs, it turns into CG as it was the only way to pull that off and the train of course. We had to add all that. The destruction of the houses and everything was done in visual effects.” Erik survives the tattoo parlor blaze only to meet his fate in a hospital when he is crushed by a wheelchair while being pulled into an out-of-control MRI machine. Erik (Richard Harmon) appears about to be run over by a delivery truck at the corner of 21A Ave. and 132A St., but he’s not – at least not then. The truck is actually on the opposite side of the road, and the person being run over is Howard. A rolling penny plays a major part in the catastrophic chain reactions and seems to be a character itself. “The magic penny was a mix from two vendors, Pixomondo and FOLKS; both had penny shots,” Rahhali says. “All the bouncing pennies you see going through the vents and hitting the fan blade are all FOLKS. The bouncing penny at the end as a lady takes it out of her purse, that goes down the ramp and into the rail – that’s FOLKS. 
The big explosion shots in the Skyview with the penny slowing down after the kid throws it [off the deck] are all Pixomondo shots. It was a mix. We took a little time to find that balance between readability and believability.” Approximately 800 VFX shots were required for Final Destination Bloodlines. (Photo: Eric Milner) Chain reactions of small and big events lead to bloody catastrophes befalling those who have cheated Death at some point in the Final Destination films. From left: Kaitlyn Santa Juana as Stefani Reyes, director Adam Stein, director Zach Lipovsky and Gabrielle Rose as Iris. (Photo: Eric Milner) Rahhali adds, “The film is a great collaboration of departments. Good visual effects are always a good combination of special effects, makeup effects and cinematography; it’s all the planning of all the pieces coming together. For a film of this size, I’m really proud of the work. I think we punched above our weight class, and it looks quite good.”
  • FROM SET TO PIXELS: CINEMATIC ARTISTS COME TOGETHER TO CREATE POETRY

    By TREVOR HOGG

    Denis Villeneuve finds that the difficulty of working with visual effects is sometimes the intermediaries between him and the artists, and therefore the need to be precise with directions to keep things on track. If post-production has any chance of going smoothly, there must be a solid on-set relationship between the director, cinematographer and visual effects supervisor. “It’s my job to have a vision and to bring it to the screen,” notes Denis Villeneuve, director of Dune: Part Two. “That’s why working with visual effects requires a lot of discipline. It’s not like you work with a keyboard and can change your mind all the time. When I work with a camera, I commit to a mise-en-scène. I’m trying to take the risk, move forward in one direction and enhance it with visual effects. I push it until it looks perfect. It takes a tremendous amount of time and preparation. Paul Lambert is a perfectionist, and I love that about him. We will never put a shot on the screen that we don’t feel has a certain level of quality. It needs to look as real as the face of my actor.”

    A legendary cinematographer had a significant influence on how Villeneuve approaches digital augmentation. “Someone I have learned a lot from about visual effects is Roger Deakins. I remember that at the beginning, when I was doing Blade Runner 2049, some artwork was not defined enough, and I was like, ‘I will correct that later.’ Roger said, ‘No. Don’t do that. You have to make sure right at the start.’ I’ve learned the hard way that you need to be as precise as you can, otherwise it goes in a lot of directions.”

    Motion capture is visually jarring because your eye is always drawn to the performer in the mocap suit, but it worked out well on Better Man because the same thing happens when he gets replaced by a CG monkey. Visual effects enabled the atmospherics on Wolfs to be art directed, which is not always possible with practical snow. One of the most complex musical numbers in Better Man is “Rock DJ,” which required LiDAR scans of Regent Street and doing full 3D motion capture with the dancers dancing down the whole length of the street to work out how best to shoot it. Cinematographer Dan Mindel favors on-set practical effects because the reactions from the cast come across as being more genuine, which was the case for Twisters. Storyboards are an essential part of the planning process. “When I finish a screenplay, the first thing I do is to storyboard, not just to define the visual element of the movie, but also to rewrite the movie through images,” Villeneuve explains. “Those storyboards inform my crew about the design, costumes, accessories and vehicles, and create a visual inner rhythm of the film. This is the first step towards visual effects where there will be a conversation that will start from the boards. That will be translated into previs to help the animators know where we are going because the movie has to be made in a certain timeframe and needs choreography to make sure everybody is moving in the same direction.” The approach towards filmmaking has not changed over the years. “You have a camera and a couple of actors in front of you, and it’s about finding the right angle; the rest is noise. I try to protect the intimacy around the camera as much as possible and focus on that because if you don’t believe the actor, then you won’t believe anything.”

    Before transforming singer Robbie Williams into a CG primate, Michael Gracey started as a visual effects artist. “I feel so fortunate to have come from a visual effects background early on in my career,” recalls Michael Gracey, director of Better Man. “I would sit down and do all the post myself because I didn’t trust anyone to care as much as I did. Fortunately, over the years I’ve met people who do. It’s a huge part of how I even scrapbook ideas together. Early on, I was constantly throwing stuff up in Flame, doing a video test and asking, ‘Is this going to work?’ Jumping into 3D was something I felt comfortable doing. I’ve been able to plan out or previs ideas. It’s an amazing tool to be armed with if you are a director and have big ideas and you’re trying to convey them to a lot of people.” Previs was pivotal in getting Better Man financed. “Off the page, people were like, ‘Is this monkey even going to work?’ Then they were worried that it wouldn’t work in a musical number. We showed them the previs for Feel, the first musical number, and My Way at the end of the film. I would say, ‘If you get any kind of emotion watching these musical numbers, just imagine what it’s going to be like when it’s filmed and is photoreal.’”

    Several shots had to be stitched together to create a ‘oner’ that features numerous costume changes and 500 dancers. “For Rock DJ, we were doing LiDAR scans of Regent Street and full 3D motion capture with the dancers dancing down the whole length of the street to work out all of the transition points and how best to shoot it,” Gracey states. “That process involved Erik Wilson, the Cinematographer; Luke Millar, the Visual Effects Supervisor; Ashley Wallen, the Choreographer; and Patrick Correll, Co-Producer. Patrick would sit on set and, in DaVinci Resolve, take the feed from the camera and check every take against the blueprint that we had already previsualized.” Motion capture is visually jarring to shoot. “Everything that is in-camera looks perfect, then a guy walks in wearing a mocap suit and your eye zooms onto him. But the truth is, your eye does that the moment you replace him with a monkey as well. It worked out quite well because that idea is true to what it is to be famous. A famous person walks into the room and your eye immediately goes to them.”

    Digital effects have had a significant impact on a particular area of filmmaking. “Physical effects were a much higher art form then, or were allowed to be, than they are now,” notes Dan Mindel, Cinematographer on Twisters. “People will decline a real pyrotechnic explosion and do a digital one. But you get a much bigger reaction when there’s actual noise and flash.” It is all about collaboration. Mindel explains, “The principle that I work with is that the visual effects department will make us look great, and we have to give them the raw materials in the best possible form so they can work with it instinctually. Sometimes, as a DP, you might want to do something different, but the bottom line is, you’ve got to listen to these guys, because they know what they want. It gets a bit dogmatic, but most of the time, my relationship with visual effects is good, and especially the guys who have had a foot in the analog world at one point or another and have transitioned into the digital world. When we made Twisters, it was an analog movie with digital effects, and it worked great. That’s because everyone on set doing the technical work understood both formats, and we were able to use them well.”

    Digital filmmaking has caused a generational gap. “The younger directors don’t think holistically,” Mindel notes. “It’s much more post-driven because they want to manipulate on the Avid or whatever platform it is going to be. What has happened is that the overreaching nature of these tools has left very little to the imagination. A movie that is heavy on visual effects is mostly conceptualized on paper using computer-generated graphics and color; that insidiously sneaks into the look and feel of the movie before you know it. You see concept art blasted all over production offices. People could get used to looking at those images, and before you know it, that’s how the movie looks. That’s a very dangerous place to be, not to have the imagination to work around an issue that perhaps doesn’t manifest itself until you’re shooting.” There has to be a sense of purpose. Mindel remarks, “The ability to shoot in a way that doesn’t allow any manipulation in post is the only way to guarantee that there’s just one direction the look can go in. But that could be a little dangerous for some people. Generally, the crowd I’m working with is part of a team, and there’s little thought of taking the movie to a different place than what was shot. I work in the DI with the visual effects supervisor, and we look at our work together so we’re all in agreement that it fits into the movie.”

    “All of the advances in technology are a push for greater control,” notes Larkin Seiple, Cinematographer on Everything Everywhere All at Once. “There are still a lot of things that we do with visual effects that we could do practically, but a lot of times it’s more efficient, or we have more attempts at it later in post, than if we had tried to do it practically. I find today, there’s still a debate about what we do on set and what we do later digitally. Many directors have been trying to do more on set, and the best visual effects supervisors I work with push to do everything in-camera as much as possible to make it as realistic as possible.” Storytelling is about figuring out where to invest your time and effort. Seiple states, “I like the adventure of filmmaking. I prefer to go to a mountain top and shoot some of the scenes, get there and be inspired, as opposed to recreate it. Now, if it’s a five-second cutaway, I don’t want production to go to a mountain top and do that. For car work, we’ll shoot the real streets, figure out the time of day and even light the plates for it. Then, I’ll project those on LED walls with actors in a car on a stage. I love doing that because then I get to control how that looks.”

    Visual effects have freed Fallout Cinematographer Stuart Dryburgh to shoot quicker and in places that in the past would have been deemed imperfect because of power lines, out-of-period buildings or the sky.

    Visual effects assist in achieving the desired atmospherics. Seiple says, “On Wolfs, we tried to bring in our own snow for every scene. We would shoot one take, the snow would blow left, and the next take would blow right. Janek Sirrs is probably the best visual effects supervisor I’ve worked with, and he was like, ‘Please turn off the snow. It’ll be a nightmare trying to remove the snow from all these shots then add our own snow back for continuity because you can’t have the snow changing direction every other cut.’ Or we’d have to ‘snow’ a street, which would take ages. Janek would say, ‘Let’s put enough snow on the ground to see the lighting on it and where the actors walk. We’ll do the rest of the street later because we have a perfect reference of what it should look like.’” Certain photographic principles have to be carried over into post-production to make shots believable to the eye. Seiple explains, “When you make all these amazing details that should be out of focus sharper, then the image feels like a visual effect because it doesn’t work the way a lens would work.” Familiarity with the visual effects process is an asset in being able to achieve the best result. “I inadvertently come from a lot of visual effects-heavy shoots and shows, so I’m quick to have an opinion about it. Many directors love to reference the way David Fincher uses visual effects because there is such great behind-the-scenes imagery that showcases how they were able to do simple things. Also, I like to shoot tests even on an iPhone to see if this comp will work or if this idea is a good one.”

    Cinematographer Fabian Wagner and VFX Supervisor John Moffatt spent a lot of time in pre-production for Venom: The Last Dance discussing how to bring out the texture of the symbiote through lighting and camera angles. Game of Thrones Director of Photography Fabian Wagner had to make key decisions while prepping and breaking down the script so visual effects had enough time to meet deadlines. Twisters was an analog movie with digital effects that worked well because everyone on set doing the technical work understood both formats. For Cinematographer Larkin Seiple, storytelling is about figuring out where to invest your time and effort. Scene from the Netflix series Beef. Cinematographer Larkin Seiple believes that all of the advances in technology are a push for greater control, which occurred on Everything Everywhere All at Once.

    Nothing beats reality when it comes to realism. “Every project I do, I talk more about the real elements to bring into the shoot than the visual effects element because the more practical stuff that you can do on set, the more it will embed the visual effects into the image, and, therefore, they’re more real,” observes Fabian Wagner, Cinematographer on Venom: The Last Dance. “It also depends on the job you’re doing in terms of how real or unreal you want it to be. Game of Thrones was a good example because it was a visual effects-heavy show, but they were keen on pushing the reality of things as much as possible. We were doing interactive lighting and practical on-set things to embed the visual effects. It was successful.” Television has a significantly compressed schedule compared to feature films. “There are fewer times to iterate. You have to be much more precise. On Game of Thrones, we knew that certain decisions had to be made early on while we were still prepping and breaking down the script. Because of their due dates, to be ready in time, they had to start the visual effects process for certain dragon scenes months before we even started shooting.”

    “Like everything else, it’s always about communication,” Wagner notes. “I’ve been fortunate to work with extremely talented and collaborative visual effects supervisors, visual effects producers and directors. I have become friends with most of those visual effects departments throughout the shoot, so it’s easy to stay in touch. Even when Venom: The Last Dance was posting, I would be talking to John Moffatt, who was our talented visual effects supervisor. We would exchange emails, text messages or phone calls once a week, and he would send me updates, which we would talk about. If I gave any notes or thoughts, John would listen, and if it was possible to do anything about them, he would. In the end, it’s about those personal relationships, and if you have those, that can go a long way.” Wagner has had to deal with dragons, superheroes and symbiotes. “They’re all the same to me! For the symbiote, we had two previous films to see what they had done, where they had succeeded and where we could improve it slightly. While prepping, John and I spent a lot of time talking about how to bring out the texture of the symbiote and help it with the lighting and camera angles. One of the earliest tests was to see what would happen if we backlit or side lit it as well as trying different textures for reflections. We came up with something we all were happy with, and that’s what we did on set. It was down to trying to speak the same language and aiming for the same thing, which in this case was, ‘How could we make the symbiote look the coolest?’”

    Visual effects have become a crucial department throughout the filmmaking process. “The relationship with the visual effects supervisor is new,” states Stuart Dryburgh, Cinematographer on Fallout. “We didn’t really have that. On The Piano, the extent of the visual effects was having somebody scribbling in a lightning strike over a stormy sky and a little flash of an animated puppet. Runaway Bride had a two-camera setup where one of the cameras pushed into the frame, and that was digitally removed, but we weren’t using it the way we’re using it now. For East of Eden, we’re recreating 19th and early 20th century Connecticut, Boston and Salinas, California in New Zealand. While we have some great sets built and historical buildings that we can use, there is a lot of set extension and modification, and some complete bluescreen scenes, which allow us to more realistically portray a historical environment than we could have done back in the day.” The presence of a visual effects supervisor simplified principal photography. Dryburgh adds, “In many ways, using visual effects frees you to shoot quicker and in places that might otherwise be deemed imperfect because of one little thing, whether it’s power lines or out-of-period buildings or sky. All of those can be easily fixed. Most of us have been doing it for long enough that we have a good idea of what can and can’t be done and how it’s done so that the visual effects supervisor isn’t the arbiter.”

    Lighting cannot be arbitrarily altered in post as it never looks right. “Whether you set the lighting on the set and the background artist has to match that, or you have an existing background and you, as a DP, have to match that – that is the lighting trick to the whole thing,” Dryburgh observes. “Everything has to be the same, a soft or hard light, the direction and color. Those things all need to line up in a composited shot; that is crucial.” Every director has his or her own approach to filmmaking. “Harold Ramis told me, ‘I’ll deal with the acting and the words. You just make it look nice, alright?’ That’s the conversation we had about shots, and it worked out well. Garth Davis, who I’m working with now, is a terrific photographer in his own right and has a great visual sense, so he’s much more involved in anything visual, whether it be the designs of the sets, creation of the visual effects, my lighting or choice of lenses. It becomes much more collaborative. And that applies to the visual effects department as well.” Recreating vintage lenses digitally is an important part of the visual aesthetic. “As digital photography has become crisper, better and sharper, people have chosen to use less perfect optics, such as lenses that are softer on the edges or give a flare characteristic. Before production, we have the camera department shoot all of these lens grids of different packages and ranges, and visual effects takes that information so they can model every lens. If they’re doing a fully CG background, they can apply that lens characteristic,” remarks Dryburgh.

    Television schedules for productions like House of the Dragon do not allow a lot of time to iterate, so decisions have to be precise. Bluescreen and stunt doubles on Twisters.

    “The principle that I work with is that the visual effects department will make us look great, and we have to give them the raw materials in the best possible form so they can work with it instinctually. Sometimes, as a DP, you might want to do something different, but the bottom line is, you’ve got to listen to these guys because they know what they want. It gets a bit dogmatic, but most of the time, my relationship with visual effects is good, and especially the guys who have had a foot in the analog world at one point or another and have transitioned into the digital world.”
    —Dan Mindel, Cinematographer, Twisters

    Cinematographers like Greig Fraser have adopted Unreal Engine. “Greig has an incredible curiosity about new technology, and that helped us specifically with Dune: Part Two,” Villeneuve explains. “Greig was using Unreal Engine to capture natural environments. For example, if we decide to shoot in that specific rocky area, we’ll capture the whole area with drones to recreate the terrain in the computer. If I said, ‘I want to shoot in that valley on November 3rd and have the sun behind the actors. At what time is it? You have to be there at 9:45 am.’ We built the whole schedule like a puzzle to maximize the power of natural light, but that came through those studies, which were made with the software usually used for video games.” Technology is essentially a tool that keeps evolving. Villeneuve adds, “Sometimes, I don’t know if I feel like a dinosaur or if my last movie will be done in this house behind the computer alone. It would be much less tiring to do that, but seriously, the beauty of cinema is the idea of bringing many artists together to create poetry.”
    FROM SET TO PIXELS: CINEMATIC ARTISTS COME TOGETHER TO CREATE POETRY
    By TREVOR HOGG
    WWW.VFXVOICE.COM
    FROM SET TO PIXELS: CINEMATIC ARTISTS COME TOGETHER TO CREATE POETRY
By TREVOR HOGG Denis Villeneuve (Dune: Part Two) finds that the difficulty of working with visual effects is that there are sometimes intermediaries between him and the artists, hence the need to be precise with directions to keep things on track. (Image courtesy of Warner Bros. Pictures) If post-production has any chance of going smoothly, there must be a solid on-set relationship between the director, cinematographer and visual effects supervisor. “It’s my job to have a vision and to bring it to the screen,” notes Denis Villeneuve, director of Dune: Part Two. “That’s why working with visual effects requires a lot of discipline. It’s not like you work with a keyboard and can change your mind all the time. When I work with a camera, I commit to a mise-en-scène. I’m trying to take the risk, move forward in one direction and enhance it with visual effects. I push it until it looks perfect. It takes a tremendous amount of time and preparation. [VFX Supervisor] Paul Lambert is a perfectionist, and I love that about him. We will never put a shot on the screen that we don’t feel has a certain level of quality. It needs to look as real as the face of my actor.” A legendary cinematographer had a significant influence on how Villeneuve approaches digital augmentation. “Someone I have learned a lot from about visual effects is [Cinematographer] Roger Deakins. I remember that at the beginning, when I was doing Blade Runner 2049, some artwork was not defined enough, and I was like, ‘I will correct that later.’ Roger said, ‘No. Don’t do that. You have to make sure right at the start.’ I’ve learned the hard way that you need to be as precise as you can, otherwise it goes in a lot of directions.” Motion capture is visually jarring because your eye is always drawn to the performer in the mocap suit, but it worked out well on Better Man because the same thing happens when he gets replaced by a CG monkey. 
(Image courtesy of Paramount Pictures) Visual effects enabled the atmospherics on Wolfs to be art directed, which is not always possible with practical snow. (Image courtesy of Apple Studios) One of the most complex musical numbers in Better Man is “Rock DJ,” which required LiDAR scans of Regent Street and full 3D motion capture of the dancers dancing down the whole length of the street to work out how best to shoot it. (Image courtesy of Paramount Pictures) Cinematographer Dan Mindel favors on-set practical effects because the reactions from the cast come across as being more genuine, which was the case for Twisters. (Image courtesy of Universal Pictures) Storyboards are an essential part of the planning process. “When I finish a screenplay, the first thing I do is to storyboard, not just to define the visual element of the movie, but also to rewrite the movie through images,” Villeneuve explains. “Those storyboards inform my crew about the design, costumes, accessories and vehicles, and [they] create a visual inner rhythm of the film. This is the first step towards visual effects where there will be a conversation that will start from the boards. That will be translated into previs to help the animators know where we are going because the movie has to be made in a certain timeframe and needs choreography to make sure everybody is moving in the same direction.” The approach towards filmmaking has not changed over the years. “You have a camera and a couple of actors in front of you, and it’s about finding the right angle; the rest is noise. I try to protect the intimacy around the camera as much as possible and focus on that because if you don’t believe the actor, then you won’t believe anything.” Before transforming singer Robbie Williams into a CG primate, Michael Gracey started as a visual effects artist. “I feel so fortunate to have come from a visual effects background early on in my career,” recalls Michael Gracey, director of Better Man. 
“I would sit down and do all the post myself because I didn’t trust anyone to care as much as I did. Fortunately, over the years I’ve met people who do. It’s a huge part of how I even scrapbook ideas together. Early on, I was constantly throwing stuff up in Flame, doing a video test and asking, ‘Is this going to work?’ Jumping into 3D was something I felt comfortable doing. I’ve been able to plan out or previs ideas. It’s an amazing tool to be armed with if you are a director and have big ideas and you’re trying to convey them to a lot of people.” Previs was pivotal in getting Better Man financed. “Off the page, people were like, ‘Is this monkey even going to work?’ Then they were worried that it wouldn’t work in a musical number. We showed them the previs for Feel, the first musical number, and My Way at the end of the film. I would say, ‘If you get any kind of emotion watching these musical numbers, just imagine what it’s going to be like when it’s filmed and is photoreal.’” Several shots had to be stitched together to create a ‘oner’ that features numerous costume changes and 500 dancers. “For Rock DJ, we were doing LiDAR scans of Regent Street and full 3D motion capture with the dancers dancing down the whole length of the street to work out all of the transition points and how best to shoot it,” Gracey states. “That process involved Erik Wilson, the Cinematographer; Luke Millar, the Visual Effects Supervisor; Ashley Wallen, the Choreographer; and Patrick Correll, Co-Producer. Patrick would sit on set and, in DaVinci Resolve, take the feed from the camera and check every take against the blueprint that we had already previsualized.” Motion capture is visually jarring to shoot. “Everything that is in-camera looks perfect, then a guy walks in wearing a mocap suit and your eye zooms onto him. But the truth is, your eye does that the moment you replace him with a monkey as well. It worked out quite well because that idea is true to what it is to be famous. 
A famous person walks into the room and your eye immediately goes to them.” Digital effects have had a significant impact on a particular area of filmmaking. “Physical effects were a much higher art form than they are now, or were allowed to be then,” notes Dan Mindel, Cinematographer on Twisters. “People will decline a real pyrotechnic explosion and do a digital one. But you get a much bigger reaction when there’s actual noise and flash.” It is all about collaboration. Mindel explains, “The principle that I work with is that the visual effects department will make us look great, and we have to give them the raw materials in the best possible form so they can work with it instinctually. Sometimes, as a DP, you might want to do something different, but the bottom line is, you’ve got to listen to these guys, because they know what they want. It gets a bit dogmatic, but most of the time, my relationship with visual effects is good, and especially the guys who have had a foot in the analog world at one point or another and have transitioned into the digital world. When we made Twister, it was an analog movie with digital effects, and it worked great. That’s because everyone on set doing the technical work understood both formats, and we were able to use them well.” Digital filmmaking has caused a generational gap. “The younger directors don’t think holistically,” Mindel notes. “It’s much more post-driven because they want to manipulate on the Avid or whatever platform it is going to be. What has happened is that the overreaching nature of these tools has left very little to the imagination. A movie that is heavy on visual effects is mostly conceptualized on paper using computer-generated graphics and color; that insidiously sneaks into the look and feel of the movie before you know it. You see concept art blasted all over production offices. People could get used to looking at those images, and before you know it, that’s how the movie looks. 
That’s a very dangerous place to be, not to have the imagination to work around an issue that perhaps doesn’t manifest itself until you’re shooting.” There has to be a sense of purpose. Mindel remarks, “The ability to shoot in a way that doesn’t allow any manipulation in post is the only way to guarantee that there’s just one direction the look can go in. But that could be a little dangerous for some people. Generally, the crowd I’m working with is part of a team, and there’s little thought of taking the movie to a different place than what was shot. I work in the DI with the visual effects supervisor, and we look at our work together so we’re all in agreement that it fits into the movie.” “All of the advances in technology are a push for greater control,” notes Larkin Seiple, Cinematographer on Everything Everywhere All at Once. “There are still a lot of things that we do with visual effects that we could do practically, but a lot of times it’s more efficient, or we have more attempts at it later in post, than if we had tried to do it practically. I find today, there’s still a debate about what we do on set and what we do later digitally. Many directors have been trying to do more on set, and the best visual effects supervisors I work with push to do everything in-camera as much as possible to make it as realistic as possible.” Storytelling is about figuring out where to invest your time and effort. Seiple states, “I like the adventure of filmmaking. I prefer to go to a mountain top and shoot some of the scenes, get there and be inspired, as opposed to recreating it. Now, if it’s a five-second cutaway, I don’t want production to go to a mountain top and do that. For car work, we’ll shoot the real streets, figure out the time of day and even light the plates for it. Then, I’ll project those on LED walls with actors in a car on a stage. 
I love doing that because then I get to control how that looks.” Visual effects have freed Fallout Cinematographer Stuart Dryburgh to shoot quicker and in places that in the past would have been deemed imperfect because of power lines, out-of-period buildings or the sky. (Image courtesy of Prime Video) Visual effects assist in achieving the desired atmospherics. Seiple says, “On Wolfs, we tried to bring in our own snow for every scene. We would shoot one take, the snow would blow left, and the next take would blow right. Janek Sirrs is probably the best visual effects supervisor I’ve worked with, and he was like, ‘Please turn off the snow. It’ll be a nightmare trying to remove the snow from all these shots then add our own snow back for continuity because you can’t have the snow changing direction every other cut.’ Or we’d have to ‘snow’ a street, which would take ages. Janek would say, ‘Let’s put enough snow on the ground to see the lighting on it and where the actors walk. We’ll do the rest of the street later because we have a perfect reference of what it should look like.’” Certain photographic principles have to be carried over into post-production to make shots believable to the eye. Seiple explains, “When you make all these amazing details that should be out of focus sharper, then the image feels like a visual effect because it doesn’t work the way a lens would work.” Familiarity with the visual effects process is an asset in being able to achieve the best result. “I inadvertently come from a lot of visual effect-heavy shoots and shows, so I’m quick to have an opinion about it. Many directors love to reference the way David Fincher uses visual effects because there is such great behind-the-scenes imagery that showcases how they were able to do simple things. 
Also, I like to shoot tests even on an iPhone to see if this comp will work or if this idea is a good one.” Cinematographer Fabian Wagner and VFX Supervisor John Moffatt spent a lot of time in pre-production for Venom: The Last Dance discussing how to bring out the texture of the symbiote through lighting and camera angles. (Image courtesy of Columbia Pictures) Game of Thrones Director of Photography Fabian Wagner had to make key decisions while prepping and breaking down the script so visual effects had enough time to meet deadline. (Image courtesy of HBO) Twisters was an analog movie with digital effects that worked well because everyone on set doing the technical work understood both formats. (Image courtesy of Universal Pictures) For Cinematographer Larkin Seiple, storytelling is about figuring out where to invest your time and effort. Scene from the Netflix series Beef. (Image courtesy of Netflix) Cinematographer Larkin Seiple believes that all of the advances in technology are a push for greater control, which occurred on Everything Everywhere All at Once. (Image courtesy of A24) Nothing beats reality when it comes to realism. “Every project I do I talk more about the real elements to bring into the shoot than the visual effect element because the more practical stuff that you can do on set, the more it will embed the visual effects into the image, and, therefore, they’re more real,” observes Fabian Wagner, Cinematographer on Venom: The Last Dance. “It also depends on the job you’re doing in terms of how real or unreal you want it to be. Game of Thrones was a good example because it was a visual effects-heavy show, but they were keen on pushing the reality of things as much as possible. We were doing interactive lighting and practical on-set things to embed the visual effects. It was successful.” Television has a significantly compressed schedule compared to feature films. “There are fewer times to iterate. You have to be much more precise. 
On Game of Thrones, we knew that certain decisions had to be made early on while we were still prepping and breaking down the script. Because of their due dates, to be ready in time, they had to start the visual effects process for certain dragon scenes months before we even started shooting.” “Like everything else, it’s always about communication,” Wagner notes. “I’ve been fortunate to work with extremely talented and collaborative visual effects supervisors, visual effects producers and directors. I have become friends with most of those visual effects departments throughout the shoot, so it’s easy to stay in touch. Even when Venom: The Last Dance was posting, I would be talking to John Moffatt, who was our talented visual effects supervisor. We would exchange emails, text messages or phone calls once a week, and he would send me updates, which we would talk about. If I gave any notes or thoughts, John would listen, and if it were possible to do anything about it, he would. In the end, it’s about those personal relationships, and if you have those, that can go a long way.” Wagner has had to deal with dragons, superheroes and symbiotes. “They’re all the same to me! For the symbiote, we had two previous films to see what they had done, where they had succeeded and where we could improve it slightly. While prepping, John and I spent a lot of time talking about how to bring out the texture of the symbiote and help it with the lighting and camera angles. One of the earliest tests was to see what would happen if we backlit or side lit it as well as trying different textures for reflections. We came up with something we all were happy with, and that’s what we did on set. It was down to trying to speak the same language and aiming for the same thing, which in this case was, ‘How could we make the symbiote look the coolest?’” Visual effects has become a crucial department throughout the filmmaking process. 
“The relationship with the visual effects supervisor is new,” states Stuart Dryburgh, Cinematographer on Fallout. “We didn’t really have that. On The Piano, the extent of the visual effects was having somebody scribbling in a lightning strike over a stormy sky and a little flash of an animated puppet. Runaway Bride had a two-camera setup where one of the cameras pushed into the frame, and that was digitally removed, but we weren’t using it the way we’re using it now. For [the 2026 Netflix limited series] East of Eden, we’re recreating 19th and early 20th century Connecticut, Boston and Salinas, California in New Zealand. While we have some great sets built and historical buildings that we can use, there is a lot of set extension and modification, and some complete bluescreen scenes, which allow us to more realistically portray a historical environment than we could have done back in the day.” The presence of a visual effects supervisor simplified principal photography. Dryburgh adds, “In many ways, using visual effects frees you to shoot quicker and in places that might otherwise be deemed imperfect because of one little thing, whether it’s power lines or out-of-period buildings or sky. All of those can be easily fixed. Most of us have been doing it for long enough that we have a good idea of what can and can’t be done and how it’s done so that the visual effects supervisor isn’t the arbiter.” Lighting cannot be arbitrarily altered in post as it never looks right. “Whether you set the lighting on the set and the background artist has to match that, or you have an existing background and you, as a DP, have to match that – that is the lighting trick to the whole thing,” Dryburgh observes. “Everything has to be the same, a soft or hard light, the direction and color. Those things all need to line up in a composited shot; that is crucial.” Every director has his or her own approach to filmmaking. “Harold Ramis told me, ‘I’ll deal with the acting and the words. 
You just make it look nice, alright?’ That’s the conversation we had about shots, and it worked out well. [Director] Garth Davis, who I’m working with now, is a terrific photographer in his own right and has a great visual sense, so he’s much more involved in anything visual, whether it be the designs of the sets, creation of the visual effects, my lighting or choice of lenses. It becomes much more collaborative. And that applies to the visual effects department as well.” Recreating vintage lenses digitally is an important part of the visual aesthetic. “As digital photography has become crisper, better and sharper, people have chosen to use less-perfect optics, such as lenses that are softer on the edges or give a flare characteristic. Before production, we have the camera department shoot all of these lens grids of different packages and ranges, and visual effects takes that information so they can model every lens. If they’re doing a fully CG background, they can apply that lens characteristic,” remarks Dryburgh. Television schedules for productions like House of the Dragon do not allow a lot of time to iterate, so decisions have to be precise. (Image courtesy of HBO) Bluescreen and stunt doubles on Twisters. (Image courtesy of Universal Pictures) “The principle that I work with is that the visual effects department will make us look great, and we have to give them the raw materials in the best possible form so they can work with it instinctually. Sometimes, as a DP, you might want to do something different, but the bottom line is, you’ve got to listen to these guys because they know what they want. It gets a bit dogmatic, but most of the time, my relationship with visual effects is good, and especially the guys who have had a foot in the analog world at one point or another and have transitioned into the digital world.” —Dan Mindel, Cinematographer, Twisters Cinematographers like Greig Fraser have adopted Unreal Engine. 
“Greig has an incredible curiosity about new technology, and that helped us specifically with Dune: Part Two,” Villeneuve explains. “Greig was using Unreal Engine to capture natural environments. For example, if we decide to shoot in that specific rocky area, we’ll capture the whole area with drones to recreate the terrain in the computer. If I said, ‘I want to shoot in that valley on November 3rd and have the sun behind the actors. At what time is it? You have to be there at 9:45 am.’ We built the whole schedule like a puzzle to maximize the power of natural light, but that came through those studies, which were made with the software usually used for video games.” Technology is essentially a tool that keeps evolving. Villeneuve adds, “Sometimes, I don’t know if I feel like a dinosaur or if my last movie will be done in this house behind the computer alone. It would be much less tiring to do that, but seriously, the beauty of cinema is the idea of bringing many artists together to create poetry.”
  • The Weirdest Part of the MCU Spider-Man Is Back for Vision Quest

    Remember that time when good ol’ Peter Parker called a drone strike on his classmates because another guy was flirting with MJ? Well, the artificial intelligence that made it happen is back, this time in snarky Canadian form!
    Deadline is reporting that Schitt’s Creek alum Emily Hampshire has been cast as E.D.I.T.H. in Vision Quest, the upcoming Disney+ series starring Paul Bettany as the synthezoid Avenger. E.D.I.T.H., of course, made her debut as a pair of ugly, gaudy sunglasses the late Tony Stark bequeathed to Peter in Spider-Man: Far From Home. Through E.D.I.T.H., Peter had access to vast technological resources, resources that Mysterio wanted to use for himself.

    At the end of Far From Home, Peter reclaimed the E.D.I.T.H. glasses and in Spider-Man: No Way Home, a screen readout assured us that they were inactive. Moreover, No Way Home ends with Peter having his secret identity wiped from everyone’s memory and a closing shot of him hand-stitching his own costume in a dingy New York apartment, suggesting that the MCU experiment of making working-class Peter Parker into the scion of a tech bro was done.
    That may still be true, in which case Vision Quest is a much better place for E.D.I.T.H. to exist. Created by Terry Matalas, showrunner of the Twelve Monkeys TV series and the third season of Star Trek: Picard, Vision Quest will follow the next phase in the life of the synthezoid Vision, who was killed in Avengers: Infinity War and resurrected as an initially evil clone in WandaVision.

    The title Vision Quest comes from a 1989-1990 arc of West Coast Avengers, written and penciled by John Byrne, in which the U.S. government dismantles Vision and recreates him into a mindless and easily controllable form, signified by his new bleach white look. Fans of the MCU will recognize that storyline from the last episodes of WandaVision, in which S.A.B.E.R. did the same thing to Bettany’s character.
    However, the Vision Quest comics continued to tell the story of Vision attempting to recover the humanity and personality he’d previously gained over the years, which will presumably be the plot of Vision Quest. Meanwhile, E.D.I.T.H.’s casting is just the latest in a host of synthetic characters who will appear in the show. James Spader will return as Vision’s creator Ultron, and T’Nia Miller has joined the show as Jocasta, a female synthezoid originally created as Ultron’s bride. A few humans will show up as well, including the return of Faran Tahir as Raza, the leader of the Ten Rings terrorist cell, last seen in Iron Man, and frequent Matalas collaborator Todd Stashwick as a mystery man hunting Vision.
    That’s a packed cast, but as anyone who recalls the Picard season 3 episode in which androids Data and Lore merged can attest, Matalas knows how to tell an interesting story about artificial intelligence. That episode also showed that Matalas knows how to add levity to heavy conversations about existence, making Hampshire’s casting as E.D.I.T.H. a wise choice. Just don’t let her anywhere near another school bus full of teenagers.
    Vision Quest is slated to appear on Disney+ in 2026.
    WWW.DENOFGEEK.COM