• Reclaiming Control: Digital Sovereignty in 2025

    Sovereignty has mattered since the invention of the nation state—defined by borders, laws, and taxes that apply within and without. While many have tried to define it, the core idea remains: nations or jurisdictions seek to stay in control, usually to the benefit of those within their borders.
    Digital sovereignty is a relatively new concept, also difficult to define but straightforward to understand. Data and applications don’t understand borders unless they are specified in policy terms, as coded into the infrastructure.
    The World Wide Web had no such restrictions at its inception. Communitarian groups such as the Electronic Frontier Foundation, service providers and hyperscalers, non-profits and businesses all embraced a model that suggested data would look after itself.
    But data won’t look after itself, for several reasons. First, data is massively out of control. We generate more of it all the time, and for at least two or three decades (according to historical surveys I’ve run), most organizations haven’t fully understood their data assets. This creates inefficiency and risk—not least, widespread vulnerability to cyberattack.
    Risk is probability times impact—and right now, the probabilities have shot up. Invasions, tariffs, political tensions, and more have brought new urgency. This time last year, the idea of switching off another country’s IT systems was not on the radar. Now we’re seeing it happen—including the U.S. government blocking access to services overseas.
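    To make that risk arithmetic concrete, here is a minimal sketch in Python, using made-up probabilities and impact scores rather than figures from the article, showing how a jump in probability alone reorders which scenario tops the risk register:

```python
# Minimal sketch of risk = probability x impact, with illustrative numbers only.
scenarios = {
    # name: (annual probability, impact in arbitrary cost units)
    "ransomware on core systems": (0.20, 80),
    "foreign government access to data": (0.05, 60),
    "provider switched off or unreachable": (0.02, 100),
}

def risk(probability: float, impact: float) -> float:
    """Classic expected-loss view: risk = probability x impact."""
    return probability * impact

# Re-score after a geopolitical shift: the availability-related probability rises.
shifted = dict(scenarios)
shifted["provider switched off or unreachable"] = (0.15, 100)

for label, data in (("before", scenarios), ("after shift", shifted)):
    name, (p, i) = max(data.items(), key=lambda kv: risk(*kv[1]))
    print(f"{label}: top risk is '{name}' with score {risk(p, i):.1f}")
```

    The impacts have not changed at all; only a probability has, which is exactly the shift described above.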
    Digital sovereignty isn’t just a European concern, though it is often framed as such. In South America for example, I am told that sovereignty is leading conversations with hyperscalers; in African countries, it is being stipulated in supplier agreements. Many jurisdictions are watching, assessing, and reviewing their stance on digital sovereignty.
    As the adage goes: a crisis is a problem with no time left to solve it. Digital sovereignty was a problem in waiting—but now it’s urgent. It’s gone from being an abstract ‘right to sovereignty’ to becoming a clear and present issue, in government thinking, corporate risk and how we architect and operate our computer systems.
    What does the digital sovereignty landscape look like today?
    Much has changed since this time last year. Unknowns remain, but much of what was unclear is now starting to solidify. Terminology is clearer: for example, talking about classification and localisation rather than generic concepts.
    We’re seeing a shift from theory to practice. Governments and organizations are putting policies in place that simply didn’t exist before. For example, some countries are seeing “in-country” as a primary goal, whereas others (the UK included) are adopting a risk-based approach based on trusted locales.
    We’re also seeing a shift in risk priorities. From a risk standpoint, the classic triad of confidentiality, integrity, and availability is at the heart of the digital sovereignty conversation. Historically, the focus has been much more on confidentiality, driven by concerns about the US CLOUD Act: essentially, can foreign governments see my data?
    This year however, availability is rising in prominence, due to geopolitics and very real concerns about data accessibility in third countries. Integrity is being talked about less from a sovereignty perspective, but is no less important as a cybercrime target—ransomware and fraud being two clear and present risks.
    Thinking more broadly, digital sovereignty is not just about data, or even intellectual property, but also the brain drain. Countries don’t want all their brightest young technologists leaving university only to end up in California or some other, more attractive country. They want to keep talent at home and innovate locally, to the benefit of their own GDP.
    How Are Cloud Providers Responding?
    Hyperscalers are playing catch-up, still looking for ways to satisfy the letter of the law whilst ignoring (in the French sense) its spirit. It’s not enough for Microsoft or AWS to say they will do everything they can to protect a jurisdiction’s data, if they are already legally obliged to do the opposite. Legislation, in this case US legislation, calls the shots—and we all know just how fragile this is right now.
    We see hyperscaler progress where they offer technology to be locally managed by a third party, rather than themselves. For example, Google’s partnership with Thales, or Microsoft with Orange, both in France (Microsoft has similar in Germany). However, these are point solutions, not part of a general standard. Meanwhile, AWS’ recent announcement about creating a local entity doesn’t solve the problem of US over-reach, which remains a core issue.
    Non-hyperscaler providers and software vendors have an increasingly significant role to play: Oracle and HPE, for example, offer solutions that can be deployed and managed locally; Broadcom/VMware and Red Hat provide technologies that locally situated private cloud providers can host. Digital sovereignty is thus a catalyst for a redistribution of “cloud spend” across a broader pool of players.
    What Can Enterprise Organizations Do About It?
    First, see digital sovereignty as a core element of data and application strategy. For a nation, sovereignty means having solid borders, control over IP, GDP, and so on. That’s the goal for corporations as well—control, self-determination, and resilience.
    If sovereignty isn’t seen as an element of strategy, it gets pushed down into the implementation layer, leading to inefficient architectures and duplicated effort. Far better to decide up front what data, applications and processes need to be treated as sovereign, and to define an architecture to support that.
    This sets the scene for making informed provisioning decisions. Your organization may have made some big bets on key vendors or hyperscalers, but multi-platform thinking increasingly dominates: multiple public and private cloud providers, with integrated operations and management. Sovereign cloud becomes one element of a well-structured multi-platform architecture.
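    As a sketch of what such a provisioning decision can look like when written down as policy (the platform labels and rules below are placeholders, not recommendations from the article), a simple mapping from data classification to hosting target keeps the sovereignty decision explicit rather than buried in individual deployments:

```python
# Illustrative placement policy: classification -> (jurisdiction constraint, hosting target).
# Platform labels are placeholders; substitute your own approved providers and locales.
PLACEMENT_POLICY = {
    "restricted":   ("in-country only", "sovereign or locally operated private cloud"),
    "confidential": ("trusted locales", "regional cloud run by a local partner"),
    "internal":     ("any",             "public hyperscaler"),
    "public":       ("any",             "public hyperscaler or CDN"),
}

def placement_for(classification: str) -> tuple:
    """Look up where data of a given classification is allowed to live."""
    try:
        return PLACEMENT_POLICY[classification]
    except KeyError:
        raise ValueError(f"no placement rule for classification {classification!r}")

print(placement_for("restricted"))
# ('in-country only', 'sovereign or locally operated private cloud')
```

    Expressing the policy as data, rather than as ad-hoc choices per workload, is one way sovereign cloud slots in as a single element of a multi-platform architecture.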
    It is not cost-neutral to deliver on sovereignty, but the overall business value should be tangible. A sovereignty initiative should bring clear advantages, not just for itself, but through the benefits that come with better control, visibility, and efficiency.
    Knowing where your data is, understanding which data matters, managing it efficiently so you’re not duplicating or fragmenting it across systems—these are valuable outcomes. In addition, ignoring these questions can lead to non-compliance or be outright illegal. Even if we don’t use terms like ‘sovereignty’, organizations need a handle on their information estate.
    Organizations shouldn’t be thinking everything cloud-based needs to be sovereign, but should be building strategies and policies based on data classification, prioritization and risk. Build that picture and you can solve for the highest-priority items first—the data with the strongest classification and greatest risk. That process alone takes care of 80–90% of the problem space, avoiding making sovereignty another problem whilst solving nothing.
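    A minimal sketch of that prioritization step, with illustrative field names, weights, and example assets rather than anything prescribed by the article, might look like this:

```python
from dataclasses import dataclass

# Illustrative classification weights; real schemes vary by sector and jurisdiction.
CLASSIFICATION_WEIGHT = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

@dataclass
class DataAsset:
    name: str
    classification: str   # e.g. "restricted"
    risk: float           # 0.0 to 1.0, from your own risk assessment

    @property
    def priority(self) -> float:
        # Strongest classification combined with greatest risk floats to the top.
        return CLASSIFICATION_WEIGHT[self.classification] * self.risk

assets = [
    DataAsset("customer PII store", "restricted", 0.9),
    DataAsset("design IP repository", "confidential", 0.7),
    DataAsset("marketing site analytics", "public", 0.2),
]

# Work down the list from the top; the long tail can wait.
for asset in sorted(assets, key=lambda a: a.priority, reverse=True):
    print(f"{asset.priority:.1f}  {asset.name} ({asset.classification})")
```

    Even a rough cut like this surfaces the small set of assets that justify sovereign treatment first, which is how the 80–90% of the problem space mentioned above gets addressed without boiling the ocean.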
    Where to start? Look after your own organization first
    Sovereignty and systems thinking go hand in hand: it’s all about scope. In enterprise architecture or business design, the biggest mistake is boiling the ocean—trying to solve everything at once.
    Instead, focus on your own sovereignty. Worry about your own organization, your own jurisdiction. Know where your own borders are. Understand who your customers are, and what their requirements are. For example, if you’re a manufacturer selling into specific countries—what do those countries require? Solve for that, not for everything else. Don’t try to plan for every possible future scenario.
    Focus on what you have, what you’re responsible for, and what you need to address right now. Classify and prioritise your data assets based on real-world risk. Do that, and you’re already more than halfway toward solving digital sovereignty—with all the efficiency, control, and compliance benefits that come with it.
    Digital sovereignty isn’t just regulatory, but strategic. Organizations that act now can reduce risk, improve operational clarity, and prepare for a future based on trust, compliance, and resilience.
    The post Reclaiming Control: Digital Sovereignty in 2025 appeared first on Gigaom.
  • Drones Set To Deliver Benefits for Labor-Intensive Industries: Forrester

    By John P. Mello Jr.
    June 3, 2025 5:00 AM PT

    Aerial drones are rapidly assuming a key role in the physical automation of business operations, according to a new report by Forrester Research.
    Aerial drones power airborne physical automation by addressing operational challenges in labor-intensive industries, delivering efficiency, intelligence, and experience, explained the report written by Principal Analyst Charlie Dai with Frederic Giron, Merritt Maxim, Arjun Kalra, and Bill Nagel.
    Some industries, like the public sector, are already reaping benefits, it continued. The report predicted that drones will deliver benefits within the next two years as technologies and regulations mature.
    It noted that drones can help organizations grapple with operational challenges that exacerbate risks and inefficiencies, such as overreliance on outdated, manual processes, fragmented data collection, geographic barriers, and insufficient infrastructure.
    Overreliance on outdated manual processes worsens inefficiencies in resource allocation and amplifies safety risks in dangerous work environments, increasing operational costs and liability, the report maintained.
    “Drones can do things more safely, at least from the standpoint of human risk, than humans,” said Rob Enderle, president and principal analyst at the Enderle Group, an advisory services firm, in Bend, Ore.
    “They can enter dangerous, exposed, very high-risk and even toxic environments without putting their operators at risk,” he told TechNewsWorld. “They can be made very small to go into areas where people can’t physically go. And a single operator can operate several AI-driven drones operating autonomously, keeping staffing levels down.”
    Sensor Magic
    “The magic of the drone is really in the sensor, while the drone itself is just the vehicle that holds the sensor wherever it needs to be,” explained DaCoda Bartels, senior vice president of operations with FlyGuys, a drone services provider, in Lafayette, La.
    “In doing so, it removes all human risk exposure because the pilot is somewhere safe on the ground, sending this sensor, which is, in most cases, more high-resolution than even a human eye,” he told TechNewsWorld. “In essence, it’s a better data collection tool than if you used 100 people. Instead, you deploy one drone around in all these different areas, which is safer, faster, and higher resolution.”
    Akash Kadam, a mechanical engineer with Caterpillar, maker of construction and mining equipment, based in Decatur, Ill., explained that drones have evolved into highly functional tools that directly respond to key inefficiencies and threats to labor-intensive industries. “Within the manufacturing and supply chains, drones are central to optimizing resource allocation and reducing the exposure of humans to high-risk duties,” he told TechNewsWorld.

    “Drones can be used in factory environments to automatically inspect overhead cranes, rooftops, and tight spaces — spaces previously requiring scaffolding or shutdowns, which carry both safety and cost risks,” he said. “A reduction in downtime, along with no requirement for manual intervention in hazardous areas, is provided through this aerial inspection by drones.”
    “In terms of resource usage, drones mounted with thermal cameras and tools for acquiring real-time data can spot bottlenecks, equipment failure, or energy leakage on the production floor,” he continued. “This can facilitate predictive maintenance processes and [optimal] usage of energy, which are an integral part of lean manufacturing principles.”
    Kadam added that drones provide accurate field mapping and multispectral imaging in agriculture, enabling the monitoring of crop health, soil quality, and irrigation distribution. “Besides the reduction in manual scouting, it ensures more effective input management, which leads to more yield while saving resources,” he observed.
    Better Data Collection
    The Forrester report also noted that drones can address problems with fragmented data collection and outdated monitoring systems.
    “Drones use cameras and sensors to get clear, up-to-date info,” said Daniel Kagan, quality manager at Rogers-O’Brien Construction, a general contractor in Dallas. “Some drones even make 3D maps or heat maps,” he told TechNewsWorld. “This helps farmers see where crops need more water, stores check roof damage after a storm, and builders track progress and find delays.”
    “The drone collects all this data in one flight, and it’s ready to view in minutes and not days,” he added.
    Dean Bezlov, global head of business development at MYX Robotics, a visualization technology company headquartered in Sofia, Bulgaria, added that drones are the most cost- and time-efficient way to collect large amounts of visual data. “We are talking about two to three images per second with precision and speed unmatched by human-held cameras,” he told TechNewsWorld.
    “As such, drones are an excellent tool for ‘digital twins’ — timestamps of the real world with high accuracy which is useful in industries with physical assets such as roads, rail, oil and gas, telecom, renewables and agriculture, where the drone provides a far superior way of looking at the assets as a whole,” he said.
    Drone Adoption Faces Regulatory Hurdles
    While drones have great potential for many organizations, they will need to overcome some challenges and barriers. For example, Forrester pointed out that insurers deploy drones to evaluate asset risks but face evolving privacy regulations and gaps in data standardization.
    Media firms use drones to take cost-effective, cinematic aerial footage, but face strict regulations, it added, while urban use cases like drone taxis and cargo transport remain experimental due to certification delays and airspace management complexities.
    “Regulatory frameworks, particularly in the U.S., remain complex, bureaucratic, and fragmented,” said Mark N. Vena, president and principal analyst with SmartTech Research in Las Vegas. “The FAA’s rules around drone operations — especially for flying beyond visual line of sight [BVLOS] — are evolving but still limit many high-value use cases.”

    “Privacy concerns also persist, especially in urban areas and sectors handling sensitive data,” he told TechNewsWorld.
    “For almost 20 years, we’ve been able to fly drones from a shipping container in one country, in a whole other country, halfway across the world,” said FlyGuys’ Bartels. “What’s limiting the technology from being adopted on a large scale is regulatory hurdles over everything.”
    Enderle added that innovation could also be a hangup for organizations. “This technology is advancing very quickly, making buying something that isn’t instantly obsolete very difficult,” he said. “In addition, there are a lot of drone choices, raising the risk you’ll pick one that isn’t ideal for your use case.”
    “We are still at the beginning of this trend,” he noted. “Robotic autonomous drones are starting to come to market, which will reduce dramatically the need for drone pilots. I expect that within 10 years, we’ll have drones doing many, if not most, of the dangerous jobs currently being done by humans, as robotics, in general, will displace much of the labor force.”

    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.

  • I'm an MBA admissions consultant. My international clients are still applying in droves to US schools.

    June 5, 2025

    Scott Edinburgh is still advising international students to apply for MBAs this year.
    US MBA programs can offer more networking and job opportunities than their European counterparts, he said.
    Sitting on the decision to apply for too long may hurt a candidate's acceptance chances.

    This as-told-to essay is based on a conversation with Scott Edinburgh, a Boston-based MBA admissions consultant. It has been edited for length and clarity.
    I launched an admissions consulting business in 2008 — it's a family business that I run with my sister.
    We get a lot of international students, and they sometimes make up half our clients for the year. Our reach matches what many schools are seeing. There is a lot of interest from India and China, as well as growing numbers in Europe, the Middle East, and some countries in Africa. For MBA programs, most students are in their mid- to late 20s and have some years of work experience.
    We offer guidance for universities in Europe and some other places, but there are still some unique features about pursuing an MBA in the US. An MBA as a course is more popular in the US than in Europe, it opens up more networking opportunities, and the degree holds a bit more value. US programs are also stronger from a recruiting and job standpoint. It also comes down to where you want to establish yourself. If you want to live in the US, there's no better way to do it than to study here.
    Given the uncertainty surrounding US immigration policies, we've been getting questions about studying and working in the US and seeing some students apply to European schools instead. Still, there are a couple of reasons tons of students are still keen on pursuing an MBA in the US and why I recommend they apply now.
    Schools are working hard to keep international students
    I'm getting questions about the US being open to accepting international students and the risks of studying here. There are over 1.1 million international students in the US right now, and they're not all being kicked out and told to leave. There's a lot of hesitation among some international students about their ability to show up on campus.
    But what we're seeing from talking with deans and councils is that schools are doing a lot so that they can keep their international students. These students make up a large percentage of the class at top business schools. Their legal teams are quite strong, and we've seen a lot of court interventions to uphold the rights and opportunities for international students.
    While it seems like there's an issue now, it's probably going to work itself out. People hesitating means there are fewer applicants, which means you're more likely to get in.
    You're not entering the job market now
    People are worried about the job market not being great, and they're reading jobs reports that are coming out from these schools. We tell them you're not applying for a job now. Things are cyclical. If you're applying to business school in 2025 and graduating in 2028, that's three years from now. The chances are that the job market will not be in the same place three years from now.
    Right now is the absolute best time to apply. This will be the best round, in the span of many rounds that I've seen, as far as acceptance rates go. The market is not great job-wise, and you can spend that time educating yourself. You won't be missing out on huge promotions, huge raises, and new jobs. By the time you graduate, things may start to improve.
    MBAs are time-bound
    The job market and political situation add an element of risk, but those who are looking to get ahead will find a way to succeed. Students often forget that the MBA is a time-bound program, and waiting too long to apply while the situation clears up might make it too difficult to get in.
    Universities prefer those in their mid- to late 20s because they are easier to place into jobs and because they want cohorts to mesh well. The median number of years of experience is five, and as you go further down the bell curve, there are fewer and fewer spots available.
    Unless you are in your early 20s, you could be shooting yourself in the foot by delaying by one or two years. The fear of what might happen from a policy standpoint becomes irrelevant if you don't get into a program in a future year.
    Do you have a story to share about international graduate students in the US? Contact this reporter at sgoel@businessinsider.com.
    #i039m #mba #admissions #consultant #international
    I'm an MBA admissions consultant. My international clients are still applying in droves to US schools.
    designer491/Getty Images
    Scott Edinburgh is still advising international students to apply for MBAs this year. US MBA programs can offer more networking and job opportunities than their European counterparts, he said. Sitting on the decision to apply for too long may hurt a candidate's acceptance chances.
    This as-told-to essay is based on a conversation with Scott Edinburgh, a Boston-based MBA admissions consultant. It has been edited for length and clarity.
    I launched an admissions consulting business in 2008 — it's a family business that I run with my sister. We get a lot of international students, and they sometimes make up half our clients for the year. Our reach matches what many schools are seeing. There is a lot of interest from India and China, as well as growing numbers in Europe, the Middle East, and some countries in Africa. For MBA programs, most students are in their mid- to late 20s and have some years of work experience.
    We offer guidance for universities in Europe and some other places, but there are still some unique features about pursuing an MBA in the US. An MBA as a course is more popular in the US than in Europe, it opens up more networking opportunities, and the degree holds a bit more value. US programs are also stronger from a recruiting and job standpoint. It also comes down to where you want to establish yourself. If you want to live in the US, there's no better way to do it than to study here.
    Given the uncertainty surrounding US immigration policies, we've been getting questions about studying and working in the US and seeing some students apply to European schools instead. Still, there are a couple of reasons tons of students are still keen on pursuing an MBA in the US, and why I recommend they apply now.
    Schools are working hard to keep international students
    I'm getting questions about whether the US is open to accepting international students and about the risks of studying here. There are over 1.1 million international students in the US right now, and they're not all being kicked out and told to leave. There's a lot of hesitation among some international students about their ability to show up on campus.
    But what we're seeing from talking with deans and councils is that schools are doing a lot to keep their international students. These students make up a large percentage of the class at top business schools. Their legal teams are quite strong, and we've seen a lot of court interventions to uphold the rights and opportunities of international students.
    While it seems like there's an issue now, it's probably going to work itself out. People hesitating means there are fewer applicants, which means you're more likely to get in.
    You're not entering the job market now
    People are worried about the job market not being great, and they're reading the jobs reports coming out from these schools. We tell them that you're not applying in order to take a job right now. Things are cyclical. If you're applying to business school in 2025 and graduating in 2028, that's three years from now. The chances are that the job market will not be in the same place three years from now.
    Right now is the absolute best time to apply. As far as acceptance rates go, this will be the best round in the span of many rounds that I've seen. The job market is not great, and you can spend that time educating yourself. You won't be missing out on huge promotions, huge raises, and new jobs. By the time you graduate, things may start to improve.
    MBAs are time-bound
    The job market and political situation add an element of risk, but those who are looking to get ahead will find a way to succeed. Students often forget that the MBA is a time-bound program, and waiting too long to apply while the situation clears up might make it too difficult to get in.
    Universities prefer those in their mid- to late 20s because they are easier to place into jobs and because they want cohorts to mesh well. The median number of years of experience is five, and as you go further down the bell curve, there are fewer and fewer spots available.
    Unless you are in your early 20s, you could be shooting yourself in the foot by delaying by one or two years. The fear of what might happen from a policy standpoint becomes irrelevant if you don't get into a program in a future year.
    Do you have a story to share about international graduate students in the US? Contact this reporter at sgoel@businessinsider.com.
  • The Last of Us – Season 2: Alex Wang (Production VFX Supervisor) & Fiona Campbell Westgate (Production VFX Producer)

    After detailing the VFX work on The Last of Us Season 1 in 2023, Alex Wang returns to reflect on how the scope and complexity have evolved in Season 2.
    With close to 30 years of experience in the visual effects industry, Fiona Campbell Westgate has contributed to major productions such as Ghost in the Shell, Avatar: The Way of Water, Ant-Man and the Wasp: Quantumania, and Nyad. Her work on Nyad earned her a VES Award for Outstanding Supporting Visual Effects in a Photoreal Feature.
    Collaboration with Craig Mazin and Neil Druckmann is key to shaping the visual universe of The Last of Us. Can you share with us how you work with them and how they influence the visual direction of the series?
    Alex Wang // Craig visualizes the shot or scene before putting words on the page. His writing is always exceptionally detailed and descriptive, ultimately helping us to imagine the shot. Of course, no one understands The Last of Us better than Neil, who knows all aspects of the lore very well. He’s done much research and design work with the Naughty Dog team, so he gives us good guidance regarding creature and environment designs. I always try to begin with concept art to get the ball rolling with Craig and Neil’s ideas. This season, we collaborated with Chromatic Studios for concept art. They also contributed to the games, so I felt that continuity was beneficial for our show.
    Fiona Campbell Westgate // From the outset, it was clear that collaborating with Craig would be an exceptional experience. Early meetings revealed just how personable and invested Craig is. He works closely with every department to ensure that each episode is done to the highest level. Craig places unwavering trust in our VFX Supervisor, Alex Wang. They have an understanding between them that lends itself to an exceptional partnership. As the VFX Producer, I know how vital the dynamic between the Showrunner and VFX Supervisor is; working with these two has made for one of the best professional experiences of my career.
    Photograph by Liane Hentscher/HBO
    How has your collaboration with Craig evolved between the first and second seasons? Were there any adjustments in the visual approach or narrative techniques you made this season?
    Alex Wang // Since everything was new in Season 1, we dedicated a lot of time and effort to exploring the show’s visual language, and we all learned a great deal about what worked and what didn’t for the show. In my initial conversations with Craig about Season 2, it was clear that he wanted to expand the show’s scope by utilizing what we established and learned in Season 1. He felt significantly more at ease fully committing to using VFX to help tell the story this season.
    The first season involved multiple VFX studios to handle the complexity of the effects. How did you divide the work among different studios for the second season?
    Alex Wang // Most of the vendors this season were also in Season 1, so we already had a shorthand. The VFX Producer, Fiona Campbell Westgate, and I work closely together to decide how to divide the work among our vendors. The type of work needs to be well-suited for the vendor and fit into our budget and schedule. We were extremely fortunate to have the vendors we did this season. I want to take this opportunity to thank Weta FX, DNEG, RISE, Distillery VFX, Storm Studios, Important Looking Pirates, Blackbird, Wylie Co., RVX, and VDK. We also had ILM for concept art and Digital Domain for previs.
    Fiona Campbell Westgate // Alex Wang and I were very aware of the tight delivery schedule, which added to the challenge of distributing the workload. We planned the work based on the individual studio’s capabilities, and tried not to burden them with back-to-back episodes wherever possible. Fortunately, there was a shorthand with vendors from Season One, who were well-acquainted with the process and the quality of work the show required.

    The town of Jackson is a key location in The Last of Us. Could you explain how you approached creating and expanding this environment for the second season?
    Alex Wang // Since Season 1, this show has created incredible sets. However, the Jackson town set build is by far the most impressive in terms of scope. They constructed an 822 ft x 400 ft set in Minaty Bay that resembled a real town! I had early discussions with Production Designer Don MacAulay and his team about where they should concentrate their efforts and where VFX would make the most sense to take over. They focused on developing the town’s main street, where we believed most scenes would occur. There is a big reveal of Jackson in the first episode after Ellie comes out of the barn. Distillery VFX was responsible for the town’s extension, which appears seamless because the team took great pride in researching and ensuring the architecture aligned with the set while staying true to the tone of Jackson, Wyoming.
    Fiona Campbell Westgate // An impressive set was constructed in Minaty Bay, which served as the foundation for VFX to build upon. There is a beautiful establishing shot of Jackson in Episode 1 that was completed by Distillery, showing a safe and almost normal setting as Season Two starts. Across the episodes, Jackson set extensions were completed by our partners at RISE and Weta. Each had a different phase of Jackson to create, from almost idyllic to a town immersed in Battle. 
    What challenges did you face filming Jackson on both real and virtual sets? Was there a particular fusion between visual effects and live-action shots to make it feel realistic?
    Alex Wang // I always advocate for building exterior sets outdoors to take advantage of natural light. However, the drawback is that we cannot control the weather and lighting when filming over several days across two units. In Episode 2, there’s supposed to be a winter storm in Jackson, so maintaining consistency within the episode was essential. On sunny and rainy days, we used cranes to lift large 30x60ft screens to block the sun or rain. It was impossible to shield the entire set from the rain or sun, so we prioritized protecting the actors from sunlight or rain. Thus, you can imagine there was extensive weather cleanup for the episode to ensure consistency within the sequences.
    Fiona Campbell Westgate // We were fortunate that production built a large-scale Jackson set. It provided a base for the full CG Jackson aerial shots and CG Set Extensions. The weather conditions at Minaty Bay presented a challenge during the filming of the end of the Battle sequence in Episode 2: while there were periods of bright sunshine, rainfall occurred during parts of the shoot. In addition to the obvious visual effects work, it became necessary to replace the ground cover.
    Photograph by Liane Hentscher/HBO
    The attack on Jackson by the horde of infected in season 2 is a very intense moment. How did you approach the visual effects for this sequence? What techniques did you use to make the scale of the attack feel as impressive as it did?
    Alex Wang // We knew this would be a very complex sequence to shoot, and for it to be successful, we needed to start planning with the HODs from the very beginning. We began previs during prep with Weta FX and the episode’s director, Mark Mylod. The previs helped us understand Mark and the showrunner’s vision. This then served as a blueprint for all departments to follow, and in many instances, we filmed the previs.
    Fiona Campbell Westgate // The sheer size of the CG Infected Horde sets the tone for the scale of the Battle. It’s an intimidating moment when they are revealed through the blowing snow. The addition of CG explosions and atmospheric effects contributed to the scale of the sequence.

    Can you give us an insight into the technical challenges of capturing the infected horde? How much of the effect was done using CGI, and how much was achieved with practical effects?
    Alex Wang // Starting with a detailed previs that Mark and Craig approved was essential for planning the horde. We understood that we would never have enough stunt performers to fill a horde, nor could they carry out some stunts that would be too dangerous. I reviewed the previs with Stunt Coordinator Marny Eng numerous times to decide the best placements for her team’s stunt performers. We also collaborated with Barrie Gower from the Prosthetics team to determine the most effective allocation of his team’s efforts. Stunt performers positioned closest to the camera would receive the full prosthetic treatment, which can take hours.
    Weta FX was responsible for the incredible CG Infected horde work in the Jackson Battle. They have been a creative partner with HBO’s The Last of Us since Season 1, so they were brought on early for Season 2. I began discussions with Weta’s VFX supervisor, Nick Epstein, about how we could tackle these complex horde shots very early during the shoot.
    Typically, repetition in CG crowd scenes can be acceptable, such as armies with soldiers dressed in the same uniform or armour. However, for our Infected horde, Craig wanted to convey that the Infected didn’t come off an assembly line or all shop at the same clothing department store. Any repetition would feel artificial. These Infected were once civilians with families, or they were groups of raiders. We needed complex variations in height, body size, age, clothing, and hair. We built our base library of Infected, and then Nick and the Weta FX team developed a “mix and match” system, allowing the Infected to wear any costume and hair groom. A procedural texturing system was also developed for costumes, providing even greater variation.
    The most crucial aspect of the Infected horde was their motion. We had numerous shots cutting back-to-back with practical Infected, as well as shots where our CG Infected ran right alongside a stunt horde. It was incredibly unforgiving! Weta FX’s animation supervisor from Season 1, Dennis Yoo, returned for Season 2 to meet the challenge. Having been part of the first season, Dennis understood the expectations of Craig and Neil. Similar to issues of model repetition within a horde, it was relatively easy to perceive repetition, especially if they were running toward the same target. It was essential to enhance the details of their performances with nuances such as tripping and falling, getting back up, and trampling over each other. There also needed to be a difference in the Infected’s running speed. To ensure we had enough complexity within the horde, Dennis motion-captured almost 600 unique motion cycles.
    We had over a hundred shots in episode 2 that required CG Infected horde.
    Fiona Campbell Westgate // Nick Epstein, Weta VFX Supervisor, and Dennis Yoo, Weta Animation Supervisor, were faced with having to add hero, close-up Horde that had to integrate with practical Stunt performers. They achieved this through over 60 motion capture sessions, running the results through a deformation system they developed. Every detail was applied to allow for a seamless blend with our practical Stunt performances. The Weta team created a custom costume and hair system that provided individual looks to the CG Infected Horde. We were able to avoid the repetitive look of a CG crowd due to these efforts.
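    To make the "mix and match" idea described above concrete, here is a minimal, hypothetical sketch of that kind of combinatorial crowd variation. It is not Weta FX's actual system; all asset names, counts, and parameters are illustrative assumptions. Each agent pairs a body, costume, and groom variant with its own motion cycle, run-speed multiplier, and texture seed, and exact repeats of a visible combination are rejected so no two agents read as clones.

```python
import random

# Hypothetical asset libraries standing in for the variation pools described above.
BODIES   = [f"body_{i:02d}" for i in range(24)]        # height / build / age variants
COSTUMES = [f"costume_{i:03d}" for i in range(80)]     # civilian and raider outfits
GROOMS   = [f"groom_{i:02d}" for i in range(30)]       # hair grooms
CYCLES   = [f"run_cycle_{i:03d}" for i in range(600)]  # mocap clips, incl. trips and falls

def build_horde(count, seed=7):
    """Assemble crowd agents while rejecting exact repeats of body+costume+groom."""
    rng = random.Random(seed)
    seen = set()
    agents = []
    while len(agents) < count:
        combo = (rng.choice(BODIES), rng.choice(COSTUMES), rng.choice(GROOMS))
        if combo in seen:  # an identical visible combination would read as a clone
            continue
        seen.add(combo)
        agents.append({
            "body": combo[0],
            "costume": combo[1],
            "groom": combo[2],
            "cycle": rng.choice(CYCLES),
            "speed": rng.uniform(0.85, 1.25),      # per-agent run-speed multiplier
            "texture_seed": rng.randrange(10**6),  # drives procedural costume wear/dirt
        })
    return agents

horde = build_horde(500)
print(horde[0])
```

    The per-agent speed multiplier and texture seed echo the two points made above: varied running speeds and procedural costume texturing are what keep a large CG crowd from looking uniform.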

    The movement of the infected horde is crucial for the intensity of the scene. How did you manage the animation and simulation of the infected to ensure smooth and realistic interaction with the environment?
    Fiona Campbell Westgate // We worked closely with the Stunt department to plan out positioning and where VFX would be adding the CG Horde. Craig Mazin wanted the Infected Horde to move in a way that humans cannot. The deformation system kept the body shape anatomically correct and allowed us to push the limits of how a human physically moves.
    The Bloater makes a terrifying return this season. What were the key challenges in designing and animating this creature? How did you work on the Bloater’s interaction with the environment and other characters?
    Alex Wang // In Season 1, the Kansas City cul-de-sac sequence featured only a handful of Bloater shots. This season, however, nearly forty shots showcase the Bloater in broad daylight during the Battle of Jackson. We needed to redesign the Bloater asset to ensure it looked good in close-up shots from head to toe. Weta FX designed the Bloater for Season 1 and revamped the design for this season. Starting with the Bloater’s silhouette, it had to appear large, intimidating, and menacing. We explored enlarging the cordyceps head shape to make it feel almost like a crown, enhancing the Bloater’s impressive and strong presence.
    During filming, a stunt double stood in for the Bloater. This was mainly for scale reference and composition. It also helped the Infected stunt performers understand the Bloater’s spatial position, allowing them to avoid running through his space. Once we had an edit, Dennis mocapped the Bloater’s performances with his team. It is always challenging to get the motion right for a creature that weighs 600 pounds. We don’t want the mocap to be overly exaggerated, but it does break the character if the Bloater feels too “light.” The brilliant animation team at Weta FX brought the Bloater character to life and nailed it!
    When Tommy goes head-to-head with the Bloater, Craig was quite specific during the prep days about how the Bloater would bubble, melt, and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the “Burning Bloater” sequence, led by VFX Supervisor Philip Engström. They began with extensive R&D to ensure the Bloater’s skin would start to bubble and burn. ILP took the final Bloater asset from Weta FX and had to resculpt and texture the asset for the Bloater’s final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule.

    Fiona Campbell Westgate // This season the Bloater had to be bigger, more intimidating. The CG Asset was recreated to withstand the scrutiny of close-ups and daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the process of the build. We referenced the game and applied elements of that version to ours. You’ll notice that his head is in the shape of a crown; this is to convey that he’s a powerful force.
    During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning, fluid draining and melting the surrounding snow really sells that the CG creature was in the terrain. 

    Given the Bloater’s imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance?
    Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment. VFX enhanced the intensity of his movements, incorporating simulations to the CG Bloater’s skin and muscles that would reflect the weight and force as this terrifying creature moves. 

    Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city?
    Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty.
    Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic?
    Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returns for this season. Stephen and Melaina Mace led a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003, so we ensured these buildings were always included in our establishing shots.
    Overgrowth and destruction have significantly influenced the environments in The Last of Us. The environment functions almost as a character in both Season 1 and Season 2. In the last season, the building destruction in Boston was primarily caused by military bombings. During this season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year. I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. This abundant moisture creates an exceptionally lush and vibrant landscape for much of the year. Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from those of Boston.
    Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography and drone footage, and the Clear Angle team captured LiDAR data over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game.

    The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment?
    Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it’s also beautiful in its devastation. For instance, in the Music Store in Episode 4 where Ellie is playing guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings.
    Photograph by Liane Hentscher/HBO
    The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects?
    Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby. The episode draws us closer to the Aquarium, where this area of Seattle is heavily flooded. Naturally, this brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots. For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal.
    When Ellie gets into the Jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So, we opted to have her boat in a water tank for this scene. Special FX had wave makers that provided the boat with the appropriate movement.
    Instead of guessing what the ocean sim for the riverine boat should be, the tech-vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth.
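    As a rough illustration of how a chosen ocean state can drive boat motion for tech-vis or a motion gimbal, the toy sketch below sums a few sine-wave components into a height field and samples it under the hull to derive per-frame heave, pitch, and roll curves. This is an assumption made for illustration, not the show's actual pipeline; the wave parameters, hull dimensions, and function names are all hypothetical.

```python
import math

# Toy sum-of-sines ocean: each component is (amplitude m, wavelength m, direction rad, speed m/s).
WAVES = [(0.8, 22.0, 0.0, 6.0), (0.4, 9.0, 0.9, 4.0), (0.15, 3.5, 2.1, 2.5)]

def height(x, y, t):
    """Ocean surface height at position (x, y) and time t."""
    h = 0.0
    for amp, length, direction, speed in WAVES:
        k = 2.0 * math.pi / length  # wavenumber
        d = x * math.cos(direction) + y * math.sin(direction)
        h += amp * math.sin(k * (d - speed * t))
    return h

def boat_motion(x, y, t, half_len=2.5, half_beam=1.0):
    """Approximate heave / pitch / roll by sampling the surface under the hull."""
    bow   = height(x + half_len, y, t)
    stern = height(x - half_len, y, t)
    port  = height(x, y - half_beam, t)
    stbd  = height(x, y + half_beam, t)
    heave = 0.25 * (bow + stern + port + stbd)
    pitch = math.degrees(math.atan2(bow - stern, 2.0 * half_len))
    roll  = math.degrees(math.atan2(stbd - port, 2.0 * half_beam))
    return heave, pitch, roll

# Example: sample a motion curve that could be exported for a previs rig or gimbal.
for frame in range(5):
    t = frame / 24.0
    print(frame, [round(v, 3) for v in boat_motion(0.0, 0.0, t)])
```

    The point of a pass like this is simply that once the filmmakers pick a wave size, the same data can feed both the physical gimbal and the downstream water simulation, so the two stay consistent.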
    Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint?
    Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the exact location in each episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding. We burned a Bloater, and we also introduced spores this season!
    Photograph by Liane Hentscher/HBO
    Looking back on the project, what aspects of the visual effects are you most proud of?
    Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable.
    Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see the remarkable results of the artists’ efforts come to light.
    How long have you worked on this show?
    Alex Wang // I’ve been on this season for nearly two years.
    Fiona Campbell Westgate // A little over one year; I joined the show in April 2024.
    What’s the VFX shots count?
    Alex Wang // We had just over 2,500 shots this Season.
    Fiona Campbell Westgate // In Season 2, there were a total of 2656 visual effects shots.
    What is your next project?
    Fiona Campbell Westgate // Stay tuned…
    A big thanks for your time.
    WANT TO KNOW MORE?
    Blackbird: Dedicated page about The Last of Us – Season 2 on Blackbird website.
    DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.
    Important Looking Pirates: Dedicated page about The Last of Us – Season 2 on Important Looking Pirates website.
    RISE: Dedicated page about The Last of Us – Season 2 on RISE website.
    Weta FX: Dedicated page about The Last of Us – Season 2 on Weta FX website.
    © Vincent Frei – The Art of VFX – 2025
    The Last of Us – Season 2: Alex Wang (Production VFX Supervisor) & Fiona Campbell Westgate (Production VFX Producer)
    After detailing the VFX work on The Last of Us Season 1 in 2023, Alex Wang returns to reflect on how the scope and complexity have evolved in Season 2. With close to 30 years of experience in the visual effects industry, Fiona Campbell Westgate has contributed to major productions such as Ghost in the Shell, Avatar: The Way of Water, Ant-Man and the Wasp: Quantumania, and Nyad. Her work on Nyad earned her a VES Award for Outstanding Supporting Visual Effects in a Photoreal Feature. Collaboration with Craig Mazin and Neil Druckmann is key to shaping the visual universe of The Last of Us. Can you share with us how you work with them and how they influence the visual direction of the series? Alex Wang // Craig visualizes the shot or scene before putting words on the page. His writing is always exceptionally detailed and descriptive, ultimately helping us to imagine the shot. Of course, no one understands The Last of Us better than Neil, who knows all aspects of the lore very well. He’s done much research and design work with the Naughty Dog team, so he gives us good guidance regarding creature and environment designs. I always try to begin with concept art to get the ball rolling with Craig and Neil’s ideas. This season, we collaborated with Chromatic Studios for concept art. They also contributed to the games, so I felt that continuity was beneficial for our show. Fiona Campbell Westgate // From the outset, it was clear that collaborating with Craig would be an exceptional experience. Early meetings revealed just how personable and invested Craig is. He works closely with every department to ensure that each episode is done to the highest level. Craig places unwavering trust in our VFX Supervisor, Alex Wang. They have an understanding between them that lends to an exceptional partnership. As the VFX Producer, I know how vital the dynamic between the Showrunner and VFX Supervisor is; working with these two has made for one of the best professional experiences of my career.  Photograph by Liane Hentscher/HBO How has your collaboration with Craig evolved between the first and second seasons? Were there any adjustments in the visual approach or narrative techniques you made this season? Alex Wang // Since everything was new in Season 1, we dedicated a lot of time and effort to exploring the show’s visual language, and we all learned a great deal about what worked and what didn’t for the show. In my initial conversations with Craig about Season 2, it was clear that he wanted to expand the show’s scope by utilizing what we established and learned in Season 1. He felt significantly more at ease fully committing to using VFX to help tell the story this season. The first season involved multiple VFX studios to handle the complexity of the effects. How did you divide the work among different studios for the second season? Alex Wang // Most of the vendors this season were also in Season 1, so we already had a shorthand. The VFX Producer, Fiona Campbell Westgate, and I work closely together to decide how to divide the work among our vendors. The type of work needs to be well-suited for the vendor and fit into our budget and schedule. We were extremely fortunate to have the vendors we did this season. I want to take this opportunity to thank Weta FX, DNEG, RISE, Distillery VFX, Storm Studios, Important Looking Pirates, Blackbird, Wylie Co., RVX, and VDK. We also had ILM for concept art and Digital Domain for previs. 
Fiona Campbell Westgate // Alex Wang and I were very aware of the tight delivery schedule, which added to the challenge of distributing the workload. We planned the work based on the individual studio’s capabilities, and tried not to burden them with back to back episodes wherever possible. Fortunately, there was shorthand with vendors from Season One, who were well-acquainted with the process and the quality of work the show required. The town of Jackson is a key location in The Last of Us. Could you explain how you approached creating and expanding this environment for the second season? Alex Wang // Since Season 1, this show has created incredible sets. However, the Jackson town set build is by far the most impressive in terms of scope. They constructed an 822 ft x 400 ft set in Minaty Bay that resembled a real town! I had early discussions with Production Designer Don MacAulay and his team about where they should concentrate their efforts and where VFX would make the most sense to take over. They focused on developing the town’s main street, where we believed most scenes would occur. There is a big reveal of Jackson in the first episode after Ellie comes out of the barn. Distillery VFX was responsible for the town’s extension, which appears seamless because the team took great pride in researching and ensuring the architecture aligned with the set while staying true to the tone of Jackson, Wyoming. Fiona Campbell Westgate // An impressive set was constructed in Minaty Bay, which served as the foundation for VFX to build upon. There is a beautiful establishing shot of Jackson in Episode 1 that was completed by Distillery, showing a safe and almost normal setting as Season Two starts. Across the episodes, Jackson set extensions were completed by our partners at RISE and Weta. Each had a different phase of Jackson to create, from almost idyllic to a town immersed in Battle.  What challenges did you face filming Jackson on both real and virtual sets? Was there a particular fusion between visual effects and live-action shots to make it feel realistic? Alex Wang // I always advocate for building exterior sets outdoors to take advantage of natural light. However, the drawback is that we cannot control the weather and lighting when filming over several days across two units. In Episode 2, there’s supposed to be a winter storm in Jackson, so maintaining consistency within the episode was essential. On sunny and rainy days, we used cranes to lift large 30x60ft screens to block the sun or rain. It was impossible to shield the entire set from the rain or sun, so we prioritized protecting the actors from sunlight or rain. Thus, you can imagine there was extensive weather cleanup for the episode to ensure consistency within the sequences. Fiona Campbell Westgate // We were fortunate that production built a large scale Jackson set. It provided a base for the full CG Jackson aerial shots and CG Set Extensions. The weather conditions at Minaty Bay presented a challenge during the filming of the end of the Battle sequence in Episode 2. While there were periods of bright sunshine, rainfall occurred during the filming of the end of the Battle sequence in Episode 2. In addition to the obvious visual effects work, it became necessary to replace the ground cover. Photograph by Liane Hentscher/HBO The attack on Jackson by the horde of infected in season 2 is a very intense moment. How did you approach the visual effects for this sequence? 
What techniques did you use to make the scale of the attack feel as impressive as it did? Alex Wang // We knew this would be a very complex sequence to shoot, and for it to be successful, we needed to start planning with the HODs from the very beginning. We began previs during prep with Weta FX and the episode’s director, Mark Mylod. The previs helped us understand Mark and the showrunner’s vision. This then served as a blueprint for all departments to follow, and in many instances, we filmed the previs. Fiona Campbell Westgate // The sheer size of the CG Infected Horde sets the tone for the scale of the Battle. It’s an intimidating moment when they are revealed through the blowing snow. The addition of CG explosions and atmospheric effects contributed in adding scale to the sequence.  Can you give us an insight into the technical challenges of capturing the infected horde? How much of the effect was done using CGI, and how much was achieved with practical effects? Alex Wang // Starting with a detailed previs that Mark and Craig approved was essential for planning the horde. We understood that we would never have enough stunt performers to fill a horde, nor could they carry out some stunts that would be too dangerous. I reviewed the previs with Stunt Coordinator Marny Eng numerous times to decide the best placements for her team’s stunt performers. We also collaborated with Barrie Gower from the Prosthetics team to determine the most effective allocation of his team’s efforts. Stunt performers positioned closest to the camera would receive the full prosthetic treatment, which can take hours. Weta FX was responsible for the incredible CG Infected horde work in the Jackson Battle. They have been a creative partner with HBO’s The Last of Us since Season 1, so they were brought on early for Season 2. I began discussions with Weta’s VFX supervisor, Nick Epstein, about how we could tackle these complex horde shots very early during the shoot. Typically, repetition in CG crowd scenes can be acceptable, such as armies with soldiers dressed in the same uniform or armour. However, for our Infected horde, Craig wanted to convey that the Infected didn’t come off an assembly line or all shop at the same clothing department store. Any repetition would feel artificial. These Infected were once civilians with families, or they were groups of raiders. We needed complex variations in height, body size, age, clothing, and hair. We built our base library of Infected, and then Nick and the Weta FX team developed a “mix and match” system, allowing the Infected to wear any costume and hair groom. A procedural texturing system was also developed for costumes, providing even greater variation. The most crucial aspect of the Infected horde was their motion. We had numerous shots cutting back-to-back with practical Infected, as well as shots where our CG Infected ran right alongside a stunt horde. It was incredibly unforgiving! Weta FX’s animation supervisor from Season 1, Dennis Yoo, returned for Season 2 to meet the challenge. Having been part of the first season, Dennis understood the expectations of Craig and Neil. Similar to issues of model repetition within a horde, it was relatively easy to perceive repetition, especially if they were running toward the same target. It was essential to enhance the details of their performances with nuances such as tripping and falling, getting back up, and trampling over each other. There also needed to be a difference in the Infected’s running speed. 
To ensure we had enough complexity within the horde, Dennis motion-captured almost 600 unique motion cycles. We had over a hundred shots in episode 2 that required CG Infected horde. Fiona Campbell Westgate // Nick Epstein, Weta VFX Supervisor, and Dennis Yoo, Weta Animation Supervisor, were faced with having to add hero, close-up Horde that had to integrate with practical Stunt performers. They achieved this through over 60 motion capture sessions and running it through a deformation system they developed. Every detail was applied to allow for a seamless blend with our practical Stunt performances. The Weta team created a custom costume and hair system that provided individual looks to the CG Infected Horde. We were able to avoid the repetitive look of a CG crowd due to these efforts. The movement of the infected horde is crucial for the intensity of the scene. How did you manage the animation and simulation of the infected to ensure smooth and realistic interaction with the environment? Fiona Campbell Westgate // We worked closely with the Stunt department to plan out positioning and where VFX would be adding the CG Horde. Craig Mazin wanted the Infected Horde to move in a way that humans cannot. The deformation system kept the body shape anatomically correct and allowed us to push the limits from how a human physically moves.  The Bloater makes a terrifying return this season. What were the key challenges in designing and animating this creature? How did you work on the Bloater’s interaction with the environment and other characters? Alex Wang // In Season 1, the Kansas City cul-de-sac sequence featured only a handful of Bloater shots. This season, however, nearly forty shots showcase the Bloater in broad daylight during the Battle of Jackson. We needed to redesign the Bloater asset to ensure it looked good in close-up shots from head to toe. Weta FX designed the Bloater for Season 1 and revamped the design for this season. Starting with the Bloater’s silhouette, it had to appear large, intimidating, and menacing. We explored enlarging the cordyceps head shape to make it feel almost like a crown, enhancing the Bloater’s impressive and strong presence. During filming, a stunt double stood in for the Bloater. This was mainly for scale reference and composition. It also helped the Infected stunt performers understand the Bloater’s spatial position, allowing them to avoid running through his space. Once we had an edit, Dennis mocapped the Bloater’s performances with his team. It is always challenging to get the motion right for a creature that weighs 600 pounds. We don’t want the mocap to be overly exaggerated, but it does break the character if the Bloater feels too “light.” The brilliant animation team at Weta FX brought the Bloater character to life and nailed it! When Tommy goes head-to-head with the Bloater, Craig was quite specific during the prep days about how the Bloater would bubble, melt, and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the “Burning Bloater” sequence, led by VFX Supervisor Philip Engstrom. They began with extensive R&D to ensure the Bloater’s skin would start to bubble and burn. ILP took the final Bloater asset from Weta FX and had to resculpt and texture the asset for the Bloater’s final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule. 
Fiona Campbell Westgate // This season the Bloater had to be bigger, more intimidating. The CG Asset was recreated to withstand the scrutiny of close ups and in daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the process of the build. We referenced the game and applied elements of that version with ours. You’ll notice that his head is in the shape of crown, this is to convey he’s a powerful force.  During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning, fluid draining and melting the surrounding snow really sells that the CG creature was in the terrain.  Given the Bloater’s imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance? Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment. VFX enhanced the intensity of his movements, incorporating simulations to the CG Bloater’s skin and muscles that would reflect the weight and force as this terrifying creature moves.  Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city? Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty. Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic? Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returns for this season. Stephen and Melaina Maceled a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003, so we ensured these buildings were always included in our establishing shots. Overgrowth and destruction have significantly influenced the environments in The Last of Us. The environment functions almost as a character in both Season 1 and Season 2. In the last season, the building destruction in Boston was primarily caused by military bombings. During this season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year. I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. This abundant moisture creates an exceptionally lush and vibrant landscape for much of the year. 
Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from those of Boston. Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography, drone footage and the Clear Angle team captured LiDAR data over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game.  The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment? Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it’s also beautiful in its devastation. For instance, in the Music Store in Episode 4 where Ellie is playing guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings. Photograph by Liane Hentscher/HBO The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects? Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby. The episode draws us closer to the Aquarium, where this area of Seattle is heavily flooded. Naturally, this brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots. For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal. When Ellie gets into the Jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So, we opted to have her boat in a water tank for this scene. Special FX had wave makers that provided the boat with the appropriate movement. Instead of guessing what the ocean sim for the riverine boat should be, the tech- vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth. Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint? Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the exact location in each episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding. 
We burned a Bloater, and we also introduced spores this season! Photograph by Liane Hentscher/HBO Looking back on the project, what aspects of the visual effects are you most proud of? Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable. Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see remarkable results of the artists’ efforts come to light.  How long have you worked on this show? Alex Wang // I’ve been on this season for nearly two years. Fiona Campbell Westgate // A little over one year; I joined the show in April 2024. What’s the VFX shots count? Alex Wang // We had just over 2,500 shots this Season. Fiona Campbell Westgate // In Season 2, there were a total of 2656 visual effects shots. What is your next project? Fiona Campbell Westgate // Stay tuned… A big thanks for your time. WANT TO KNOW MORE?Blackbird: Dedicated page about The Last of Us – Season 2 website.DNEG: Dedicated page about The Last of Us – Season 2 on DNEG website.Important Looking Pirates: Dedicated page about The Last of Us – Season 2 website.RISE: Dedicated page about The Last of Us – Season 2 website.Weta FX: Dedicated page about The Last of Us – Season 2 website. © Vincent Frei – The Art of VFX – 2025 #last #season #alex #wang #production
    WWW.ARTOFVFX.COM
    The Last of Us – Season 2: Alex Wang (Production VFX Supervisor) & Fiona Campbell Westgate (Production VFX Producer)
    After detailing the VFX work on The Last of Us Season 1 in 2023, Alex Wang returns to reflect on how the scope and complexity have evolved in Season 2. With close to 30 years of experience in the visual effects industry, Fiona Campbell Westgate has contributed to major productions such as Ghost in the Shell, Avatar: The Way of Water, Ant-Man and the Wasp: Quantumania, and Nyad. Her work on Nyad earned her a VES Award for Outstanding Supporting Visual Effects in a Photoreal Feature. Collaboration with Craig Mazin and Neil Druckmann is key to shaping the visual universe of The Last of Us. Can you share with us how you work with them and how they influence the visual direction of the series? Alex Wang // Craig visualizes the shot or scene before putting words on the page. His writing is always exceptionally detailed and descriptive, ultimately helping us to imagine the shot. Of course, no one understands The Last of Us better than Neil, who knows all aspects of the lore very well. He’s done much research and design work with the Naughty Dog team, so he gives us good guidance regarding creature and environment designs. I always try to begin with concept art to get the ball rolling with Craig and Neil’s ideas. This season, we collaborated with Chromatic Studios for concept art. They also contributed to the games, so I felt that continuity was beneficial for our show. Fiona Campbell Westgate // From the outset, it was clear that collaborating with Craig would be an exceptional experience. Early meetings revealed just how personable and invested Craig is. He works closely with every department to ensure that each episode is done to the highest level. Craig places unwavering trust in our VFX Supervisor, Alex Wang. They have an understanding between them that lends to an exceptional partnership. As the VFX Producer, I know how vital the dynamic between the Showrunner and VFX Supervisor is; working with these two has made for one of the best professional experiences of my career.  Photograph by Liane Hentscher/HBO How has your collaboration with Craig evolved between the first and second seasons? Were there any adjustments in the visual approach or narrative techniques you made this season? Alex Wang // Since everything was new in Season 1, we dedicated a lot of time and effort to exploring the show’s visual language, and we all learned a great deal about what worked and what didn’t for the show. In my initial conversations with Craig about Season 2, it was clear that he wanted to expand the show’s scope by utilizing what we established and learned in Season 1. He felt significantly more at ease fully committing to using VFX to help tell the story this season. The first season involved multiple VFX studios to handle the complexity of the effects. How did you divide the work among different studios for the second season? Alex Wang // Most of the vendors this season were also in Season 1, so we already had a shorthand. The VFX Producer, Fiona Campbell Westgate, and I work closely together to decide how to divide the work among our vendors. The type of work needs to be well-suited for the vendor and fit into our budget and schedule. We were extremely fortunate to have the vendors we did this season. I want to take this opportunity to thank Weta FX, DNEG, RISE, Distillery VFX, Storm Studios, Important Looking Pirates, Blackbird, Wylie Co., RVX, and VDK. We also had ILM for concept art and Digital Domain for previs. 
Fiona Campbell Westgate // Alex Wang and I were very aware of the tight delivery schedule, which added to the challenge of distributing the workload. We planned the work based on the individual studio’s capabilities, and tried not to burden them with back to back episodes wherever possible. Fortunately, there was shorthand with vendors from Season One, who were well-acquainted with the process and the quality of work the show required. The town of Jackson is a key location in The Last of Us. Could you explain how you approached creating and expanding this environment for the second season? Alex Wang // Since Season 1, this show has created incredible sets. However, the Jackson town set build is by far the most impressive in terms of scope. They constructed an 822 ft x 400 ft set in Minaty Bay that resembled a real town! I had early discussions with Production Designer Don MacAulay and his team about where they should concentrate their efforts and where VFX would make the most sense to take over. They focused on developing the town’s main street, where we believed most scenes would occur. There is a big reveal of Jackson in the first episode after Ellie comes out of the barn. Distillery VFX was responsible for the town’s extension, which appears seamless because the team took great pride in researching and ensuring the architecture aligned with the set while staying true to the tone of Jackson, Wyoming. Fiona Campbell Westgate // An impressive set was constructed in Minaty Bay, which served as the foundation for VFX to build upon. There is a beautiful establishing shot of Jackson in Episode 1 that was completed by Distillery, showing a safe and almost normal setting as Season Two starts. Across the episodes, Jackson set extensions were completed by our partners at RISE and Weta. Each had a different phase of Jackson to create, from almost idyllic to a town immersed in Battle.  What challenges did you face filming Jackson on both real and virtual sets? Was there a particular fusion between visual effects and live-action shots to make it feel realistic? Alex Wang // I always advocate for building exterior sets outdoors to take advantage of natural light. However, the drawback is that we cannot control the weather and lighting when filming over several days across two units. In Episode 2, there’s supposed to be a winter storm in Jackson, so maintaining consistency within the episode was essential. On sunny and rainy days, we used cranes to lift large 30x60ft screens to block the sun or rain. It was impossible to shield the entire set from the rain or sun, so we prioritized protecting the actors from sunlight or rain. Thus, you can imagine there was extensive weather cleanup for the episode to ensure consistency within the sequences. Fiona Campbell Westgate // We were fortunate that production built a large scale Jackson set. It provided a base for the full CG Jackson aerial shots and CG Set Extensions. The weather conditions at Minaty Bay presented a challenge during the filming of the end of the Battle sequence in Episode 2. While there were periods of bright sunshine, rainfall occurred during the filming of the end of the Battle sequence in Episode 2. In addition to the obvious visual effects work, it became necessary to replace the ground cover. Photograph by Liane Hentscher/HBO The attack on Jackson by the horde of infected in season 2 is a very intense moment. How did you approach the visual effects for this sequence? 
    What techniques did you use to make the scale of the attack feel as impressive as it did?

    Alex Wang // We knew this would be a very complex sequence to shoot, and for it to be successful, we needed to start planning with the HODs from the very beginning. We began previs during prep with Weta FX and the episode’s director, Mark Mylod. The previs helped us understand Mark and the showrunner’s vision. It then served as a blueprint for all departments to follow, and in many instances, we filmed the previs.

    Fiona Campbell Westgate // The sheer size of the CG Infected horde sets the tone for the scale of the Battle. It’s an intimidating moment when they are revealed through the blowing snow. The addition of CG explosions and atmospheric effects added further scale to the sequence.

    Can you give us an insight into the technical challenges of capturing the infected horde? How much of the effect was done using CGI, and how much was achieved with practical effects?

    Alex Wang // Starting with a detailed previs that Mark and Craig approved was essential for planning the horde. We understood that we would never have enough stunt performers to fill a horde, nor could they carry out some stunts that would be too dangerous. I reviewed the previs with Stunt Coordinator Marny Eng numerous times to decide the best placements for her team’s stunt performers. We also collaborated with Barrie Gower from the prosthetics team to determine the most effective allocation of his team’s efforts. Stunt performers positioned closest to the camera would receive the full prosthetic treatment, which can take hours.

    Weta FX was responsible for the incredible CG Infected horde work in the Jackson Battle. They have been a creative partner with HBO’s The Last of Us since Season 1, so they were brought on early for Season 2. I began discussions with Weta’s VFX Supervisor, Nick Epstein, about how we could tackle these complex horde shots very early during the shoot. Typically, repetition in CG crowd scenes can be acceptable, such as armies with soldiers dressed in the same uniform or armour. However, for our Infected horde, Craig wanted to convey that the Infected didn’t come off an assembly line or all shop at the same clothing department store. Any repetition would feel artificial. These Infected were once civilians with families, or they were groups of raiders. We needed complex variations in height, body size, age, clothing, and hair. We built our base library of Infected, and then Nick and the Weta FX team developed a “mix and match” system, allowing the Infected to wear any costume and hair groom. A procedural texturing system was also developed for costumes, providing even greater variation.

    The most crucial aspect of the Infected horde was their motion. We had numerous shots cutting back-to-back with practical Infected, as well as shots where our CG Infected ran right alongside a stunt horde. It was incredibly unforgiving! Weta FX’s animation supervisor from Season 1, Dennis Yoo, returned for Season 2 to meet the challenge. Having been part of the first season, Dennis understood the expectations of Craig and Neil. As with model repetition within a horde, it was relatively easy to perceive repetition in their motion, especially if they were all running toward the same target. It was essential to enhance the details of their performances with nuances such as tripping and falling, getting back up, and trampling over each other. There also needed to be differences in the Infected’s running speed.
    To ensure we had enough complexity within the horde, Dennis motion-captured almost 600 unique motion cycles. We had over a hundred shots in Episode 2 that required the CG Infected horde.

    Fiona Campbell Westgate // Nick Epstein, Weta VFX Supervisor, and Dennis Yoo, Weta Animation Supervisor, were faced with adding hero, close-up horde that had to integrate with practical stunt performers. They achieved this through over 60 motion capture sessions, running the results through a deformation system they developed. Every detail was applied to allow for a seamless blend with our practical stunt performances. The Weta team created a custom costume and hair system that gave individual looks to the CG Infected horde. We were able to avoid the repetitive look of a CG crowd thanks to these efforts.

    The movement of the infected horde is crucial for the intensity of the scene. How did you manage the animation and simulation of the infected to ensure smooth and realistic interaction with the environment?

    Fiona Campbell Westgate // We worked closely with the Stunt department to plan out positioning and where VFX would be adding the CG horde. Craig Mazin wanted the Infected horde to move in a way that humans cannot. The deformation system kept the body shape anatomically correct while allowing us to push beyond the limits of how a human physically moves.

    The Bloater makes a terrifying return this season. What were the key challenges in designing and animating this creature? How did you work on the Bloater’s interaction with the environment and other characters?

    Alex Wang // In Season 1, the Kansas City cul-de-sac sequence featured only a handful of Bloater shots. This season, however, nearly forty shots showcase the Bloater in broad daylight during the Battle of Jackson. We needed to redesign the Bloater asset to ensure it looked good in close-up shots from head to toe. Weta FX designed the Bloater for Season 1 and revamped the design for this season. Starting with the Bloater’s silhouette, it had to appear large, intimidating, and menacing. We explored enlarging the cordyceps head shape to make it feel almost like a crown, enhancing the Bloater’s impressive and strong presence.

    During filming, a stunt double stood in for the Bloater, mainly for scale reference and composition. It also helped the Infected stunt performers understand the Bloater’s spatial position, allowing them to avoid running through his space. Once we had an edit, Dennis mocapped the Bloater’s performances with his team. It is always challenging to get the motion right for a creature that weighs 600 pounds. We don’t want the mocap to be overly exaggerated, but it does break the character if the Bloater feels too “light.” The brilliant animation team at Weta FX brought the Bloater character to life and nailed it!

    When Tommy goes head-to-head with the Bloater, Craig was quite specific during the prep days about how the Bloater would bubble, melt, and burn as Tommy torches him with the flamethrower. Important Looking Pirates took on the “Burning Bloater” sequence, led by VFX Supervisor Philip Engström. They began with extensive R&D to ensure the Bloater’s skin would start to bubble and burn. ILP took the final Bloater asset from Weta FX and had to resculpt and retexture it for the Bloater’s final burn state. Craig felt it was important for the Bloater to appear maimed at the end. The layers of FX were so complex that the R&D continued almost to the end of the delivery schedule.
    Fiona Campbell Westgate // This season the Bloater had to be bigger and more intimidating. The CG asset was recreated to withstand the scrutiny of close-ups and daylight. Both Craig Mazin and Neil Druckmann worked closely with us during the build. We referenced the game and applied elements of that version to ours. You’ll notice that his head is in the shape of a crown; this is to convey that he’s a powerful force.

    During the Burning Bloater sequence in Episode 2, we brainstormed with Philip Engström, ILP VFX Supervisor, on how this creature would react to the flamethrower and how it would affect the ground as it burns. When the Bloater finally falls to the ground and dies, the extraordinary detail of the embers burning, the fluid draining, and the melting of the surrounding snow really sells that the CG creature was in the terrain.

    Given the Bloater’s imposing size, how did you approach its integration into scenes with the actors? What techniques did you use to create such a realistic and menacing appearance?

    Fiona Campbell Westgate // For the Bloater, a stunt performer wearing a motion capture suit was filmed on set. This provided interaction with the actors and the environment. VFX enhanced the intensity of his movements, incorporating simulations into the CG Bloater’s skin and muscles to reflect the weight and force of this terrifying creature as it moves.

    Seattle in The Last of Us is a completely devastated city. Can you talk about how you recreated this destruction? What were the most difficult visual aspects to realize for this post-apocalyptic city?

    Fiona Campbell Westgate // We were meticulous in blending the CG destruction with the practical environment. The flora’s ability to overtake the environment had to be believable, and we adhered to the principle of form follows function. Due to the vastness of the CG devastation, it was crucial to avoid repetitive effects. Consequently, our vendors were tasked with creating bespoke designs that evoked a sense of awe and beauty.

    Was Seattle’s architecture a key element in how you designed the visual effects? How did you adapt the city’s real-life urban landscape to meet the needs of the story while maintaining a coherent aesthetic?

    Alex Wang // It’s always important to Craig and Neil that we remain true to the cities our characters are in. DNEG was one of our primary vendors for Boston in Season 1, so it was natural for them to return for Season 2, this time focusing on Seattle. DNEG’s VFX Supervisor, Stephen James, who played a crucial role in developing the visual language of Boston for Season 1, also returns for this season. Stephen and Melaina Mace (DFX Supervisor) led a team to Seattle to shoot plates and perform lidar scans of parts of the city. We identified the buildings unique to Seattle that would have existed in 2003, and we ensured these buildings were always included in our establishing shots.

    Overgrowth and destruction have significantly influenced the environments in The Last of Us; the environment functions almost as a character in both Season 1 and Season 2. In the last season, the building destruction in Boston was primarily caused by military bombings. This season, destruction mainly arises from dilapidation. Living in the Pacific Northwest, I understand how damp it can get for most of the year, and I imagined that, over 20 years, the integrity of the buildings would be compromised by natural forces. This abundant moisture also creates an exceptionally lush and vibrant landscape for much of the year.
    Therefore, when designing Seattle, we ensured that the destruction and overgrowth appeared intentional and aesthetically distinct from those of Boston.

    Fiona Campbell Westgate // Led by Stephen James, DNEG VFX Supervisor, and Melaina Mace, DNEG DFX Supervisor, the team captured photography and drone footage, while the Clear Angle team captured LiDAR data over a three-day period in Seattle. It was crucial to include recognizable Seattle landmarks that would resonate with people familiar with the game.

    The devastated city almost becomes a character in itself this season. What aspects of the visual effects did you have to enhance to increase the immersion of the viewer into this hostile and deteriorated environment?

    Fiona Campbell Westgate // It is indeed a character. Craig wanted it to be deteriorated but to have moments where it’s also beautiful in its devastation. For instance, in the Music Store in Episode 4, where Ellie is playing guitar for Dina, the deteriorated interior provides a beautiful backdrop to this intimate moment. The Set Decorating team dressed a specific section of the set, while VFX extended the destruction and overgrowth to encompass the entire environment, immersing the viewer in strange yet familiar surroundings.

    Photograph by Liane Hentscher/HBO

    The sequence where Ellie navigates a boat through a violent storm is stunning. What were the key challenges in creating this scene, especially with water simulation and the storm’s effects?

    Alex Wang // In the concluding episode of Season 2, Ellie is deep in Seattle, searching for Abby. The episode draws us closer to the Aquarium, and this area of Seattle is heavily flooded, which naturally brings challenges with CG water. In the scene where Ellie encounters Isaac and the W.L.F. soldiers by the dock, we had a complex shoot involving multiple locations, including a water tank and a boat gimbal. There were also several full CG shots.

    For Isaac’s riverine boat, which was in a stormy ocean, I felt it was essential that the boat and the actors were given the appropriate motion. Weta FX assisted with tech-vis for all the boat gimbal work. We began with different ocean wave sizes caused by the storm, and once the filmmakers selected one, the boat’s motion in the tech-vis fed the special FX gimbal. When Ellie gets into the jon boat, I didn’t want it on the same gimbal because I felt it would be too mechanical. Ellie’s weight needed to affect the boat as she got in, and that wouldn’t have happened with a mechanical gimbal. So we opted to have her boat in a water tank for this scene, where special FX wave makers provided the boat with the appropriate movement. Instead of guessing what the ocean sim for the riverine boat should be, the tech-vis data enabled DNEG to get a head start on the water simulations in post-production. Craig wanted this sequence to appear convincingly dark, much like it looks out on the ocean at night. This allowed us to create dramatic visuals, using lightning strikes at moments to reveal depth.

    Were there any memorable moments or scenes from the series that you found particularly rewarding or challenging to work on from a visual effects standpoint?

    Alex Wang // The Last of Us tells the story of our characters’ journey. If you look at how Season 2 begins in Jackson, it differs significantly from how we conclude the season in Seattle. We seldom return to the exact location in each episode, meaning every episode presents a unique challenge. The scope of work this season has been incredibly rewarding.
    We burned a Bloater, and we also introduced spores this season!

    Photograph by Liane Hentscher/HBO

    Looking back on the project, what aspects of the visual effects are you most proud of?

    Alex Wang // The Jackson Battle was incredibly complex, involving a grueling and lengthy shoot in quite challenging conditions, along with over 600 VFX shots in Episode 2. It was truly inspiring to witness the determination of every department and vendor to give their all and create something remarkable.

    Fiona Campbell Westgate // I am immensely proud of the exceptional work accomplished by all of our vendors. During the VFX reviews, I found myself clapping with delight when the final shots were displayed; it was exciting to see the remarkable results of the artists’ efforts come to light.

    How long have you worked on this show?

    Alex Wang // I’ve been on this season for nearly two years.

    Fiona Campbell Westgate // A little over one year; I joined the show in April 2024.

    What’s the VFX shot count?

    Alex Wang // We had just over 2,500 shots this season.

    Fiona Campbell Westgate // In Season 2, there were a total of 2,656 visual effects shots.

    What is your next project?

    Fiona Campbell Westgate // Stay tuned…

    A big thanks for your time.

    WANT TO KNOW MORE?
    Blackbird: Dedicated page about The Last of Us – Season 2 on the Blackbird website.
    DNEG: Dedicated page about The Last of Us – Season 2 on the DNEG website.
    Important Looking Pirates: Dedicated page about The Last of Us – Season 2 on the ILP website.
    RISE: Dedicated page about The Last of Us – Season 2 on the RISE website.
    Weta FX: Dedicated page about The Last of Us – Season 2 on the Weta FX website.

    © Vincent Frei – The Art of VFX – 2025
  • Gurman: Apple needs a major AI comeback, but this WWDC probably won’t be it

    According to Mark Gurman in his latest Power On newsletter, Apple insiders “believe that the conference may be a letdown from an AI standpoint,” highlighting how far behind Apple still is. Still, Apple has a few AI-related announcements slated for June 9.

    As previously reported, this year’s biggest AI announcement will be Apple’s plans to open up its on-device foundation models to third-party developers.
    These are the same ~3B parameter models Apple currently uses for things like text summarization and autocorrect, and they’ll soon be available for devs to integrate into their own apps.
    To be clear, this is a meaningful milestone for Apple’s AI platform. It gives developers a powerful tool to natively integrate into their apps and potentially unlock genuinely useful features.
    Still, these on-device models are far less capable than the large-scale, cloud-based systems used by OpenAI and Google, so don’t expect earth-shattering features.
    AI features slated for this year’s iOS 26
    Elsewhere in its AI efforts, Apple will reportedly:

    Launch a new battery power management mode;
    Reboot its Translate app, “now integrated with AirPods and Siri”;
    Start describing some features within apps like Safari and Photos as “AI-powered”.

    As Gurman puts it, this feels like a risky “gap year.” Internally, Apple is aiming to make up for it at WWDC 2026 with bigger swings that it hopes will “convince consumers that it’s an AI innovator.” However, given how fast the competition is moving, waiting until next year might put Apple even further behind, perception-wise.
    What’s still in the works?
    Currently, Apple’s ongoing AI developments include an LLM Siri, a revamped Shortcuts app, the ambitious health-related Project Mulberry, and a full-blown ChatGPT competitor with web search capabilities.
    According to Gurman, Apple is holding off on previewing some of these features to avoid repeating last year’s mistake, when it showed off Apple Intelligence with features that were nowhere near ready and are still MIA.
    Behind the scenes, Gurman reports Apple has made progress. It now has models with 3B, 7B, 33B, and 150B parameters in testing, with the largest ones relying on the cloud.
    Internal benchmarks suggest its top model is close to recent ChatGPT updates in quality. Still, concerns over hallucinations and internal debates over Apple’s approach to generative AI are keeping things private, for now.
    Apple’s dev AI story
    As for Apple’s developer offerings, Gurman reports:

    “Developers will see AI get more deeply integrated into Apple’s developer tools, including those for user interface testing. And, in a development that will certainly appease many developers, SwiftUI, a set of Apple frameworks and tools for creating app user interfaces, will finally get a built-in rich text editor.”

    And if you’re still waiting for Swift Assist, the AI code-completion tool Apple announced last year, Gurman says Apple is expected to provide an update on it. Still, there is no word yet on whether this update includes releasing the Anthropic-powered code completion version that its employees have been testing for the past few months.

  • Automated Text Messages for Business: A Marketer’s Guide

    Reading Time: 9 minutes
    Did you know SMS open rates are as high as 98%, with 45% replying to branded SMS marketing? As texting continues to witness a steady rise, your brand could be missing out if you’re not leveraging automated text messaging for business!
    Text messaging is when your brand communicates with customers via SMS. These messages may be automated, that is, scheduled to be sent at opportune times.
    Interested in sending an automated text message for business? Keep reading.

     
    What is SMS Marketing Automation?
    SMS marketing automation is the process of automatically sending text messages to recipients. It enables brands to send messages when certain trigger conditions are met. For instance, sending messages to a customer to confirm a purchase or remind them to complete a purchase when they have left items in the cart.
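    To make the idea of trigger conditions concrete, here is a minimal sketch in Python of how an automation might route events (a confirmed purchase, an abandoned cart) to the matching message. The `send_sms` helper, the event fields, and the one-hour idle threshold are hypothetical stand-ins for whatever SMS provider and commerce platform a brand actually uses.

```python
from datetime import datetime, timedelta

def send_sms(phone: str, body: str) -> None:
    """Hypothetical wrapper around an SMS provider's API (replace with your vendor's SDK)."""
    print(f"SMS to {phone}: {body}")

def handle_event(event: dict) -> None:
    """Route commerce events to the matching automated text message."""
    if event["type"] == "purchase_confirmed":
        send_sms(event["phone"],
                 f"Thanks for your order #{event['order_id']}! "
                 "We'll text you when it ships.")
    elif event["type"] == "cart_abandoned":
        # Only nudge if the cart has been idle for at least an hour.
        idle = datetime.utcnow() - event["last_activity"]
        if idle > timedelta(hours=1):
            send_sms(event["phone"],
                     "You left items in your cart - complete your "
                     f"purchase here: {event['cart_url']}")

# Example trigger: a customer abandoned a cart two hours ago.
handle_event({
    "type": "cart_abandoned",
    "phone": "+15550100",
    "last_activity": datetime.utcnow() - timedelta(hours=2),
    "cart_url": "https://example.com/cart/abc123",
})
```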
    What are the main benefits of automated SMS marketing?
    An automated text message for business can unlock many benefits, some of which are listed below:

    Saving time: Automated text message marketing frees up time for core tasks instead of spending it on manual responses for order confirmations, abandoned cart reminders, and the like.
    Scalability: Using SMS marketing automation, brands can look to scale their SMS campaigns to a large customer base.
    Strategic communication: With SMS marketing automation, brands can deliver larger automated SMS drip campaigns tailored to specific target groups.
    Greater engagement: 90% of SMS messages are read within 3 minutes of delivery. With automated SMS marketing, brands can send timely responses and elevate customer experience.
    Higher open rates: Automated text messages typically have higher open rates than emails, making them a more favorable medium for reaching customers.

    Is sending automated text messages legal for businesses?
    Businesses can send automated text messages to customers as long as they follow the rules and regulations of the jurisdiction where the messages are sent.
    In the United States, automated text marketing requires following the Telephone Consumer Protection Act (TCPA), which requires brands to get written consent before sending messages to customers. The new TCPA rules also state that brands need to honor SMS opt-out requests within 10 business days.
    Additionally, the CAN-SPAM Act sets rules for commercial communications. Brands need to ensure that recipients have opted for an automated text message service, as spam messages can result in fines.
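    As a rough illustration of what honoring consent can look like in code, the sketch below gates every send on a recorded opt-in and treats a STOP reply as an opt-out. The consent store, the `send_sms` helper, and the keyword list are assumptions for illustration only; actual compliance obligations should be confirmed with legal counsel and your SMS provider.

```python
def send_sms(phone: str, body: str) -> None:
    """Hypothetical SMS provider call; replace with your vendor's SDK."""
    print(f"SMS to {phone}: {body}")

# Illustrative consent store keyed by phone number.
consent_db = {
    "+15550100": {"opted_in": True, "opted_out": False},
    "+15550101": {"opted_in": False, "opted_out": False},
}

def can_message(phone: str) -> bool:
    """Only message contacts with a recorded opt-in and no opt-out."""
    record = consent_db.get(phone)
    return bool(record and record["opted_in"] and not record["opted_out"])

def handle_inbound(phone: str, body: str) -> None:
    """Treat STOP (and common variants) as an opt-out request."""
    if body.strip().upper() in {"STOP", "UNSUBSCRIBE", "CANCEL"}:
        consent_db.setdefault(phone, {"opted_in": False})["opted_out"] = True

def send_marketing_sms(phone: str, body: str) -> None:
    if not can_message(phone):
        return  # Skip silently; never text without consent.
    send_sms(phone, body)
```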
    Moving on, how can your brand implement SMS messaging? Here are some use cases for automated text messaging for business to help you get started.
     
    5 Common Use Cases for Automated Text Messaging for Business
    From the many ways in which brands can tap SMS marketing automation, below are a few use cases:
    1. Reminders and Confirmations
    You could send personalized reminders and notifications to customers in the form of an informative automated text message, be it for bill payments, restaurant bookings, or any other appointments.
    Example: “Hi [Customer Name], here’s a reminder of your appointment with Dr. XYZ at 2 PM tomorrow. Reply 1 to confirm, 2 to reschedule, and 3 to cancel.”
    2. Order Confirmations
    SMS marketing automation can be used to send tailored order confirmations and update customers on shipping and delivery. Manually doing this may be cumbersome and lead to errors, making it impossible for brands to operate at scale.
    Text messaging automation software can solve this problem by automatically sending shipping and delivery updates to customers to keep them informed of the latest activity. Triggers can be scheduled depending on the movement of the shipment.
    3. Abandoned Cart Reminders
    An often underrated use of automated text messages for business is nudging customers about items they left in their cart. Brands can incentivize customers to complete their purchase by sending them text messages with specific deals on items left in the cart. The result? Higher potential conversions!
    4. Promotional Offers
    Sending customized offers to customers via SMS marketing automation can be another way to boost conversions. Brands must ensure a crisp message followed by a clear CTA to relevant product pages. In this case, automation can do what is not manually possible. With the use of customer behavioral data, the right customers can be targeted with their preferred products.
    5. Customer Support and Feedback
    While your customer support may not be available 24/7, some customer queries can be handled automatically using automated text messages for business.
    For instance, your brand could answer frequently asked questions. SMS marketing automation can also be used for gathering quick feedback from customers who have recently interacted with your brand by simply including a link in the message.
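    One way to sketch this kind of automated support reply is a simple keyword lookup: match a few common questions, fall back to a human follow-up, and send a survey link after an interaction. The keywords, replies, and links below are illustrative only, not a specific product's behavior.

```python
def send_sms(phone: str, body: str) -> None:
    """Hypothetical SMS provider call; replace with your vendor's SDK."""
    print(f"SMS to {phone}: {body}")

FAQ_REPLIES = {
    "HOURS": "We're open Mon-Sat, 9am-6pm.",
    "RETURNS": "Returns are free within 30 days: https://example.com/returns",
    "TRACK": "Track your order here: https://example.com/track",
}

def auto_reply(inbound_text: str) -> str:
    """Return a canned answer for known keywords, else hand off to support."""
    keyword = inbound_text.strip().upper()
    return FAQ_REPLIES.get(
        keyword,
        "Thanks for reaching out! A team member will reply shortly.",
    )

def request_feedback(phone: str) -> None:
    """Post-interaction nudge asking for quick feedback via a short link."""
    send_sms(phone, "How did we do? Tell us in 30 seconds: https://example.com/survey")

# Example: an inbound "returns" question gets an instant answer.
send_sms("+15550100", auto_reply("returns"))
```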
     
    How to Set Up Automated Text Messages for Business

    Setting up SMS marketing automation is easy and can be accomplished in just 5 steps, as listed below.
    1. Sign up for an automated text messaging service
    To begin your automated text messaging campaign, you would first need to select a platform that meets your requirements. It would be wise to pick a service that allows you to automate end-to-end workflows and empower your team.
    2. Upload recipients’ contact details
    Next, upload the contact details of those who have opted to receive your texts. You can either drag your contact file onto the page or browse for it to upload it. Remember, just having the recipients’ phone numbers doesn’t mean you can start sending them SMS messages right away. They need to explicitly give you their consent first.
    3. Create segmented lists
    Create segmented groups of recipients based on criteria such as demographics, customer type, and so on. This can help your brand send automated text messages to relevant audiences and raise the chances of engagement.
    4. Compose messages
    This step can be highly crucial as the content of your SMS can make or break your connection with your customer. Leveraging data can help with hyper-personalizing your communication. Typically, a crisp message with a relevant CTA could boost your click-through rates.
    5. Schedule, test, and deploy
    Once your text messages are ready, it’s time to set up a schedule to send them. Some messages may be on specific recurring dates like birthdays, while some may be triggered by certain customer actions like cart abandonment. Finally, don’t forget to test your SMS marketing automation before you deploy it!
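    Pulling these steps together, the sketch below segments an opted-in contact list, picks a message per segment, and supports a dry-run flag so a campaign can be tested before it is deployed. The segment names, contact fields, and send helper are assumptions for illustration, not any particular platform's data model.

```python
def send_sms(phone: str, body: str) -> None:
    """Hypothetical SMS provider call; replace with your vendor's SDK."""
    print(f"SMS to {phone}: {body}")

# Illustrative contact list with consent flags and segment labels.
contacts = [
    {"phone": "+15550100", "opted_in": True, "segment": "vip"},
    {"phone": "+15550101", "opted_in": True, "segment": "new"},
    {"phone": "+15550102", "opted_in": False, "segment": "vip"},
]

MESSAGES = {
    "vip": "Early access starts today - shop the sale before anyone else.",
    "new": "Welcome! Here's 10% off your first order with code WELCOME10.",
}

def run_campaign(segment: str, dry_run: bool = True) -> None:
    """Send the segment's message to every opted-in contact in that segment."""
    body = MESSAGES[segment]
    for contact in contacts:
        if contact["segment"] != segment or not contact["opted_in"]:
            continue
        if dry_run:
            print(f"[TEST] would send to {contact['phone']}: {body}")
        else:
            send_sms(contact["phone"], body)

# Test the campaign first, then deploy with dry_run=False.
run_campaign("vip", dry_run=True)
```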
     
    5 Automated Text Message Examples and Templates for Campaign Inspiration
    Now that you know how to set up automated text messages for business, let’s get some much-needed inspiration for creating your own successful SMS marketing campaigns.
    1. Lulus | Promoting Offers and Deals

    Source: https://smsarchives.com/messages/lulus-text-message-marketing-example-12-31-2021/
    Notice how women’s wear brand Lulus strategically uses automated SMS marketing to remind customers of their ongoing holiday sale, with a link to finish the purchase.
    In this case, automation helps deliver personalized deals to customers who may have earlier browsed through the brand’s website. For those who dropped off the website due to the absence of offers, this SMS can work as a smart nudge to complete that purchase.
    2. Old Navy | Wishing Customers on Special Days

    Source: https://smsarchives.com/messages/old-navy-text-message-marketing-example-05-09-2021/
    In the example above, Old Navy not only wishes its customers Happy Mother’s Day, but also goes a step further in offering deals on women’s wear.
    In this case, an automated text message could be used to narrow the target audience to women who are or could be mothers, and to send them wishes on their special days.
    3. Gaspar | Appointment Confirmation and Reminders

    Source: https://support.opentable.com/servlet/rtaImage?eid=ka0UQ0000003ogD&feoid=00N0c00000Ay3y5&refid=0EMDn000001hK7V
    A practical use of automated text messages for business is to confirm appointment reservations and get a response from the customer if they are willing to make changes. To elevate the experience further, the brand could add options to reschedule or cancel the reservation on the SMS itself, using automated text messaging.
    4. Amazon | Shipping and Tracking

    Source: https://www.wonderment.com/hubfs/Wonderment_September2021/image/amazon_sms_shipping_alerts_w720.jpg
    This template ticks all the boxes for a well-crafted automated text message for business. Amazon informs the customer of the product name, expected date, and order status, along with the tracking link. These kinds of informative messages tend to be super useful to customers.
    5. The Perfect Jean | Retargeting via Cart Abandonment Messages

    Source
    This automated text message example can serve as a model template for cart abandonment notifications. The brand reminds the customer of the items left in the cart and encourages the purchase by offering a custom discount code and a link to complete the purchase.
     
    Top 3 Automated Text Message Services for Businesses
    SMS marketing automation can be simple to set up with the right tools. Here are some leading services that can help you send automated text messages to your customer base.
    1. MoEngage

    MoEngage’s SMS automation software can make targeted SMS campaigns seem like a breeze with ready-to-use campaign ideas, hyper-personalized messaging, and 360-degree customer views.
    Standout SMS Automation Feature: MoEngage’s automated text message marketing service stands out for its ability to integrate SMS into the overall customer journey and deliver insight-led, revenue-driven campaigns.
    How Pricing Works: Schedule a demo to know which of the two pricing plans works best for your business.
    2. Textedly

    Textedly offers easy-to-use automated text message software for real-time conversations with your customers. However, it lags behind in providing an omnichannel brand experience.
    Standout SMS Automation Feature: The service has an intuitive platform and also a shared team inbox to ensure timely automated responses for instant customer support.
    How Pricing Works: While the service offers a free plan with limited features, the basic plan starts at $26 a month and pricing varies for other advanced plans.
    3. Sender

    Sender enables SMS automation campaigns for bulk, personalized messages to customers.
    Standout SMS Automation Feature: Sender is an automated SMS software that offers affordable, easy-to-use templates for automating text messaging campaigns. However, it is limited to email and text message services, and does not offer integrated engagement solutions as part of the overall customer experience.
    How Pricing Works: Sender offers a free plan, with paid plans starting at $10 a month.
     
    SMS Marketing Automation Strategies That Improve Campaign Performance
    Sure, you may be excited to launch your own automated SMS campaign. But hold on! There’s so much more you can get from automated text messaging, with some practical SMS marketing automation strategies listed below.
    1. Make your automated text message conversational and interactive
    Customers love it when their brands are responsive to their needs! Using automated SMS services, you can make your text messages conversational. For instance, you can ask a series of questions to help customers reach informed decisions regarding your product or service.
    2. Always ask for consent
    Before blasting automated text messages to your customers, it’s best to get their consent. Clear opt-in and opt-out options are necessary not only from a legal standpoint but also from the transparency angle.
    3. Segment and personalize for your customer
    Nobody likes a random brand message that could be sent to just about anyone. You can use customer demographic and behavioral data to segment your customers into different groups. Armed with better insights, you can send more targeted and personalized text messages to your customers.
    4. Grow your subscriber list with incentives
    Customers would be happy to consent to receiving your brand’s messages if they are offered something in return. Weaving smart opt-in messages with exclusive promotional deals and other perks can grow your subscriber base.
    5. Review the little things
    A text message is one of the quickest ways to reach a customer. When done right, it can work wonders for your brand! But before you hurry to send out texts, remember to include crucial elements like a crisp main message, a valuable offer, a strong CTA, and a link to access the relevant offers or information. Finally, don’t forget to do automated SMS testing before sending the SMS.

     
    Enhance Your Outreach with Automated SMS Marketing from MoEngage
    Brands can use the small but mighty SMS in many ways to deepen customer engagement. However, it is important to get subscriber consent and use other best practices to make the most of automated SMS marketing.
    MoEngage’s SMS marketing platform can help you have smoother interactions with your customers guided by detailed analytics, as part of a seamless omnichannel journey.
    Get a personalized demo to know how MoEngage can help you tap the massive potential of automated text messaging.
    The post Automated Text Messages for Business: A Marketer’s Guide appeared first on MoEngage.
    #automated #text #messages #business #marketers
    Automated Text Messages for Business: A Marketer’s Guide
    Reading Time: 9 minutes Did you know SMS open rates are as high as 98%, with 45% replying to branded SMS marketing? As texting continues to witness a steady rise, your brand could be missing out if you’re not leveraging automated text messaging for business! Text messaging is when your brand communicates with customers via SMS. These messages may be automated, that is, scheduled to be sent at opportune times. Interested in sending an automated text message for business? Keep reading.   What is SMS Marketing Automation? SMS marketing automation is the process of automatically sending text messages to recipients. It enables brands to send messages when certain trigger conditions are met. For instance, sending messages to a customer to confirm a purchase or remind them to complete a purchase when they have left items in the cart. What are the main benefits of automated SMS marketing? An automated text message for business can unlock many benefits, some of which are listed below: Saving time: Automated text message marketing saves time for core tasks rather than dedicating time for manual text responses for order confirmations, abandoned cart reminders, etc. Scalability: Using SMS marketing automation, brands can look to scale their SMS campaigns to a large customer base. Strategic communication: With SMS marketing automation, brands can deliver larger automated SMS drip campaigns tailored to specific target groups. Greater engagement: 90% of SMS messages are read within 3 minutes of delivery. With automated SMS marketing, brands can send timely responses and elevate customer experience. Higher open rates: Automated text messages typically have higher open rates than emails, making them a more favorable medium for reaching customers. Is sending automated text messages legal for businesses? Businesses can send automated text messages to customers as long as they follow the rules and regulations of the state where they are being sent. In the United States, automated text marketing requires following the Telephone Consumer Protection Act, which requires brands to get written consent before sending messages to customers. The new TCPA rules also state that brands need to honor SMS opt-out requests within 10 business days. Additionally, the CAN-SPAM Act sets rules for commercial communications. Brands need to ensure that recipients have opted for an automated text message service, as spam messages can result in fines. Moving on, how can your brand implement SMS messaging? Here are some use cases for automated text messaging for business to help you get started.   5 Common Use Cases for Automated Text Messaging for Business From the many ways in which brands can tap SMS marketing automation, below are a few use cases: 1. Reminders and Confirmations You could send personalized reminders and notifications to customers in the form of an informative automated text message, be it for bill payments, restaurant bookings, or any other appointments. Example: “Hi, here’s a reminder of your appointment with Dr. XYZ at 2 PM tomorrow. Reply 1 to confirm, 2 to reschedule, and 3 to cancel.” 2. Order Confirmations SMS marketing automation can be used to send tailored order confirmations and update customers on shipping and delivery. Manually doing this may be cumbersome and lead to errors, making it impossible for brands to operate at scale. Text messaging automation software can solve this problem by automatically sending shipping and delivery updates to customers to keep them informed of the latest activity. 
Triggers can be scheduled depending on the movement of the shipment. 3. Abandoned Cart Reminders An often underrated use of automated text messages for business is nudging customers about items they left in their cart. Brands can incentivize customers to complete their purchase by sending them text messages with specific deals on items left in the cart. The result? Higher potential conversions! 4. Promotional Offers Sending customized offers to customers via SMS marketing automation can be another way to boost conversions. Brands must ensure a crisp message followed by a clear CTA to relevant product pages. In this case, automation can do what is not manually possible. With the use of customer behavioral data, the right customers can be targeted with their preferred products. 5. Customer Support and Feedback While your customer support may not be available 24/7, some customer queries can be handled automatically using automated text messages for business. For instance, your brand could answer frequently asked questions. SMS marketing automation can also be used for gathering quick feedback from customers who have recently interacted with your brand by simply including a link in the message.   How to Set Up Automated Text Messages for Business Setting up SMS marketing automation is easy and can be accomplished in just 5 steps, as listed below. 1. Sign up for an automated text messaging service To begin your automated text messaging campaign, you would first need to select a platform that meets your requirements. It would be wise to pick a service that allows you to automate end-to-end workflows and empower your team. 2. Upload recipients’ contact details Next, upload the contact details of those who have opted to receive your texts. You can either drag your contact file onto the page or browse for it to upload it. Remember, just having the recipients’ phone numbers doesn’t mean you can start sending them SMS messages right away. They need to explicitly give you their consent first. 3. Create segmented lists Create segmented groups of recipients based on criteria such as demographics, customer type, and so on. This can help your brand send automated text messages to relevant audiences and raise the chances of engagement. 4. Compose messages This step can be highly crucial as the content of your SMS can make or break your connection with your customer. Leveraging data can help with hyper-personalizing your communication. Typically, a crisp message with a relevant CTA could boost your click-through rates. 5. Schedule, test, and deploy Once your text messages are ready, it’s time to set up a schedule to send them. Some messages may be on specific recurring dates like birthdays, while some may be triggered by certain customer actions like cart abandonment. Finally, don’t forget to test your SMS marketing automation before you deploy it!   5 Automated Text Message Examples and Templates for Campaign Inspiration Now that you know how to set up automated text messages for business, let’s get some much-needed inspiration for creating your own successful SMS marketing campaigns. 1. Lulus | Promoting Offers and Deals Source: / Notice how women’s wear brand Lulus strategically uses automated SMS marketing to remind customers of their ongoing holiday sale, with a link to finish the purchase. In this case, automation helps deliver personalized deals to customers who may have earlier browsed through the brand’s website. 
For those who dropped off the website due to the absence of offers, this SMS can work as a smart nudge to complete that purchase. 2. Old Navy | Wishing Customers on Special Days Source: / In the example above, Old Navy not only wishes its customers Happy Mother’s Day, but also goes a step further in offering deals on women’s wear. In this case, an automated text message could be used to narrow the target audience to women who are or could be mothers. This also includes wishing them on their special days. 3. Gaspar | Appointment Confirmation and Reminders Source: A practical use of automated text messages for business is to confirm appointment reservations and get a response from the customer if they are willing to make changes. To elevate the experience further, the brand could add options to reschedule or cancel the reservation on the SMS itself, using automated text messaging. 4. Amazon | Shipping and Tracking Source: This template ticks all the boxes for a well-crafted automated text message for business. Amazon informs the customer of the product name, expected date, and order status, along with the tracking link. These kinds of informative messages tend to be super useful to customers. 5. The Perfect Jean | Retargeting via Cart Abandonment Messages Source This automated text message example can serve as a model template for cart abandonment notifications. The brand reminds the customer of the items left in the cart and encourages the purchase by offering a custom discount code and a link to complete the purchase.   Top 3 Automated Text Message Services for Businesses SMS marketing automation can be simple to set up with the right tools. Here are some leading services that can help you send automated text messages to your customer base. 1. MoEngage MoEngage’s SMS automation software can make targeted SMS campaigns seem like a breeze with ready-to-use campaign ideas, hyper-personalized messaging, and 360-degree customer views. Standout SMS Automation Feature: MoEngage’s automated text message marketing service stands out for its ability to integrate SMS into the overall customer journey and deliver insight-led, revenue-driven campaigns. How Pricing Works: Schedule a demo to know which of the two pricing plans works best for your business. 2. Textedly Textedly offers an easy-to-use automated text message software for real-time conversations with your customers. However, it lags behind in providing an omnichannel brand experience. Standout SMS Automation Feature: The service has an intuitive platform and also a shared team inbox to ensure timely automated responses for instant customer support. How Pricing Works: While the service offers a free plan with limited features, the basic plan starts at a month and varies for other advanced plans. 3. Sender Sender enables SMS automation campaigns for bulk, personalized messages to customers. Standout SMS Automation Feature: Sender is an automated SMS software that offers affordable, easy-to-use templates for automating text messaging campaigns. However, it is limited to email and text message services, and does not offer integrated engagement solutions as part of the overall customer experience. How Pricing Works: Sender offers a free plan, with paid plans starting at a month.   SMS Marketing Automation Strategies That Improve Campaign Performance Sure, you may be excited to launch your own automated SMS campaign. But hold on! 
There’s so much more you can get from automated text messaging, with some practical SMS marketing automation strategies listed below. 1. Make your automated text message conversational and interactive Customers love it when their brands are responsive to their needs! Using automated SMS services, you can make your text messages conversational. For instance, you can ask a series of questions to help customers reach informed decisions regarding your product or service. 2. Always ask for consent Before blasting automated text messages to your customers, it’s best to get their consent. Clear opt-in and opt-out options are necessary not only from a legal standpoint but also from the transparency angle. 3. Segment and personalize for your customer Nobody likes a random brand message that could be sent to just about anyone. You can use customer demographic and behavioral data to segment your customers into different groups. Armed with better insights, you can send more targeted and personalized text messages to your customers. 4. Grow your subscriber list with incentives Customers would be happy to consent to receiving your brand’s messages if they are offered something in return. Weaving smart opt-in messages with exclusive promotional deals and other perks can grow your subscriber base. 5. Review the little things A text message is one of the quickest ways to reach a customer. When done right, it can work wonders for your brand! But before you hurry to send out texts, remember to include crucial elements like a crisp main message, a valuable offer, a strong CTA, and a link to access the relevant offers or information. Finally, don’t forget to do automated SMS testing before sending the SMS.   Enhance Your Outreach with Automated SMS Marketing from MoEngage Brands can use the small but mighty SMS in many ways to deepen customer engagement. However, it is important to get subscriber consent and use other best practices to make the most of automated SMS marketing. MoEngage’s SMS marketing platform can help you have smoother interactions with your customers guided by detailed analytics, as part of a seamless omnichannel journey. Get a personalized demo to know how MoEngage can help you tap the massive potential of automated text messaging. The post Automated Text Messages for Business: A Marketer’s Guide appeared first on MoEngage. #automated #text #messages #business #marketers
    WWW.MOENGAGE.COM
    Automated Text Messages for Business: A Marketer’s Guide
    Reading Time: 9 minutes Did you know SMS open rates are as high as 98%, with 45% replying to branded SMS marketing? As texting continues to witness a steady rise, your brand could be missing out if you’re not leveraging automated text messaging for business! Text messaging is when your brand communicates with customers via SMS. These messages may be automated, that is, scheduled to be sent at opportune times. Interested in sending an automated text message for business? Keep reading.   What is SMS Marketing Automation? SMS marketing automation is the process of automatically sending text messages to recipients. It enables brands to send messages when certain trigger conditions are met. For instance, sending messages to a customer to confirm a purchase or remind them to complete a purchase when they have left items in the cart. What are the main benefits of automated SMS marketing? An automated text message for business can unlock many benefits, some of which are listed below: Saving time: Automated text message marketing saves time for core tasks rather than dedicating time for manual text responses for order confirmations, abandoned cart reminders, etc. Scalability: Using SMS marketing automation, brands can look to scale their SMS campaigns to a large customer base. Strategic communication: With SMS marketing automation, brands can deliver larger automated SMS drip campaigns tailored to specific target groups. Greater engagement: 90% of SMS messages are read within 3 minutes of delivery. With automated SMS marketing, brands can send timely responses and elevate customer experience. Higher open rates: Automated text messages typically have higher open rates than emails, making them a more favorable medium for reaching customers. Is sending automated text messages legal for businesses? Businesses can send automated text messages to customers as long as they follow the rules and regulations of the state where they are being sent. In the United States, automated text marketing requires following the Telephone Consumer Protection Act (TCPA), which requires brands to get written consent before sending messages to customers. The new TCPA rules also state that brands need to honor SMS opt-out requests within 10 business days. Additionally, the CAN-SPAM Act sets rules for commercial communications. Brands need to ensure that recipients have opted for an automated text message service, as spam messages can result in fines. Moving on, how can your brand implement SMS messaging? Here are some use cases for automated text messaging for business to help you get started.   5 Common Use Cases for Automated Text Messaging for Business From the many ways in which brands can tap SMS marketing automation, below are a few use cases: 1. Reminders and Confirmations You could send personalized reminders and notifications to customers in the form of an informative automated text message, be it for bill payments, restaurant bookings, or any other appointments. Example: “Hi [Customer Name], here’s a reminder of your appointment with Dr. XYZ at 2 PM tomorrow. Reply 1 to confirm, 2 to reschedule, and 3 to cancel.” 2. Order Confirmations SMS marketing automation can be used to send tailored order confirmations and update customers on shipping and delivery. Manually doing this may be cumbersome and lead to errors, making it impossible for brands to operate at scale. 
Text messaging automation software can solve this problem by automatically sending shipping and delivery updates to customers to keep them informed of the latest activity. Triggers can be scheduled depending on the movement of the shipment. 3. Abandoned Cart Reminders An often underrated use of automated text messages for business is nudging customers about items they left in their cart. Brands can incentivize customers to complete their purchase by sending them text messages with specific deals on items left in the cart. The result? Higher potential conversions! 4. Promotional Offers Sending customized offers to customers via SMS marketing automation can be another way to boost conversions. Brands must ensure a crisp message followed by a clear CTA to relevant product pages. In this case, automation can do what is not manually possible. With the use of customer behavioral data, the right customers can be targeted with their preferred products. 5. Customer Support and Feedback While your customer support may not be available 24/7, some customer queries can be handled automatically using automated text messages for business. For instance, your brand could answer frequently asked questions (FAQs). SMS marketing automation can also be used for gathering quick feedback from customers who have recently interacted with your brand by simply including a link in the message.   How to Set Up Automated Text Messages for Business Setting up SMS marketing automation is easy and can be accomplished in just 5 steps, as listed below. 1. Sign up for an automated text messaging service To begin your automated text messaging campaign, you would first need to select a platform that meets your requirements. It would be wise to pick a service that allows you to automate end-to-end workflows and empower your team. 2. Upload recipients’ contact details Next, upload the contact details of those who have opted to receive your texts. You can either drag your contact file onto the page or browse for it to upload it. Remember, just having the recipients’ phone numbers doesn’t mean you can start sending them SMS messages right away. They need to explicitly give you their consent first. 3. Create segmented lists Create segmented groups of recipients based on criteria such as demographics, customer type, and so on. This can help your brand send automated text messages to relevant audiences and raise the chances of engagement. 4. Compose messages This step can be highly crucial as the content of your SMS can make or break your connection with your customer. Leveraging data can help with hyper-personalizing your communication. Typically, a crisp message with a relevant CTA could boost your click-through rates. 5. Schedule, test, and deploy Once your text messages are ready, it’s time to set up a schedule to send them. Some messages may be on specific recurring dates like birthdays, while some may be triggered by certain customer actions like cart abandonment. Finally, don’t forget to test your SMS marketing automation before you deploy it!   5 Automated Text Message Examples and Templates for Campaign Inspiration Now that you know how to set up automated text messages for business, let’s get some much-needed inspiration for creating your own successful SMS marketing campaigns. 1. 
    5 Automated Text Message Examples and Templates for Campaign Inspiration

    Now that you know how to set up automated text messages for business, let’s get some much-needed inspiration for creating your own successful SMS marketing campaigns.

    1. Lulus | Promoting Offers and Deals

    Source: https://smsarchives.com/messages/lulus-text-message-marketing-example-12-31-2021/

    Notice how women’s wear brand Lulus strategically uses automated SMS marketing to remind customers of its ongoing holiday sale, with a link to finish the purchase. In this case, automation helps deliver personalized deals to customers who may have browsed the brand’s website earlier. For those who dropped off the website due to the absence of offers, this SMS can work as a smart nudge to complete that purchase.

    2. Old Navy | Wishing Customers on Special Days

    Source: https://smsarchives.com/messages/old-navy-text-message-marketing-example-05-09-2021/

    In the example above, Old Navy not only wishes its customers a Happy Mother’s Day, but also goes a step further by offering deals on women’s wear. Here, an automated text message could be used to narrow the target audience to women who are or could be mothers, and to greet them on their special day.

    3. Gaspar | Appointment Confirmation and Reminders

    Source: https://support.opentable.com/servlet/rtaImage?eid=ka0UQ0000003ogD&feoid=00N0c00000Ay3y5&refid=0EMDn000001hK7V

    A practical use of automated text messages for business is to confirm appointment reservations and get a response from the customer if they want to make changes. To elevate the experience further, the brand could add options to reschedule or cancel the reservation within the SMS itself, using automated text messaging.

    4. Amazon | Shipping and Tracking

    Source: https://www.wonderment.com/hubfs/Wonderment_September2021/image/amazon_sms_shipping_alerts_w720.jpg

    This template ticks all the boxes for a well-crafted automated text message for business. Amazon informs the customer of the product name, expected date, and order status, along with the tracking link. These kinds of informative messages tend to be highly useful to customers.

    5. The Perfect Jean | Retargeting via Cart Abandonment Messages

    Source

    This automated text message example can serve as a model template for cart abandonment notifications. The brand reminds the customer of the items left in the cart and encourages the purchase by offering a custom discount code and a link to complete the purchase.

    Top 3 Automated Text Message Services for Businesses

    SMS marketing automation can be simple to set up with the right tools. Here are some leading services that can help you send automated text messages to your customer base.

    1. MoEngage

    MoEngage’s SMS automation software can make targeted SMS campaigns feel like a breeze with ready-to-use campaign ideas, hyper-personalized messaging, and 360-degree customer views.

    Standout SMS Automation Feature: MoEngage’s automated text message marketing service stands out for its ability to integrate SMS into the overall customer journey and deliver insight-led, revenue-driven campaigns.

    How Pricing Works: Schedule a demo to find out which of the two pricing plans works best for your business.

    2. Textedly

    Textedly offers easy-to-use automated text message software for real-time conversations with your customers. However, it lags behind in providing an omnichannel brand experience.

    Standout SMS Automation Feature: The service has an intuitive platform and a shared team inbox to ensure timely automated responses for instant customer support.

    How Pricing Works: While the service offers a free plan with limited features, the basic plan starts at $26 a month and varies for more advanced plans.
    3. Sender

    Sender enables SMS automation campaigns for bulk, personalized messages to customers.

    Standout SMS Automation Feature: Sender is an automated SMS software that offers affordable, easy-to-use templates for automating text messaging campaigns. However, it is limited to email and text message services, and does not offer integrated engagement solutions as part of the overall customer experience.

    How Pricing Works: Sender offers a free plan, with paid plans starting at $10 a month.

    SMS Marketing Automation Strategies That Improve Campaign Performance

    Sure, you may be excited to launch your own automated SMS campaign. But hold on! There’s much more you can get from automated text messaging with the practical SMS marketing automation strategies listed below.

    1. Make your automated text messages conversational and interactive

    Customers love it when brands are responsive to their needs. Using automated SMS services, you can make your text messages conversational; for instance, you can ask a series of questions to help customers reach informed decisions about your product or service.

    2. Always ask for consent

    Before sending automated text messages to your customers, you need their consent. Clear opt-in and opt-out options are necessary not only from a legal standpoint but also for transparency.

    3. Segment and personalize for your customer

    Nobody likes a generic brand message that could have been sent to just about anyone. You can use customer demographic and behavioral data to segment your customers into different groups. Armed with better insights, you can send more targeted and personalized text messages.

    4. Grow your subscriber list with incentives

    Customers are happier to consent to receiving your brand’s messages if they are offered something in return. Weaving smart opt-in messages together with exclusive promotional deals and other perks can grow your subscriber base.

    5. Review the little things

    A text message is one of the quickest ways to reach a customer, and when done right it can work wonders for your brand. But before you hurry to send out texts, remember to include the crucial elements: a crisp main message, a valuable offer, a strong CTA, and a link to the relevant offer or information. Finally, don’t forget to do automated SMS testing before sending.

    Enhance Your Outreach with Automated SMS Marketing from MoEngage

    Brands can use the small but mighty SMS in many ways to deepen customer engagement. However, it is important to get subscriber consent and follow other best practices to make the most of automated SMS marketing. MoEngage’s SMS marketing platform can help you have smoother interactions with your customers, guided by detailed analytics, as part of a seamless omnichannel journey. Get a personalized demo to learn how MoEngage can help you tap the massive potential of automated text messaging.

    The post Automated Text Messages for Business: A Marketer’s Guide appeared first on MoEngage.
  • Want to lower your dementia risk? Start by stressing less

    The probability of any American having dementia in their lifetime may be far greater than previously thought. For instance, a 2025 study that tracked a large sample of American adults across more than three decades found that their average likelihood of developing dementia between ages 55 and 95 was 42%, and that figure was even higher among women, Black adults and those with genetic risk.

    Now, a great deal of attention is being paid to how to stave off cognitive decline in the aging American population. But what is often missing from this conversation is the role that chronic stress can play in how well people age from a cognitive standpoint, as well as everybody’s risk for dementia.

    We are professors at Penn State in the Center for Healthy Aging, with expertise in health psychology and neuropsychology. We study the pathways by which chronic psychological stress influences the risk of dementia and how it influences the ability to stay healthy as people age.

    Recent research shows that Americans who are currently middle-aged or older report experiencing more frequent stressful events than previous generations. A key driver behind this increase appears to be rising economic and job insecurity, especially in the wake of the 2007-2009 Great Recession and ongoing shifts in the labor market. Many people stay in the workforce longer due to financial necessity, as Americans are living longer and face greater challenges covering basic expenses in later life.

    Therefore, it may be more important than ever to understand the pathways by which stress influences cognitive aging.

    Social isolation and stress

    Although everyone experiences some stress in daily life, some people experience stress that is more intense, persistent or prolonged. It is this relatively chronic stress that is most consistently linked with poorer health.

    In a recent review paper, our team summarized how chronic stress is a hidden but powerful factor underlying cognitive aging, or the speed at which your cognitive performance slows down with age.

    It is hard to overstate the impact of stress on your cognitive health as you age. This is in part because your psychological, behavioral and biological responses to everyday stressful events are closely intertwined, and each can amplify and interact with the other.

    For instance, living alone can be stressful—particularly for older adults—and being isolated makes it more difficult to live a healthy lifestyle, as well as to detect and get help for signs of cognitive decline.

    Moreover, stressful experiences—and your reactions to them—can make it harder to sleep well and to engage in other healthy behaviors, like getting enough exercise and maintaining a healthy diet. In turn, insufficient sleep and a lack of physical activity can make it harder to cope with stressful experiences.

    Stress is often missing from dementia prevention efforts

    A robust body of research highlights the importance of at least 14 different factors that relate to your risk of Alzheimer’s disease, a common and devastating form of dementia, and of other forms of dementia. Although some of these factors may be outside of your control, such as diabetes or depression, many of them involve things that people do, such as physical activity, healthy eating and social engagement.

    What is less well-recognized is that chronic stress is intimately interwoven with all of these factors that relate to dementia risk. Our work and research by others that we reviewed in our recent paper demonstrate that chronic stress can affect brain function and physiology, influence mood and make it harder to maintain healthy habits. Yet, dementia prevention efforts rarely address stress.

    Avoiding stressful events and difficult life circumstances is typically not an option.

    Where and how you live and work plays a major role in how much stress you experience. For example, people with lower incomes, less education or those living in disadvantaged neighborhoods often face more frequent stress and have fewer forms of support—such as nearby clinics, access to healthy food, reliable transportation or safe places to exercise or socialize—to help them manage the challenges of aging. As shown in recent work on brain health in rural and underserved communities, these conditions can shape whether people have the chance to stay healthy as they age.

    Over time, the effects of stress tend to build up, wearing down the body’s systems and shaping long-term emotional and social habits.

    Lifestyle changes to manage stress and lessen dementia risk

    The good news is that there are multiple things that can be done to slow or prevent dementia, and our review suggests that these can be enhanced if the role of stress is better understood.

    Whether you are a young adult, in midlife or an older adult, it is not too early or too late to address the implications of stress on brain health and aging. Here are a few ways you can take direct action to help manage your level of stress:

    Follow lifestyle behaviors that can improve healthy aging. These include following a healthy diet, engaging in physical activity and getting enough sleep. Even small changes in these domains can make a big difference.

    Prioritize your mental health and well-being to the extent you can. Things as simple as talking about your worries, asking for support from friends and family and going outside regularly can be immensely valuable.

    If your doctor says that you or someone you care about should follow a new health care regimen, or suggests there are signs of cognitive impairment, ask them what support or advice they have for managing related stress.

    If you or a loved one feel socially isolated, consider how small shifts could make a difference. For instance, research suggests that adding just one extra interaction a day—even if it’s a text message or a brief phone call—can be helpful, and that even interactions with people you don’t know well, such as at a coffee shop or doctor’s office, can have meaningful benefits.

    Walkable neighborhoods, lifelong learning

    A 2025 study identified stress as one of 17 overlapping factors that affect the odds of developing any brain disease, including stroke, late-life depression and dementia. This work suggests that addressing stress and overlapping issues such as loneliness may have additional health benefits as well.

    However, not all individuals or families are able to make big changes on their own. Research suggests that community-level and workplace interventions can reduce the risk of dementia. For example, safe and walkable neighborhoods and opportunities for social connection and lifelong learning—such as through community classes and events—have the potential to reduce stress and promote brain health.

    Importantly, researchers have estimated that even a modest delay in the onset of Alzheimer’s disease would save hundreds of thousands of dollars for every American affected. Thus, providing incentives to companies that offer stress management resources could ultimately save money as well as help people age more healthfully.

    In addition, stress related to the stigma around mental health and aging can discourage people from seeking support that would benefit them. Even just thinking about your risk of dementia can be stressful in itself. Things can be done about this, too. For instance, normalizing the use of hearing aids and integrating reports of perceived memory and mental health issues into routine primary care and workplace wellness programs could encourage people to engage with preventive services earlier.

    Although research on potential biomedical treatments is ongoing and important, there is currently no cure for Alzheimer’s disease. However, if interventions aimed at reducing stress were prioritized in guidelines for dementia prevention, the benefits could be far-reaching, resulting in both delayed disease onset and improved quality of life for millions of people.

    Jennifer E. Graham-Engeland is a professor of biobehavioral health at Penn State.

    Martin J. Sliwinski is a professor of human development and family studies at Penn State.

    This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • ‘A Minecraft Movie’: Wētā FX Helps Adapt an Iconic Game One Block at a Time

    Adapting the iconic, block-based design aesthetic of Mojang’s beloved Minecraft videogame into the hit feature film comedy adventure, A Minecraft Movie, posed an enormous number of hurdles for director Jared Hess and Oscar-winning Production VFX Supervisor Dan Lemmon. Tasked with helping translate the iconic pixelated world into something cinematically engaging, while remaining true to its visual DNA, was Wētā FX, which delivered 450 VFX shots on the film. Two of their key leads on the film were VFX Supervisor Sheldon Stopsack and Animation Supervisor Kevin Estey.
    But the shot count merely scratches the surface of the extensive work the studio performed. Wētā led the design and creation of The Overworld, 64 unique terrains spanning deserts, lush forests, oceans, and mountain ranges, all combined into one continuous environment; these assets were also shared with Digital Domain for their work on the third-act battle. Wētā also handled extensive work on the lava-filled hellscape of The Nether, using Unreal Engine for early representations in previs, scene scouting, and on set during principal photography, before refining the environment during post-production. They also dressed The Nether with lava, fire, and torches, along with atmospherics and particulate like smoke, ash, and embers.

    But wait… there’s more!
    The studio’s Art Department, working closely with Hess, co-created the look and feel of all digital characters in the film. For Malgosha’s henchmen, the Piglins, Wētā designed and created 12 different variants, all with individual characteristics and personalities. They also designed sheep, bees, pandas, zombies, skeletons, and lovable wolf Dennis. Many of these characters were provided to other vendors for their work on the film.
    Needless to say, the studio truly became a “Master Builder” on the show.

    The film is based on the hugely popular game Minecraft, first released by Sweden’s Mojang Studios in 2011 and purchased by Microsoft for $2.5 billion in 2014, which immerses players in a low-res, pixelated “sandbox” simulation where they can use blocks to build entire worlds.

    In a far-ranging interview, Stopsack and Estey shared with AWN a peek into their creative process, from early design exploration to creation of an intricate practical cloak for Malgosha and the use of Unreal Engine for previs, postvis, and real-time onset visualization.
    Dan Sarto: The film is filled with distinct settings and characters sporting various “block” styled features. Can you share some of the work you did on the environments, character design, and character animation?
    Sheldon Stopsack: There’s so much to talk about and, truth be told, if you were to touch on everything, we would probably need to spend the whole day together.
    Kevin Estey: Sheldon and I realized that when we talk about the film, either amongst ourselves or with someone else, we could just keep going, there are so many stories to tell.
    DS: Well, start with The Overworld and The Nether. How did the design process begin? What did you have to work with?
    SS: Visual effects is a tricky business, you know. It's always difficult. Always challenging. However, Minecraft stood out to us as not your usual, quote unquote, standard visual effects project, even though, as you know, there is no standard visual effects project because they're all somehow different. They all come with their own creative ideas, inspirations, and challenges. But Minecraft, right from the get-go, was different, simply by the fact that when you first consider the idea of making such a live-action movie, you instantly ask yourself, “How do we make this work? How do we combine these two inherently very, very different but unique worlds?” That was everyone’s number one question. How do we land this? Where do we land this? And I don't think that any of us really had an answer, including our clients, Dan Lemmon and Jared Hess. Everyone was really open to this journey. That's compelling for us, to get out of our comfort zone. It makes you nervous because there are no real obvious answers.
    KE: Early on, we seemed to thrive off these kinds of scary creative challenges. There were lots of question marks. We had many moments when we were trying to figure out character designs. We had a template from the game, but it was an incredibly vague, low-resolution template. And there were so many ways that we could go. But that design discovery throughout the project was really satisfying. 

    DS: Game adaptations are never simple. There usually isn’t much in the way of story. But with Minecraft, from a visual standpoint, how did you translate low res, block-styled characters into something entertaining that could sustain a 100-minute feature film?
    SS: Everything was a question mark. Using the lava that you see in The Nether as one example, we had beautiful concept art for all our environments, The Overworld and The Nether, but those concepts only really took you this far. They didn’t represent the block shapes or give you a clear answer of like how realistic some of those materials, shapes and structures would be. How organic would we go? All of this needed to be explored. For the lava, we had stylized concept pieces, with block shaped viscosity as it flowed down. But we spent months with our effects team, and Dan and Jared, just riffing on ideas. We came full circle, with the lava ending up being more realistic, a naturally viscous liquid based on real physics. And the same goes with the waterfall that you see in the Overworld. 
    The question is, how far do we take things into the true Minecraft representation of things? How much do we scale back a little bit and ground ourselves in reality, with effects we’re quite comfortable producing as a company? There's always a tradeoff to find that balance of how best to combine what’s been filmed, the practical sets and live-action performances, with effects. Where’s the sweet spot? What's the level of abstraction? What's honest to the game? As much as some call Minecraft a simple game, it isn't simple, right? It's incredibly complex. It's got a set of rules and logic to the world building process within the game that we had to learn, adapt, and honor in many ways.
    When our misfits first arrive and we have these big vistas and establishing shots, when you really look at it, you recognize a lot of the things that we tried to adapt from the game. There are different biomes, like the Badlands, which is very sandstone-y; there's the Woodlands, which is a lush environment with cherry blossom trees; you’ve got the snow biome with big mountains in the background. Our intent was to honor the game.
    KE: I took a big cue from a lot of the early designs, and particularly the approach that Jared liked for the characters and to the design in general, which was maintaining the stylized, blocky aesthetic, but covering them in realistic flesh, fur, things that were going to make them appear as real as possible despite the absolutely unreal designs of their bodies. And so essentially, it was squared skeleton… squarish bones with flesh and realistic fur laid over top. We tried various things, all extremely stylized. The Creepers are a good example. We tried all kinds of ways for them to explode. Sheldon found a great reference for a cat coughing up a hairball. He was nice to censor the worst part of it, but those undulations in the chest and ribcage… Jared spoke of the Creepers being basically tragic characters that only wanted to be loved, to just be close to you. But sadly, whenever they did, they’d explode. So, we experimented with a lot of different motions of how they’d explode.

    DS: Talk about the process of determining how these characters would move. None seem to have remotely realistic proportions in their limbs, bodies, or head size.
    KE: There were a couple things that Jared always seemed to be chasing. One was just something that would make him laugh. Of course, it had to sit within the bounds of how a zombie might move, or a skeleton might move, as we were interpreting the game. But the main thing was just, was it fun and funny? I still remember one of the earliest gags they came up with in mocap sessions, even before I even joined the show, was how the zombies get up after they fall over. It was sort of like a tripod, where its face and feet were planted and its butt shoots up in the air.
    After a lot of experimentation, we came up with basic personality types for each character. There were 12 different types of Piglins. The zombies were essentially like you're coming home from the pub after a few too many pints and you're just trying to get in the door, but you can't find your keys. Loose, slightly inebriated movement. The best movement we found for the skeletons was essentially like an old man with rigid limbs and lack of ligaments that was chasing kids off his lawn. And so, we created this kind of bible of performance types that really helped guide performers on the mocap stage and animators later on.
    SS: A lot of our exploration didn’t stick. But Jared was the expert in all of this. He always came up with some quirky last-minute idea. 
    KE: My favorite from Jared came in the middle of one mocap shoot. He walked up to me and said he had this stupid idea. I said OK, go on. He said, what if Malgosha had these two little pigs next to her, like Catholic altar boys, swinging incense. Can we do that? I talked to our stage manager, and we quickly put together a temporary prop for the incense burners. And we got two performers who just stood there. What are they going to do? Jared said, “Nothing. Just stand there and swing. I think it would look funny.” So, that’s what we did. We dubbed them the Priesty Boys. And they are there throughout the film. That was amazing about Jared. He was always like, let's just try it, see if it works. Otherwise ditch it.

    DS: Tell me about your work on Malgosha. And I also want to discuss your use of Unreal Engine and the previs and postvis work. 
    SS: For Malgosha as a character, our art department did a phenomenal job finding the character design at the concept phase. But it was a collective effort. So many contributors were involved in her making. And I'm not just talking about the digital artists here on our side. It was a joint venture of different people having different explorations and experiments. It started off with the concept work as a foundation, which we mocked up with 3D sketches before building a model. But with Malgosha, we also had the costume department on the production side building this elaborate cloak. Remember, that cloak kind of makes 80, 85% of her appearance. It's almost like a character in itself, the way we utilized it. And the costume department built this beautiful, elaborate, incredibly intricate, practical version of it that we intended to use on set for the performer to wear. It ended up being too impractical because it was too heavy. But it was beautiful. So, while we didn't really use it on set, it gave us something physically to kind of incorporate into our digital version.
    KE: Alan Henry is the motion performer who portrayed her on set and on the mocap stage. I've known him for close to 15 years. I started working with him on The Hobbit films. He was a stunt performer who eventually rolled into doing motion capture with us on The Hobbit. He’s an incredible actor and absolutely hilarious and can adapt to any sort of situation. He’s so improvisational. He came up with an approach to Malgosha very quickly. Added a limp so that she felt decrepit, leaning on the staff, adding her other arm as kind of like a gimp arm that she would point and gesture with.  
    Even though she’s a blocky character, her anatomy is very much a biped, with rounder limbs than the other Piglins. She's got hooves, is somewhat squarish, and her much more bulky mass in the middle was easier to manipulate and move around. Because she would have to battle with Steve in the end, she had to have a level of agility that even some of the Piglins didn't have.

    DS: Did Unreal Engine come into play with her? 
    SS: Unreal was used all the way through the project. Dan Lemmon and his team early on set up their own virtual art department to build representations of the Overworld and the Nether within the context of Unreal. We and Sony Imageworks tried to provide recreations of these environments that were then used within Unreal to previsualize what was happening on set during shooting of principal photography. And that's where our mocap and on-set teams were coming into play. Effects provided what we called the Nudge Cam. It was a system to do real-time tracking using a stereo pair of Basler computer vision cameras that were mounted onto the sides of the principal camera. We provided the live tracking that was then composited in real time with the Unreal Engine content that all the vendors had provided. It was a great way of utilizing Unreal to give the camera operators or DOP, even Jared, a good sense of what we would actually shoot. It gave everyone a little bit of context for the look and feel of what you could actually expect from these scenes. 
    Because we started this journey with Unreal with on-set use in mind, we internally decided, look, let's take this further. Let's take this into post-production as well. What would it take to utilize Unreal for shot creation? And it was used exclusively on the Nether environment. I don’t want to say we used it for matte painting replacement. We used it more for, say, let's build this extended environment in Unreal. Not only use it as a render engine with a reasonably fast turnaround, but also use it for what it's good at: authoring things, quickly changing things, moving columns around, manipulating things, dressing them, lighting them, and rendering them. It became sort of a tool that we used in place of a traditional matte painting for the extended environments.
    KE: Another thing worth mentioning is we were able to utilize it on our mocap stage as well during the two-week shoot with Jared and crew. When we shoot on the mocap stage, we get a very simple sort of gray shaded diagnostic grid. You have your single-color characters that sometimes are textured, but they’re fairly simple without any context of environment. Our special projects team was able to port what we usually see in Giant, the software we use on the mocap stage, into Unreal, which gave us these beautifully lit environments with interactive fire and atmosphere. And Jared and the team could see their movie for the first time in a rough, but still very beautiful rough state. That was invaluable.

    DS: If you had to key on anything, what would you say were the biggest challenges for your teams on the film? You're laughing. I can hear you thinking, “Do we have an hour?”
    KE: Where do you begin? 
    SS: Exactly. It's so hard to really single one out. And I struggle with that every time I'm asked that question.
    KE: I’ll start. I've got a very simple practical answer and then a larger one, something that was new to us, kind of similar to what we were just talking about. The simple practical one is the Piglins’ square feet with no ankles. It was very tough to make them walk realistically. Think of the leg of a chair. How do you make that roll and bank and bend because there is no joint? There are a lot of Piglins walking on surfaces and it was a very difficult conundrum to solve. It took a lot of hard work from our motion edit team and our animation team to get those things walking realistically. You know, it’s doing that simple thing that you don't usually pay attention to. So that was one reasonably big challenge that is often literally buried in the shadows. The bigger one was something that was new to me. We often do a lot of our previs and postvis in-house and then finish the shots. And just because of circumstances and capacity, we did the postvis for the entire final battle, but we ended up sharing the sequence with Digital Domain, who did an amazing job completing some of the battlefield work we had done postvis on. For me personally, I've never experienced not finishing what I started. But it was also really rewarding to see how well the work we had put in was honored by DD when they took it over.
    SS: I think the biggest challenge and the biggest achievement that I'm most proud of is really ending up with something that was well received by the wider audience. Of creating these two worlds, this sort of abstract adaptation of the Minecraft game and combining it with live-action. That was the achievement for me. That was the biggest challenge. We were all nervous from day one. And we continued to be nervous up until the day the movie came out. None of us really knew how it ultimately would be received. The fact that it came together and was so well received is a testament to everyone doing a fantastic job. And that's what I'm incredibly proud of.

    Dan Sarto is Publisher and Editor-in-Chief of Animation World Network.
    #minecraft #movie #wētā #helps #adapt
    ‘A Minecraft Movie’: Wētā FX Helps Adapt an Iconic Game One Block at a Time
    Adapting the iconic, block-based design aesthetic of Mojang’s beloved Minecraft videogame into the hit feature film comedy adventure, The Minecraft Movie, posed an enormous number of hurdles for director Jared Hess and Oscar-winning Production VFX Supervisor Dan Lemmon. Tasked with helping translate the iconic pixelated world into something cinematically engaging, while remaining true to its visual DNA, was Wētā FX, who delivered 450 VFX shots on the film. And two of their key leads on the film were VFX Supervisor Sheldon Stopsack and Animation Supervisor Kevin Estey.  But the shot count merely scratches the surface of the extensive work the studio performed. Wētā led the design and creation of The Overworld, 64 unique terrains spanning deserts, lush forests, oceans, and mountain ranges, all combined into one continuous environment, assets that were also shared with Digital Domain for their work on the 3rd act battle. Wētā also handled extensive work on the lava-filled hellscape of The Nether that involved Unreal Engine for early representations used in previs, scene scouting, and onset during principal photography, before refining the environment during post-production. They also dressed The Nether with lava, fire, and torches, along with atmospherics and particulate like smoke, ash, and embers. But wait… there’s more! The studio’s Art Department, working closely with Hess, co-created the look and feel of all digital characters in the film. For Malgosha’s henchmen, the Piglins, Wētā designed and created 12 different variants, all with individual characteristics and personalities. They also designed sheep, bees, pandas, zombies, skeletons, and lovable wolf Dennis. Many of these characters were provided to other vendors for their work on the film. Needless to say, the studio truly became a “Master Builder” on the show. The film is based on the hugely popular game Minecraft, first released by Sweden’s Mojang Studios in 2011 and purchased by Microsoft for billion in 2014, which immerses players in a low-res, pixelated “sandbox” simulation where they can use blocks to build entire worlds.  Here's the final trailer: In a far-ranging interview, Stopsack and Estey shared with AWN a peek into their creative process, from early design exploration to creation of an intricate practical cloak for Malgosha and the use of Unreal Engine for previs, postvis, and real-time onset visualization. Dan Sarto: The film is filled with distinct settings and characters sporting various “block” styled features. Can you share some of the work you did on the environments, character design, and character animation? Sheldon Stopsack: There's, there's so much to talk about and truth to be told, if you were to touch on everything, we would probably need to spend the whole day together.  Kevin Estey: Sheldon and I realized that when we talk about the film, either amongst ourselves or with someone else, we could just keep going, there are so many stories to tell. DS: Well, start with The Overworld and The Nether. How did the design process begin? What did you have to work with? SS: Visual effects is a tricky business, you know. It's always difficult. Always challenging. However, Minecraft stood out to us as not your usual quote unquote standard visual effects project, even though as you know, there is no standard visual effects project because they're all somehow different. They all come with their own creative ideas, inspirations, and challenges. 
But Minecraft, right from the get-go, was different, simply by the fact that when you first consider the idea of making such a live-action movie, you instantly ask yourself, “How do we make this work? How do we combine these two inherently very, very different but unique worlds?” That was everyone’s number one question. How do we land this? Where do we land this? And I don't think that any of us really had an answer, including our clients, Dan Lemmonand Jared Hess. Everyone was really open for this journey. That's compelling for us, to get out of our comfort zone. It makes you nervous because there are no real obvious answers. KE: Early on, we seemed to thrive off these kinds of scary creative challenges. There were lots of question marks. We had many moments when we were trying to figure out character designs. We had a template from the game, but it was an incredibly vague, low-resolution template. And there were so many ways that we could go. But that design discovery throughout the project was really satisfying.  DS: Game adaptations are never simple. There usually isn’t much in the way of story. But with Minecraft, from a visual standpoint, how did you translate low res, block-styled characters into something entertaining that could sustain a 100-minute feature film? SS: Everything was a question mark. Using the lava that you see in The Nether as one example, we had beautiful concept art for all our environments, The Overworld and The Nether, but those concepts only really took you this far. They didn’t represent the block shapes or give you a clear answer of like how realistic some of those materials, shapes and structures would be. How organic would we go? All of this needed to be explored. For the lava, we had stylized concept pieces, with block shaped viscosity as it flowed down. But we spent months with our effects team, and Dan and Jared, just riffing on ideas. We came full circle, with the lava ending up being more realistic, a naturally viscous liquid based on real physics. And the same goes with the waterfall that you see in the Overworld.  The question is, how far do we take things into the true Minecraft representation of things? How much do we scale back a little bit and ground ourselves in reality, with effects we’re quite comfortable producing as a company? There's always a tradeoff to find that balance of how best to combine what’s been filmed, the practical sets and live-action performances, with effects. Where’s the sweet spot? What's the level of abstraction? What's honest to the game? As much as some call Minecraft a simple game, it isn't simple, right? It's incredibly complex. It's got a set of rules and logic to the world building process within the game that we had to learn, adapt, and honor in many ways. When our misfits first arrive and we have these big vistas and establishing shots, when you really look at it, you, you recognize a lot of the things that we tried to adapt from the game. There are different biomes, like the Badlands, which is very sand stoney; there's the Woodlands, which is a lush environment with cherry blossom trees; you’ve got the snow biome with big mountains in the background. Our intent was to honor the game. 
KE: I took a big cue from a lot of the early designs, and particularly the approach that Jared liked for the characters and to the design in general, which was maintaining the stylized, blocky aesthetic, but covering them in realistic flesh, fur, things that were going to make them appear as real as possible despite the absolutely unreal designs of their bodies. And so essentially, it was squared skeleton… squarish bones with flesh and realistic fur laid over top. We tried various things, all extremely stylized. The Creepers are a good example. We tried all kinds of ways for them to explode. Sheldon found a great reference for a cat coughing up a hairball. He was nice to censor the worst part of it, but those undulations in the chest and ribcage… Jared spoke of the Creepers being basically tragic characters that only wanted to be loved, to just be close to you. But sadly, whenever they did, they’d explode. So, we experimented with a lot of different motions of how they’d explode. DS: Talk about the process of determining how these characters would move. None seem to have remotely realistic proportions in their limbs, bodies, or head size. KE: There were a couple things that Jared always seemed to be chasing. One was just something that would make him laugh. Of course, it had to sit within the bounds of how a zombie might move, or a skeleton might move, as we were interpreting the game. But the main thing was just, was it fun and funny? I still remember one of the earliest gags they came up with in mocap sessions, even before I even joined the show, was how the zombies get up after they fall over. It was sort of like a tripod, where its face and feet were planted and its butt shoots up in the air. After a lot of experimentation, we came up with basic personality types for each character. There were 12 different types of Piglins. The zombies were essentially like you're coming home from the pub after a few too many pints and you're just trying to get in the door, but you can't find your keys. Loose, slightly inebriated movement. The best movement we found for the skeletons was essentially like an old man with rigid limbs and lack of ligaments that was chasing kids off his lawn. And so, we created this kind of bible of performance types that really helped guide performers on the mocap stage and animators later on. SS: A lot of our exploration didn’t stick. But Jared was the expert in all of this. He always came up with some quirky last-minute idea.  KE: My favorite from Jared came in the middle of one mocap shoot. He walked up to me and said he had this stupid idea. I said OK, go on. He said, what if Malgosha had these two little pigs next to her, like Catholic alter boys, swinging incense. Can we do that? I talked to our stage manager, and we quickly put together a temporary prop for the incense burners. And we got two performers who just stood there. What are they going to do? Jared said, “Nothing. Just stand there and swing. I think it would look funny.” So, that’s what we did.  We dubbed them the Priesty Boys. And they are there throughout the film. That was amazing about Jared. He was always like, let's just try it, see if it works. Otherwise ditch it. DS: Tell me about your work on Malgosha. And I also want to discuss your use of Unreal Engine and the previs and postvis work.  SS: For Malgosha as a character, our art department did a phenomenal job finding the character design at the concept phase. But it was a collective effort. So many contributors were involved in her making. 
And I'm not just talking about the digital artists here on our side. It was a joint venture of different people having different explorations and experiments. It started off with the concept work as a foundation, which we mocked up with 3D sketches before building a model. But with Malgosha, we also had the costume department on the production side building this elaborate cloak. Remember, that cloak makes up 80, 85% of her appearance. It's almost like a character in itself, the way we utilized it. And the costume department built this beautiful, elaborate, incredibly intricate, practical version of it that we intended to use on set for the performer to wear. It ended up being impractical because it was too heavy. But it was beautiful. So, while we didn't really use it on set, it gave us something physical to incorporate into our digital version.

KE: Alan Henry is the motion performer who portrayed her on set and on the mocap stage. I've known him for close to 15 years. I started working with him on The Hobbit films. He was a stunt performer who eventually rolled into doing motion capture with us on The Hobbit. He’s an incredible actor and absolutely hilarious and can adapt to any sort of situation. He’s so improvisational. He came up with an approach to Malgosha very quickly. He added a limp so that she felt decrepit, leaning on the staff, and played her other arm as kind of a gimp arm that she would point and gesture with. Even though she’s a blocky character, her anatomy is very much that of a biped, with rounder limbs than the other Piglins. She's got hooves, is somewhat squarish, and her much bulkier mass in the middle was easier to manipulate and move around. Because she would have to battle with Steve in the end, she had to have a level of agility that even some of the Piglins didn't have.

DS: Did Unreal Engine come into play with her?

SS: Unreal was used all the way through the project. Dan Lemmon and his team set up their own virtual art department early on to build representations of the Overworld and the Nether within the context of Unreal. We and Sony Imageworks provided recreations of these environments that were then used within Unreal to previsualize what was happening on set during shooting of principal photography. And that's where our mocap and on-set teams came into play. Effects provided what we called the Nudge Cam. It was a system for doing real-time tracking using a stereo pair of Basler computer vision cameras that were mounted onto the sides of the principal camera. We provided the live tracking that was then composited in real time with the Unreal Engine content that all the vendors had provided. It was a great way of utilizing Unreal to give the camera operators, the DOP, even Jared, a good sense of what we would actually shoot. It gave everyone a little bit of context for the look and feel of what you could actually expect from these scenes.

Because we started this journey with Unreal having on-set use in mind, we internally decided, look, let's take this further. Let's take this into post-production as well. What would it take to utilize Unreal for shot creation? It was used really exclusively on the Nether environment. I don’t want to say we used it for matte painting replacement. We used it more for, say, let's build this extended environment in Unreal.
Not only use it as a render engine with this reasonably fast turnaround, but also use it for what it's good at: authoring things, quickly changing things, moving columns around, manipulating things, dressing them, lighting them, and rendering them. It became sort of a tool that we used in place of a traditional matte painting for the extended environments.

KE: Another thing worth mentioning is we were able to utilize it on our mocap stage as well during the two-week shoot with Jared and crew. When we shoot on the mocap stage, we get a very simple sort of gray-shaded diagnostic grid. You have your single-color characters that sometimes are textured, but they’re fairly simple without any context of environment. Our special projects team was able to port what we usually see in Giant, the software we use on the mocap stage, into Unreal, which gave us these beautifully lit environments with interactive fire and atmosphere. And Jared and the team could see their movie for the first time in a rough, but still very beautiful, state. That was invaluable.

DS: If you had to key on anything, what would you say were the biggest challenges for your teams on the film? You're laughing. I can hear you thinking, “Do we have an hour?”

KE: Where do you begin?

SS: Exactly. It's so hard to really single one out. And I struggle with that question every time I've been asked it.

KE: I’ll start. I've got a very simple practical answer and then a larger one, something that was new to us, kind of similar to what we were just talking about. The simple practical one is the Piglins' square feet with no ankles. It was very tough to make them walk realistically. Think of the leg of a chair. How do you make that roll and bank and bend when there is no joint? There are a lot of Piglins walking on surfaces and it was a very difficult conundrum to solve. It took a lot of hard work from our motion edit team and our animation team to get those things walking realistically. You know, it’s doing that simple thing that you don't usually pay attention to. So that was one reasonably big challenge that is often literally buried in the shadows. The bigger one was something that was new to me. We often do a lot of our previs and postvis in-house and then finish the shots. And just because of circumstances and capacity, we did the postvis for the entire final battle, but we ended up sharing the sequence with Digital Domain, who did an amazing job completing some of the battlefield material we had done postvis on. For me personally, I've never experienced not finishing what I started. But it was also really rewarding to see how well the work we had put in was honored by DD when they took it over.

SS: I think the biggest challenge, and the biggest achievement that I'm most proud of, is really ending up with something that was well received by the wider audience: creating these two worlds, this sort of abstract adaptation of the Minecraft game, and combining it with live-action. That was the achievement for me. That was the biggest challenge. We were all nervous from day one. And we continued to be nervous up until the day the movie came out. None of us really knew how it ultimately would be received. The fact that it came together and was so well received is a testament to everyone doing a fantastic job. And that's what I'm incredibly proud of.

Dan Sarto is Publisher and Editor-in-Chief of Animation World Network.
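A brief note on the stereo tracking Stopsack describes: two cameras rigidly mounted a known distance apart let you recover how far a tracked feature is from the rig, because the feature lands at slightly different horizontal positions in each image. The sketch below is only a minimal illustration of that triangulation principle for an idealized, rectified pair; it is not Wētā's Nudge Cam, and the focal length, baseline, and pixel values are invented for the example.

    # Minimal stereo-triangulation sketch (illustrative only, not Weta's Nudge Cam).
    # Assumes an idealized, rectified stereo pair: both cameras share one focal
    # length and look straight ahead, separated by a known baseline.
    def triangulate_depth(x_left_px, x_right_px, focal_px, baseline_m):
        """Distance (metres) to a feature seen at the given horizontal pixel
        coordinates in the left and right images."""
        disparity = x_left_px - x_right_px        # pixels; grows as objects get closer
        if disparity <= 0:
            raise ValueError("expected the feature further left in the right image")
        return focal_px * baseline_m / disparity  # classic Z = f * B / d

    # Invented numbers: 1400 px focal length, 20 cm baseline,
    # feature at x = 812 px in the left image and x = 776 px in the right.
    print(triangulate_depth(812.0, 776.0, focal_px=1400.0, baseline_m=0.20))  # ~7.78 m

Solving this for enough features every frame gives a live estimate of where the principal camera is, which is what allows the pre-built Unreal environments to be composited behind the plate in real time, as described above.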
  • Mickey 17: Stuart Penn – VFX Supervisor – Framestore

    Interviews

    Mickey 17: Stuart Penn – VFX Supervisor – Framestore

    By Vincent Frei - 27/05/2025

    When we last spoke with Stuart Penn in 2019, he walked us through Framestore’s work on Avengers: Endgame. He has since added The Aeronauts, Moon Knight, 1899, and Flite to his impressive list of credits.
    How did you get involved on this show?
    Soon after we had been awarded work, Director Bong visited our London Studio in May 2022 to meet us and share his vision with us.

    Which sequences were made by Framestore?
    Framestore was responsible for the development of the Baby and Mama Creepers. We worked on the shots of the Baby Creepers within the ship, and the Creepers in the caves and the ice crevasse. We developed the ice cave and crevasse environments, including a full-CG shot of Mickey falling into the crevasse.
    Within the ship we were also responsible for the cycler room with its lava pit, the human printer, a range of set extensions, Marshall’s beautiful rock and—one of my personal favourites—Pigeon Man’s spinning eyes. We also crafted the spacewalk sequence. All the work came out of our London and Mumbai studios.

    Bong Joon Ho has a very distinct visual storytelling style. How did you collaborate with him to ensure the VFX aligned with his vision, and were there any unexpected creative challenges that pushed the team in new directions?
    Director Bong was fun to work with, very collaborative and had a very clear vision of where the film was going. We had discussions before and during the shoot. While we were shooting, Director Bong would talk to us about the backstory of what the Creepers might be thinking that went beyond the scope of what we would see in the movie. This really helped with giving the creatures character.

    Can you walk us through the design and animation process for the baby and mother creepers? What references or inspirations helped shape their look and movement?
    Director Bong had been working with his creature designer, Heechul Jang, for many months before production started. We had kickoffs with Director Bong and Heechul that provided us with some of the best and most thought out concepts I think we’ve ever received. Director Bong set us the challenge of bringing them to life. We took the lead on the Baby and Mama Creepers and DNEG took on the Juniors.
    It’s fun to note that the energy and inquisitive nature of the Babies was inspired by reference footage of puppies.

    Were these creatures primarily CG, or was there any practical element involved? How did you ensure their integration into the live-action footage?
    They were all CG in the final film. On set we had a range of stuffies and mockups for actors to interact with and for lighting reference. People became quite attached to the baby creeper stuffies! For the Mama there was a head and large frame that was controlled by a team of puppeteers for eyeline and lighting reference.

    The ice cave has a very distinct visual style. How did you achieve the look of the ice, and what techniques were used to create the lighting and atmospheric effects inside the cave?
    I was sent to Iceland for a week to gather reference. I visited a range of ice cave locations—driving, hiking and being dropped by helicopter at various locations across a glacier. This reference provided the basis for the look of the caves. The ice was rendered fully refractive with interior volumes to create the structures. As it’s so computationally expensive to render, we used tricks where we could reproject a limited number of fully rendered frames. This worked best on lock-offs or small camera moves—others we just had to render full length.
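    To make the reprojection trick a little more concrete: once one frame has been fully rendered along with its depth and camera, that image can be warped toward a nearby camera position instead of paying for the expensive refractive render again. The snippet below is a generic depth-based forward warp in NumPy, not Framestore's pipeline; the camera convention (x_cam = R @ x_world + t) and the variable names are assumptions made for the illustration.

    import numpy as np

    def reproject(color, depth, K, R_a, t_a, R_b, t_b):
        # Forward-warp a rendered frame (color image + per-pixel depth) from
        # camera A to a nearby camera B. Illustrative sketch only: no z-buffering
        # or hole filling, so it only behaves well for small camera moves.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))                  # pixel grid
        pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
        cam_a = np.linalg.inv(K) @ pix * depth.reshape(1, -1)           # back-project
        world = R_a.T @ (cam_a - t_a.reshape(3, 1))                     # into world space
        cam_b = R_b @ world + t_b.reshape(3, 1)                         # into camera B
        uv_b = (K @ cam_b)[:2] / cam_b[2]                               # perspective project
        out = np.zeros_like(color)
        ub, vb = np.round(uv_b[0]).astype(int), np.round(uv_b[1]).astype(int)
        ok = (ub >= 0) & (ub < w) & (vb >= 0) & (vb < h) & (cam_b[2] > 0)
        out[vb[ok], ub[ok]] = color.reshape(-1, color.shape[-1])[ok]
        return out

    On a lock-off or a tiny move most of the frame survives the warp; the small holes that open up are what get patched, or the shot simply falls back to a full-length render, as Penn notes.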

    How were the scenes featuring multiple Mickeys filmed? Did you rely mostly on motion control, digital doubles, or a combination of techniques to seamlessly integrate the clones into the shots?
    For our shots it was mostly multiple plates, relying on the skill of the camera operators to match the framing and move, and on the comp work to either split frames or lift one of the Mickeys from a plate and replace the stand-in.
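    For readers unfamiliar with the "split frames" Penn mentions: when the framing of two takes matches closely enough, the two plates can simply be blended along a soft dividing line so each Mickey comes from his own pass. The toy version below assumes two already-aligned plates loaded as float arrays; it is purely illustrative, since production splits use hand-drawn, tracked mattes rather than a straight vertical ramp.

    import numpy as np

    def split_comp(plate_a, plate_b, split_x, softness=40):
        # Blend two aligned plates along a soft vertical split: plate_a fills the
        # frame left of split_x, plate_b the right, with a linear falloff of
        # `softness` pixels so the join is invisible. Toy illustration only.
        w = plate_a.shape[1]
        x = np.arange(w, dtype=np.float32)
        matte = np.clip((x - (split_x - softness / 2)) / softness, 0.0, 1.0)
        matte = matte.reshape(1, w, 1)            # broadcast over rows and channels
        return plate_a * (1.0 - matte) + plate_b * matte

    In practice the dividing line sits wherever the two performances never cross, and it can be animated if the camera drifts.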

    Since Mickey’s clones are central to the story, what were the biggest VFX challenges in making them interact convincingly? Were there any specific techniques used to differentiate them visually or subtly show their progression over time?
    This really all came down to Robert Pattinson’s performances. He would usually be acting with his double for interaction and lighting. They would then switch positions and redo the performance. Rob could switch between the Mickey 17 and 18 characters with the assistance of quick hair and makeup changes.
    The prison environment seems to have a unique aesthetic and mood. How much of it was built practically, and how did VFX contribute to enhancing or extending the set?
    The foreground cells and storage containers were practical and everything beyond the fence was CG with a DMP overlay. The containers going off into the distance were carefully positioned and lit to enable you to feel the vast scale of the ship. We also replaced the fence in most shots with CG as it was easier than rotoing through the chain links.
    When Mickey is outside the ship, exposed to radiation, there are several extreme body effects, including his hand coming off. Can you discuss the challenges of creating these sequences, particularly in terms of digital prosthetics and damage simulations?
    Knocking Mickey’s hand off was quite straightforward due to the speed of the impact. We started with a plate of the practical arm and glove and switched to a pre-sculpted CG glove and arm stump. The hand spinning off into the distance was hand-animated to allow us to fully art direct the spin and trajectory. The final touch was to add an FX sim for the blood droplets.
    How did you balance realism and stylization in depicting the effects of radiation exposure? Were there real-world references or scientific studies that guided the look of the damage?
    Most of the radiation effects came from great makeup and prosthetics—we just added some final touches such as an FX sim for a bursting blister. We tried a few different simulations based on work we had done on previous shows, but ultimately dialed it back to something more subtle so it didn’t distract from the moment.

    Were there any memorable moments or scenes from the film that you found particularly rewarding or challenging to work on from a visual effects standpoint?
    There were a lot of quite diverse challenges, from creature work, environments, and lava to a lot of ‘one-off’ effects. The shot where the Creepers are pushing Mickey out into the snow was particularly challenging: with so many Creepers interacting with each other and Mickey, it took the combination of several animators and compositors to bring it together and integrate it with the partial CG Mickey.

    Looking back on the project, what aspects of the visual effects are you most proud of?
    The Baby Creeper and the ice cave environment.
    How long have you worked on this show?
    I worked on it for about 18 months.
    What’s the VFX shots count?
    Framestore worked on 405 shots.
    A big thanks for your time.
    WANT TO KNOW MORE? Framestore: Dedicated page about Mickey 17 on the Framestore website.
    © Vincent Frei – The Art of VFX – 2025
  • This tiny piece of tech will change how you watch the Indy 500

    When you describe it in words, the Indianapolis 500 might seem like a boring watch: Cars go round and round an oval track 200 times, totaling 500 miles over the course of a few hours. But if you were a driver, you’d be having a hell of a different experience. Think screaming speeds of 230 miles per hour, pulling 4 Gs on corners, with one’s reflexes and split-second decisions drawing a thin line between victory and tragedy . . . over the course of a few hours.

    It’s a level of intensity that TV networks have been trying to bring viewers into for years with in-car cameras and things like driver radio communiques. It has been working. Last year, NBC—which covered the spectacle from 2019–2024—netted the most streams of the race ever and averaged 5.34 million total viewers, up from 4.9 million in 2023 and 4.8 million in 2022. This year marks FOX’s first time ever broadcasting it, and they likely want that trend to continue, so they’re throwing all the tech they have at it. And that includes the innovative, diminutive Driver’s Eye, dubbed the world’s smallest live broadcast camera, which brings fans directly into drivers’ helmets like never before. For the first time in Indy 500 history, viewers will have a view of the race exactly as its stars see it from within their helmets—from dramatic passes and vehicle-quaking jousts to the very mechanics of how they operate their cars at such speeds. “Driver’s Eye brings the human factor,” says Alex Miotto Haristos, COO of Racing Force Group, which owns the tech. “It brings the struggle.”

    And it could bring the ratings, too—especially if it catches on in the series like it has in Formula 1.

    MORE THAN MEETS THE EYE

    The UK-born, Italy-raised Haristos is perhaps an unlikely creator of racing gear. He began his career in management consulting and later real estate before acquiring an electronics company and launching it as Zeronoise in 2018 with Stephane Cohen of Bell Racing Helmets. Haristos doesn’t come from a racing background, but rather dubs himself a business engineer who saw it as an opportunity. He says he quickly found himself falling down the rabbit hole into a passion project given the sheer challenging nature of the Driver’s Eye tech, which they began developing in 2019.

    Ray Harroun driving his Marmon Wasp, the first winner of the Indy 500 in 1911.

    That challenge is very real when you’re working on a product meant to be inserted into a race-car driver’s most critical piece of safety gear, particularly in a sport where said driver’s head is sticking out of the car. Racing helmets are modern design marvels that evolved out of leather and cloth versions in the Indy 500’s early days to steel helmets in 1916. According to IndyCar, every driver has a primary and one or two backups, and they’re all custom-fit and produced per FIA standards. The outer shell features ultralight carbon fiber; there’s a fireproof liner; a built-in airbag to assist in helmet removal without neck strain; numerous elements to ensure maximum aerodynamics in 200+ mph runs; and audio insulation so drivers can communicate with their teams over the roar of 33 engines on the track.

    “Your job is to not alter any feature of the helmet,” Haristos says. “The helmet you don’t touch. You have to work with what you have, and you have to manage to integrate everything seamlessly. This is the trick.”

    The team set out to capture exactly what a driver was seeing on the racetrack, raw and unfiltered, shakes and all—and quickly understood that they couldn’t work on the outer surface of a helmet because it would be a safety issue. So they homed in on the side padding of the helmet, which Haristos says is around a centimeter away from the eye and which, given the sensitive proximity, went through the FIA for approval as well. The organization mandated a minuscule size and weight for the camera, so rather than starting with what image quality they wanted to achieve and so on, “We started working backwards. And in the beginning it was like, No, this is impossible.”

    Ultimately, the team had to break apart camera design as we know it—a single unit—and separate the internal systems to make it work. They stripped out everything they could for what needed to go in the helmet, and were left with a tiny sensor with the ability to capture high-res video in the smallest of real estate. Today, that unit clocks in at 8.8 x 8.8 mm, and weighs less than a dime. Then, they moved the rest of the camera’s guts to the car itself. Which is also a feat, particularly in Indy racing, which involves older cars that are already stuffed to the max from additions over the years.

    “You can’t do one thing without affecting another,” says Michael Davies, FOX EVP of field and technical management and operations. “There’s no change that you can make on a car that doesn’t fuck something else up. And I’m always reminded of something a very smart man said, which is that when you solve a problem, you inevitably create another one, but you must make sure that the problem you create is smaller than the one that you solved.”

    Haristos says that for Indy, they were told that the only available space was on the side of the car by the radiator—not an ideal spot, given the high temperature and so on. So they had to develop a custom housing that was more efficient and could operate at a higher temp while still fitting into the tightest of spaces. 

    Ultimately, from the helmet camera to the housing, it was crucial that the additions all felt seamless to the driver. 

    “Comfort in motorsport translates into confidence,” Haristos says. “Confidence translates into performance.”

    CROSSING THE POND

    Safety equipment manufacturer OMP Racing acquired Zeronoise in 2019—and they also acquired Bell, a major purveyor of helmets to Formula 1 and the Indy 500, with 23 of the 33 drivers donning its headwear for the latter. After they developed the first iteration of Driver’s Eye, the team got it into Formula E racing in 2020, and was able to finalize the development of the tech, testing it in Formula 1 in 2021—and giving race fanatics a new, visceral way to experience the sport. It gained ground, and in 2023 became mandatory in Formula 1.

    FOX tested Driver’s Eye in some NASCAR races that same year, and now on Sunday you’ll be able to watch the Indy 500 from the perspective of 2023/2024 winner Josef Newgarden, Scott Dixon, Alex Palou, Will Power, Marcus Ericsson and Felix Rosenqvist. 

    Josef Newgarden

    Of course, there’s more tech wizardry at play behind the scenes than merely hooking up a camera. The Driver’s Eye is mounted in a dark helmet with a massive underexposure—and the track is a massive overexposure. Drivers race with different filters and colors on their visors, which they can tear off in layers periodically throughout the race as they get dirty. Moreover, the Indy 500 is hours long, there are varying weather scenarios, the sun and shadows are moving, and everything is very much in a state of flux. Haristos says Driver’s Eye compensates for all of it, from white balance to the varying visor colors, with a mix of automatic and manual controls, making for a seamless sync with the rest of the program.

    From a production standpoint, FOX’s Davies says that since the system allows for a view of drivers’ hands on the controls and exactly what they’re looking at in any given moment, it’s also a boon to race commentators, who have told him that it’s the most useful angle for them in being able to craft a narrative around what’s happening on the track. Moreover, he says the raw nature of the footage truly shows the athleticism at play on the part of the drivers, something that can get lost in traditional shots.

    “We can really cover the event from the inside out, instead of the outside in,” he says.

    And on top of that, he adds, it’s something sponsors like—and request. Thus a bevy of IndyCar racing’s household names now driving with the cameras embedded in their helmets.

    The Driver’s Eye is just one tiny tool in FOX’s arsenal, which seems designed to shock and awe—and plant a flag in their take on the race. For the first time, live drones will be deployed, including custom high-speed FPV drones; there are more than 100 cameras in play, 108 mics, 16 in-car cameras offering views of drivers’ faces and cockpits, and more.

    “We’re playing some pretty big hits here and looking forward to seeing how it enhances the big race,” Davies says. “You can see it in a completely different way—even if you’ve watched Indy for as long as it’s been on TV.”
    WWW.FASTCOMPANY.COM
    This tiny piece of tech will change how you watch the Indy 500
    When you describe it in words, the Indianapolis 500 might seem like a boring watch: Cars go round and round an oval track 200 times, totaling 500 miles over the course of a few hours. But if you were a driver, you’d be having a hell of a different experience. Think screaming speeds of 230 miles per hour, pulling 4 Gs on corners, with one’s reflexes and split-second decisions drawing a thin line between victory and tragedy . . . over the course of a few hours.
    It’s a level of intensity that TV networks have been trying to bring viewers into for years with in-car cameras and things like driver radio communiqués. It has been working. Last year, NBC—which covered the spectacle from 2019–2024—netted the most streams of the race ever and averaged 5.34 million total viewers, up from 4.9 million in 2023 and 4.8 million in 2022. This year marks FOX’s first time ever broadcasting it, and they likely want that trend to continue, so they’re throwing all the tech they have at it.
    And that includes the innovative, diminutive Driver’s Eye, dubbed the world’s smallest live broadcast camera, which brings fans directly into drivers’ helmets (quite literally) like never before. For the first time in Indy 500 history, viewers will have a view of the race exactly as its stars see it from within their helmets—from dramatic passes and vehicle-quaking jousts to the very mechanics of how they operate their cars at such speeds.
    “Driver’s Eye brings the human factor,” says Alex Miotto Haristos, COO of Racing Force Group, which owns the tech. “It brings the struggle.” And it could bring the ratings, too—especially if it catches on in the series like it has in Formula 1.
    MORE THAN MEETS THE EYE
    The UK-born, Italy-raised Haristos is perhaps an unlikely creator of racing gear. He began his career in management consulting and later real estate before acquiring an electronics company and launching it as Zeronoise in 2018 with Stephane Cohen of Bell Racing Helmets. Haristos doesn’t come from a racing background, but rather dubs himself a business engineer who saw it as an opportunity. He says he quickly found himself falling down the rabbit hole into a passion project given the sheer challenging nature of the Driver’s Eye tech, which they began developing in 2019.
    [Photo: Ray Harroun driving his Marmon Wasp, winner of the first Indy 500 in 1911. Credit: Bettmann/Getty Images]
    That challenge is very real when you’re working on a product meant to be inserted into a race-car driver’s most critical piece of safety gear, particularly in a sport where said driver’s head is sticking out of the car. Racing helmets are modern design marvels that evolved out of leather and cloth versions in the Indy 500’s early days to steel helmets in 1916. According to IndyCar, every driver has a primary and one or two backups, and they’re all custom-fit and produced per FIA (Fédération Internationale de l’Automobile) standards. (Want to buy your own? Haristos says that’ll cost you between $5,000 and $8,000.) The outer shell features ultralight carbon fiber; there’s a fireproof liner; a built-in airbag to assist in helmet removal without neck strain; numerous elements to ensure maximum aerodynamics in 200+ mph runs; and audio insulation so drivers can communicate with their teams over the roar of 33 engines on the track.
    “Your job is to not alter any feature of the helmet,” Haristos says. “The helmet you don’t touch. You have to work with what you have, and you have to manage to integrate everything seamlessly. This is the trick.”
    The team set out to capture exactly what a driver was seeing on the racetrack, raw and unfiltered, shakes and all—and quickly understood that they couldn’t work on the outer surface of a helmet because it would be a safety issue. So they homed in on the side padding of the helmet, which Haristos says is around a centimeter away from the eye and which, given the sensitive proximity, went through the FIA for approval as well. The organization mandated a minuscule size and weight for the camera, so rather than starting with what image quality they wanted to achieve, “We started working backwards. And in the beginning it was like, No, this is impossible.”
    Ultimately, the team had to break apart camera design as we know it—a single unit—and separate the internal systems to make it work. They stripped out everything they could for what needed to go in the helmet, and were left with a tiny sensor able to capture high-res video (in the case of the Indy 500, 1080p at 60fps) in the smallest of real estate. Today, that unit clocks in at 8.8 x 8.8 mm and weighs less than a dime. Then they moved the rest of the camera’s guts to the car itself—also a feat, particularly in Indy racing, which involves older cars that are already stuffed to the max from additions over the years.
    “You can’t do one thing without affecting another,” says Michael Davies, FOX EVP of field and technical management and operations. “There’s no change that you can make on a car that doesn’t fuck something else up. And I’m always reminded of something a very smart man said, which is that when you solve a problem, you inevitably create another one, but you must make sure that the problem you create is smaller than the one that you solved.”
    Haristos says that for Indy, they were told the only available space was on the side of the car by the radiator—not an ideal spot, given the high temperatures. So they had to develop a custom housing that was more efficient and could operate at a higher temp while still fitting into the tightest of spaces. Ultimately, from the helmet camera to the housing, it was crucial that the additions all felt seamless to the driver.
    “Comfort in motorsport translates into confidence,” Haristos says. “Confidence translates into performance.”
    CROSSING THE POND
    Safety equipment manufacturer OMP Racing acquired Zeronoise in 2019—and it also acquired Bell, a major purveyor of helmets to Formula 1 and the Indy 500, with 23 of the 33 drivers donning its headwear for the latter. (All the brands would eventually coalesce under the newly formed Racing Force Group in 2021; last year, it did $74.1 million in revenue, up 4.8% from 2023.)
    After developing the first iteration of Driver’s Eye, the team got it into Formula E racing in 2020, then finalized the tech by testing it in Formula 1 in 2021—giving race fanatics a new, visceral way to experience the sport. It gained ground, and in 2023 it became mandatory in Formula 1. FOX tested Driver’s Eye in some NASCAR races that same year, and now on Sunday you’ll be able to watch the Indy 500 from the perspective of 2023 and 2024 winner Josef Newgarden, Scott Dixon, Alex Palou, Will Power, Marcus Ericsson, and Felix Rosenqvist.
    [Photo: Josef Newgarden. Credit: Bell Racing]
    Of course, there’s more tech wizardry at play behind the scenes than merely hooking up a camera. The Driver’s Eye is mounted in a dark helmet that is massively underexposed—while the track is massively overexposed. Drivers race with different filters and colors on their visors, which they can tear off in layers periodically throughout the race as they get dirty. Moreover, the Indy 500 is hours long, weather varies, the sun and shadows are moving, and everything is very much in a state of flux. Haristos says Driver’s Eye compensates for all of it, from white balance to the varying visor colors, with a mix of automatic and manual controls, making for a seamless sync with the rest of the program. (Which, let’s be honest, is critical—a director has to use the shots, lest Driver’s Eye be rendered obsolete.)
    From a production standpoint, FOX’s Davies says that since the system shows drivers’ hands on the controls and exactly what they’re looking at in any given moment, it’s also a boon to race commentators, who have told him it’s the most useful angle for crafting a narrative around what’s happening on the track. Moreover, he says the raw nature of the footage truly shows the athleticism at play on the part of the drivers, something that can get lost in traditional shots. “We can really cover the event from the inside out, instead of the outside in,” he says. And on top of that, he adds, it’s something sponsors like—and request. Thus a bevy of IndyCar racing’s household names now driving with the cameras embedded in their helmets.
    The Driver’s Eye is just one tiny tool in FOX’s arsenal, which seems designed to shock and awe—and plant a flag in their take on the race. For the first time, live drones will be deployed, including custom high-speed FPV drones; there are more than 100 cameras in play, 108 mics, 16 in-car cameras offering views of drivers’ faces and cockpits, and more (including 5.1 surround sound “that’ll blow your head off”).
    “We’re playing some pretty big hits here and looking forward to seeing how it enhances the big race,” Davies says. “You can see it in a completely different way—even if you’ve watched Indy for as long as it’s been on TV.”