• NVIDIA Brings Physical AI to European Cities With New Blueprint for Smart City AI

    Urban populations are expected to double by 2050, which means around 2.5 billion people could be added to urban areas by the middle of the century, driving the need for more sustainable urban planning and public services. Cities across the globe are turning to digital twins and AI agents for urban planning scenario analysis and data-driven operational decisions.
    Building a digital twin of a city and testing smart city AI agents within it, however, is a complex and resource-intensive endeavor, fraught with technical and operational challenges.
    To address those challenges, NVIDIA today announced the NVIDIA Omniverse Blueprint for smart city AI, a reference framework that combines the NVIDIA Omniverse, Cosmos, NeMo and Metropolis platforms to bring the benefits of physical AI to entire cities and their critical infrastructure.
    Using the blueprint, developers can create simulation-ready, or SimReady, photorealistic digital twins of cities in which to build and test AI agents that can help monitor and optimize city operations.
    Leading companies including XXII, AVES Reality, Akila, Blyncsy, Bentley, Cesium, K2K, Linker Vision, Milestone Systems, Nebius, SNCF Gares&Connexions, Trimble and Younite AI are among the first to use the new blueprint.

    NVIDIA Omniverse Blueprint for Smart City AI 
    The NVIDIA Omniverse Blueprint for smart city AI provides the complete software stack needed to accelerate the development and testing of AI agents in physically accurate digital twins of cities. It includes:

    NVIDIA Omniverse to build physically accurate digital twins and run simulations at city scale.
    NVIDIA Cosmos to generate synthetic data at scale for post-training AI models.
    NVIDIA NeMo to curate high-quality data and use that data to train and fine-tune vision language models (VLMs) and large language models.
    NVIDIA Metropolis to build and deploy video analytics AI agents based on the NVIDIA AI Blueprint for video search and summarization (VSS), helping process vast amounts of video data and provide critical insights to optimize business processes.

    The blueprint workflow comprises three key steps. First, developers create a SimReady digital twin of locations and facilities using aerial, satellite or map data with Omniverse and Cosmos. Second, they can train and fine-tune AI models, like computer vision models and VLMs, using NVIDIA TAO and NeMo Curator to improve accuracy for vision AI use cases. Finally, real-time AI agents powered by these customized models are deployed to alert, summarize and query camera and sensor data using the Metropolis VSS blueprint.
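    At a high level, each stage hands an artifact to the next: a SimReady scene, fine-tuned model checkpoints, and deployed video analytics agents. The sketch below illustrates that hand-off in plain Python as a mental model only; every module, function, and file name is a hypothetical placeholder, not an actual Omniverse, Cosmos, TAO, NeMo or Metropolis API.

```python
# Hypothetical orchestration sketch of the three blueprint stages.
# None of these module, function, or file names are real NVIDIA APIs;
# they only illustrate how the stages hand artifacts to one another.

from dataclasses import dataclass
from typing import List


@dataclass
class DigitalTwin:
    """SimReady city scene assembled from aerial, satellite, or map data."""
    scene_path: str      # e.g. a USD stage to be opened in Omniverse
    sensors: List[str]   # camera/sensor identifiers placed in the scene


def build_digital_twin(aerial_data: str, map_data: str) -> DigitalTwin:
    """Stage 1: assemble a SimReady digital twin (Omniverse + Cosmos)."""
    # In practice this step reconstructs 3D geometry and tags assets
    # so the scene is simulation-ready.
    return DigitalTwin(scene_path="city_scene.usd",
                       sensors=["cam_station_01", "cam_station_02"])


def fine_tune_models(twin: DigitalTwin, synthetic_clips: int) -> str:
    """Stage 2: generate synthetic data and fine-tune CV models and VLMs
    (Cosmos for data generation, TAO / NeMo Curator for training)."""
    print(f"Generating {synthetic_clips} synthetic clips from {twin.scene_path}")
    return "checkpoints/vlm_city_v1"  # hypothetical checkpoint path


def deploy_agents(checkpoint: str, twin: DigitalTwin) -> None:
    """Stage 3: deploy real-time video analytics agents (Metropolis VSS)
    that alert, summarize, and answer queries over camera streams."""
    for sensor in twin.sensors:
        print(f"Agent watching {sensor} with model {checkpoint}")


if __name__ == "__main__":
    twin = build_digital_twin("aerial.tif", "map_extract.pbf")
    checkpoint = fine_tune_models(twin, synthetic_clips=10_000)
    deploy_agents(checkpoint, twin)
```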
    NVIDIA Partner Ecosystem Powers Smart Cities Worldwide
    The blueprint for smart city AI enables a large ecosystem of partners to use a single workflow to build and activate digital twins for smart city use cases, tapping into a combination of NVIDIA’s technologies and their own.
    SNCF Gares&Connexions, which operates a network of 3,000 train stations across France and Monaco, has deployed a digital twin and AI agents to enable real-time operational monitoring, emergency response simulations and infrastructure upgrade planning.
    This helps each station analyze operational data such as energy and water use, and enables predictive maintenance capabilities, automated reporting and GDPR-compliant video analytics for incident detection and crowd management.
    Powered by Omniverse, Metropolis and solutions from ecosystem partners Akila and XXII, SNCF Gares&Connexions’ physical AI deployment at the Monaco-Monte-Carlo and Marseille stations has helped achieve a 100% on-time preventive maintenance completion rate, a 50% reduction in downtime and issue response time, and a 20% reduction in energy consumption.

    The city of Palermo in Sicily is using AI agents and digital twins from its partner K2K to improve public health and safety by helping city operators process and analyze footage from over 1,000 public video streams at a rate of nearly 50 billion pixels per second.
    Tapped by Sicily, K2K’s AI agents — built with the NVIDIA AI Blueprint for VSS and cloud solutions from Nebius — can interpret and act on video data to provide real-time alerts on public events.
    To accurately predict and resolve traffic incidents, K2K is generating synthetic data with Cosmos world foundation models to simulate different driving conditions. Then, K2K uses the data to fine-tune the VLMs powering the AI agents with NeMo Curator. These simulations enable K2K’s AI agents to create over 100,000 predictions per second.

    Milestone Systems — in collaboration with NVIDIA and European cities — has launched Project Hafnia, an initiative to build an anonymized, ethically sourced video data platform for cities to develop and train AI models and applications while maintaining regulatory compliance.
    Using a combination of Cosmos and NeMo Curator on NVIDIA DGX Cloud and Nebius’ sovereign European cloud infrastructure, Project Hafnia scales up and enables European-compliant training and fine-tuning of video-centric AI models, including VLMs, for a variety of smart city use cases.
    The project’s initial rollout, taking place in Genoa, Italy, features one of the world’s first VLMs for intelligent transportation systems.

    Linker Vision was among the first to partner with NVIDIA to deploy smart city digital twins and AI agents for Kaohsiung City, Taiwan — powered by Omniverse, Cosmos and Metropolis. Linker Vision worked with AVES Reality, a digital twin company, to bring aerial imagery of cities and infrastructure into 3D geometry and ultimately into SimReady Omniverse digital twins.
    Linker Vision’s AI-powered application then built, trained and tested visual AI agents in a digital twin before deployment in the physical city. Now, it’s scaling to analyze 50,000 video streams in real time with generative AI to understand and narrate complex urban events like floods and traffic accidents. Linker Vision delivers timely insights to a dozen city departments through a single integrated AI-powered platform, breaking silos and reducing incident response times by up to 80%.

    Bentley Systems is joining the effort to bring physical AI to cities with the NVIDIA blueprint. Cesium, the open 3D geospatial platform, provides the foundation for visualizing, analyzing and managing infrastructure projects, and ports digital twins into Omniverse. Bentley’s AI platform Blyncsy uses synthetic data generation and Metropolis to analyze road conditions and improve maintenance.
    Trimble, a global technology company that enables essential industries including construction, geospatial and transportation, is exploring ways to integrate components of the Omniverse blueprint into its reality capture workflows and Trimble Connect digital twin platform for surveying and mapping applications for smart cities.
    Younite AI, a developer of AI and 3D digital twin solutions, is adopting the blueprint to accelerate its development pipeline, enabling the company to quickly move from operational digital twins to large-scale urban simulations, improve synthetic data generation, integrate real-time IoT sensor data and deploy AI agents.
    Learn more about the NVIDIA Omniverse Blueprint for smart city AI by attending this GTC Paris session or watching the on-demand video after the event. Sign up to be notified when the blueprint is available.
    Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.
  • How to Analyze & Compare Competitor Website Traffic in 2025

    In the cutthroat world of digital marketing, knowing your competitors’ website traffic is akin to knowing the secret ingredient of a rival chef’s famous dish. In 2025, analyzing and comparing competitor website traffic has evolved into an art form. Forget the days of simply glancing at their homepages; today, we dissect, delve, and de...
  • Hey there, fabulous friends!

    Are you ready to take your market research game to the next level? Today, I want to share with you something that can truly transform how you see competition! In this fast-paced world, every entrepreneur and marketer needs to be equipped with the right tools to uncover hidden gems in the market. And guess what? The answer lies in the **14 Best Competitive Intelligence Tools for Market Research**!

    Imagine having the power to peek behind the curtain of your competitors and discover their strategies and tactics! With these amazing tools, you can gather insights that will not only help you understand your market better but also give you the edge you need to soar higher than ever before!

    One standout tool that I absolutely adore is the **Semrush Traffic & Market Toolkit**. It’s like having a secret weapon in your back pocket! This toolkit provides invaluable data about traffic sources, keyword strategies, and much more! Say goodbye to guesswork and hello to informed decisions! Each piece of information you gather brings you one step closer to your goals.

    But that’s not all! Each of the 14 tools has its own unique features that cater to different aspects of competitive intelligence. Whether it's analyzing social media performance, tracking keywords, or monitoring brand mentions, there’s something for everyone! It’s time to embrace the power of knowledge and turn it into your competitive advantage!

    I know that diving into market research might seem daunting, but let me tell you, it’s a thrilling adventure! Every insight you uncover is like finding a treasure map leading you to success! So, don’t shy away from exploring these tools. Embrace them with open arms and watch your business flourish!

    Remember, the only limit to your success is the extent of your imagination and the determination to use the right resources. So gear up, equip yourself with these 14 best competitive intelligence tools, and let’s conquer the market together!

    Let’s lift each other up and share our discoveries! What tools are you excited to try? Drop your thoughts in the comments below! Let’s inspire one another to reach new heights!

    #MarketResearch #CompetitiveIntelligence #BusinessGrowth #Semrush #Inspiration
  • Ankur Kothari Q&A: Customer Engagement Book Interview

    Reading Time: 9 minutes
    In marketing, data isn’t a buzzword. It’s the lifeblood of all successful campaigns.
    But are you truly harnessing its power, or are you drowning in a sea of information? To answer this question, we sat down with Ankur Kothari, a seasoned Martech expert, to dive deep into this crucial topic.
    This interview, originally conducted for Chapter 6 of “The Customer Engagement Book: Adapt or Die,” explores how businesses can translate raw data into actionable insights that drive real results.
    Ankur shares his wealth of knowledge on identifying valuable customer engagement data, distinguishing between signal and noise, and ultimately, shaping real-time strategies that keep companies ahead of the curve.

     
    Ankur Kothari Q&A Interview
    1. What types of customer engagement data are most valuable for making strategic business decisions?
    Primarily, there are four different buckets of customer engagement data. I would begin with behavioral data, encompassing website interaction, purchase history, and other app usage patterns.
    Second would be demographic information: age, location, income, and other relevant personal characteristics.
    Third would be sentiment analysis, where we derive information from social media interaction, customer feedback, or other customer reviews.
    Fourth would be the customer journey data.

    We track touchpoints across the customers’ various channels to understand the customer journey path and conversion. Combining these four primary sources helps us understand the engagement data.

    2. How do you distinguish between data that is actionable versus data that is just noise?
    First is keeping it relevant to your business objectives: actionable data directly relates to your specific goals or KPIs. Then we take help from statistical significance.
    Actionable data shows clear patterns or trends that are statistically valid, whereas other data consists of random fluctuations or outliers, which may not be what you are interested in.

    You also want to make sure that there is consistency across sources.
    Actionable insights are typically corroborated by multiple data points or channels, while other data or noise can be more isolated and contradictory.
    Actionable data suggests clear opportunities for improvement or decision making, whereas noise does not lead to meaningful actions or changes in strategy.

    By applying these criteria, I can effectively filter out the noise and focus on data that delivers or drives valuable business decisions.
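    A minimal sketch of these three filters, assuming engagement metrics land in a single table with period, channel, and KPI columns; the thresholds, column names, and test choice are illustrative assumptions, not something prescribed in the interview.

```python
# Sketch of the "actionable vs. noise" filters described above: relevance
# (caller passes only KPI columns), statistical significance, and
# cross-source consistency. Column names and thresholds are illustrative.

from typing import List

import pandas as pd
from scipy import stats


def actionable_metrics(df: pd.DataFrame, kpi_columns: List[str],
                       alpha: float = 0.05, min_sources: int = 2) -> List[str]:
    """Return KPI columns whose week-over-week shift is statistically
    significant and corroborated by at least `min_sources` channels."""
    keep = []
    for col in kpi_columns:
        recent = df.loc[df["period"] == "this_week", col]
        baseline = df.loc[df["period"] == "last_week", col]

        # Significance: is the shift more than random fluctuation?
        _, p_value = stats.ttest_ind(recent, baseline, equal_var=False)

        # Consistency: how many channels moved in the same direction?
        this_week = df[df["period"] == "this_week"].groupby("channel")[col].mean()
        last_week = df[df["period"] == "last_week"].groupby("channel")[col].mean()
        deltas = this_week - last_week
        agreeing = (deltas > 0).sum() if deltas.mean() > 0 else (deltas < 0).sum()

        if p_value < alpha and agreeing >= min_sources:
            keep.append(col)
    return keep
```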

    3. How can customer engagement data be used to identify and prioritize new business opportunities?
    First, it helps us to uncover unmet needs.

    By analyzing the customer feedback, touch points, support interactions, or usage patterns, we can identify the gaps in our current offerings or areas where customers are experiencing pain points.

    Second would be identifying emerging needs.
    Monitoring changes in customer behavior or preferences over time can reveal new market trends or shifts in demand, allowing my company to adapt their products or services accordingly.
    Third would be segmentation analysis.
    Detailed customer data analysis enables us to identify unserved or underserved segments or niche markets that may represent untapped opportunities for growth or expansion into newer areas and new geographies.
    Last is to build competitive differentiation.

    Engagement data can highlight where our companies outperform competitors, helping us to prioritize opportunities that leverage existing strengths and unique selling propositions.

    4. Can you share an example of where data insights directly influenced a critical decision?
    I will share an example from my previous organization at one of the financial services where we were very data-driven, which made a major impact on our critical decision regarding our credit card offerings.
    We analyzed the customer engagement data, and we discovered that a large segment of our millennial customers were underutilizing our traditional credit cards but showed high engagement with mobile payment platforms.
    That insight led us to develop and launch our first digital credit card product with enhanced mobile features and rewards tailored to the millennial spending habits. Since we had access to a lot of transactional data as well, we were able to build a financial product which met that specific segment’s needs.

    That data-driven decision resulted in a 40% increase in our new credit card applications from this demographic within the first quarter of the launch. Subsequently, our market share improved in that specific segment, which was very crucial.

    5. Are there any other examples of ways that you see customer engagement data being able to shape marketing strategy in real time?
    When it comes to using the engagement data in real time, we do quite a few things. In the past two or three years, we have been using it for dynamic content personalization, adjusting website content, email messaging, or ad creative based on real-time user behavior and preferences.
    We automate campaign optimization using specific AI-driven tools to continuously analyze performance metrics and automatically reallocate the budget to top-performing channels or ad segments.
    Then we also build responsive social media engagement platforms like monitoring social media sentiments and trending topics to quickly adapt the messaging and create timely and relevant content.

    With one-on-one personalization, we do a lot of A/B testing as part of overall rapid testing of marketing elements like subject lines and CTAs, building various successful variants of the campaigns.

    6. How are you doing the 1:1 personalization?
    We have advanced CDP systems, and we are tracking each customer’s behavior in real-time. So the moment they move to different channels, we know what the context is, what the relevance is, and the recent interaction points, so we can cater the right offer.
    So for example, if you looked at a certain offer on the website and you came from Google, and then the next day you walk into an in-person interaction, our agent will already know that you were looking at that offer.
    That gives our customer or potential customer more one-to-one personalization instead of just segment-based or bulk interaction kind of experience.

    We have a huge team of data scientists, data analysts, and AI model creators who help us to analyze big volumes of data and bring the right insights to our marketing and sales team so that they can provide the right experience to our customers.
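    A toy sketch of that cross-channel context lookup: a profile store keyed by customer, appended to as events arrive, so a live agent can see what the customer viewed online the day before. The event shape, store, and function names are assumptions for illustration, not a description of any particular CDP.

```python
# Toy cross-channel context store, keyed by customer. The event shape,
# store, and function names are illustrative assumptions, not a real CDP.

from collections import defaultdict
from datetime import datetime, timezone
from typing import Dict, List

profiles: Dict[str, List[dict]] = defaultdict(list)  # customer_id -> events


def record_event(customer_id: str, channel: str, action: str, item: str) -> None:
    """Append a behavioral event to the unified profile as it happens."""
    profiles[customer_id].append({
        "ts": datetime.now(timezone.utc),
        "channel": channel,
        "action": action,
        "item": item,
    })


def context_for_agent(customer_id: str, limit: int = 5) -> List[dict]:
    """Give a live agent the customer's most recent cross-channel context."""
    events = profiles[customer_id]
    return sorted(events, key=lambda e: e["ts"], reverse=True)[:limit]


# Example: an offer viewed on the web yesterday is visible to an in-person
# agent today.
record_event("cust_42", "web", "viewed_offer", "travel_credit_card")
print(context_for_agent("cust_42"))
```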

    7. What role does customer engagement data play in influencing cross-functional decisions, such as with product development, sales, and customer service?
    Primarily with product development: we have different products, not just the financial products or whatever products an organization sells, but also products like the mobile apps or websites customers use for transactions. So that kind of product development gets improved.
    The engagement data helps our sales and marketing teams create more targeted campaigns, optimize channel selection, and refine messaging to resonate with specific customer segments.

    Customer service also benefits by anticipating common issues, personalizing support interactions over phone, email, or chat, and proactively addressing potential problems, leading to improved customer satisfaction and retention.

    So in general, cross-functional application of engagement data improves the customer-centric approach throughout the organization.

    8. What do you think some of the main challenges marketers face when trying to translate customer engagement data into actionable business insights?
    I think the huge amount of data we are dealing with. As we are getting more digitally savvy and most of the customers are moving to digital channels, we are getting a lot of data, and that sheer volume of data can be overwhelming, making it very difficult to identify truly meaningful patterns and insights.

    Because of the huge data overload, we create data silos in this process, so information often exists in separate systems across different departments. We are not able to build a holistic view of customer engagement.

    Because of data silos and overload of data, data quality issues appear. There is inconsistency, and inaccurate data can lead to incorrect insights or poor decision-making. Quality issues could also be due to the wrong format of the data, or the data is stale and no longer relevant.
    As we are growing and adding more people to help us understand customer engagement, I’ve also noticed that technical folks, especially data scientists and data analysts, lack skills to properly interpret the data or apply data insights effectively.
    So there’s a lack of understanding of marketing and sales as domains.
    It’s a huge effort and can take a lot of investment.

    Not being able to calculate the ROI of your overall investment is a big challenge that many organizations are facing.

    9. Why do you think the analysts don’t have the business acumen to properly do more than analyze the data?
    If people do not have the right idea of why we are collecting this data, we collect a lot of noise, and that brings in huge volumes of data. Stopping that at step one, by not bringing noise into the data system in the first place, cannot be done by technical folks alone or by people who do not have business knowledge.
    Business people do not know everything about what data is being collected from which source and what data they need. It’s a gap between business domain knowledge, specifically marketing and sales needs, and technical folks who don’t have a lot of exposure to that side.

    Similarly, marketing business people do not have much exposure to the technical side — what’s possible to do with data, how much effort it takes, what’s relevant versus not relevant, and how to prioritize which data sources will be most important.

    10. Do you have any suggestions for how this can be overcome, or have you seen it in action where it has been solved before?
    First, cross-functional training: training different roles to help them understand why we’re doing this and what the business goals are, giving technical people exposure to what marketing and sales teams do.
    And giving business folks exposure to the technology side through training on different tools, strategies, and the roadmap of data integrations.
    The second is helping teams work more collaboratively. So it’s not like the technology team works in a silo and comes back when their work is done, and then marketing and sales teams act upon it.

    Now we’re making it more like one team. You work together so that you can complement each other, and we have a better strategy from day one.

    11. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations?
    We present clear business cases where we demonstrate how data-driven recommendations can directly align with business objectives and potential ROI.
    We build compelling visualizations, easy-to-understand charts and graphs that clearly illustrate the insights and the implications for business goals.

    We also do a lot of POCs and pilot projects with small-scale implementations to showcase tangible results and build confidence in the data-driven approach throughout the organization.

    12. What technologies or tools have you found most effective for gathering and analyzing customer engagement data?
    I’ve found that Customer Data Platforms help us unify customer data from various sources, providing a comprehensive view of customer interactions across touch points.
    Having advanced analytics platforms — tools with AI and machine learning capabilities that can process large volumes of data and uncover complex patterns and insights — is a great value to us.
    We always use, or many organizations use, marketing automation systems to improve marketing team productivity, helping us track and analyze customer interactions across multiple channels.
    Another thing is social media listening tools, wherever your brand is mentioned or you want to measure customer sentiment over social media, or track the engagement of your campaigns across social media platforms.

    Last is web analytics tools, which provide detailed insights into your website visitors’ behaviors and engagement metrics across browsers, various devices, and mobile apps.

    13. How do you ensure data quality and consistency across multiple channels to make these informed decisions?
    We established clear guidelines for data collection, storage, and usage across all channels to maintain consistency. Then we use data integration platforms — tools that consolidate data from various sources into a single unified view, reducing discrepancies and inconsistencies.
    As we collect data from different sources, we clean it so it becomes cleaner at every stage of processing.
    We also conduct regular data audits — performing periodic checks to identify and rectify data quality issues, ensuring accuracy and reliability of information. We also deploy standardized data formats.

    On top of that, we have various automated data cleansing tools, specific software to detect and correct data errors, redundancies, duplicates, and inconsistencies in data sets automatically.
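    An illustrative audit pass along those lines, combining standardized formats, duplicate removal, and a simple staleness check; the field names and the 90-day cutoff are assumptions for the example, not tools named in the interview.

```python
# Illustrative audit pass: standardize formats, drop duplicates, flag
# unparseable timestamps, and remove stale records. Field names and the
# 90-day cutoff are assumptions for the example.

import pandas as pd


def audit_engagement_data(df: pd.DataFrame, max_age_days: int = 90) -> pd.DataFrame:
    """Clean a unified engagement table and report basic quality issues."""
    # Standardize formats before any comparison or deduplication.
    df = df.copy()
    df["email"] = df["email"].str.strip().str.lower()
    df["event_time"] = pd.to_datetime(df["event_time"], utc=True, errors="coerce")

    before = len(df)
    # Remove exact duplicates of the same event.
    df = df.drop_duplicates(subset=["email", "event_time", "channel"])
    invalid_ts = int(df["event_time"].isna().sum())

    # Drop stale (or unparseable-date) records no longer relevant.
    cutoff = pd.Timestamp.now(tz="UTC") - pd.Timedelta(days=max_age_days)
    df = df[df["event_time"] >= cutoff]

    print(f"Removed {before - len(df)} duplicate, stale, or invalid rows; "
          f"{invalid_ts} of them had unparseable timestamps.")
    return df
```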

    14. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years?
    The first thing that’s been the biggest trend from the past two years is AI-driven decision making, which I think will become more prevalent, with advanced algorithms processing vast amounts of engagement data in real-time to inform strategic choices.
    Somewhat related to this is predictive analytics, which will play an even larger role, enabling businesses to anticipate customer needs and market trends with more accuracy and better predictive capabilities.
    We also touched upon hyper-personalization. We are all trying to strive toward more hyper-personalization at scale, which is more one-on-one personalization, as we are increasingly capturing more engagement data and have bigger systems and infrastructure to support processing those large volumes of data so we can achieve those hyper-personalization use cases.
    As the world is collecting more data, privacy concerns and regulations come into play.
    I believe in the next few years there will be more innovation toward how businesses can collect data ethically and what the usage practices are, leading to more transparent and consent-based engagement data strategies.
    And lastly, I think about the integration of engagement data, which is always a big challenge. I believe as we’re solving those integration challenges, we are adding more and more complex data sources to the picture.

    So I think there will need to be more innovation or sophistication brought into data integration strategies, which will help us take a truly customer-centric approach to strategy formulation.

     
    This interview Q&A was hosted with Ankur Kothari, a previous Martech Executive, for Chapter 6 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
    The post Ankur Kothari Q&A: Customer Engagement Book Interview appeared first on MoEngage.
    #ankur #kothari #qampampa #customer #engagement
    Ankur Kothari Q&A: Customer Engagement Book Interview
    Reading Time: 9 minutes In marketing, data isn’t a buzzword. It’s the lifeblood of all successful campaigns. But are you truly harnessing its power, or are you drowning in a sea of information? To answer this question, we sat down with Ankur Kothari, a seasoned Martech expert, to dive deep into this crucial topic. This interview, originally conducted for Chapter 6 of “The Customer Engagement Book: Adapt or Die” explores how businesses can translate raw data into actionable insights that drive real results. Ankur shares his wealth of knowledge on identifying valuable customer engagement data, distinguishing between signal and noise, and ultimately, shaping real-time strategies that keep companies ahead of the curve.   Ankur Kothari Q&A Interview 1. What types of customer engagement data are most valuable for making strategic business decisions? Primarily, there are four different buckets of customer engagement data. I would begin with behavioral data, encompassing website interaction, purchase history, and other app usage patterns. Second would be demographic information: age, location, income, and other relevant personal characteristics. Third would be sentiment analysis, where we derive information from social media interaction, customer feedback, or other customer reviews. Fourth would be the customer journey data. We track touchpoints across various channels of the customers to understand the customer journey path and conversion. Combining these four primary sources helps us understand the engagement data. 2. How do you distinguish between data that is actionable versus data that is just noise? First is keeping relevant to your business objectives, making actionable data that directly relates to your specific goals or KPIs, and then taking help from statistical significance. Actionable data shows clear patterns or trends that are statistically valid, whereas other data consists of random fluctuations or outliers, which may not be what you are interested in. You also want to make sure that there is consistency across sources. Actionable insights are typically corroborated by multiple data points or channels, while other data or noise can be more isolated and contradictory. Actionable data suggests clear opportunities for improvement or decision making, whereas noise does not lead to meaningful actions or changes in strategy. By applying these criteria, I can effectively filter out the noise and focus on data that delivers or drives valuable business decisions. 3. How can customer engagement data be used to identify and prioritize new business opportunities? First, it helps us to uncover unmet needs. By analyzing the customer feedback, touch points, support interactions, or usage patterns, we can identify the gaps in our current offerings or areas where customers are experiencing pain points. Second would be identifying emerging needs. Monitoring changes in customer behavior or preferences over time can reveal new market trends or shifts in demand, allowing my company to adapt their products or services accordingly. Third would be segmentation analysis. Detailed customer data analysis enables us to identify unserved or underserved segments or niche markets that may represent untapped opportunities for growth or expansion into newer areas and new geographies. Last is to build competitive differentiation. Engagement data can highlight where our companies outperform competitors, helping us to prioritize opportunities that leverage existing strengths and unique selling propositions. 4. 
Can you share an example of where data insights directly influenced a critical decision? I will share an example from my previous organization at one of the financial services where we were very data-driven, which made a major impact on our critical decision regarding our credit card offerings. We analyzed the customer engagement data, and we discovered that a large segment of our millennial customers were underutilizing our traditional credit cards but showed high engagement with mobile payment platforms. That insight led us to develop and launch our first digital credit card product with enhanced mobile features and rewards tailored to the millennial spending habits. Since we had access to a lot of transactional data as well, we were able to build a financial product which met that specific segment’s needs. That data-driven decision resulted in a 40% increase in our new credit card applications from this demographic within the first quarter of the launch. Subsequently, our market share improved in that specific segment, which was very crucial. 5. Are there any other examples of ways that you see customer engagement data being able to shape marketing strategy in real time? When it comes to using the engagement data in real-time, we do quite a few things. In the recent past two, three years, we are using that for dynamic content personalization, adjusting the website content, email messaging, or ad creative based on real-time user behavior and preferences. We automate campaign optimization using specific AI-driven tools to continuously analyze performance metrics and automatically reallocate the budget to top-performing channels or ad segments. Then we also build responsive social media engagement platforms like monitoring social media sentiments and trending topics to quickly adapt the messaging and create timely and relevant content. With one-on-one personalization, we do a lot of A/B testing as part of the overall rapid testing and market elements like subject lines, CTAs, and building various successful variants of the campaigns. 6. How are you doing the 1:1 personalization? We have advanced CDP systems, and we are tracking each customer’s behavior in real-time. So the moment they move to different channels, we know what the context is, what the relevance is, and the recent interaction points, so we can cater the right offer. So for example, if you looked at a certain offer on the website and you came from Google, and then the next day you walk into an in-person interaction, our agent will already know that you were looking at that offer. That gives our customer or potential customer more one-to-one personalization instead of just segment-based or bulk interaction kind of experience. We have a huge team of data scientists, data analysts, and AI model creators who help us to analyze big volumes of data and bring the right insights to our marketing and sales team so that they can provide the right experience to our customers. 7. What role does customer engagement data play in influencing cross-functional decisions, such as with product development, sales, and customer service? Primarily with product development — we have different products, not just the financial products or products whichever organizations sell, but also various products like mobile apps or websites they use for transactions. So that kind of product development gets improved. 
The engagement data helps our sales and marketing teams create more targeted campaigns, optimize channel selection, and refine messaging to resonate with specific customer segments. Customer service also benefits by anticipating common issues, personalizing support interactions over the phone or email or chat, and proactively addressing potential problems, leading to improved customer satisfaction and retention. So in general, cross-functional application of engagement data improves the customer-centric approach throughout the organization.

8. What do you think are some of the main challenges marketers face when trying to translate customer engagement data into actionable business insights?
I think the huge amount of data we are dealing with. As we are getting more digitally savvy and most of the customers are moving to digital channels, we are getting a lot of data, and that sheer volume of data can be overwhelming, making it very difficult to identify truly meaningful patterns and insights. Because of the huge data overload, we create data silos in this process, so information often exists in separate systems across different departments. We are not able to build a holistic view of customer engagement. Because of data silos and overload of data, data quality issues appear. There is inconsistency, and inaccurate data can lead to incorrect insights or poor decision-making. Quality issues could also be due to the wrong format of the data, or the data is stale and no longer relevant. As we are growing and adding more people to help us understand customer engagement, I’ve also noticed that technical folks, especially data scientists and data analysts, lack skills to properly interpret the data or apply data insights effectively. So there’s a lack of understanding of marketing and sales as domains. It’s a huge effort and can take a lot of investment. Not being able to calculate the ROI of your overall investment is a big challenge that many organizations are facing.

9. Why do you think the analysts don’t have the business acumen to properly do more than analyze the data?
If people do not have the right idea of why we are collecting this data, we collect a lot of noise, and that brings in huge volumes of data. If you cannot stop that from step one—not bringing noise into the data system—that cannot be done by just technical folks or people who do not have business knowledge. Business people do not know everything about what data is being collected from which source and what data they need. It’s a gap between business domain knowledge, specifically marketing and sales needs, and technical folks who don’t have a lot of exposure to that side. Similarly, marketing business people do not have much exposure to the technical side — what’s possible to do with data, how much effort it takes, what’s relevant versus not relevant, and how to prioritize which data sources will be most important.

10. Do you have any suggestions for how this can be overcome, or have you seen it in action where it has been solved before?
First, cross-functional training: training different roles to help them understand why we’re doing this and what the business goals are, giving technical people exposure to what marketing and sales teams do. And giving business folks exposure to the technology side through training on different tools, strategies, and the roadmap of data integrations. The second is helping teams work more collaboratively. So it’s not like the technology team works in a silo and comes back when their work is done, and then marketing and sales teams act upon it. Now we’re making it more like one team. You work together so that you can complement each other, and we have a better strategy from day one.

11. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations?
We present clear business cases where we demonstrate how data-driven recommendations can directly align with business objectives and potential ROI. We build compelling visualizations, easy-to-understand charts and graphs that clearly illustrate the insights and the implications for business goals. We also do a lot of POCs and pilot projects with small-scale implementations to showcase tangible results and build confidence in the data-driven approach throughout the organization.

12. What technologies or tools have you found most effective for gathering and analyzing customer engagement data?
I’ve found that Customer Data Platforms help us unify customer data from various sources, providing a comprehensive view of customer interactions across touch points. Having advanced analytics platforms — tools with AI and machine learning capabilities that can process large volumes of data and uncover complex patterns and insights — is a great value to us. We always use, or many organizations use, marketing automation systems to improve marketing team productivity, helping us track and analyze customer interactions across multiple channels. Another thing is social media listening tools, for wherever your brand is mentioned or you want to measure customer sentiment over social media, or track the engagement of your campaigns across social media platforms. Last is web analytics tools, which provide detailed insights into your website visitors’ behaviors and engagement metrics across browsers, devices, and mobile apps.

13. How do you ensure data quality and consistency across multiple channels to make these informed decisions?
We established clear guidelines for data collection, storage, and usage across all channels to maintain consistency. Then we use data integration platforms — tools that consolidate data from various sources into a single unified view, reducing discrepancies and inconsistencies. While we collect data from different sources, we clean the data so it becomes cleaner with every stage of processing. We also conduct regular data audits — performing periodic checks to identify and rectify data quality issues, ensuring accuracy and reliability of information. We also deploy standardized data formats. On top of that, we have various automated data cleansing tools, specific software to detect and correct data errors, redundancies, duplicates, and inconsistencies in data sets automatically.

14. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years?
The first thing that’s been the biggest trend from the past two years is AI-driven decision making, which I think will become more prevalent, with advanced algorithms processing vast amounts of engagement data in real time to inform strategic choices. Somewhat related to this is predictive analytics, which will play an even larger role, enabling businesses to anticipate customer needs and market trends with more accuracy and better predictive capabilities. We also touched upon hyper-personalization. We are all trying to strive toward more hyper-personalization at scale, which is more one-on-one personalization, as we are increasingly capturing more engagement data and have bigger systems and infrastructure to support processing those large volumes of data so we can achieve those hyper-personalization use cases. As the world is collecting more data, privacy concerns and regulations come into play. I believe in the next few years there will be more innovation toward how businesses can collect data ethically and what the usage practices are, leading to more transparent and consent-based engagement data strategies. And lastly, I think about the integration of engagement data, which is always a big challenge. I believe as we’re solving those integration challenges, we are adding more and more complex data sources to the picture. So I think there will need to be more innovation or sophistication brought into data integration strategies, which will help us take a truly customer-centric approach to strategy formulation.

This interview Q&A was hosted with Ankur Kothari, a former Martech executive, for Chapter 6 of The Customer Engagement Book: Adapt or Die. Download the PDF or request a physical copy of the book here. The post Ankur Kothari Q&A: Customer Engagement Book Interview appeared first on MoEngage.
  • How to optimize your hybrid waterfall with CPM buckets

    In-app bidding has automated most waterfall optimization, yet developers still manage multiple hybrid waterfalls, each with dozens of manual instances. Naturally, this can be time-consuming and overwhelming to maintain, keeping you from optimizing to perfection and focusing on other opportunities to boost revenue. Rather than analyzing each individual network and checking if instances are available at each price point, breaking down your waterfall into different CPM ranges allows you to visualize the waterfall and easily identify the gaps. Here are some tips on how to use CPM buckets to better optimize your waterfall’s performance.
    What are CPM buckets?
    CPM buckets show you exactly how much revenue and how many impressions you’re getting from each CPM price range, giving you a more granular idea of how different networks are competing in the waterfall. CPM buckets are a feature of real-time pivot reports, available on ironSource LevelPlay.
    Identifying and closing the gaps
    Typically in a waterfall, you can only see each ad network’s average CPM. But this keeps you from seeing ad network distribution across all price points and understanding exactly where ad networks are bidding. Bottom line - you don’t know where in the waterfall you should add a new instance. By separating CPM into buckets (for example, seeing all the ad networks generating a CPM of $10-$20), you understand exactly which networks are driving impressions and revenue and which CPMs aren’t being filled.
    Now how do you do it? As a LevelPlay client, simply use ironSource’s real-time pivot reports - choose the CPM bucket filter option and sort by “average bid price.” From here, you’ll see how your revenue spreads out among CPM ranges and you’ll start to notice gaps in your bar graph. Every gap in revenue - where revenue is much lower than the neighboring CPM group - indicates an opportunity to optimize your monetization strategy. The buckets can range from small increments like $1 to larger increments like $10, so it’s important to compare CPM buckets of the same incremental value.
    Pro tip: To best set up your waterfall, create one tab with the general waterfall (filter app, OS, Ad unit, geo/geos from a specific group) and make sure to look at Revenue and eCPM in the “measures” dropdown. In the “show” section, choose CPM buckets and sort by average bid price. From here, you can mark down any gaps.
    But where do these gaps come from? Gaps in revenue are often due to friction in the waterfall, like not enough instances, instances that aren’t working, or a waterfall setup mistake. But gaps can also be adjusted and fixed. Once you’ve found a gap, you can look at the CPM buckets around it to better understand the context. Let’s say you see a strong instance generating significant revenue in the CPM bucket right below it, in the $70-80 group. This instance from this specific ad network has a lot of potential, so it’s worth trying to push it to a higher CPM bucket. In fact, when you look at higher CPM buckets, you don’t see this ad network anywhere else in the waterfall - what a missed opportunity! Try adding another instance of this network higher up in the waterfall. If you’re profiting well with a $70-80 CPM, imagine how much more revenue you could bring at a $150 CPM.
    Pro tip: Focusing on higher areas in the waterfall makes a larger financial impact, leading to bigger increases in ARPDAU.
    Let’s say you decide to add 5 instances of that network to higher CPM buckets. You can use LevelPlay’s quick A/B test to understand if this adjustment boosts your revenue - not just for this gap, but for any and all that you find. Simply compare your existing waterfall against the new waterfall with these 5 higher instances - then implement the one that drives the highest revenue.
    Božo Janković, Head of Ad Monetization at GameBiz Consulting, uses CPM buckets "to understand at which CPMs the bidding networks are filling. From there, I can pinpoint exactly where in the waterfall to add more traditional instances - which creates more competition, especially for the bidding networks, and creates an opportunity for revenue growth."
    Finding new insights
    You can dig even deeper into your data by filtering by ad source. Before CPM buckets, you were limited to seeing an average eCPM for each bidding network. Maybe you knew that one ad source had an average CPM of $50, but the distribution of impressions across the waterfall was a black box. Now, we know exactly which CPMs the bidders are filling. “I find the ironSource CPM buckets feature very insightful and use it daily. It’s an easy way to identify opportunities to optimize the waterfall and earn even more revenue.”

    -Božo Janković, Head of Ad Monetization at GameBiz Consulting
    Understanding your CPM distribution empowers you to not only identify your revenue sources, but also to promote revenue growth. Armed with the knowledge of which buckets some of their stronger bidding networks are performing in, some publishers actively add instances from traditional networks above those ranges. This creates better competition and also helps drive up the bids from the bidders. There’s no need for deep analysis - once you see the gaps, you can quickly understand who’s performing in the lower and higher buckets, and see exactly what’s missing. This way, you won’t miss out on any revenue. Learn more about CPM buckets, available exclusively to ironSource LevelPlay, here.
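    To make the bucketing arithmetic above concrete, here is a minimal sketch, assuming an exported list of impression records: it groups each impression into a fixed-width CPM range and totals revenue per range, so a bucket whose total sits far below its neighbors stands out as a gap. This is illustrative only, not LevelPlay code, and the `Impression` struct and its field names are hypothetical.

```rust
use std::collections::BTreeMap;

// Hypothetical impression record as it might appear in an exported report
// (not LevelPlay's schema).
struct Impression {
    ecpm: f64,    // effective CPM of the impression, in USD
    revenue: f64, // revenue attributed to the impression, in USD
}

/// Groups impressions into fixed-width CPM buckets and sums revenue per bucket,
/// so that buckets with unusually low totals stand out as waterfall gaps.
fn revenue_by_cpm_bucket(impressions: &[Impression], bucket_width: f64) -> BTreeMap<u64, f64> {
    let mut buckets: BTreeMap<u64, f64> = BTreeMap::new();
    for imp in impressions {
        // A $74.50 CPM with a $10 bucket width lands in bucket 7, i.e. the $70-$80 range.
        let idx = (imp.ecpm / bucket_width).floor() as u64;
        *buckets.entry(idx).or_insert(0.0) += imp.revenue;
    }
    buckets
}

fn main() {
    let data = vec![
        Impression { ecpm: 12.0, revenue: 30.0 },
        Impression { ecpm: 74.5, revenue: 95.0 },
        Impression { ecpm: 151.0, revenue: 12.0 },
    ];
    // BTreeMap keeps buckets sorted by CPM range, so gaps are easy to scan for.
    for (idx, revenue) in revenue_by_cpm_bucket(&data, 10.0) {
        let low = idx as f64 * 10.0;
        println!("${:.0}-${:.0}: ${:.2}", low, low + 10.0, revenue);
    }
}
```

    In practice you would read the same totals straight out of the pivot report; the point is only that equal-width buckets are what make comparisons between neighboring ranges meaningful.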
  • Alec Haase Q&A: Customer Engagement Book Interview

    Reading Time: 6 minutes
    What is marketing without data? Assumptions. Guesses. Fluff.
    For Chapter 6 of our book, “The Customer Engagement Book: Adapt or Die,” we spoke with Alec Haase, Product GTM Lead, Commerce and AI at Hightouch, to explore how engagement data can truly inform critical business decisions. 
    Alec discusses the different types of customer behaviors that matter most, how to separate meaningful information from the rest, and the role of systems that learn over time to create tailored customer experiences.
    This interview provides insights into using data for real-time actions and shaping the future of marketing. Prepare to learn about AI decision-making and how a focus on data is changing how we engage with customers.

     
    Alec Haase Q&A Interview
    1. What types of customer engagement data are most valuable for making strategic business decisions?
    It’s a culmination of everything.
    Behavioral signals — the actual conversions and micro-conversions that users take within your product or website.
    Obviously, that’s things like purchases. But there are also other behavioral signals marketers should be using and thinking about. Things like micro-conversions — maybe that’s shopping for a product, clicking to learn more about a product, or visiting a certain page on your website.
    Behind that, you also need to have all your user data to tie that to.

    So I know someone took said action; I can follow up with them in email or out on paid social. I need the user identifiers to do that.

    2. How do you distinguish between data that is actionable versus data that is just noise?
    Data that’s actionable includes the conversions and micro-conversions — very clear instances of “someone did this.” I can react to or measure those.
    What’s becoming a bit of a challenge for marketers is understanding that there’s other data that is valuable for machine learning or reinforcement learning models, things like tags on the types of products customers are interacting with.
    Maybe there’s category information about that product, or color information. That would otherwise look like noise to the average marketer. But behind the scenes, it can be used for reinforcement learning.

    There is definitely the “clear-cut” actionable data, but marketers shouldn’t be quick to classify things as noise because the rise in machine learning and reinforcement learning will make that data more valuable.

    3. How can customer engagement data be used to identify and prioritize new business opportunities?
    At Hightouch, we don’t necessarily think about retroactive analysis. We have a system where customer engagement data fires in and real-time scores react to it.
    An interesting example is when you have machine learning and reinforcement learning models running. In the pet retailer example I gave you, the system is able to figure out what to prioritize.
    The concept of reinforcement learning is not a marketer making rules to say, “I know this type of thing works well on this type of audience.”

    It’s the machine itself using the data to determine what attribute responds well to which offer, recommendation, or marketing campaign.

    4. How can marketers ensure their use of customer engagement data aligns with the broader business objectives?
    It starts with the objectives. It’s starting with the desired outcome and working your way back. That whole flip of the paradigm is starting with outcomes and letting the system optimize. What are you trying to drive, and then back into the types of experiences that can make that happen?
    There’s personalization.
    When we talk about data-driven experiences and personalization, Spotify Wrapped is the North Star. For Spotify Wrapped, you want to drive customer stickiness and create a brand. To make that happen, you want to send a personalized email. What components do you want in that email?

    Maybe it’s top five songs, top five artists, and then you can back into the actual event data you need to make that happen.

    5. What role does engagement data play in influencing cross-functional decisions such as those in product development, sales, or customer service?
    For product development, it’s product analytics — knowing what features users are using, or seeing in heat maps where users are clicking.
    Sales is similar. We’re using behavioral signals like what types of content they’re reading on the site to help inform what they would be interested in — the types of products or the types of use cases.

    For customer service, you can look at errors they’ve run into in the past or specific purchases they’ve made, so that when you’re helping them the next time they engage with you, you know exactly what their past behaviors were and what products they could be calling about.

    6. What are some challenges marketers face when trying to translate customer engagement data into actionable insights?
    Access to data is one challenge. You might not know what data you have because marketers historically may not have been used to the systems where data is stored.
    Historically, that’s been pretty siloed away from them. Rich behavioral data and other data across the business was stored somewhere else.
    Now, as more companies embrace the data warehouse at the center of their business, it gives everyone a true single place where data can be stored.

    Marketers are working more with data teams, understanding more about the data they have, and using that data to power downstream use cases, personalization, reinforcement learning, or general business insights.

    7. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations?
    As a marketer, I think proof is key. The best thing is if you’ve actually run a test. “I think we should do this. I ran a small test, and it’s showing that this is actually proving out.” Being able to clearly explain and justify your reasoning with data is super important.

    8. What technology or tools have you found most effective for gathering and analyzing customer engagement data?
    Any type of behavioral event collection, specifically ones that write to the cloud data warehouse, is the critical component. Your data team is operating off the data warehouse.
    Having an event collection product that stores data in that central spot is really important if you want to use the other data when making recommendations.
    You want to get everything into the data warehouse where it can be used both for insights and for putting into action.

    For Spotify Wrapped, you want to collect behavioral event signals like songs listened to or concerts attended, writing to the warehouse so that you can get insights back — how many songs were played this year, projections for next month — but then you can also use those behavioral events in downstream platforms to fire off personalized emails with product recommendations or Spotify Wrapped-style experiences.

    9. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years?

    What we’re excited about is the concept of AI Decisioning — having AI agents actually using customer data to train their own models and decision-making to create personalized experiences.
    We’re sitting on top of all this behavioral data, engagement data, and user attributes, and our system is learning from all of that to make the best decisions across downstream systems.
    Whether that’s as simple as driving a loyalty program and figuring out what emails to send or what on-site experiences to show, or exposing insights that might lead you to completely change your business strategy, we see engagement data as the fuel to the engine of reinforcement learning, machine learning, AI agents, this whole next wave of Martech that’s just now coming.
    But it all starts with having the data to train those systems.

    I think that behavioral data is the fuel of modern Martech, and that only holds more true as Martech platforms adopt these decisioning and AI capabilities, because they’re only as good as the data that’s training the models.

     

     
    This interview Q&A was hosted with Alec Haase, Product GTM Lead, Commerce and AI at Hightouch, for Chapter 6 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
    The post Alec Haase Q&A: Customer Engagement Book Interview appeared first on MoEngage.
    #alec #haase #qampampa #customer #engagement
    Alec Haase Q&A: Customer Engagement Book Interview
    Reading Time: 6 minutes What is marketing without data? Assumptions. Guesses. Fluff. For Chapter 6 of our book, “The Customer Engagement Book: Adapt or Die,” we spoke with Alec Haase, Product GTM Lead, Commerce and AI at Hightouch, to explore how engagement data can truly inform critical business decisions.  Alec discusses the different types of customer behaviors that matter most, how to separate meaningful information from the rest, and the role of systems that learn over time to create tailored customer experiences. This interview provides insights into using data for real-time actions and shaping the future of marketing. Prepare to learn about AI decision-making and how a focus on data is changing how we engage with customers.   Alec Haase Q&A Interview 1. What types of customer engagement data are most valuable for making strategic business decisions? It’s a culmination of everything. Behavioral signals — the actual conversions and micro-conversions that users take within your product or website. Obviously, that’s things like purchases. But there are also other behavioral signals marketers should be using and thinking about. Things like micro-conversions — maybe that’s shopping for a product, clicking to learn more about a product, or visiting a certain page on your website. Behind that, you also need to have all your user data to tie that to. So I know someone took said action; I can follow up with them in email or out on paid social. I need the user identifiers to do that. 2. How do you distinguish between data that is actionable versus data that is just noise? Data that’s actionable includes the conversions and micro-conversions — very clear instances of “someone did this.” I can react to or measure those. What’s becoming a bit of a challenge for marketers is understanding that there’s other data that is valuable for machine learning or reinforcement learning models, things like tags on the types of products customers are interacting with. Maybe there’s category information about that product, or color information. That would otherwise look like noise to the average marketer. But behind the scenes, it can be used for reinforcement learning. There is definitely the “clear-cut” actionable data, but marketers shouldn’t be quick to classify things as noise because the rise in machine learning and reinforcement learning will make that data more valuable. 3. How can customer engagement data be used to identify and prioritize new business opportunities? At Hightouch, we don’t necessarily think about retroactive analysis. We have a system where we have customer engagement data firing in that we then have real-time scores reacting to. An interesting example is when you have machine learning and reinforcement learning models running. In the pet retailer example I gave you, the system is able to figure out what to prioritize. The concept of reinforcement learning is not a marketer making rules to say, “I know this type of thing works well on this type of audience.” It’s the machine itself using the data to determine what attribute responds well to which offer, recommendation, or marketing campaign. 4. How can marketers ensure their use of customer engagement data aligns with the broader business objectives? It starts with the objectives. It’s starting with the desired outcome and working your way back. That whole flip of the paradigm is starting with outcomes and letting the system optimize. 
What are you trying to drive, and then back into the types of experiences that can make that happen? There’s personalization. When we talk about data-driven experiences and personalization, Spotify Wrapped is the North Star. For Spotify Wrapped, you want to drive customer stickiness and create a brand. To make that happen, you want to send a personalized email. What components do you want in that email? Maybe it’s top five songs, top five artists, and then you can back into the actual event data you need to make that happen. 5. What role does engagement data play in influencing cross-functional decisions such as those in product development, sales, or customer service? For product development, it’s product analytics — knowing what features users are using, or seeing in heat maps where users are clicking. Sales is similar. We’re using behavioral signals like what types of content they’re reading on the site to help inform what they would be interested in — the types of products or the types of use cases. For customer service, you can look at errors they’ve run into in the past or specific purchases they’ve made, so that when you’re helping them the next time they engage with you, you know exactly what their past behaviors were and what products they could be calling about. 6. What are some challenges marketers face when trying to translate customer engagement data into actionable insights? Access to data is one challenge. You might not know what data you have because marketers historically may not have been used to the systems where data is stored. Historically, that’s been pretty siloed away from them. Rich behavioral data and other data across the business was stored somewhere else. Now, as more companies embrace the data warehouse at the center of their business, it gives everyone a true single place where data can be stored. Marketers are working more with data teams, understanding more about the data they have, and using that data to power downstream use cases, personalization, reinforcement learning, or general business insights. 7. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations? As a marketer, I think proof is key. The best thing is if you’ve actually run a test. “I think we should do this. I ran a small test, and it’s showing that this is actually proving out.” Being able to clearly explain and justify your reasoning with data is super important. 8. What technology or tools have you found most effective for gathering and analyzing customer engagement data? Any type of behavioral event collection, specifically ones that write to the cloud data warehouse, is the critical component. Your data team is operating off the data warehouse. Having an event collection product that stores data in that central spot is really important if you want to use the other data when making recommendations. You want to get everything into the data warehouse where it can be used both for insights and for putting into action. For Spotify Wrapped, you want to collect behavioral event signals like songs listened to or concerts attended, writing to the warehouse so that you can get insights back — how many songs were played this year, projections for next month — but then you can also use those behavioral events in downstream platforms to fire off personalized emails with product recommendations or Spotify Wrapped-style experiences. 9. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years? 
What we’re excited about is the concept of AI Decisioning — having AI agents actually using customer data to train their own models and decision-making to create personalized experiences. We’re sitting on top of all this behavioral data, engagement data, and user attributes, and our system is learning from all of that to make the best decisions across downstream systems. Whether that’s as simple as driving a loyalty program and figuring out what emails to send or what on-site experiences to show, or exposing insights that might lead you to completely change your business strategy, we see engagement data as the fuel to the engine of reinforcement learning, machine learning, AI agents, this whole next wave of Martech that’s just now coming. But it all starts with having the data to train those systems. I think that behavioral data is the fuel of modern Martech, and that only holds more true as Martech platforms adopt these decisioning and AI capabilities, because they’re only as good as the data that’s training the models.     This interview Q&A was hosted with Alec Haase, Product GTM Lead, Commerce and AI at Hightouch, for Chapter 6 of The Customer Engagement Book: Adapt or Die. Download the PDF or request a physical copy of the book here. The post Alec Haase Q&A: Customer Engagement Book Interview appeared first on MoEngage. #alec #haase #qampampa #customer #engagement
    WWW.MOENGAGE.COM
    Alec Haase Q&A: Customer Engagement Book Interview
    Reading Time: 6 minutes
    What is marketing without data? Assumptions. Guesses. Fluff.
    For Chapter 6 of our book, “The Customer Engagement Book: Adapt or Die,” we spoke with Alec Haase, Product GTM Lead, Commerce and AI at Hightouch, to explore how engagement data can truly inform critical business decisions.
    Alec discusses the different types of customer behaviors that matter most, how to separate meaningful information from the rest, and the role of systems that learn over time to create tailored customer experiences.
    This interview provides insights into using data for real-time actions and shaping the future of marketing. Prepare to learn about AI decision-making and how a focus on data is changing how we engage with customers.

    Alec Haase Q&A Interview
    1. What types of customer engagement data are most valuable for making strategic business decisions?
    It’s a culmination of everything. Behavioral signals — the actual conversions and micro-conversions that users take within your product or website. Obviously, that’s things like purchases. But there are also other behavioral signals marketers should be using and thinking about. Things like micro-conversions — maybe that’s shopping for a product, clicking to learn more about a product, or visiting a certain page on your website.
    Behind that, you also need to have all your user data to tie that to. So I know someone took said action; I can follow up with them in email or out on paid social. I need the user identifiers to do that.
    2. How do you distinguish between data that is actionable versus data that is just noise?
    Data that’s actionable includes the conversions and micro-conversions — very clear instances of “someone did this.” I can react to or measure those.
    What’s becoming a bit of a challenge for marketers is understanding that there’s other data that is valuable for machine learning or reinforcement learning models, things like tags on the types of products customers are interacting with. Maybe there’s category information about that product, or color information. That would otherwise look like noise to the average marketer. But behind the scenes, it can be used for reinforcement learning.
    There is definitely the “clear-cut” actionable data, but marketers shouldn’t be quick to classify things as noise, because the rise in machine learning and reinforcement learning will make that data more valuable.
    3. How can customer engagement data be used to identify and prioritize new business opportunities?
    At Hightouch, we don’t necessarily think about retroactive analysis. We have a system where we have customer engagement data firing in that we then have real-time scores reacting to.
    An interesting example is when you have machine learning and reinforcement learning models running. In the pet retailer example I gave you, the system is able to figure out what to prioritize.
    The concept of reinforcement learning is not a marketer making rules to say, “I know this type of thing works well on this type of audience.” It’s the machine itself using the data to determine what attribute responds well to which offer, recommendation, or marketing campaign.
    4. How can marketers ensure their use of customer engagement data aligns with the broader business objectives?
    It starts with the objectives. It’s starting with the desired outcome and working your way back. That whole flip of the paradigm is starting with outcomes and letting the system optimize.
    What are you trying to drive, and then back into the types of experiences that can make that happen?
    There’s personalization. When we talk about data-driven experiences and personalization, Spotify Wrapped is the North Star. For Spotify Wrapped, you want to drive customer stickiness and create a brand. To make that happen, you want to send a personalized email. What components do you want in that email? Maybe it’s top five songs, top five artists, and then you can back into the actual event data you need to make that happen.
    5. What role does engagement data play in influencing cross-functional decisions such as those in product development, sales, or customer service?
    For product development, it’s product analytics — knowing what features users are using, or seeing in heat maps where users are clicking.
    Sales is similar. We’re using behavioral signals like what types of content they’re reading on the site to help inform what they would be interested in — the types of products or the types of use cases.
    For customer service, you can look at errors they’ve run into in the past or specific purchases they’ve made, so that when you’re helping them the next time they engage with you, you know exactly what their past behaviors were and what products they could be calling about.
    6. What are some challenges marketers face when trying to translate customer engagement data into actionable insights?
    Access to data is one challenge. You might not know what data you have, because marketers historically may not have been used to the systems where data is stored. Historically, that’s been pretty siloed away from them. Rich behavioral data and other data across the business was stored somewhere else.
    Now, as more companies embrace the data warehouse at the center of their business, it gives everyone a true single place where data can be stored. Marketers are working more with data teams, understanding more about the data they have, and using that data to power downstream use cases, personalization, reinforcement learning, or general business insights.
    7. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations?
    As a marketer, I think proof is key. The best thing is if you’ve actually run a test. “I think we should do this. I ran a small test, and it’s showing that this is actually proving out.” Being able to clearly explain and justify your reasoning with data is super important.
    8. What technology or tools have you found most effective for gathering and analyzing customer engagement data?
    Any type of behavioral event collection, specifically ones that write to the cloud data warehouse, is the critical component. Your data team is operating off the data warehouse. Having an event collection product that stores data in that central spot is really important if you want to use the other data when making recommendations. You want to get everything into the data warehouse, where it can be used both for insights and for putting into action.
    For Spotify Wrapped, you want to collect behavioral event signals like songs listened to or concerts attended, writing to the warehouse so that you can get insights back — how many songs were played this year, projections for next month — but then you can also use those behavioral events in downstream platforms to fire off personalized emails with product recommendations or Spotify Wrapped-style experiences.
    9. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years?
    What we’re excited about is the concept of AI Decisioning — having AI agents actually using customer data to train their own models and decision-making to create personalized experiences. We’re sitting on top of all this behavioral data, engagement data, and user attributes, and our system is learning from all of that to make the best decisions across downstream systems.
    Whether that’s as simple as driving a loyalty program and figuring out what emails to send or what on-site experiences to show, or exposing insights that might lead you to completely change your business strategy, we see engagement data as the fuel to the engine of reinforcement learning, machine learning, AI agents, this whole next wave of Martech that’s just now coming. But it all starts with having the data to train those systems.
    I think that behavioral data is the fuel of modern Martech, and that only holds more true as Martech platforms adopt these decisioning and AI capabilities, because they’re only as good as the data that’s training the models.

    This interview Q&A was hosted with Alec Haase, Product GTM Lead, Commerce and AI at Hightouch, for Chapter 6 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
    The post Alec Haase Q&A: Customer Engagement Book Interview appeared first on MoEngage.
  • Rewriting SymCrypt in Rust to modernize Microsoft’s cryptographic library 

    Outdated coding practices and memory-unsafe languages like C are putting software, including cryptographic libraries, at risk. Fortunately, memory-safe languages like Rust, along with formal verification tools, are now mature enough to be used at scale, helping prevent issues like crashes, data corruption, flawed implementation, and side-channel attacks.
    To address these vulnerabilities and improve memory safety, we’re rewriting SymCrypt—Microsoft’s open-source cryptographic library—in Rust. We’re also incorporating formal verification methods. SymCrypt is used in Windows, Azure Linux, Xbox, and other platforms.
    Currently, SymCrypt is primarily written in cross-platform C, with limited use of hardware-specific optimizations through intrinsics (compiler-provided low-level functions) and assembly language (direct processor instructions). It provides a wide range of algorithms, including AES-GCM, SHA, ECDSA, and the more recent post-quantum algorithms ML-KEM and ML-DSA.
    Formal verification will confirm that implementations behave as intended and don’t deviate from algorithm specifications, critical for preventing attacks. We’ll also analyze compiled code to detect side-channel leaks caused by timing or hardware-level behavior.
    Proving Rust program properties with Aeneas
    Program verification is the process of proving that a piece of code will always satisfy a given property, no matter the input. Rust’s type system profoundly improves the prospects for program verification by providing strong ownership guarantees, by construction, using a discipline known as “aliasing xor mutability”.
    For example, reasoning about C code often requires proving that two non-const pointers are live and non-overlapping, a property that can depend on external client code. In contrast, Rust’s type system guarantees this property for any two mutably borrowed references.
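    To make that concrete, here is a generic Rust sketch (a hypothetical example for illustration, not SymCrypt code): a function that takes two mutable references can assume they never alias, because the borrow checker rejects any call that would pass overlapping borrows.

```rust
// Hypothetical example, unrelated to SymCrypt: because `a` and `b` are both
// mutable borrows, the compiler guarantees they never alias, so the body can
// reason about them independently (no `restrict`-style proof obligation as in C).
fn swap_blocks(a: &mut [u8; 4], b: &mut [u8; 4]) {
    std::mem::swap(a, b);
}

fn main() {
    let mut x = [1u8; 4];
    let mut y = [2u8; 4];
    swap_blocks(&mut x, &mut y); // fine: two distinct objects
    println!("{:?} {:?}", x, y); // [2, 2, 2, 2] [1, 1, 1, 1]

    // Passing the same array twice — the Rust analogue of overlapping C
    // pointers — is rejected at compile time:
    // swap_blocks(&mut x, &mut x);
    // error[E0499]: cannot borrow `x` as mutable more than once at a time
}
```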
    As a result, new tools have emerged specifically for verifying Rust code. We chose Aeneas because it helps provide a clean separation between code and proofs.
    Developed by Microsoft Azure Research in partnership with Inria, the French National Institute for Research in Digital Science and Technology, Aeneas connects to proof assistants like Lean, allowing us to draw on a large body of mathematical proofs—especially valuable given the mathematical nature of cryptographic algorithms—and benefit from Lean’s active user community.
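    As a toy illustration of what a machine-checked property looks like in Lean 4 (a hypothetical sketch, not an extract from the SymCrypt proofs; the function and theorem names are made up, and it assumes a recent toolchain where the `omega` tactic is available):

```lean
-- Toy Lean 4 sketch: verification means the property is proved for every
-- input, not just tested on a few.
def double (n : Nat) : Nat := n + n

-- A concrete test only covers one input...
example : double 21 = 42 := rfl

-- ...whereas a theorem covers all of them.
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega
```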
    Compiling Rust to C supports backward compatibility  
    We recognize that switching to Rust isn’t feasible for all use cases, so we’ll continue to support, extend, and certify C-based APIs as long as users need them. Users won’t see any changes, as Rust runs underneath the existing C APIs.
    Some users compile our C code directly and may rely on specific toolchains or compiler features that complicate the adoption of Rust code. To address this, we will use Eurydice, a Rust-to-C compiler developed by Microsoft Azure Research, to replace handwritten C code with C generated from formally verified Rust. Eurydice compiles directly from Rust’s MIR intermediate language, and the resulting C code will be checked into the SymCrypt repository alongside the original Rust source code.
    As more users adopt Rust, we’ll continue supporting this compilation path for those who build SymCrypt from source code but aren’t ready to use the Rust compiler. In the long term, we hope to transition users either to precompiled SymCrypt binaries (via C or Rust APIs) or to building from source in Rust, at which point the Rust-to-C compilation path will no longer be needed.

    Timing analysis with Revizor 
    Even software that has been verified for functional correctness can remain vulnerable to low-level security threats, such as side channels caused by timing leaks or speculative execution. These threats operate at the hardware level and can leak private information, such as memory load addresses, branch targets, or division operands, even when the source code is provably correct. 
    To address this, we’re extending Revizor, a tool developed by Microsoft Azure Research, to more effectively analyze SymCrypt binaries. Revizor models microarchitectural leakage and uses fuzzing techniques to systematically uncover instructions that may expose private information through known hardware-level effects.  
    Earlier cryptographic libraries relied on constant-time programming, which avoids secret-dependent branches and memory accesses so that execution time doesn’t reveal secret data. However, recent research has shown that this alone is insufficient on today’s CPUs, where every new optimization may open a new side channel.
    By analyzing binary code for specific compilers and platforms, our extended Revizor tool enables deeper scrutiny of vulnerabilities that aren’t visible in the source code.
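    For readers unfamiliar with the constant-time discipline mentioned above, the following generic Rust sketch (illustrative only, not SymCrypt’s implementation) contrasts a comparison that leaks timing information with one whose source-level work is independent of the secret data:

```rust
// Generic illustration of constant-time comparison (not SymCrypt code).

// Leaky: returns as soon as a mismatch is found, so the running time
// depends on where the secrets first differ.
fn eq_leaky(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    for i in 0..a.len() {
        if a[i] != b[i] {
            return false; // early exit: timing reveals the mismatch position
        }
    }
    true
}

// Constant-time at the source level: always scans every byte and folds the
// differences together with bitwise OR, so the work done does not depend on
// the secret contents.
fn eq_constant_time(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff: u8 = 0;
    for i in 0..a.len() {
        diff |= a[i] ^ b[i];
    }
    diff == 0
}

fn main() {
    let secret = b"correct horse battery";
    let guess = b"correct horse battery";
    assert!(eq_leaky(secret, guess));
    assert!(eq_constant_time(secret, guess));
}
```

    Even the second version can still leak once compiler and hardware optimizations come into play, which is exactly the gap that binary-level analysis with Revizor is meant to probe.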
    Verified Rust implementations begin with ML-KEM
    This long-term effort is in alignment with the Microsoft Secure Future Initiative and brings together experts across Microsoft, building on decades of Microsoft Research investment in program verification and security tooling.
    A preliminary version of ML-KEM in Rust is now available on the preview feature/verifiedcrypto branch of the SymCrypt repository. We encourage users to try the Rust build and share feedback. Looking ahead, we plan to support direct use of the same cryptographic library in Rust without requiring C bindings.
    Over the coming months, we plan to rewrite, verify, and ship several algorithms in Rust as part of SymCrypt. As our investment in Rust deepens, we expect to gain new insights into how to best leverage the language for high-assurance cryptographic implementations with low-level optimizations. 
    As performance is key to scalability and sustainability, we’re holding new implementations to a high bar, using our benchmarking tools to confirm that they match or exceed the performance of the existing implementations.
    Looking forward 
    This is a pivotal moment for high-assurance software. Microsoft’s investment in Rust and formal verification presents a rare opportunity to advance one of our key libraries. We’re excited to scale this work and ultimately deliver an industrial-grade, Rust-based, FIPS-certified cryptographic library.
  • Mirela Cialai Q&A: Customer Engagement Book Interview

    Reading Time: 9 minutes
    In the ever-evolving landscape of customer engagement, staying ahead of the curve is not just advantageous, it’s essential.
    That’s why, for Chapter 7 of “The Customer Engagement Book: Adapt or Die,” we sat down with Mirela Cialai, a seasoned expert in CRM and Martech strategies at brands like Equinox. Mirela brings a wealth of knowledge in aligning technology roadmaps with business goals, shifting organizational focuses from acquisition to retention, and leveraging hyper-personalization to drive success.
    In this interview, Mirela dives deep into building robust customer engagement technology roadmaps. She unveils the “PAPER” framework—Plan, Audit, Prioritize, Execute, Refine—a simple yet effective strategy for marketers.
    You’ll gain insights into identifying gaps in your Martech stack, ensuring data accuracy, and prioritizing initiatives that deliver the greatest impact and ROI.
    Whether you’re navigating data silos, striving for cross-functional alignment, or aiming for seamless tech integration, Mirela’s expertise provides practical solutions and actionable takeaways.

     
    Mirela Cialai Q&A Interview
    1. How do you define the vision for a customer engagement platform roadmap in alignment with the broader business goals? Can you share any examples of successful visions from your experience?

    Defining the vision for the roadmap in alignment with the broader business goals involves creating a strategic framework that connects the team’s objectives with the organization’s overarching mission or primary objectives.

    This could be revenue growth, customer retention, market expansion, or operational efficiency.
    We then break down these goals into actionable areas where the team can contribute, such as improving engagement, increasing lifetime value, or driving acquisition.
    We articulate how the team will support business goals by defining the KPIs that link CRM outcomes — the team’s outcomes — to business goals.
    In a previous role, the CRM team I was leading faced significant challenges due to the lack of attribution capabilities and a reliance on surface-level metrics such as open rates and click-through rates to measure performance.
    This approach made it difficult to quantify the impact of our efforts on broader business objectives such as revenue growth.
    Recognizing this gap, I worked on defining a vision for the CRM team to address these shortcomings.
    Our vision was to drive measurable growth through enhanced data accuracy and improved attribution capabilities, which allowed us to deliver targeted, data-driven, and personalized customer experiences.
    To bring this vision to life, I developed a roadmap that focused on first improving data accuracy, building our attribution capabilities, and delivering personalization at scale.

    By aligning the vision with these strategic priorities, we were able to demonstrate the tangible impact of our efforts on the key business goals.

    2. What steps did you take to ensure data accuracy?
    The data team was very diligent in ensuring that our data warehouse had accurate data.
    So taking that as the source of truth, we started cleaning the data in all the other platforms that were integrated with our data warehouse — our CRM platform, our attribution analytics platform, etc.

    That’s where we started, looking at all the different integrations and ensuring that the data flows were correct and that we had all the right flows in place. And also validating and cleaning our email database — that helped, having more accurate data.

    3. How do you recommend shifting organizational focus from acquisition to retention within a customer engagement strategy?
    Shifting an organization’s focus from acquisition to retention requires a cultural and strategic shift, emphasizing the immense value that existing customers bring to long-term growth and profitability.
    I would start by quantifying the value of retention, showcasing how retaining customers is significantly more cost-effective than acquiring new ones. Research consistently shows that increasing retention rates by just 5% can boost profits by 25% to 95%.
    This data helps make a compelling case to stakeholders about the importance of prioritizing retention.
    Next, I would link retention to core business goals by demonstrating how enhancing customer lifetime value and loyalty can directly drive revenue growth.
    This involves shifting the organization’s focus to retention-specific metrics such as churn rate, repeat purchase rate, and customer LTV. These metrics provide actionable insights into customer behaviors and highlight the financial impact of retention initiatives, ensuring alignment with the broader company objectives.

    By framing retention as a driver of sustainable growth, the organization can see it not as a competing priority, but as a complementary strategy to acquisition, ultimately leading to a more balanced and effective customer engagement strategy.

    4. What are the key steps in analyzing a brand’s current Martech stack capabilities to identify gaps and opportunities for improvement?
    Developing a clear understanding of the Martech stack’s current state and ensuring it aligns with a brand’s strategic needs and future goals requires a structured and strategic approach.
    The process begins with defining what success looks like in terms of technology capabilities such as scalability, integration, automation, and data accessibility, and linking these capabilities directly to the brand’s broader business objectives.
    I start by doing an inventory of all tools currently in use, including their purpose, owner, and key functionalities, assessing if these tools are being used to their full potential or if there are features that remain unused, and reviewing how well tools integrate with one another and with our core system, the data warehouse.
    Also, comparing the capabilities of each tool and results against industry standards and competitor practices and looking for missing functionalities such as personalization, omnichannel orchestration, or advanced analytics, and identifying overlapping tools that could be consolidated to save costs and streamline workflows.
    Finally, review the costs of the current tools against their impact on business outcomes and identify technologies that could reduce costs, increase efficiency, or deliver higher ROI through enhanced capabilities.

    Establish a regular review cycle for the Martech stack to ensure it evolves alongside the business and the technological landscape.

    5. How do you evaluate whether a company’s tech stack can support innovative customer-focused campaigns, and what red flags should marketers look out for?
    I recommend taking a structured approach and first ensure there is seamless integration across all tools to support a unified customer view and data sharing across the different channels.
    Determine if the stack can handle increasing data volumes, larger audiences, and additional channels as the campaigns grow. Check if it supports dynamic content, behavior-based triggers, and advanced segmentation, and whether it can process and act on data in real time through emerging technologies like AI/ML predictive analytics, enabling marketers to launch responsive and timely campaigns.
    Most importantly, we need to ensure that the stack offers robust reporting tools that provide actionable insights, allowing teams to track performance and optimize campaigns.
    Some of the red flags are: data silos, where customer data is fragmented across platforms and not easily accessible or integrated; an inability to process or respond to customer behavior in real time; a reliance on manual intervention for tasks like segmentation, data extraction, and campaign deployment; and poor scalability.

    If the stack struggles with growing data volumes or expanding to new channels, it won’t support the company’s evolving needs.

    6. What role do hyper-personalization and timely communication play in a successful customer engagement strategy? How do you ensure they’re built into the technology roadmap?
    Hyper-personalization and timely communication are essential components of a successful customer engagement strategy because they create meaningful, relevant, and impactful experiences that deepen the relationship with customers, enhance loyalty, and drive business outcomes.
    Hyper-personalization leverages data to deliver tailored content that resonates with each individual based on their preferences, behavior, or past interactions, and timely communication ensures these personalized interactions occur at the most relevant moments, which ultimately increases their impact.
    Customers are more likely to engage with messages that feel relevant and align with their needs, and real-time triggers such as cart abandonment or post-purchase upsells capitalize on moments when customers are most likely to convert.

    By embedding these capabilities into the roadmap through data integration, AI-driven insights, automation, and continuous optimization, we can deliver impactful, relevant, and timely experiences that foster deeper customer relationships and drive long-term success.

    7. What’s your approach to breaking down the customer engagement technology roadmap into manageable phases? How do you prioritize the initiatives?
    To create a manageable roadmap, we need to divide it into distinct phases, starting with building the foundation by addressing data cleanup, system integrations, and establishing metrics, which lays the groundwork for success.
    Next, we can focus on early wins and quick impact by launching behavior-based campaigns, automating workflows, and improving personalization to drive immediate value.
    Then we can move to optimization and expansion, incorporating predictive analytics, cross-channel orchestration, and refined attribution models to enhance our capabilities.
    Finally, prioritize innovation and scalability, leveraging AI/ML for hyper-personalization, scaling campaigns to new markets, and ensuring the system is equipped for future growth.
    By starting with foundational projects, delivering quick wins, and building towards scalable innovation, we can drive measurable outcomes while maintaining our agility to adapt to evolving needs.

    In terms of prioritizing initiatives effectively, I would focus on projects that deliver the greatest impact on business goals, on customer experience and ROI, while we consider feasibility, urgency, and resource availability.

    In the past, I’ve used frameworks like Impact Effort Matrix to identify the high-impact, low-effort initiatives and ensure that the most critical projects are addressed first.
    8. How do you ensure cross-functional alignment around this roadmap? What processes have worked best for you?
    Ensuring cross-functional alignment requires clear communication, collaborative planning, and shared accountability.
    We need to establish a shared understanding of the roadmap’s purpose and how it ties to the company’s overall goals by clearly articulating the “why” behind the roadmap and how each team can contribute to its success.
    To foster buy-in and ensure the roadmap reflects diverse perspectives and needs, we need to involve all stakeholders early on during the roadmap development and clearly outline each team’s role in executing the roadmap to ensure accountability across the different teams.

    To keep teams informed and aligned, we use meetings such as roadmap kickoff sessions and regular check-ins to share updates, address challenges collaboratively, and celebrate milestones together.

    9. If you were to outline a simple framework for marketers to follow when building a customer engagement technology roadmap, what would it look like?
    A simple framework for marketers to follow when building the roadmap can be summarized in five clear steps: Plan, Audit, Prioritize, Execute, and Refine.
    In one word: PAPER. Here’s how it breaks down.

    Plan: We lay the groundwork for the roadmap by defining the CRM strategy and aligning it with the business goals.
    Audit: We evaluate the current state of our CRM capabilities. We conduct a comprehensive assessment of our tools, our data, the processes, and team workflows to identify any potential gaps.
    Prioritize: We rank initiatives based on impact, feasibility, and ROI potential.
    Execute: We implement the roadmap in manageable phases.
    Refine: We continuously improve CRM performance and refine the roadmap.

    So the PAPER framework — Plan, Audit, Prioritize, Execute, and Refine — provides a structured, iterative approach allowing marketers to create a scalable and impactful customer engagement strategy.

    10. What are the most common challenges marketers face in creating or executing a customer engagement strategy, and how can they address these effectively?
    The most critical is when the customer data is siloed across different tools and platforms, making it very difficult to get a unified view of the customer. This limits the ability to deliver personalized and consistent experiences.

    The solution is to invest in tools that can centralize data from all touchpoints and ensure seamless integration between different platforms to create a single source of truth.

    Another challenge is the lack of clear metrics and ROI measurement and the inability to connect engagement efforts to tangible business outcomes, making it very hard to justify investment or optimize strategies.
    The solution for that is to define clear KPIs at the outset and use attribution models to link customer interactions to revenue and other key outcomes.
    Overcoming internal silos is another challenge where there is misalignment between teams, which can lead to inconsistent messaging and delayed execution.
    A solution to this is to foster cross-functional collaboration through shared goals, regular communication, and joint planning sessions.
    Besides these, other challenges marketers can face are delivering personalization at scale, keeping up with changing customer expectations, resource and budget constraints, resistance to change, and others.
    While creating and executing a customer engagement strategy can be challenging, these obstacles can be addressed through strategic planning, leveraging the right tools, fostering collaboration, and staying adaptable to customer needs and industry trends.

    By tackling these challenges proactively, marketers can deliver impactful customer-centric strategies that drive long-term success.

    11. What are the top takeaways or lessons that you’ve learned from building customer engagement technology roadmaps that others should keep in mind?
    I would say one of the most important takeaways is to ensure that the roadmap directly supports the company’s broader objectives.
    Whether the focus is on retention, customer lifetime value, or revenue growth, the roadmap must bridge the gap between high-level business goals and actionable initiatives.

    Another important lesson: The roadmap is only as effective as the data and systems it’s built upon.

    I’ve learned the importance of prioritizing foundational elements like data cleanup, integrations, and governance before tackling advanced initiatives like personalization or predictive analytics. Skipping this step can lead to inefficiencies or missed opportunities later on.
    A Customer Engagement Roadmap is a strategic tool that evolves alongside the business and its customers.

    So by aligning with business goals, building a solid foundation, focusing on impact, fostering collaboration, and remaining adaptable, you can create a roadmap that delivers measurable results and meaningful customer experiences.

     

     
    This interview Q&A was hosted with Mirela Cialai, Director of CRM & MarTech at Equinox, for Chapter 7 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
    The post Mirela Cialai Q&A: Customer Engagement Book Interview appeared first on MoEngage.
    #mirela #cialai #qampampa #customer #engagement
    Mirela Cialai Q&A: Customer Engagement Book Interview
    Reading Time: 9 minutes In the ever-evolving landscape of customer engagement, staying ahead of the curve is not just advantageous, it’s essential. That’s why, for Chapter 7 of “The Customer Engagement Book: Adapt or Die,” we sat down with Mirela Cialai, a seasoned expert in CRM and Martech strategies at brands like Equinox. Mirela brings a wealth of knowledge in aligning technology roadmaps with business goals, shifting organizational focuses from acquisition to retention, and leveraging hyper-personalization to drive success. In this interview, Mirela dives deep into building robust customer engagement technology roadmaps. She unveils the “PAPER” framework—Plan, Audit, Prioritize, Execute, Refine—a simple yet effective strategy for marketers. You’ll gain insights into identifying gaps in your Martech stack, ensuring data accuracy, and prioritizing initiatives that deliver the greatest impact and ROI. Whether you’re navigating data silos, striving for cross-functional alignment, or aiming for seamless tech integration, Mirela’s expertise provides practical solutions and actionable takeaways.   Mirela Cialai Q&A Interview 1. How do you define the vision for a customer engagement platform roadmap in alignment with the broader business goals? Can you share any examples of successful visions from your experience? Defining the vision for the roadmap in alignment with the broader business goals involves creating a strategic framework that connects the team’s objectives with the organization’s overarching mission or primary objectives. This could be revenue growth, customer retention, market expansion, or operational efficiency. We then break down these goals into actionable areas where the team can contribute, such as improving engagement, increasing lifetime value, or driving acquisition. We articulate how the team will support business goals by defining the KPIs that link CRM outcomes — the team’s outcomes — to business goals. In a previous role, the CRM team I was leading faced significant challenges due to the lack of attribution capabilities and a reliance on surface-level metrics such as open rates and click-through rates to measure performance. This approach made it difficult to quantify the impact of our efforts on broader business objectives such as revenue growth. Recognizing this gap, I worked on defining a vision for the CRM team to address these shortcomings. Our vision was to drive measurable growth through enhanced data accuracy and improved attribution capabilities, which allowed us to deliver targeted, data-driven, and personalized customer experiences. To bring this vision to life, I developed a roadmap that focused on first improving data accuracy, building our attribution capabilities, and delivering personalization at scale. By aligning the vision with these strategic priorities, we were able to demonstrate the tangible impact of our efforts on the key business goals. 2. What steps did you take to ensure data accuracy? The data team was very diligent in ensuring that our data warehouse had accurate data. So taking that as the source of truth, we started cleaning the data in all the other platforms that were integrated with our data warehouse — our CRM platform, our attribution analytics platform, etc. That’s where we started, looking at all the different integrations and ensuring that the data flows were correct and that we had all the right flows in place. And also validating and cleaning our email database — that helped, having more accurate data. 3. 
How do you recommend shifting organizational focus from acquisition to retention within a customer engagement strategy? Shifting an organization’s focus from acquisition to retention requires a cultural and strategic shift, emphasizing the immense value that existing customers bring to long-term growth and profitability. I would start by quantifying the value of retention, showcasing how retaining customers is significantly more cost-effective than acquiring new ones. Research consistently shows that increasing retention rates by just 5% can boost profits by at least 25 to 95%. This data helps make a compelling case to stakeholders about the importance of prioritizing retention. Next, I would link retention to core business goals by demonstrating how enhancing customer lifetime value and loyalty can directly drive revenue growth. This involves shifting the organization’s focus to retention-specific metrics such as churn rate, repeat purchase rate, and customer LTV. These metrics provide actionable insights into customer behaviors and highlight the financial impact of retention initiatives, ensuring alignment with the broader company objectives. By framing retention as a driver of sustainable growth, the organization can see it not as a competing priority, but as a complementary strategy to acquisition, ultimately leading to a more balanced and effective customer engagement strategy. 4. What are the key steps in analyzing a brand’s current Martech stack capabilities to identify gaps and opportunities for improvement? Developing a clear understanding of the Martech stack’s current state and ensuring it aligns with a brand’s strategic needs and future goals requires a structured and strategic approach. The process begins with defining what success looks like in terms of technology capabilities such as scalability, integration, automation, and data accessibility, and linking these capabilities directly to the brand’s broader business objectives. I start by doing an inventory of all tools currently in use, including their purpose, owner, and key functionalities, assessing if these tools are being used to their full potential or if there are features that remain unused, and reviewing how well tools integrate with one another and with our core systems, the data warehouse. Also, comparing the capabilities of each tool and results against industry standards and competitor practices and looking for missing functionalities such as personalization, omnichannel orchestration, or advanced analytics, and identifying overlapping tools that could be consolidated to save costs and streamline workflows. Finally, review the costs of the current tools against their impact on business outcomes and identify technologies that could reduce costs, increase efficiency, or deliver higher ROI through enhanced capabilities. Establish a regular review cycle for the Martech stack to ensure it evolves alongside the business and the technological landscape. 5. How do you evaluate whether a company’s tech stack can support innovative customer-focused campaigns, and what red flags should marketers look out for? I recommend taking a structured approach and first ensure there is seamless integration across all tools to support a unified customer view and data sharing across the different channels. 
Determine if the stack can handle increasing data volumes, larger audiences, and additional channels as the campaigns grow, and check if it supports dynamic content, behavior-based triggers, and advanced segmentation and can process and act on data in real time through emerging technologies like AI/ML predictive analytics to enable marketers to launch responsive and timely campaigns. Most importantly, we need to ensure that the stack offers robust reporting tools that provide actionable insights, allowing teams to track performance and optimize campaigns. Some of the red flags are: data silos where customer data is fragmented across platforms and not easily accessible or integrated, inability to process or respond to customer behavior in real time, a reliance on manual intervention for tasks like segmentation, data extraction, campaign deployment, and poor scalability. If the stack struggles with growing data volumes or expanding to new channels, it won’t support the company’s evolving needs. 6. What role do hyper-personalization and timely communication play in a successful customer engagement strategy? How do you ensure they’re built into the technology roadmap? Hyper-personalization and timely communication are essential components of a successful customer engagement strategy because they create meaningful, relevant, and impactful experiences that deepen the relationship with customers, enhance loyalty, and drive business outcomes. Hyper-personalization leverages data to deliver tailored content that resonates with each individual based on their preferences, behavior, or past interactions, and timely communication ensures these personalized interactions occur at the most relevant moments, which ultimately increases their impact. Customers are more likely to engage with messages that feel relevant and align with their needs, and real-time triggers such as cart abandonment or post-purchase upsells capitalize on moments when customers are most likely to convert. By embedding these capabilities into the roadmap through data integration, AI-driven insights, automation, and continuous optimization, we can deliver impactful, relevant, and timely experiences that foster deeper customer relationships and drive long-term success. 7. What’s your approach to breaking down the customer engagement technology roadmap into manageable phases? How do you prioritize the initiatives? To create a manageable roadmap, we need to divide it into distinct phases, starting with building the foundation by addressing data cleanup, system integrations, and establishing metrics, which lays the groundwork for success. Next, we can focus on early wins and quick impact by launching behavior-based campaigns, automating workflows, and improving personalization to drive immediate value. Then we can move to optimization and expansion, incorporating predictive analytics, cross-channel orchestration, and refined attribution models to enhance our capabilities. Finally, prioritize innovation and scalability, leveraging AI/ML for hyper-personalization, scaling campaigns to new markets, and ensuring the system is equipped for future growth. By starting with foundational projects, delivering quick wins, and building towards scalable innovation, we can drive measurable outcomes while maintaining our agility to adapt to evolving needs. 
In terms of prioritizing initiatives effectively, I would focus on projects that deliver the greatest impact on business goals, on customer experience and ROI, while we consider feasibility, urgency, and resource availability. In the past, I’ve used frameworks like Impact Effort Matrix to identify the high-impact, low-effort initiatives and ensure that the most critical projects are addressed first. 8. How do you ensure cross-functional alignment around this roadmap? What processes have worked best for you? Ensuring cross-functional alignment requires clear communication, collaborative planning, and shared accountability. We need to establish a shared understanding of the roadmap’s purpose and how it ties to the company’s overall goals by clearly articulating the “why” behind the roadmap and how each team can contribute to its success. To foster buy-in and ensure the roadmap reflects diverse perspectives and needs, we need to involve all stakeholders early on during the roadmap development and clearly outline each team’s role in executing the roadmap to ensure accountability across the different teams. To keep teams informed and aligned, we use meetings such as roadmap kickoff sessions and regular check-ins to share updates, address challenges collaboratively, and celebrate milestones together. 9. If you were to outline a simple framework for marketers to follow when building a customer engagement technology roadmap, what would it look like? A simple framework for marketers to follow when building the roadmap can be summarized in five clear steps: Plan, Audit, Prioritize, Execute, and Refine. In one word: PAPER. Here’s how it breaks down. Plan: We lay the groundwork for the roadmap by defining the CRM strategy and aligning it with the business goals. Audit: We evaluate the current state of our CRM capabilities. We conduct a comprehensive assessment of our tools, our data, the processes, and team workflows to identify any potential gaps. Prioritize: initiatives based on impact, feasibility, and ROI potential. Execute: by implementing the roadmap in manageable phases. Refine: by continuously improving CRM performance and refining the roadmap. So the PAPER framework — Plan, Audit, Prioritize, Execute, and Refine — provides a structured, iterative approach allowing marketers to create a scalable and impactful customer engagement strategy. 10. What are the most common challenges marketers face in creating or executing a customer engagement strategy, and how can they address these effectively? The most critical is when the customer data is siloed across different tools and platforms, making it very difficult to get a unified view of the customer. This limits the ability to deliver personalized and consistent experiences. The solution is to invest in tools that can centralize data from all touchpoints and ensure seamless integration between different platforms to create a single source of truth. Another challenge is the lack of clear metrics and ROI measurement and the inability to connect engagement efforts to tangible business outcomes, making it very hard to justify investment or optimize strategies. The solution for that is to define clear KPIs at the outset and use attribution models to link customer interactions to revenue and other key outcomes. Overcoming internal silos is another challenge where there is misalignment between teams, which can lead to inconsistent messaging and delayed execution. 
A solution to this is to foster cross-functional collaboration through shared goals, regular communication, and joint planning sessions. Besides these, other challenges marketers can face are delivering personalization at scale, keeping up with changing customer expectations, resource and budget constraints, resistance to change, and others. While creating and executing a customer engagement strategy can be challenging, these obstacles can be addressed through strategic planning, leveraging the right tools, fostering collaboration, and staying adaptable to customer needs and industry trends. By tackling these challenges proactively, marketers can deliver impactful customer-centric strategies that drive long-term success. 11. What are the top takeaways or lessons that you’ve learned from building customer engagement technology roadmaps that others should keep in mind? I would say one of the most important takeaways is to ensure that the roadmap directly supports the company’s broader objectives. Whether the focus is on retention, customer lifetime value, or revenue growth, the roadmap must bridge the gap between high-level business goals and actionable initiatives. Another important lesson: The roadmap is only as effective as the data and systems it’s built upon. I’ve learned the importance of prioritizing foundational elements like data cleanup, integrations, and governance before tackling advanced initiatives like personalization or predictive analytics. Skipping this step can lead to inefficiencies or missed opportunities later on. A Customer Engagement Roadmap is a strategic tool that evolves alongside the business and its customers. So by aligning with business goals, building a solid foundation, focusing on impact, fostering collaboration, and remaining adaptable, you can create a roadmap that delivers measurable results and meaningful customer experiences.     This interview Q&A was hosted with Mirela Cialai, Director of CRM & MarTech at Equinox, for Chapter 7 of The Customer Engagement Book: Adapt or Die. Download the PDF or request a physical copy of the book here. The post Mirela Cialai Q&A: Customer Engagement Book Interview appeared first on MoEngage. #mirela #cialai #qampampa #customer #engagement
    WWW.MOENGAGE.COM
    Mirela Cialai Q&A: Customer Engagement Book Interview
    Reading Time: 9 minutes In the ever-evolving landscape of customer engagement, staying ahead of the curve is not just advantageous, it’s essential. That’s why, for Chapter 7 of “The Customer Engagement Book: Adapt or Die,” we sat down with Mirela Cialai, a seasoned expert in CRM and Martech strategies at brands like Equinox. Mirela brings a wealth of knowledge in aligning technology roadmaps with business goals, shifting organizational focuses from acquisition to retention, and leveraging hyper-personalization to drive success. In this interview, Mirela dives deep into building robust customer engagement technology roadmaps. She unveils the “PAPER” framework—Plan, Audit, Prioritize, Execute, Refine—a simple yet effective strategy for marketers. You’ll gain insights into identifying gaps in your Martech stack, ensuring data accuracy, and prioritizing initiatives that deliver the greatest impact and ROI. Whether you’re navigating data silos, striving for cross-functional alignment, or aiming for seamless tech integration, Mirela’s expertise provides practical solutions and actionable takeaways.   Mirela Cialai Q&A Interview 1. How do you define the vision for a customer engagement platform roadmap in alignment with the broader business goals? Can you share any examples of successful visions from your experience? Defining the vision for the roadmap in alignment with the broader business goals involves creating a strategic framework that connects the team’s objectives with the organization’s overarching mission or primary objectives. This could be revenue growth, customer retention, market expansion, or operational efficiency. We then break down these goals into actionable areas where the team can contribute, such as improving engagement, increasing lifetime value, or driving acquisition. We articulate how the team will support business goals by defining the KPIs that link CRM outcomes — the team’s outcomes — to business goals. In a previous role, the CRM team I was leading faced significant challenges due to the lack of attribution capabilities and a reliance on surface-level metrics such as open rates and click-through rates to measure performance. This approach made it difficult to quantify the impact of our efforts on broader business objectives such as revenue growth. Recognizing this gap, I worked on defining a vision for the CRM team to address these shortcomings. Our vision was to drive measurable growth through enhanced data accuracy and improved attribution capabilities, which allowed us to deliver targeted, data-driven, and personalized customer experiences. To bring this vision to life, I developed a roadmap that focused on first improving data accuracy, building our attribution capabilities, and delivering personalization at scale. By aligning the vision with these strategic priorities, we were able to demonstrate the tangible impact of our efforts on the key business goals. 2. What steps did you take to ensure data accuracy? The data team was very diligent in ensuring that our data warehouse had accurate data. So taking that as the source of truth, we started cleaning the data in all the other platforms that were integrated with our data warehouse — our CRM platform, our attribution analytics platform, etc. That’s where we started, looking at all the different integrations and ensuring that the data flows were correct and that we had all the right flows in place. And also validating and cleaning our email database — that helped, having more accurate data. 3. 
    3. How do you recommend shifting organizational focus from acquisition to retention within a customer engagement strategy?
    Shifting an organization’s focus from acquisition to retention requires a cultural and strategic shift, emphasizing the immense value that existing customers bring to long-term growth and profitability. I would start by quantifying the value of retention, showcasing how retaining customers is significantly more cost-effective than acquiring new ones. Research consistently shows that increasing retention rates by just 5% can boost profits by 25% to 95%. This data helps make a compelling case to stakeholders about the importance of prioritizing retention. Next, I would link retention to core business goals by demonstrating how enhancing customer lifetime value and loyalty can directly drive revenue growth. This involves shifting the organization’s focus to retention-specific metrics such as churn rate, repeat purchase rate, and customer LTV. These metrics provide actionable insights into customer behaviors and highlight the financial impact of retention initiatives, ensuring alignment with the broader company objectives. By framing retention as a driver of sustainable growth, the organization can see it not as a competing priority, but as a complementary strategy to acquisition, ultimately leading to a more balanced and effective customer engagement strategy.
    4. What are the key steps in analyzing a brand’s current Martech stack capabilities to identify gaps and opportunities for improvement?
    Developing a clear understanding of the Martech stack’s current state and ensuring it aligns with a brand’s strategic needs and future goals requires a structured and strategic approach. The process begins with defining what success looks like in terms of technology capabilities such as scalability, integration, automation, and data accessibility, and linking these capabilities directly to the brand’s broader business objectives. I start by doing an inventory of all tools currently in use, including their purpose, owner, and key functionalities, assessing if these tools are being used to their full potential or if there are features that remain unused, and reviewing how well tools integrate with one another and with our core systems, the data warehouse. Also, comparing the capabilities of each tool and results against industry standards and competitor practices and looking for missing functionalities such as personalization, omnichannel orchestration, or advanced analytics, and identifying overlapping tools that could be consolidated to save costs and streamline workflows. Finally, review the costs of the current tools against their impact on business outcomes and identify technologies that could reduce costs, increase efficiency, or deliver higher ROI through enhanced capabilities. Establish a regular review cycle for the Martech stack to ensure it evolves alongside the business and the technological landscape.
    5. How do you evaluate whether a company’s tech stack can support innovative customer-focused campaigns, and what red flags should marketers look out for?
    I recommend taking a structured approach: first, ensure there is seamless integration across all tools to support a unified customer view and data sharing across the different channels.
    Determine if the stack can handle increasing data volumes, larger audiences, and additional channels as the campaigns grow, and check if it supports dynamic content, behavior-based triggers, and advanced segmentation and can process and act on data in real time through emerging technologies like AI/ML predictive analytics to enable marketers to launch responsive and timely campaigns. Most importantly, we need to ensure that the stack offers robust reporting tools that provide actionable insights, allowing teams to track performance and optimize campaigns. Some of the red flags are: data silos where customer data is fragmented across platforms and not easily accessible or integrated, inability to process or respond to customer behavior in real time, a reliance on manual intervention for tasks like segmentation, data extraction, campaign deployment, and poor scalability. If the stack struggles with growing data volumes or expanding to new channels, it won’t support the company’s evolving needs.
    6. What role do hyper-personalization and timely communication play in a successful customer engagement strategy? How do you ensure they’re built into the technology roadmap?
    Hyper-personalization and timely communication are essential components of a successful customer engagement strategy because they create meaningful, relevant, and impactful experiences that deepen the relationship with customers, enhance loyalty, and drive business outcomes. Hyper-personalization leverages data to deliver tailored content that resonates with each individual based on their preferences, behavior, or past interactions, and timely communication ensures these personalized interactions occur at the most relevant moments, which ultimately increases their impact. Customers are more likely to engage with messages that feel relevant and align with their needs, and real-time triggers such as cart abandonment or post-purchase upsells capitalize on moments when customers are most likely to convert. By embedding these capabilities into the roadmap through data integration, AI-driven insights, automation, and continuous optimization, we can deliver impactful, relevant, and timely experiences that foster deeper customer relationships and drive long-term success.
    7. What’s your approach to breaking down the customer engagement technology roadmap into manageable phases? How do you prioritize the initiatives?
    To create a manageable roadmap, we need to divide it into distinct phases, starting with building the foundation by addressing data cleanup, system integrations, and establishing metrics, which lays the groundwork for success. Next, we can focus on early wins and quick impact by launching behavior-based campaigns, automating workflows, and improving personalization to drive immediate value. Then we can move to optimization and expansion, incorporating predictive analytics, cross-channel orchestration, and refined attribution models to enhance our capabilities. Finally, prioritize innovation and scalability, leveraging AI/ML for hyper-personalization, scaling campaigns to new markets, and ensuring the system is equipped for future growth. By starting with foundational projects, delivering quick wins, and building towards scalable innovation, we can drive measurable outcomes while maintaining our agility to adapt to evolving needs.
    In terms of prioritizing initiatives effectively, I would focus on projects that deliver the greatest impact on business goals, customer experience, and ROI, while considering feasibility, urgency, and resource availability. In the past, I’ve used frameworks like the Impact-Effort Matrix to identify the high-impact, low-effort initiatives and ensure that the most critical projects are addressed first.
    8. How do you ensure cross-functional alignment around this roadmap? What processes have worked best for you?
    Ensuring cross-functional alignment requires clear communication, collaborative planning, and shared accountability. We need to establish a shared understanding of the roadmap’s purpose and how it ties to the company’s overall goals by clearly articulating the “why” behind the roadmap and how each team can contribute to its success. To foster buy-in and ensure the roadmap reflects diverse perspectives and needs, we need to involve all stakeholders early on during the roadmap development and clearly outline each team’s role in executing the roadmap to ensure accountability across the different teams. To keep teams informed and aligned, we use meetings such as roadmap kickoff sessions and regular check-ins to share updates, address challenges collaboratively, and celebrate milestones together.
    9. If you were to outline a simple framework for marketers to follow when building a customer engagement technology roadmap, what would it look like?
    A simple framework for marketers to follow when building the roadmap can be summarized in five clear steps: Plan, Audit, Prioritize, Execute, and Refine. In one word: PAPER. Here’s how it breaks down.
    Plan: We lay the groundwork for the roadmap by defining the CRM strategy and aligning it with the business goals.
    Audit: We evaluate the current state of our CRM capabilities. We conduct a comprehensive assessment of our tools, our data, the processes, and team workflows to identify any potential gaps.
    Prioritize: We rank initiatives based on impact, feasibility, and ROI potential.
    Execute: We implement the roadmap in manageable phases.
    Refine: We continuously improve CRM performance and refine the roadmap.
    So the PAPER framework — Plan, Audit, Prioritize, Execute, and Refine — provides a structured, iterative approach allowing marketers to create a scalable and impactful customer engagement strategy.
    10. What are the most common challenges marketers face in creating or executing a customer engagement strategy, and how can they address these effectively?
    The most critical is when the customer data is siloed across different tools and platforms, making it very difficult to get a unified view of the customer. This limits the ability to deliver personalized and consistent experiences. The solution is to invest in tools that can centralize data from all touchpoints and ensure seamless integration between different platforms to create a single source of truth. Another challenge is the lack of clear metrics and ROI measurement and the inability to connect engagement efforts to tangible business outcomes, making it very hard to justify investment or optimize strategies. The solution for that is to define clear KPIs at the outset and use attribution models to link customer interactions to revenue and other key outcomes. Overcoming internal silos is another challenge where there is misalignment between teams, which can lead to inconsistent messaging and delayed execution.
    A solution to this is to foster cross-functional collaboration through shared goals, regular communication, and joint planning sessions. Besides these, other challenges marketers can face are delivering personalization at scale, keeping up with changing customer expectations, resource and budget constraints, resistance to change, and others. While creating and executing a customer engagement strategy can be challenging, these obstacles can be addressed through strategic planning, leveraging the right tools, fostering collaboration, and staying adaptable to customer needs and industry trends. By tackling these challenges proactively, marketers can deliver impactful customer-centric strategies that drive long-term success.
    11. What are the top takeaways or lessons that you’ve learned from building customer engagement technology roadmaps that others should keep in mind?
    I would say one of the most important takeaways is to ensure that the roadmap directly supports the company’s broader objectives. Whether the focus is on retention, customer lifetime value, or revenue growth, the roadmap must bridge the gap between high-level business goals and actionable initiatives. Another important lesson: The roadmap is only as effective as the data and systems it’s built upon. I’ve learned the importance of prioritizing foundational elements like data cleanup, integrations, and governance before tackling advanced initiatives like personalization or predictive analytics. Skipping this step can lead to inefficiencies or missed opportunities later on. A Customer Engagement Roadmap is a strategic tool that evolves alongside the business and its customers. So by aligning with business goals, building a solid foundation, focusing on impact, fostering collaboration, and remaining adaptable, you can create a roadmap that delivers measurable results and meaningful customer experiences.
    This interview Q&A was hosted with Mirela Cialai, Director of CRM & MarTech at Equinox, for Chapter 7 of The Customer Engagement Book: Adapt or Die. Download the PDF or request a physical copy of the book here.
  • New Zealand’s Email Security Requirements for Government Organizations: What You Need to Know

    The Secure Government Email (SGE) Common Implementation Framework
    New Zealand’s government is introducing a comprehensive email security framework designed to protect official communications from phishing and domain spoofing. This new framework, which will be mandatory for all government agencies by October 2025, establishes clear technical standards to enhance email security and retire the outdated SEEMail service. 
    Key Takeaways

    All NZ government agencies must comply with new email security requirements by October 2025.
    The new framework strengthens trust and security in government communications by preventing spoofing and phishing.
    The framework mandates TLS 1.2+, SPF, DKIM, DMARC with p=reject, MTA-STS, and DLP controls.
    EasyDMARC simplifies compliance with our guided setup, monitoring, and automated reporting.

    Start a Free Trial

    What is the Secure Government Email Common Implementation Framework?
    The Secure Government Email (SGE) Common Implementation Framework is a new government-led initiative in New Zealand designed to standardize email security across all government agencies. Its main goal is to secure external email communication, reduce domain spoofing in phishing attacks, and replace the legacy SEEMail service.
    Why is New Zealand Implementing New Government Email Security Standards?
    The framework was developed by New Zealand’s Department of Internal Affairs (DIA) as part of its role in managing ICT Common Capabilities. It leverages modern email security controls via the Domain Name System (DNS) to enable the retirement of the legacy SEEMail service and provide:

    Encryption for transmission security
    Digital signing for message integrity
    Basic non-repudiation (by allowing only authorized senders)
    Domain spoofing protection

    These improvements apply to all emails, not just those routed through SEEMail, offering broader protection across agency communications.
    What Email Security Technologies Are Required by the New NZ SGE Framework?
    The SGE Framework outlines the following key technologies that agencies must implement:

    TLS 1.2 or higher with implicit TLS enforced
    TLS-RPT (TLS Reporting)
    SPF (Sender Policy Framework)
    DKIM (DomainKeys Identified Mail)
    DMARC (Domain-based Message Authentication, Reporting, and Conformance) with reporting
    MTA-STS (Mail Transfer Agent Strict Transport Security)
    Data Loss Prevention controls

    These technologies work together to ensure encrypted email transmission, validate sender identity, prevent unauthorized use of domains, and reduce the risk of sensitive data leaks.

    Get in touch

    When Do NZ Government Agencies Need to Comply with this Framework?
    All New Zealand government agencies are expected to fully implement the Secure Government Email (SGE) Common Implementation Framework by October 2025. Agencies should begin their planning and deployment now to ensure full compliance by the deadline.
    The All of Government Secure Email Common Implementation Framework v1.0
    What are the Mandated Requirements for Domains?
    Below are the exact requirements for all email-enabled domains under the new framework.
    TLS: Minimum TLS 1.2. TLS 1.1, 1.0, SSL, or clear-text not permitted.
    TLS-RPT: All email-sending domains must have TLS reporting enabled.
    SPF: Must exist and end with -all.
    DKIM: All outbound email from every sending service must be DKIM-signed at the final hop.
    DMARC: Policy of p=reject on all email-enabled domains. adkim=s is recommended when not bulk-sending.
    MTA-STS: Enabled and set to enforce.
    Implicit TLS: Must be configured and enforced for every connection.
    Data Loss Prevention: Enforce in line with the New Zealand Information Security Manual (NZISM) and Protective Security Requirements (PSR).
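    To make these requirements concrete, here is a minimal sketch of the DNS records that would satisfy the SPF, DMARC, TLS-RPT, and MTA-STS rows for a hypothetical domain, agency.govt.nz, that sends mail only through Microsoft 365; the domain, reporting addresses, and include mechanism are illustrative assumptions rather than part of the framework text.
        agency.govt.nz.            TXT  "v=spf1 include:spf.protection.outlook.com -all"
        _dmarc.agency.govt.nz.     TXT  "v=DMARC1; p=reject; adkim=s; rua=mailto:dmarc-reports@agency.govt.nz"
        _smtp._tls.agency.govt.nz. TXT  "v=TLSRPTv1; rua=mailto:tls-reports@agency.govt.nz"
        _mta-sts.agency.govt.nz.   TXT  "v=STSv1; id=20250301T000000"
    DKIM keys are published per sending service, and the MTA-STS policy itself lives in a small file served over HTTPS; both are covered in the numbered steps later in this article.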
    Compliance Monitoring and Reporting
    The All of Government Service Delivery (AoGSD) team will be monitoring compliance with the framework. Monitoring will initially cover SPF, DMARC, and MTA-STS settings and will be expanded to include DKIM. Changes to these settings will be monitored, enabling reporting on email security compliance across all government agencies. Ongoing monitoring will highlight changes to domains, ensure new domains are set up with security in place, and monitor the implementation of future email security technologies.
    Should compliance changes occur, such as an agency’s SPF record being changed from -all to ~all, this will be captured so that the AoGSD Security Team can investigate. They will then communicate directly with the agency to determine if an issue exists or if an error has occurred, reviewing each case individually.
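    As an illustration of the kind of drift this monitoring is designed to catch, the only difference between a compliant SPF record and one flagged for investigation can be a single character (the include shown is hypothetical):
        Compliant:  "v=spf1 include:spf.protection.outlook.com -all"
        Flagged:    "v=spf1 include:spf.protection.outlook.com ~all"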
    Deployment Checklist for NZ Government Compliance

    Enforce TLS 1.2 minimum, implicit TLS, MTA-STS & TLS-RPT
    SPF with -all
    DKIM on all outbound email
    DMARC p=reject 
    adkim=s where suitable
    For non-email/parked domains: SPF -all, empty DKIM, DMARC reject strict
    Compliance dashboard
    Inbound DMARC evaluation enforced
    DLP aligned with NZISM

    Start a Free Trial

    How EasyDMARC Can Help Government Agencies Comply
    EasyDMARC provides a comprehensive email security solution that simplifies the deployment and ongoing management of DNS-based email security protocols like SPF, DKIM, and DMARC with reporting. Our platform offers automated checks, real-time monitoring, and a guided setup to help government organizations quickly reach compliance.
    1. TLS-RPT / MTA-STS audit
    EasyDMARC lets you enable the Managed MTA-STS and TLS-RPT option with a single click. We provide the required DNS records and continuously monitor them for issues, delivering reports on TLS negotiation problems. This helps agencies ensure secure email transmission and quickly detect delivery or encryption failures.

    Note: In this screenshot, you can see how to deploy MTA-STS and TLS Reporting by adding just three CNAME records provided by EasyDMARC. It’s recommended to start in “testing” mode, evaluate the TLS-RPT reports, and then gradually switch your MTA-STS policy to “enforce”. The process is simple and takes just a few clicks.

    As shown above, EasyDMARC parses incoming TLS reports into a centralized dashboard, giving you clear visibility into delivery and encryption issues across all sending sources.
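    For reference, the MTA-STS policy behind those DNS records is a small plain-text file served over HTTPS at https://mta-sts.<your-domain>/.well-known/mta-sts.txt. A minimal sketch, assuming a single hypothetical MX host, looks like this:
        version: STSv1
        mode: testing
        mx: mail.agency.govt.nz
        max_age: 86400
    Once the TLS-RPT reports confirm that all legitimate mail paths negotiate TLS successfully, the mode value is switched to enforce, which is what the framework requires.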
    2. SPF with “-all”
    In the EasyDMARC platform, you can run the SPF Record Generator to create a compliant record. Publish your v=spf1 record with “-all” to enforce a hard fail for unauthorized senders and prevent spoofed emails from passing SPF checks. This strengthens your domain’s protection against impersonation.

    Note: It is highly recommended to start adjusting your SPF record only after you begin receiving DMARC reports and identifying your legitimate email sources. As we’ll explain in more detail below, both SPF and DKIM should be adjusted after you gain visibility through reports.
    Making changes without proper visibility can lead to false positives, misconfigurations, and potential loss of legitimate emails. That’s why the first step should always be setting DMARC to p=none, receiving reports, analyzing them, and then gradually fixing any SPF or DKIM issues.
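    As a sketch, a domain whose DMARC reports show mail coming only from Google Workspace and SendGrid (hypothetical sources chosen for illustration) would end up with an SPF record like this:
        "v=spf1 include:_spf.google.com include:sendgrid.net -all"
    Each include should correspond to a sender verified in your reports; anything not listed will hard-fail under -all.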
    3. DKIM on all outbound email
    DKIM must be configured for all email sources sending emails on behalf of your domain. This is critical, as DKIM plays a bigger role than SPF when it comes to building domain reputation, surviving auto-forwarding, mailing lists, and other edge cases.
    As mentioned above, DMARC reports provide visibility into your email sources, allowing you to implement DKIM accordingly. If you’re using third-party services like Google Workspace, Microsoft 365, or Mimecast, you’ll need to retrieve the public DKIM key from your provider’s admin interface.
    EasyDMARC maintains a backend directory of over 1,400 email sources. We also give you detailed guidance on how to configure SPF and DKIM correctly for major ESPs. 
    Note: At the end of this article, you’ll find configuration links for well-known ESPs like Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid – helping you avoid common misconfigurations and get aligned with SGE requirements.
    If you’re using a dedicated MTA (e.g., Postfix), DKIM must be implemented manually. EasyDMARC’s DKIM Record Generator lets you generate both public and private keys for your server. The private key is stored on your MTA, while the public key must be published in your DNS.
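    Whichever route you take, the public key ends up as a TXT record under a selector of your choosing; the selector, domain, and truncated key below are placeholders:
        s1._domainkey.agency.govt.nz.  TXT  "v=DKIM1; k=rsa; p=MIIBIjANBgkq...rest-of-base64-public-key..."
    The matching private key stays on the signing server, which adds a DKIM-Signature header with d=agency.govt.nz and s=s1 to every outbound message.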

    4. DMARC p=reject rollout
    As mentioned in previous points, DMARC reporting is the first and most important step on your DMARC enforcement journey. Always start with a p=none policy and configure RUA reports to be sent to EasyDMARC. Use the report insights to identify and fix SPF and DKIM alignment issues, then gradually move to p=quarantine and finally p=reject once all legitimate email sources have been authenticated. 
    This phased approach ensures full protection against domain spoofing without risking legitimate email delivery.
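    A typical progression for the _dmarc record, shown here with a placeholder reporting address (EasyDMARC provides its own RUA address during setup), would be:
        Monitor:   "v=DMARC1; p=none; rua=mailto:dmarc-reports@agency.govt.nz"
        Tighten:   "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@agency.govt.nz"
        Enforce:   "v=DMARC1; p=reject; rua=mailto:dmarc-reports@agency.govt.nz"
    Only the final record meets the framework’s mandate, but moving through the earlier policies protects legitimate mail while alignment issues are being fixed.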

    5. adkim Strict Alignment Check
    This strict alignment check is not always applicable, especially if you’re using third-party bulk ESPs, such as SendGrid, that require you to set DKIM at the subdomain level. You can set adkim=s in your DMARC TXT record, or simply enable strict mode in EasyDMARC’s Managed DMARC settings. This ensures that only emails with a DKIM signature that exactly matches your domain pass alignment, adding an extra layer of protection against domain spoofing. But only do this if you are NOT a bulk sender.
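    If strict alignment does fit your sending setup, it is a single extra tag on the enforced record (the reporting address is again a placeholder):
        "v=DMARC1; p=reject; adkim=s; rua=mailto:dmarc-reports@agency.govt.nz"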

    6. Securing Non-Email Enabled Domains
    The purpose of deploying email security to non-email-enabled domains, or parked domains, is to prevent messages being spoofed from that domain. This requirement remains even if the root-level domain has sp=reject set within its DMARC record.
    Under this new framework, you must bulk import and mark parked domains as “Parked.” Crucially, this requires publishing an SPF record that authorizes no senders, setting DMARC to p=reject, and ensuring an empty DKIM record is in place:
    • SPF record: “v=spf1 -all”.
    • Wildcard DKIM record with empty public key.
    • DMARC record: “v=DMARC1;p=reject;adkim=s;aspf=s;rua=mailto:…”.
    EasyDMARC allows you to add and label parked domains for free. This is important because it helps you monitor any activity from these domains and ensure they remain protected with a strict DMARC policy of p=reject.
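    Put together, the records for a hypothetical parked domain, parked.agency.govt.nz, would look like the sketch below; the empty p= tag in the wildcard DKIM record means no DKIM signature for the domain can validate, and the reporting address is a placeholder:
        parked.agency.govt.nz.               TXT  "v=spf1 -all"
        *._domainkey.parked.agency.govt.nz.  TXT  "v=DKIM1; p="
        _dmarc.parked.agency.govt.nz.        TXT  "v=DMARC1; p=reject; adkim=s; aspf=s; rua=mailto:dmarc-reports@agency.govt.nz"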
    7. Compliance Dashboard
    Use EasyDMARC’s Domain Scanner to assess the security posture of each domain with a clear compliance score and risk level. The dashboard highlights configuration gaps and guides remediation steps, helping government agencies stay on track toward full compliance with the SGE Framework.

    8. Inbound DMARC Evaluation Enforced
    You don’t need to apply any changes if you’re using Google Workspace, Microsoft 365, or other major mailbox providers. Most of them already enforce DMARC evaluation on incoming emails.
    However, some legacy Microsoft 365 setups may still quarantine emails that fail DMARC checks, even when the sending domain has a p=reject policy, instead of rejecting them. This behavior can be adjusted directly from your Microsoft Defender portal. Read more about this in our step-by-step guide on how to set up SPF, DKIM, and DMARC from Microsoft Defender.
    If you’re using a third-party mail provider that doesn’t enforce having a DMARC policy for incoming emails, which is rare, you’ll need to contact their support to request a configuration change.
    9. Data Loss Prevention Aligned with NZISM
    The New Zealand Information Security Manual (NZISM) is the New Zealand Government’s manual on information assurance and information systems security. It includes guidance on data loss prevention (DLP), which must be followed to align with the SGE Framework.
    Need Help Setting up SPF and DKIM for your Email Provider?
    Setting up SPF and DKIM for different ESPs often requires specific configurations. Some providers require you to publish SPF and DKIM on a subdomain, while others only require DKIM, or have different formatting rules. We’ve simplified all these steps to help you avoid misconfigurations that could delay your DMARC enforcement, or worse, block legitimate emails from reaching your recipients.
    Below you’ll find comprehensive setup guides for Google Workspace, Microsoft 365, Zoho Mail, Amazon SES, and SendGrid. You can also explore our full blog section that covers setup instructions for many other well-known ESPs.
    Remember, all this information is reflected in your DMARC aggregate reports. These reports give you live visibility into your outgoing email ecosystem, helping you analyze and fix any issues specific to a given provider.
    Here are our step-by-step guides for the most common platforms:

    Google Workspace

    Microsoft 365

    These guides will help ensure your DNS records are configured correctly as part of the Secure Government Email (SGE) Framework rollout.
    Meet New Government Email Security Standards With EasyDMARC
    New Zealand’s SGE Framework sets a clear path for government agencies to enhance their email security by October 2025. With EasyDMARC, you can meet these technical requirements efficiently and with confidence. From protocol setup to continuous monitoring and compliance tracking, EasyDMARC streamlines the entire process, ensuring strong protection against spoofing, phishing, and data loss while simplifying your transition from SEEMail.
  • Looking Back at Two Classics: ILM Deploys the Fleet in ‘Star Trek: First Contact’ and ‘Rogue One: A Star Wars Story’

    Guided by visual effects supervisor John Knoll, ILM embraced continually evolving methodologies to craft breathtaking visual effects for the iconic space battles in First Contact and Rogue One.
    By Jay Stobie
    Visual effects supervisor John Knoll confers with modelmakers Kim Smith and John Goodson over the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact.
    Bolstered by visual effects from Industrial Light & Magic, Star Trek: First Contact and Rogue One: A Star Wars Story propelled their respective franchises to new heights. While Star Trek Generations welcomed Captain Jean-Luc Picard’s crew to the big screen, First Contact stood as the first Star Trek feature that did not focus on its original captain, the legendary James T. Kirk. Similarly, though Rogue One immediately preceded the events of Star Wars: A New Hope, it was set apart from the episodic Star Wars films and launched an era of storytelling outside of the main Skywalker saga that has gone on to include Solo: A Star Wars Story, The Mandalorian, Andor, Ahsoka, The Acolyte, and more.
    The two films also shared a key ILM contributor, John Knoll, who served as visual effects supervisor on both projects, as well as an executive producer on Rogue One. Now ILM’s executive creative director and senior visual effects supervisor, Knoll – who also conceived the initial framework for Rogue One’s story – guided ILM as it brought its talents to bear on these sci-fi and fantasy epics. The work involved crafting two spectacular starship-packed space clashes – First Contact’s Battle of Sector 001 and Rogue One’s Battle of Scarif. Although these iconic installments were released roughly two decades apart, they represent a captivating case study of how ILM’s approach to visual effects has evolved over time. With this in mind, let’s examine the films’ unforgettable space battles through the lens of fascinating in-universe parallels and the ILM-produced fleets that face off near Earth and Scarif.
    A final frame from the Battle of Scarif in Rogue One: A Star Wars Story.
    A Context for Conflict
    In First Contact, the United Federation of Planets – a 200-year-old interstellar government consisting of more than 150 member worlds – braces itself for an invasion by the Borg – an overwhelmingly powerful collective composed of cybernetic beings who devastate entire planets by assimilating their biological populations and technological innovations. The Borg only send a single vessel, a massive cube containing thousands of hive-minded drones and their queen, pushing the Federation’s Starfleet defenders to Earth’s doorstep. Conversely, in Rogue One, the Rebel Alliance – a fledgling coalition of freedom fighters – seeks to undermine and overthrow the stalwart Galactic Empire – a totalitarian regime preparing to tighten its grip on the galaxy by revealing a horrifying superweapon. A rebel team infiltrates a top-secret vault on Scarif in a bid to steal plans to that battle station, the dreaded Death Star, with hopes of exploiting a vulnerability in its design.
    On the surface, the situations could not seem to be more disparate, particularly in terms of the Federation’s well-established prestige and the Rebel Alliance’s haphazardly organized factions. Yet, upon closer inspection, the spaceborne conflicts at Earth and Scarif are linked by a vital commonality. The threat posed by the Borg is well-known to the Federation, but the sudden intrusion upon their space takes its defenses by surprise. Starfleet assembles any vessel within range – including antiquated Oberth-class science ships – to intercept the Borg cube in the Typhon Sector, only to be forced back to Earth on the edge of defeat. The unsanctioned mission to Scarif with Jyn Erso and Cassian Andor and the sudden need to take down the planet’s shield gate propels the Rebel Alliance fleet into rushing to their rescue with everything from their flagship Profundity to GR-75 medium transports. Whether Federation or Rebel Alliance, these fleets gather in last-ditch efforts to oppose enemies who would embrace their eradication – the Battles of Sector 001 and Scarif are fights for survival.
    From Physical to Digital
    By the time Jonathan Frakes was selected to direct First Contact, Star Trek’s reliance on constructing traditional physical models for its features was gradually giving way to innovative computer graphics models, resulting in the film’s use of both techniques. “If one of the ships was to be seen full-screen and at length,” associate visual effects supervisor George Murphy told Cinefex’s Kevin H. Martin, “we knew it would be done as a stage model. Ships that would be doing a lot of elaborate maneuvers in space battle scenes would be created digitally.” In fact, physical and CG versions of the U.S.S. Enterprise-E appear in the film, with the latter being harnessed in shots involving the vessel’s entry into a temporal vortex at the conclusion of the Battle of Sector 001.
    Despite the technological leaps that ILM pioneered in the decades between First Contact and Rogue One, the studio still considered filming physical miniatures for certain ship-related shots in the latter film. The feature’s fleets were ultimately created digitally to allow for changes throughout post-production. “If it’s a photographed miniature element, it’s not possible to go back and make adjustments. So it’s the additional flexibility that comes with the computer graphics models that’s very attractive to many people,” John Knoll relayed to writer Jon Witmer at American Cinematographer’s TheASC.com.
    However, Knoll aimed to develop computer graphics that retained the same high-quality details as their physical counterparts, leading ILM to employ a modern approach to a time-honored modelmaking tactic. “I also wanted to emulate the kit-bashing aesthetic that had been part of Star Wars from the very beginning, where a lot of mechanical detail had been added onto the ships by using little pieces from plastic model kits,” explained Knoll in his chat with TheASC.com. For Rogue One, ILM replicated the process by obtaining such kits, scanning their parts, building a computer graphics library, and applying the CG parts to digitally modeled ships. “I’m very happy to say it was super-successful,” concluded Knoll. “I think a lot of our digital models look like they are motion-control models.”
    John Knoll (second from left) confers with Kim Smith and John Goodson with the miniature of the U.S.S. Enterprise-E during production of Star Trek: First Contact (Credit: ILM).
    Legendary Lineages
    In First Contact, Captain Picard commanded a brand-new vessel, the Sovereign-class U.S.S. Enterprise-E, continuing the celebrated starship’s legacy in terms of its famous name and design aesthetic. Designed by John Eaves and developed into blueprints by Rick Sternbach, the Enterprise-E was built into a 10-foot physical model by ILM model project supervisor John Goodson and his shop’s talented team. ILM infused the ship with extraordinary detail, including viewports equipped with backlit set images from the craft’s predecessor, the U.S.S. Enterprise-D. For the vessel’s larger windows, namely those associated with the observation lounge and arboretum, ILM took a painstakingly practical approach to match the interiors shown with the real-world set pieces. “We filled that area of the model with tiny, micro-scale furniture,” Goodson informed Cinefex, “including tables and chairs.”
    Rogue One’s rebel team initially traversed the galaxy in a U-wing transport/gunship, which, much like the Enterprise-E, was a unique vessel that nonetheless channeled a certain degree of inspiration from a classic design. Lucasfilm’s Doug Chiang, a co-production designer for Rogue One, referred to the U-wing as the film’s “Huey helicopter version of an X-wing” in the Designing Rogue One bonus featurette on Disney+ before revealing that, “Towards the end of the design cycle, we actually decided that maybe we should put in more X-wing features. And so we took the X-wing engines and literally mounted them onto the configuration that we had going.” Modeled by ILM digital artist Colie Wertz, the U-wing’s final computer graphics design subtly incorporated these X-wing influences to give the transport a distinctive feel without making the craft seem out of place within the rebel fleet.
    While ILM’s work on the Enterprise-E’s viewports offered a compelling view toward the ship’s interior, a breakthrough LED setup for Rogue One permitted ILM to obtain realistic lighting on actors as they looked out from their ships and into the space around them. “All of our major spaceship cockpit scenes were done that way, with the gimbal in this giant horseshoe of LED panels we got from [equipment vendor] VER, and we prepared graphics that went on the screens,” John Knoll shared with American Cinematographer’s Benjamin B and Jon D. Witmer. Furthermore, in Disney+’s Rogue One: Digital Storytelling bonus featurette, visual effects producer Janet Lewin noted, “For the actors, I think, in the space battle cockpits, for them to be able to see what was happening in the battle brought a higher level of accuracy to their performance.”
    The U.S.S. Enterprise-E in Star Trek: First Contact (Credit: Paramount).
    Familiar Foes
    To transport First Contact’s Borg invaders, John Goodson’s team at ILM resurrected the Borg cube design previously seen in Star Trek: The Next Generation (1987) and Star Trek: Deep Space Nine (1993), creating a nearly three-foot physical model to replace the one from the series. Art consultant and ILM veteran Bill George proposed that the cube’s seemingly straightforward layout be augmented with a complex network of photo-etched brass, a suggestion which produced a jagged surface and offered a visual that was both intricate and menacing. ILM also developed a two-foot motion-control model for a Borg sphere, a brand-new auxiliary vessel that emerged from the cube. “We vacuformed about 15 different patterns that conformed to this spherical curve and covered those with a lot of molded and cast pieces. Then we added tons of acid-etched brass over it, just like we had on the cube,” Goodson outlined to Cinefex’s Kevin H. Martin.
    As for Rogue One’s villainous fleet, reproducing the original trilogy’s Death Star and Imperial Star Destroyers centered upon translating physical models into digital assets. Although ILM no longer possessed A New Hope’s three-foot Death Star shooting model, John Knoll recreated the station’s surface paneling by gathering archival images, and as he spelled out to writer Joe Fordham in Cinefex, “I pieced all the images together. I unwrapped them into texture space and projected them onto a sphere with a trench. By doing that with enough pictures, I got pretty complete coverage of the original model, and that became a template upon which to redraw very high-resolution texture maps. Every panel, every vertical striped line, I matched from a photograph. It was as accurate as it was possible to be as a reproduction of the original model.”
    Knoll’s investigative eye continued to pay dividends when analyzing the three-foot and eight-foot Star Destroyer motion-control models, which had been built for A New Hope and Star Wars: The Empire Strikes Back, respectively. “Our general mantra was, ‘Match your memory of it more than the reality,’ because sometimes you go look at the actual prop in the archive building or you look back at the actual shot from the movie, and you go, ‘Oh, I remember it being a little better than that,’” Knoll conveyed to TheASC.com. This philosophy motivated ILM to combine elements from those two physical models into a single digital design. “Generally, we copied the three-footer for details like the superstructure on the top of the bridge, but then we copied the internal lighting plan from the eight-footer,” Knoll explained. “And then the upper surface of the three-footer was relatively undetailed because there were no shots that saw it closely, so we took a lot of the high-detail upper surface from the eight-footer. So it’s this amalgam of the two models, but the goal was to try to make it look like you remember it from A New Hope.”
    A final frame from Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
    Forming Up the Fleets
    In addition to the U.S.S. Enterprise-E, the Battle of Sector 001 debuted numerous vessels representing four new Starfleet ship classes – the Akira, Steamrunner, Saber, and Norway – all designed by ILM visual effects art director Alex Jaeger. “Since we figured a lot of the background action in the space battle would be done with computer graphics ships that needed to be built from scratch anyway, I realized that there was no reason not to do some new designs,” John Knoll told American Cinematographer writer Ron Magid. Used in previous Star Trek projects, older physical models for the Oberth and Nebula classes were mixed into the fleet for good measure, though the vast majority of the armada originated as computer graphics.
    Over at Scarif, ILM portrayed the Rebel Alliance forces with computer graphics models of fresh designs (the MC75 cruiser Profundity and U-wings), live-action versions of Star Wars Rebels’ VCX-100 light freighter Ghost and Hammerhead corvettes, and Star Wars staples (Nebulon-B frigates, X-wings, Y-wings, and more). These ships face off against two Imperial Star Destroyers and squadrons of TIE fighters, and – upon their late arrival to the battle – Darth Vader’s Star Destroyer and the Death Star. The Tantive IV, a CR90 corvette more popularly referred to as a blockade runner, made its own special cameo at the tail end of the fight. As Princess Leia Organa’s (Carrie Fisher and Ingvild Deila) personal ship, the Tantive IV received the Death Star plans and fled the scene, destined to be captured by Vader’s Star Destroyer at the beginning of A New Hope. And, while we’re on the subject of intricate starship maneuvers and space-based choreography…
    Although the First Contact team could plan visual effects shots with animated storyboards, ILM supplied Gareth Edwards with a next-level virtual viewfinder that allowed the director to select his shots by immersing himself among Rogue One’s ships in real time. “What we wanted to do is give Gareth the opportunity to shoot his space battles and other all-digital scenes the same way he shoots his live-action. Then he could go in with this sort of virtual viewfinder and view the space battle going on, and figure out what the best angle was to shoot those ships from,” senior animation supervisor Hal Hickel described in the Rogue One: Digital Storytelling featurette. Hickel divulged that the sequence involving the dish array docking with the Death Star was an example of the “spontaneous discovery of great angles,” as the scene was never storyboarded or previsualized.
    Visual effects supervisor John Knoll with director Gareth Edwards during production of Rogue One: A Star Wars Story (Credit: ILM & Lucasfilm).
    Tough Little Ships
    The Federation and Rebel Alliance each deployed “tough little ships” (an endearing description Commander William T. Riker [Jonathan Frakes] bestowed upon the U.S.S. Defiant in First Contact) in their respective conflicts, namely the U.S.S. Defiant from Deep Space Nine and the Tantive IV from A New Hope. VisionArt had already built a CG Defiant for the Deep Space Nine series, but ILM upgraded the model with images gathered from the ship’s three-foot physical model. A similar tactic was taken to bring the Tantive IV into the digital realm for Rogue One. “This was the Blockade Runner. This was the most accurate 1:1 reproduction we could possibly have made,” model supervisor Russell Paul declared to Cinefex’s Joe Fordham. “We did an extensive photo reference shoot and photogrammetry re-creation of the miniature. From there, we built it out as accurately as possible.” Speaking of sturdy ships, if you look very closely, you can spot a model of the Millennium Falcon flashing across the background as the U.S.S. Defiant makes an attack run on the Borg cube at the Battle of Sector 001!
    Exploration and Hope
    The in-universe ramifications that materialize from the Battles of Sector 001 and Scarif are monumental. The destruction of the Borg cube compels the Borg Queen to travel back in time in an attempt to vanquish Earth before the Federation can even be formed, but Captain Picard and the Enterprise-E foil the plot and end up helping their 21st century ancestors make “first contact” with another species, the logic-revering Vulcans. The post-Scarif benefits take longer to play out for the Rebel Alliance, but the theft of the Death Star plans eventually leads to the superweapon’s destruction. The Galactic Civil War is far from over, but Scarif is a significant step in the Alliance’s effort to overthrow the Empire.
    The visual effects ILM provided for First Contact and Rogue One contributed significantly to the critical and commercial acclaim both pictures enjoyed, a victory reflecting the relentless dedication, tireless work ethic, and innovative spirit embodied by visual effects supervisor John Knoll and ILM’s entire staff. While being interviewed for The Making of Star Trek: First Contact, actor Patrick Stewart praised ILM’s invaluable influence, emphasizing, “ILM was with us, on this movie, almost every day on set. There is so much that they are involved in.” And, regardless of your personal preferences – phasers or lasers, photon torpedoes or proton torpedoes, warp speed or hyperspace – perhaps Industrial Light & Magic’s ability to infuse excitement into both franchises demonstrates that Star Trek and Star Wars encompass themes that are not competitive, but compatible. After all, what goes together better than exploration and hope?

    Jay Stobie (he/him) is a writer, author, and consultant who has contributed articles to ILM.com, Skysound.com, Star Wars Insider, StarWars.com, Star Trek Explorer, Star Trek Magazine, and StarTrek.com. Jay loves sci-fi, fantasy, and film, and you can learn more about him by visiting JayStobie.com or finding him on Twitter, Instagram, and other social media platforms at @StobiesGalaxy.