• Hexagon Taps NVIDIA Robotics and AI Software to Build and Deploy AEON, a New Humanoid

    As a global labor shortage leaves 50 million positions unfilled across industries like manufacturing and logistics, Hexagon — a global leader in measurement technologies — is developing humanoid robots that can lend a helping hand.
    Industrial sectors depend on skilled workers to perform a variety of error-prone tasks, including operating high-precision scanners for reality capture — the process of capturing digital data to replicate the real world in simulation.
    At the Hexagon LIVE Global conference, Hexagon’s robotics division today unveiled AEON — a new humanoid robot built in collaboration with NVIDIA that’s engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support. Hexagon plans to deploy AEON across automotive, transportation, aerospace, manufacturing, warehousing and logistics.
    Future use cases for AEON include:

    Reality capture, which involves automatic planning and then scanning of assets, industrial spaces and environments to generate 3D models. The captured data is then used for advanced visualization and collaboration in the Hexagon Digital Reality (HxDR) platform powering Hexagon Reality Cloud Studio (RCS).
    Manipulation tasks, such as sorting and moving parts in various industrial and manufacturing settings.
    Part inspection, which includes checking parts for defects or ensuring adherence to specifications.
    Industrial operations, including highly dexterous technical tasks like machinery operations, teleoperation and scanning parts using high-end scanners.

    “The age of general-purpose robotics has arrived, due to technological advances in simulation and physical AI,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “Hexagon’s new AEON humanoid embodies the integration of NVIDIA’s three-computer robotics platform and is making a significant leap forward in addressing industry-critical challenges.”

    Using NVIDIA’s Three Computers to Develop AEON 
    To build AEON, Hexagon used NVIDIA’s three computers for developing and deploying physical AI systems. They include AI supercomputers to train and fine-tune powerful foundation models; the NVIDIA Omniverse platform, running on NVIDIA OVX servers, for testing and optimizing these models in simulation environments using real and physically based synthetic data; and NVIDIA IGX Thor robotic computers to run the models.
    Hexagon is exploring using NVIDIA accelerated computing to post-train the NVIDIA Isaac GR00T N1.5 open foundation model to improve robot reasoning and policies, and tapping Isaac GR00T-Mimic to generate vast amounts of synthetic motion data from a few human demonstrations.
    AEON learns many of its skills through simulations powered by the NVIDIA Isaac platform. Hexagon uses NVIDIA Isaac Sim, a reference robotic simulation application built on Omniverse, to simulate complex robot actions like navigation, locomotion and manipulation. These skills are then refined using reinforcement learning in NVIDIA Isaac Lab, an open-source framework for robot learning.


    This simulation-first approach enabled Hexagon to fast-track its robotic development, allowing AEON to master core locomotion skills in just 2-3 weeks — rather than 5-6 months — before real-world deployment.
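    For readers who want a concrete picture of what that simulation-first refinement loop looks like, the sketch below is a minimal, hypothetical illustration using the standard gymnasium API; the environment ID and the random stand-in policy are placeholders, not the actual Isaac Lab task or GR00T-derived model Hexagon uses.
```python
# Minimal, hypothetical sketch of a simulation-first refinement loop.
# The environment ID and the random policy are placeholders, not Hexagon's
# or NVIDIA's actual Isaac Lab task or trained model.
import gymnasium as gym
import numpy as np

# Assumption: a humanoid locomotion task exposed through the gymnasium API.
env = gym.make("HypotheticalHumanoidLocomotion-v0")

def policy(observation: np.ndarray) -> np.ndarray:
    """Stand-in for a learned locomotion policy (e.g. a post-trained foundation model)."""
    return env.action_space.sample()

obs, info = env.reset(seed=0)
for step in range(1_000):
    action = policy(obs)
    obs, reward, terminated, truncated, info = env.step(action)
    # In a real setup the reward stream drives policy updates (e.g. PPO-style
    # training in Isaac Lab) before the refined policy is deployed on-robot.
    if terminated or truncated:
        obs, info = env.reset()

env.close()
```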
    In addition, AEON taps into NVIDIA Jetson Orin onboard computers to autonomously move, navigate and perform its tasks in real time, enhancing its speed and accuracy while operating in complex and dynamic environments. Hexagon is also planning to upgrade AEON with NVIDIA IGX Thor to enable functional safety for collaborative operation.
    “Our goal with AEON was to design an intelligent, autonomous humanoid that addresses the real-world challenges industrial leaders have shared with us over the past months,” said Arnaud Robert, president of Hexagon’s robotics division. “By leveraging NVIDIA’s full-stack robotics and simulation platforms, we were able to deliver a best-in-class humanoid that combines advanced mechatronics, multimodal sensor fusion and real-time AI.”
    Data Comes to Life Through Reality Capture and Omniverse Integration 
    AEON will be piloted in factories and warehouses to scan everything from small precision parts and automotive components to large assembly lines and storage areas.

    Captured data comes to life in RCS, a platform that allows users to collaborate, visualize and share reality-capture data by tapping into HxDR and NVIDIA Omniverse running in the cloud. This removes the constraint of local infrastructure.
    “Digital twins offer clear advantages, but adoption has been challenging in several industries,” said Lucas Heinzle, vice president of research and development at Hexagon’s robotics division. “AEON’s sophisticated sensor suite enables the integration of reality data capture with NVIDIA Omniverse, streamlining workflows for our customers and moving us closer to making digital twins a mainstream tool for collaboration and innovation.”
    AEON’s Next Steps
    By adopting the OpenUSD framework and developing on Omniverse, Hexagon can generate high-fidelity digital twins from scanned data — establishing a data flywheel to continuously train AEON.
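    To make the OpenUSD step more tangible, here is a small, hypothetical sketch using the open-source pxr Python bindings that writes scanned geometry into a USD stage an Omniverse-based pipeline could consume; the file name and the stand-in triangle are illustrative assumptions, not Hexagon's actual reality-capture data.
```python
# Hypothetical sketch: writing scanned geometry into an OpenUSD stage.
# The file path and the stand-in triangle are placeholders, not Hexagon's
# reality-capture pipeline.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("scanned_asset.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)

world = UsdGeom.Xform.Define(stage, "/World")
stage.SetDefaultPrim(world.GetPrim())

# A single triangle stands in for a reconstructed scan mesh.
mesh = UsdGeom.Mesh.Define(stage, "/World/ScannedAsset")
mesh.CreatePointsAttr([(0, 0, 0), (1, 0, 0), (0, 1, 0)])
mesh.CreateFaceVertexCountsAttr([3])
mesh.CreateFaceVertexIndicesAttr([0, 1, 2])

stage.GetRootLayer().Save()
```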
    This latest work with Hexagon is helping shape the future of physical AI — delivering scalable, efficient solutions to address the challenges faced by industries that depend on capturing real-world data.
    Watch the Hexagon LIVE keynote, explore presentations and read more about AEON.
    All imagery courtesy of Hexagon.
  • NVIDIA and Partners Highlight Next-Generation Robotics, Automation and AI Technologies at Automatica

    From the heart of Germany’s automotive sector to manufacturing hubs across France and Italy, Europe is embracing industrial AI and advanced AI-powered robotics to address labor shortages, boost productivity and fuel sustainable economic growth.
    Robotics companies are developing humanoid robots and collaborative systems that integrate AI into real-world manufacturing applications. Supported by a $200 billion investment initiative and coordinated efforts from the European Commission, Europe is positioning itself at the forefront of the next wave of industrial automation, powered by AI.
    This momentum is on full display at Automatica — Europe’s premier conference on advancements in robotics, machine vision and intelligent manufacturing — taking place this week in Munich, Germany.
    NVIDIA and its ecosystem of partners and customers are showcasing next-generation robots, automation and AI technologies designed to accelerate the continent’s leadership in smart manufacturing and logistics.
    NVIDIA Technologies Boost Robotics Development 
    Central to advancing robotics development is Europe’s first industrial AI cloud, announced at NVIDIA GTC Paris at VivaTech earlier this month. The Germany-based AI factory, featuring 10,000 NVIDIA GPUs, provides European manufacturers with secure, sovereign and centralized AI infrastructure for industrial workloads. It will support applications ranging from design and engineering to factory digital twins and robotics.
    To help accelerate humanoid development, NVIDIA released NVIDIA Isaac GR00T N1.5 — an open foundation model for humanoid robot reasoning and skills. This update enhances the model’s adaptability and ability to follow instructions, significantly improving its performance in material handling and manufacturing tasks.
    To help post-train GR00T N1.5, NVIDIA has also released the Isaac GR00T-Dreams blueprint — a reference workflow for generating vast amounts of synthetic trajectory data from a small number of human demonstrations — enabling robots to generalize across behaviors and adapt to new environments with minimal human demonstration data.
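    As a conceptual toy only, and not the GR00T-Dreams blueprint itself, the sketch below shows the basic idea of expanding a handful of recorded demonstrations into many synthetic trajectory variants; the array shapes and noise levels are illustrative assumptions.
```python
# Conceptual toy only: expanding a few demonstrations into many synthetic
# trajectories by smooth perturbation. This is NOT the GR00T-Dreams workflow,
# just an illustration of the underlying idea.
import numpy as np

rng = np.random.default_rng(0)

# Assumption: 3 recorded demonstrations, each 100 timesteps of 7 joint angles.
demos = rng.standard_normal((3, 100, 7))

def augment(demo: np.ndarray, n_variants: int, noise_scale: float = 0.02) -> np.ndarray:
    """Create slightly perturbed copies of one demonstration."""
    variants = []
    for _ in range(n_variants):
        noise = rng.standard_normal(demo.shape) * noise_scale
        # Average the noise over time so perturbed motions stay smooth.
        noise = np.cumsum(noise, axis=0) / np.arange(1, demo.shape[0] + 1)[:, None]
        variants.append(demo + noise)
    return np.stack(variants)

synthetic = np.concatenate([augment(d, n_variants=100) for d in demos])
print(synthetic.shape)  # (300, 100, 7): three demos expanded to 300 trajectories
```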
    In addition, early developer previews of NVIDIA Isaac Sim 5.0 and Isaac Lab 2.2 — open-source robot simulation and learning frameworks optimized for NVIDIA RTX PRO 6000 workstations — are now available on GitHub.
    Image courtesy of Wandelbots.
    Robotics Leaders Tap NVIDIA Simulation Technology to Develop and Deploy Humanoids and More 
    Robotics developers and solutions providers across the globe are integrating NVIDIA’s three computers to train, simulate and deploy robots.
    NEURA Robotics, a German robotics company and pioneer for cognitive robots, unveiled the third generation of its humanoid, 4NE1, designed to assist humans in domestic and professional environments through advanced cognitive capabilities and humanlike interaction. 4NE1 is powered by GR00T N1 and was trained in Isaac Sim and Isaac Lab before real-world deployment.
    NEURA Robotics is also presenting Neuraverse, a digital twin and interconnected ecosystem for robot training, skills and applications, fully compatible with NVIDIA Omniverse technologies.
    Delta Electronics, a global leader in power management and smart green solutions, is debuting two next-generation collaborative robots: D-Bot Mar and D-Bot 2 in 1 — both trained using Omniverse and Isaac Sim technologies and libraries. These cobots are engineered to transform intralogistics and optimize production flows.
    Wandelbots, the creator of the Wandelbots NOVA software platform for industrial robotics, is partnering with SoftServe, a global IT consulting and digital services provider, to scale simulation-first automation using NVIDIA Isaac Sim, enabling virtual validation and real-world deployment with maximum impact.
    Cyngn, a pioneer in autonomous mobile robotics, is integrating its DriveMod technology into Isaac Sim to enable large-scale, high-fidelity virtual testing of advanced autonomous operation. Purpose-built for industrial applications, DriveMod is already deployed on vehicles such as the Motrec MT-160 Tugger and BYD Forklift, delivering sophisticated automation to material handling operations.
    Doosan Robotics, a company specializing in AI robotic solutions, will showcase its “sim to real” solution, using NVIDIA Isaac Sim and cuRobo. Doosan will be showcasing how to seamlessly transfer tasks from simulation to real robots across a wide range of applications — from manufacturing to service industries.
    Franka Robotics has integrated Isaac GR00T N1.5 into a dual-arm Franka Research 3 (FR3) robot for robotic control. The integration of GR00T N1.5 allows the system to interpret visual input, understand task context and autonomously perform complex manipulation — without the need for task-specific programming or hardcoded logic.
    Image courtesy of Franka Robotics.
    Hexagon, the global leader in measurement technologies, launched its new humanoid, dubbed AEON. With its unique locomotion system and multimodal sensor fusion, and powered by NVIDIA’s three-computer solution, AEON is engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support.
    Intrinsic, a software and AI robotics company, is integrating Intrinsic Flowstate with Omniverse and OpenUSD for advanced visualization and digital twins that can be used in many industrial use cases. The company is also using NVIDIA foundation models to enhance robot capabilities like grasp planning through AI and simulation technologies.
    SCHUNK, a global leader in gripping systems and automation technology, is showcasing its innovative grasping kit powered by the NVIDIA Jetson AGX Orin module. The kit intelligently detects objects and calculates optimal grasping points. Schunk is also demonstrating seamless simulation-to-reality transfer using IGS Virtuous software — built on Omniverse technologies — to control a real robot through simulation in a pick-and-place scenario.
    Universal Robots is showcasing UR15, its fastest cobot yet. Powered by the UR AI Accelerator — developed with NVIDIA and running on Jetson AGX Orin using CUDA-accelerated Isaac libraries — UR15 helps set a new standard for industrial automation.

    Vention, a full-stack software and hardware automation company, launched its Machine Motion AI, built on CUDA-accelerated Isaac libraries and powered by Jetson. Vention is also expanding its lineup of robotic offerings by adding the FR3 robot from Franka Robotics to its ecosystem, enhancing its solutions for academic and research applications.
    Image courtesy of Vention.
    Learn more about the latest robotics advancements by joining NVIDIA at Automatica, running through Friday, June 27. 
  • Blender Jobs for June 27, 2025

    Here's an overview of the most recent Blender jobs on Blender Artists, ArtStation and 3djobs.xyz: 3D Generalist (Blender) for Luxury Real Estate Visualization – Collaborative, Time-Bound Project Looking for artist to create product demo videos - delivery within one week 3D rigger/animator Blender artist Aquent | 3D Designer Art Lead TDA | 3D Artist TUEREN, [...]
    Source
  • HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE

    By TREVOR HOGG

    Images courtesy of Warner Bros. Pictures.

    Rather than a world constructed around photorealistic pixels, a video game created by Markus Persson has taken the boxier 3D voxel route, which has become its signature aesthetic, and sparked an international phenomenon that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess in creating the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise under the direction of Production VFX Supervisor Dan Lemmon.

    “As the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.”
    —Talia Finlayson, Creative Technologist, Disguise

    Interior and exterior environments had to be created, such as the shop owned by Steve.

    “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” The project provided new opportunities. “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”

    Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.”
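    For those curious what the “clean and optimize geometry” step can look like in practice, the following hypothetical Blender Python sketch merges duplicate vertices, decimates a dense model and exports it for Unreal Engine ingest; the object name, decimation ratio and export path are placeholders rather than the production setup.
```python
# Hypothetical sketch of a geometry-cleanup pass in Blender's Python API.
# The object name, decimation ratio and export path are placeholders, not the
# actual A Minecraft Movie pipeline.
import bpy

obj = bpy.data.objects["ImportedSetPiece"]  # assumed name of an ingested model
bpy.context.view_layer.objects.active = obj
obj.select_set(True)

# Merge duplicate vertices left over from the CAD/concept-model export.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.remove_doubles(threshold=0.0001)
bpy.ops.object.mode_set(mode='OBJECT')

# Reduce polygon count with a Decimate modifier before handing off to Unreal.
decimate = obj.modifiers.new(name="Decimate", type='DECIMATE')
decimate.ratio = 0.5  # keep roughly half the faces; the value is illustrative
bpy.ops.object.modifier_apply(modifier=decimate.name)

# Export the cleaned mesh as FBX for Unreal Engine ingest.
bpy.ops.export_scene.fbx(filepath="set_piece_clean.fbx", use_selection=True)
```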

    A virtual exploration of Steve’s shop in Midport Village.

    Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. “Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.”

    “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”
    —Laura Bell, Creative Technologist, Disguise

    Among the buildings that had to be created for Midport Village was Steve’s Lava Chicken Shack.

    Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.” At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.”

    Flexibility was critical. “A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.” Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younis adapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!”
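    As a hypothetical illustration of the remesh-heavy Blender workflow Bell describes, the snippet below applies the Remesh modifier in Blocks mode to turn arbitrary geometry into voxel-style cubes; the object name and octree depth are assumptions, not production values.
```python
# Hypothetical illustration of a remesh-heavy pass: converting geometry into
# blocky, voxel-style cubes with Blender's Remesh modifier. The object name
# and octree depth are assumptions, not production values.
import bpy

obj = bpy.data.objects["TerrainChunk"]  # assumed name of a landscape mesh
bpy.context.view_layer.objects.active = obj

remesh = obj.modifiers.new(name="Remesh", type='REMESH')
remesh.mode = 'BLOCKS'            # rebuild the surface as axis-aligned cubes
remesh.octree_depth = 6           # voxel resolution; higher means smaller cubes
remesh.use_remove_disconnected = False

bpy.ops.object.modifier_apply(modifier=remesh.name)
```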

    A virtual study and final still of the cast members standing outside of the Lava Chicken Shack.

    “We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.”
    —Talia Finlayson, Creative Technologist, Disguise

    The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks. “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.”

    Virtually conceptualizing the layout of Midport Village.

    Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.” Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay George and I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.”

    An example of the virtual and final version of the Woodland Mansion.

    “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.”
    —Laura Bell, Creative Technologist, Disguise

    Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.”

    Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment.

    Doing a virtual scale study of the Mountainside.

    Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.”

    Piglots cause mayhem during the Wingsuit Chase.

    Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods.

    “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols, Pat Younis, Jake Tuck and Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.”
    WWW.VFXVOICE.COM
    HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE
    By TREVOR HOGG
    Images courtesy of Warner Bros. Pictures.

    Rather than a world constructed around photorealistic pixels, a video game created by Markus Persson has taken the boxier 3D voxel route, which has become its signature aesthetic, and sparked an international phenomenon that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess in creating the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise under the direction of Production VFX Supervisor Dan Lemmon.

    “[A]s the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” —Talia Finlayson, Creative Technologist, Disguise

    Interior and exterior environments had to be created, such as the shop owned by Steve (Jack Black). “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.”

    The project provided new opportunities. “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”

    Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.”

    A virtual exploration of Steve’s shop in Midport Village.
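    Finlayson's description of the ingest step — pulling in art department models, cleaning and optimizing geometry in a DCC such as Blender, then assembling the world in Unreal Engine — corresponds to a fairly routine cleanup pass. The short Blender Python sketch below is only an illustration of that kind of pass, not the VAD's actual tooling; the merge distance and decimation ratio are assumed values.

```python
import bpy

MERGE_DIST = 0.0005   # assumed threshold for welding near-duplicate vertices
DECIMATE_RATIO = 0.5  # assumed polygon-reduction ratio for heavy CAD/scan meshes

def add_cleanup_modifiers(obj):
    """Attach a non-destructive cleanup stack to one mesh object."""
    weld = obj.modifiers.new(name="VAD_Weld", type='WELD')
    weld.merge_threshold = MERGE_DIST          # weld coincident vertices
    dec = obj.modifiers.new(name="VAD_Decimate", type='DECIMATE')
    dec.ratio = DECIMATE_RATIO                 # thin out overly dense geometry

# Run over every selected mesh before exporting the set to Unreal Engine.
for obj in bpy.context.selected_objects:
    if obj.type == 'MESH':
        add_cleanup_modifiers(obj)
        print(f"cleanup stack added: {obj.name}")
```

    Export to Unreal (FBX or USD) would then happen on the modifier-applied result; the production pipeline almost certainly automated more than this simple pass.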
    Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. “Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.”

    “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.” —Laura Bell, Creative Technologist, Disguise

    Among the buildings that had to be created for Midport Village was Steve’s (Jack Black) Lava Chicken Shack. Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.”

    At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.”

    Flexibility was critical. “A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.”
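    Finlayson's point about modular levels and clean naming conventions is the sort of rule that is easy to check with a small editor script. The sketch below assumes the Unreal Editor's built-in Python scripting (the `unreal` module), an invented `/Game/Environments` content path and an invented prefix map; it simply reports assets whose names break a prefix convention and is an illustration, not Disguise's pipeline.

```python
import unreal  # available when run inside the Unreal Editor's Python environment

# Hypothetical convention: asset class name -> required name prefix.
PREFIXES = {
    "StaticMesh": "SM_",
    "Material": "M_",
    "MaterialInstanceConstant": "MI_",
    "Texture2D": "T_",
}

ROOT = "/Game/Environments"  # assumed content folder for the VAD levels

def report_naming_violations(root=ROOT):
    """Log assets under `root` whose names do not follow the prefix convention."""
    for path in unreal.EditorAssetLibrary.list_assets(root, recursive=True):
        asset = unreal.EditorAssetLibrary.load_asset(path)  # note: loads each asset
        if asset is None:
            continue
        prefix = PREFIXES.get(asset.get_class().get_name())
        if prefix and not asset.get_name().startswith(prefix):
            unreal.log_warning(f"{path}: expected prefix {prefix}")

report_naming_violations()
```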
    Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younis [VAD Art Director] adapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!”

    A virtual study and final still of the cast members standing outside of the Lava Chicken Shack.
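    Bell's reliance on Blender's Remesh modifier fits the film's voxel look: in its Blocks mode the modifier rebuilds a mesh out of axis-aligned cubes. A minimal sketch of that setup follows; the object name and octree depth are placeholders, not values from the production.

```python
import bpy

def voxelize(obj, octree_depth=6):
    """Rebuild a mesh from cubes using the Remesh modifier in Blocks mode."""
    mod = obj.modifiers.new(name="Blockify", type='REMESH')
    mod.mode = 'BLOCKS'              # axis-aligned cubes, i.e. the Minecraft look
    mod.octree_depth = octree_depth  # higher depth = smaller blocks
    # Applying the modifier bakes the blocky result into the mesh data.
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.modifier_apply(modifier=mod.name)

voxelize(bpy.data.objects["LavaChickenShack_Proxy"])  # hypothetical object name
```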
    “We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.” —Talia Finlayson, Creative Technologist, Disguise

    The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks.

    “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.”

    Virtually conceptualizing the layout of Midport Village.

    Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.”

    Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay George [VP Tech] and I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.”

    An example of the virtual and final version of the Woodland Mansion.

    “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.” —Laura Bell, Creative Technologist, Disguise

    Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.”

    Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment.

    Doing a virtual scale study of the Mountainside.

    Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.”

    Piglots cause mayhem during the Wingsuit Chase.

    Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods.
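    The camera-path handoff Bell describes, with Simulcam recording the tracked camera so post-production receives the exact move, usually ends up as per-frame transform samples in some interchange format. The snippet below only illustrates dumping such samples to CSV; the field layout, units and file name are invented for the example and are not the show's actual delivery spec.

```python
import csv

# Hypothetical per-frame samples: frame number, position and rotation as
# reported by the tracking system (centimetres and degrees assumed here).
samples = [
    (1001, 120.0, -35.5, 180.2, 2.1, 91.4, 0.0),
    (1002, 121.6, -35.1, 180.2, 2.0, 91.9, 0.0),
]

with open("shot_0420_camera_track.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["frame", "x", "y", "z", "pitch", "yaw", "roll"])
    writer.writerows(samples)
```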
    “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols [VAD Supervisor], Pat Younis, Jake Tuck [Unreal Artist] and Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.”

    There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.”
  • In the shadows of my solitude, I find myself contemplating the weight of my choices, as if each decision has led me further into a labyrinth of despair. Just like the latest updates from NIM Labs with their NIM 7.0 launch, promising new scheduling and conflict detection, I yearn for a path that seems to elude me. Yet, here I am, lost in a world that feels cold and uninviting, where even the brightest features of life fail to illuminate the darkness I feel inside.

    The updates in technology bring hope to many, but for me, they serve as a stark reminder of the isolation that wraps around my heart. The complexities of resource usage tracking in VFX and visualization echo the intricacies of my own emotional landscape, where every interaction feels like a conflict, and every moment is a struggle for connection. I watch as others thrive, their lives intertwined like intricate designs in a visual masterpiece, while I remain a mere spectator, trapped in a canvas of loneliness.

    Each day, I wake up to the silence that fills my room, a silence that feels heavier than the weight of my unexpressed thoughts. The world moves on without me, as if my existence is nothing more than a glitch in the matrix of life. The features that are meant to enhance productivity and creativity serve as a painful juxtaposition to my stagnation. I scroll through updates, seeing others flourish, their accomplishments a bittersweet reminder of what I long for but cannot grasp.

    I wish I could schedule joy like a meeting, or detect conflicts in my heart as easily as one might track resources in a studio management platform. Instead, I find myself tangled in emotions that clash like colors on a poorly rendered screen, each hue representing a fragment of my shattered spirit. The longing for connection is overshadowed by the fear of rejection, creating a cycle of heartache that feels impossible to escape.

    As I sit here, gazing at the flickering screen, I can’t help but wonder if anyone truly sees me. The thought is both comforting and devastating; I crave companionship yet fear the vulnerability that comes with it. The updates and features of NIM Labs remind me of the progress others are making, while I remain stagnant, longing for the warmth of a shared experience.

    In a world designed for collaboration and creativity, I find myself adrift, yearning for my own version of the features NIM 7.0 brings to others. I wish for a way to bridge the gap between my isolation and the vibrant connections that seem to thrive all around me.

    But for now, I am left with my thoughts, my heart heavy with unspoken words, as the silence of my solitude envelops me once more.

    #Loneliness #Heartbreak #Isolation #NIMLabs #EmotionalStruggles
    NIM Labs launches NIM 7.0
    Studio management platform for VFX and visualization gets new scheduling, conflict detection and resource usage tracking features.
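    For readers unfamiliar with what "conflict detection" means in a scheduling tool, the toy Python sketch below flags double-booked resources by checking bookings for overlapping time ranges. It is a generic illustration of the concept only, not based on NIM 7.0's actual implementation or API.

```python
from dataclasses import dataclass
from datetime import datetime
from itertools import combinations

@dataclass
class Booking:
    resource: str      # e.g. an artist or a render node
    job: str
    start: datetime
    end: datetime

def find_conflicts(bookings):
    """Return pairs of bookings that overlap in time on the same resource."""
    return [
        (a, b)
        for a, b in combinations(bookings, 2)
        if a.resource == b.resource and a.start < b.end and b.start < a.end
    ]

bookings = [
    Booking("artist_01", "ShowA_comp", datetime(2025, 7, 1, 9), datetime(2025, 7, 1, 13)),
    Booking("artist_01", "ShowB_layout", datetime(2025, 7, 1, 12), datetime(2025, 7, 1, 16)),
]
for a, b in find_conflicts(bookings):
    print(f"conflict on {a.resource}: {a.job} overlaps {b.job}")
```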
  • data analysis, dashboards, data visualization, business decisions, analytics mistakes, ineffective dashboards, data solutions, performance metrics, business intelligence

    ## Introduction

    Are you one of those professionals who spends countless hours crafting a visually stunning dashboard, only to discover that nobody gives a damn about it? Welcome to the harsh reality of data analysis! It's shocking how many analysts fall into this trap, pouring their creativity and skill into something that ult...
    Dashboard: The Mistake That Ruins Your Data Analysis
  • Ankur Kothari Q&A: Customer Engagement Book Interview

    Reading Time: 9 minutes
    In marketing, data isn’t a buzzword. It’s the lifeblood of all successful campaigns.
    But are you truly harnessing its power, or are you drowning in a sea of information? To answer this question, we sat down with Ankur Kothari, a seasoned Martech expert, to dive deep into this crucial topic.
    This interview, originally conducted for Chapter 6 of “The Customer Engagement Book: Adapt or Die” explores how businesses can translate raw data into actionable insights that drive real results.
    Ankur shares his wealth of knowledge on identifying valuable customer engagement data, distinguishing between signal and noise, and ultimately, shaping real-time strategies that keep companies ahead of the curve.

     
    Ankur Kothari Q&A Interview
    1. What types of customer engagement data are most valuable for making strategic business decisions?
    Primarily, there are four different buckets of customer engagement data. I would begin with behavioral data, encompassing website interaction, purchase history, and other app usage patterns.
    Second would be demographic information: age, location, income, and other relevant personal characteristics.
    Third would be sentiment analysis, where we derive information from social media interaction, customer feedback, or other customer reviews.
    Fourth would be the customer journey data.

    We track touchpoints across various channels of the customers to understand the customer journey path and conversion. Combining these four primary sources helps us understand the engagement data.

    2. How do you distinguish between data that is actionable versus data that is just noise?
    First is keeping it relevant to your business objectives: actionable data directly relates to your specific goals or KPIs. Then you take help from statistical significance.
    Actionable data shows clear patterns or trends that are statistically valid, whereas other data consists of random fluctuations or outliers, which may not be what you are interested in.

    You also want to make sure that there is consistency across sources.
    Actionable insights are typically corroborated by multiple data points or channels, while other data or noise can be more isolated and contradictory.
    Actionable data suggests clear opportunities for improvement or decision making, whereas noise does not lead to meaningful actions or changes in strategy.

    By applying these criteria, I can effectively filter out the noise and focus on data that delivers or drives valuable business decisions.
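
    As a concrete illustration of the "statistical significance" filter Kothari describes, here is a small, self-contained two-proportion z-test comparing conversion before and after a change. The numbers are made up; the point is simply that an apparent lift should clear a significance bar before it is treated as actionable rather than noise.

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value
    return z, p_value

# Made-up example: 480/12,000 conversions before, 560/11,800 after.
z, p = two_proportion_z_test(480, 12_000, 560, 11_800)
print(f"z = {z:.2f}, p = {p:.4f} -> {'signal' if p < 0.05 else 'probably noise'}")
```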

    3. How can customer engagement data be used to identify and prioritize new business opportunities?
    First, it helps us to uncover unmet needs.

    By analyzing the customer feedback, touch points, support interactions, or usage patterns, we can identify the gaps in our current offerings or areas where customers are experiencing pain points.

    Second would be identifying emerging needs.
    Monitoring changes in customer behavior or preferences over time can reveal new market trends or shifts in demand, allowing my company to adapt their products or services accordingly.
    Third would be segmentation analysis.
    Detailed customer data analysis enables us to identify unserved or underserved segments or niche markets that may represent untapped opportunities for growth or expansion into newer areas and new geographies.
    Last is to build competitive differentiation.

    Engagement data can highlight where our companies outperform competitors, helping us to prioritize opportunities that leverage existing strengths and unique selling propositions.

    4. Can you share an example of where data insights directly influenced a critical decision?
    I will share an example from my previous organization at one of the financial services where we were very data-driven, which made a major impact on our critical decision regarding our credit card offerings.
    We analyzed the customer engagement data, and we discovered that a large segment of our millennial customers were underutilizing our traditional credit cards but showed high engagement with mobile payment platforms.
    That insight led us to develop and launch our first digital credit card product with enhanced mobile features and rewards tailored to the millennial spending habits. Since we had access to a lot of transactional data as well, we were able to build a financial product which met that specific segment’s needs.

    That data-driven decision resulted in a 40% increase in our new credit card applications from this demographic within the first quarter of the launch. Subsequently, our market share improved in that specific segment, which was very crucial.

    5. Are there any other examples of ways that you see customer engagement data being able to shape marketing strategy in real time?
    When it comes to using the engagement data in real-time, we do quite a few things. In the recent past two, three years, we are using that for dynamic content personalization, adjusting the website content, email messaging, or ad creative based on real-time user behavior and preferences.
    We automate campaign optimization using specific AI-driven tools to continuously analyze performance metrics and automatically reallocate the budget to top-performing channels or ad segments.
    Then we also build responsive social media engagement platforms like monitoring social media sentiments and trending topics to quickly adapt the messaging and create timely and relevant content.

    With one-on-one personalization, we do a lot of A/B testing as part of the overall rapid testing of marketing elements like subject lines and CTAs, building out the successful variants of the campaigns.
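
    Kothari also mentions tools that "automatically reallocate the budget to top-performing channels." Real systems vary widely, but the simplest version is rule-based proportional reallocation with a floor, as in the toy sketch below; the channel names, conversion rates and 10% floor are invented for the example.

```python
def reallocate_budget(total_budget, conversion_rates, floor_share=0.10):
    """Split budget proportionally to conversion rate, keeping a minimum share
    per channel so weaker channels still receive enough traffic to be measured."""
    floor = total_budget * floor_share
    remaining = total_budget - floor * len(conversion_rates)
    total_rate = sum(conversion_rates.values()) or 1.0
    return {
        channel: floor + remaining * rate / total_rate
        for channel, rate in conversion_rates.items()
    }

rates = {"email": 0.031, "paid_search": 0.022, "social": 0.009}  # made-up rates
for channel, budget in reallocate_budget(50_000, rates).items():
    print(f"{channel:12s} -> ${budget:,.0f}")
```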

    6. How are you doing the 1:1 personalization?
    We have advanced CDP systems, and we are tracking each customer’s behavior in real-time. So the moment they move to different channels, we know what the context is, what the relevance is, and the recent interaction points, so we can cater the right offer.
    So for example, if you looked at a certain offer on the website and you came from Google, and then the next day you walk into an in-person interaction, our agent will already know that you were looking at that offer.
    That gives our customer or potential customer more one-to-one personalization instead of just segment-based or bulk interaction kind of experience.

    We have a huge team of data scientists, data analysts, and AI model creators who help us to analyze big volumes of data and bring the right insights to our marketing and sales team so that they can provide the right experience to our customers.
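
    The cross-channel handoff described here, where a CDP keeps per-customer context so an in-branch agent sees yesterday's website behavior, boils down to a profile store keyed by customer ID that every channel reads before acting. The sketch below is a deliberately naive in-memory version with invented event and offer names, intended only to make the data flow concrete; a real CDP adds identity resolution, consent checks and persistence.

```python
from collections import defaultdict

# customer_id -> ordered list of interaction events from any channel
profiles = defaultdict(list)

def record_event(customer_id, channel, event, detail):
    profiles[customer_id].append({"channel": channel, "event": event, "detail": detail})

def next_best_offer(customer_id):
    """Pick an offer from the most recent relevant interaction, whatever the channel."""
    for event in reversed(profiles[customer_id]):
        if event["event"] == "viewed_offer":
            return f"resume conversation about {event['detail']}"
    return "present default onboarding offer"

# Day 1: web visit arriving from search; Day 2: walk-in branch visit.
record_event("cust_123", "web", "viewed_offer", "travel rewards card")
record_event("cust_123", "branch", "walk_in", "teller desk")
print(next_best_offer("cust_123"))  # -> resume conversation about travel rewards card
```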

    7. What role does customer engagement data play in influencing cross-functional decisions, such as with product development, sales, and customer service?
    Primarily with product development: we have different products, not just the financial products or whatever products an organization sells, but also products like the mobile apps or websites customers use for transactions. So that kind of product development gets improved.
    The engagement data helps our sales and marketing teams create more targeted campaigns, optimize channel selection, and refine messaging to resonate with specific customer segments.

    Customer service also gets helped by anticipating common issues, personalizing support interactions over the phone or email or chat, and proactively addressing potential problems, leading to improved customer satisfaction and retention.

    So in general, cross-functional application of engagement improves the customer-centric approach throughout the organization.

    8. What do you think some of the main challenges marketers face when trying to translate customer engagement data into actionable business insights?
    I think the huge amount of data we are dealing with. As we are getting more digitally savvy and most of the customers are moving to digital channels, we are getting a lot of data, and that sheer volume of data can be overwhelming, making it very difficult to identify truly meaningful patterns and insights.

    Because of the huge data overload, we create data silos in this process, so information often exists in separate systems across different departments. We are not able to build a holistic view of customer engagement.

    Because of data silos and overload of data, data quality issues appear. There is inconsistency, and inaccurate data can lead to incorrect insights or poor decision-making. Quality issues could also be due to the wrong format of the data, or the data is stale and no longer relevant.
    As we are growing and adding more people to help us understand customer engagement, I’ve also noticed that technical folks, especially data scientists and data analysts, lack skills to properly interpret the data or apply data insights effectively.
    So there’s a lack of understanding of marketing and sales as domains.
    It’s a huge effort and can take a lot of investment.

    Not being able to calculate the ROI of your overall investment is a big challenge that many organizations are facing.

    9. Why do you think the analysts don’t have the business acumen to properly do more than analyze the data?
    If people do not have the right idea of why we are collecting this data, we collect a lot of noise, and that brings in huge volumes of data. If you cannot stop that from step one—not bringing noise into the data system—that cannot be done by just technical folks or people who do not have business knowledge.
    Business people do not know everything about what data is being collected from which source and what data they need. It’s a gap between business domain knowledge, specifically marketing and sales needs, and technical folks who don’t have a lot of exposure to that side.

    Similarly, marketing business people do not have much exposure to the technical side — what’s possible to do with data, how much effort it takes, what’s relevant versus not relevant, and how to prioritize which data sources will be most important.

    10. Do you have any suggestions for how this can be overcome, or have you seen it in action where it has been solved before?
    First, cross-functional training: training different roles to help them understand why we’re doing this and what the business goals are, giving technical people exposure to what marketing and sales teams do.
    And giving business folks exposure to the technology side through training on different tools, strategies, and the roadmap of data integrations.
    The second is helping teams work more collaboratively. So it’s not like the technology team works in a silo and comes back when their work is done, and then marketing and sales teams act upon it.

    Now we’re making it more like one team. You work together so that you can complement each other, and we have a better strategy from day one.

    11. How do you address skepticism or resistance from stakeholders when presenting data-driven recommendations?
    We present clear business cases where we demonstrate how data-driven recommendations can directly align with business objectives and potential ROI.
    We build compelling visualizations, easy-to-understand charts and graphs that clearly illustrate the insights and the implications for business goals.

    We also do a lot of POCs and pilot projects with small-scale implementations to showcase tangible results and build confidence in the data-driven approach throughout the organization.

    12. What technologies or tools have you found most effective for gathering and analyzing customer engagement data?
    I’ve found that Customer Data Platforms help us unify customer data from various sources, providing a comprehensive view of customer interactions across touch points.
    Having advanced analytics platforms — tools with AI and machine learning capabilities that can process large volumes of data and uncover complex patterns and insights — is a great value to us.
    We always use, or many organizations use, marketing automation systems to improve marketing team productivity, helping us track and analyze customer interactions across multiple channels.
    Another thing is social media listening tools, wherever your brand is mentioned or you want to measure customer sentiment over social media, or track the engagement of your campaigns across social media platforms.

    Last is web analytical tools, which provide detailed insights into your website visitors’ behaviors and engagement metrics, for browser apps, small browser apps, various devices, and mobile apps.

    13. How do you ensure data quality and consistency across multiple channels to make these informed decisions?
    We established clear guidelines for data collection, storage, and usage across all channels to maintain consistency. Then we use data integration platforms — tools that consolidate data from various sources into a single unified view, reducing discrepancies and inconsistencies.
    While we collect data from different sources, we clean the data so it becomes cleaner with every stage of processing.
    We also conduct regular data audits — performing periodic checks to identify and rectify data quality issues, ensuring accuracy and reliability of information. We also deploy standardized data formats.

    On top of that, we have various automated data cleansing tools, specific software to detect and correct data errors, redundancies, duplicates, and inconsistencies in data sets automatically.
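
    Several of the practices listed here, such as standardized formats, deduplication and automated cleansing, are easy to picture as a small pandas pass over raw event exports. The sketch below uses invented column names and assumes pandas is available; it illustrates the kind of checks involved rather than a production pipeline.

```python
import pandas as pd

raw = pd.DataFrame([
    {"customer_id": "C1", "email": " Ana@Example.com ", "event": "click", "ts": "2025-07-01 10:02"},
    {"customer_id": "C1", "email": "ana@example.com",   "event": "click", "ts": "2025-07-01 10:02"},
    {"customer_id": "C2", "email": "bo@example.com",    "event": "open",  "ts": "not a date"},
])

clean = raw.copy()
clean["email"] = clean["email"].str.strip().str.lower()         # standardize format
clean["ts"] = pd.to_datetime(clean["ts"], errors="coerce")      # invalid timestamps -> NaT
clean = clean.dropna(subset=["ts"])                             # drop (or route to review) unusable rows
clean = clean.drop_duplicates(subset=["customer_id", "event", "ts"])  # remove exact repeats

print(f"{len(raw) - len(clean)} problem rows removed or merged")
```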

    14. How do you see the role of customer engagement data evolving in shaping business strategies over the next five years?
    The first thing that’s been the biggest trend from the past two years is AI-driven decision making, which I think will become more prevalent, with advanced algorithms processing vast amounts of engagement data in real-time to inform strategic choices.
    Somewhat related to this is predictive analytics, which will play an even larger role, enabling businesses to anticipate customer needs and market trends with more accuracy and better predictive capabilities.
    We also touched upon hyper-personalization. We are all trying to strive toward more hyper-personalization at scale, which is more one-on-one personalization, as we are increasingly capturing more engagement data and have bigger systems and infrastructure to support processing those large volumes of data so we can achieve those hyper-personalization use cases.
    As the world is collecting more data, privacy concerns and regulations come into play.
    I believe in the next few years there will be more innovation toward how businesses can collect data ethically and what the usage practices are, leading to more transparent and consent-based engagement data strategies.
    And lastly, I think about the integration of engagement data, which is always a big challenge. I believe as we’re solving those integration challenges, we are adding more and more complex data sources to the picture.

    So I think there will need to be more innovation or sophistication brought into data integration strategies, which will help us take a truly customer-centric approach to strategy formulation.

     
    This interview Q&A was hosted with Ankur Kothari, a previous Martech Executive, for Chapter 6 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
    The post Ankur Kothari Q&A: Customer Engagement Book Interview appeared first on MoEngage.
  • Building an Architectural Visualization Community: The Case for Physical Gatherings

    Barbara Betlejewska is a PR consultant and manager with extensive experience in architecture and real estate, currently involved with World Visualization Festival, a global event bringing together CGI and digital storytelling professionals for 3 days of presentations, workshops, and networking in Warsaw, Poland, this October.
    Over the last twenty years, visualization and 3D rendering have evolved from supporting tools to become central pillars of architectural storytelling, design development, and marketing across various industries. As digital technologies have advanced, the landscape of creative work has changed dramatically. Artists can now collaborate with clients worldwide without leaving their homes, and their careers can flourish without ever setting foot in a traditional studio.
    In this hyper-connected world, where access to knowledge, clients, and inspiration is just a click away, do we still need to gather in person? Do conferences, festivals and meetups in the CGI and architectural visualization world still carry weight?

    The People Behind the Pixels
    Professionals from the visualization industry exchanging ideas at WVF 2024.
    For a growing number of professionals — especially those in creative and tech-driven fields — remote work has become the norm. The shift to digital workflows, accelerated by the pandemic, has brought freedom and flexibility that many are reluctant to give up. It’s easier than ever to work for clients in distant cities or countries, to build a freelance career from a laptop, or to pursue the lifestyle of a digital nomad.
    On the surface, it is a broadening of horizons. But for many, the freedom of remote work comes with a cost: isolation. For visualization artists, the reality often means spending long hours alone, rarely interacting face-to-face with peers or collaborators. And while there are undeniable advantages to independent work, the lack of human connection can lead to creative stagnation, professional burnout, and a sense of detachment from the industry as a whole.
    Despite being a highly technical and often solitary craft, visualization and CGI thrive on the exchange of ideas, feedback and inspiration. The tools and techniques evolve rapidly, and staying relevant usually means learning not just from tutorials but from honest conversations with others who understand the nuances of the field.

    A Community in the Making
    Professionals from the visualization industry exchanging ideas at WVF 2024.
    That need for connection is what pushed Michał Nowak, a Polish visualizer and founder of Nowak Studio, to organize Poland’s first-ever architectural visualization meetup in 2017. With no background in event planning, he wasn’t sure where to begin, but he knew something was missing. The Polish Arch Viz scene lacked a shared space for meetings, discussions, and idea exchange. Michał wanted more than screen time; he wanted honest conversations, spontaneous collaboration and a chance to grow alongside others in the field.
    What began as a modest gathering quickly grew into something much bigger. That original meetup evolved into what is now the World Visualization Festival, an international event that welcomes artists from across Europe and beyond.
    “I didn’t expect our small gathering to grow into a global festival,” Michał says. “But I knew I wanted a connection. I believed that through sharing ideas and experiences, we could all grow professionally, creatively, and personally. And that we’d enjoy the journey more.”
    The response was overwhelming. Each year, more artists from across Poland and Europe join the event in Wrocław, located in south-western Poland. Michał also traveled to other festivals in countries like Portugal and Austria, where he observed the same thing: a spirit of openness, generosity, and shared curiosity. No matter the country or the maturity of the market, the needs were the same — people wanted to connect, learn and grow.
    And beyond the professional side, there was something else: joy. These events were simply fun. They were energizing. They gave people a reason to step away from their desks and remember why they love what they do.

    The Professional Benefits
    Hands-on learning at the AI-driven visualization workshop in Warsaw, October 2024.
    The professional benefits of attending industry events are well documented. These gatherings provide access to mentorship, collaboration and knowledge that can be challenging to find online. Festivals and industry meetups serve as platforms for emerging trends, new tools and fresh workflows — often before they hit the mainstream. They’re places where ideas collide, assumptions are challenged and growth happens.
    The range of topics covered at such events is broad, encompassing everything from portfolio reviews and in-depth discussions of particular rendering engines to discussions about pricing your work and building a sustainable business. At the 2024 edition of the World Visualization Festival, panels focused on scaling creative businesses and navigating industry rates drew some of the biggest crowds, proving that artists are hungry for both artistic and entrepreneurial insights.
    Being part of a creative community also shapes professional identity. It’s not just about finding clients — it’s about finding your place. In a field as fast-moving and competitive as Arch Viz, connection and conversation aren’t luxuries. They’re tools for survival.
    There’s also the matter of building your social capital. Online interactions can only go so far. Meeting someone in person builds relationships that stick. The coffee-break conversations, the spontaneous feedback — these are the moments that cement a community and have the power to spark future projects or long-lasting partnerships. This usually doesn’t happen in Zoom calls.
    And let’s not forget the symbolic power of events like industry awards, such as Architizer’s Vision Awards or CGArchitect’s 3D Awards. These aren’t just celebrations of talent; they’re affirmations of the craft itself. They contribute to the growth and cohesion of the industry while helping to establish and promote best practices. These events clearly define the role and significance of CGI and visualization as a distinct profession, positioned at the intersection of architecture, marketing, and sales. They advocate for the field to be recognized on its own terms, not merely as a support service, but as an independent discipline. For its creators, they bring visibility, credit, and recognition — elements that inspire growth and fuel motivation to keep pushing the craft forward. Occasions like these remind us that what we do has actual value, impact and meaning.

    The Energy We Take Home
    The WVF 2024 afterparty provided a vibrant space for networking and celebration in Warsaw.
    Many artists describe the post-event glow: a renewed sense of purpose, a fresh jolt of energy, an eagerness to get back to work. Sometimes, new projects emerge, new clients appear, or long-dormant ideas finally gain momentum. These events aren’t just about learning — they’re about recharging.
    One of the most potent moments of last year’s WVF was a series of talks focused on mental health and creative well-being. Co-organized by Michał Nowak and the Polish Arch Viz studio ELEMENT, the festival addressed the emotional realities of the profession, including burnout, self-doubt, and the pressure to constantly produce. These conversations resonated deeply because they were real.
    Seeing that others face the same struggles — and come through them — is profoundly reassuring. Listening to someone share a business strategy that worked, or a failure they learned from, turns competition into camaraderie. Vulnerability becomes strength. Shared experiences become the foundation of resilience.

    Make a Statement. Show up!
    Top industry leaders shared insights during presentations at WVF 2024.
    In an era when nearly everything can be done online, showing up in person is a powerful statement. It says: I want more than just efficiency. I want connection, creativity and conversation.
    As the CGI and visualization industries continue to evolve, the need for human connection hasn’t disappeared — it’s grown stronger. Conferences, festivals and meetups, such as World Viz Fest, remain vital spaces for knowledge sharing, innovation and community building. They give us a chance to reset, reconnect and remember that we are part of something bigger than our screens.
    So, yes, despite the tools, the bandwidth, and the ever-faster workflows, we still need to meet in person. Not out of nostalgia, but out of necessity. Because, no matter how far technology takes us, creativity remains a human endeavor.
    Architizer’s Vision Awards are back! The global awards program honors the world’s best architectural concepts, ideas and imagery. Start your entry ahead of the Final Entry Deadline on July 11th. 
    The post Building an Architectural Visualization Community: The Case for Physical Gatherings appeared first on Journal.
  • Komires: Matali Physics 6.9 Released

    We are pleased to announce the release of Matali Physics 6.9, the next significant step on the way to the seventh major version of the environment. Matali Physics 6.9 introduces a number of improvements and fixes to the Matali Physics Core, Matali Render and Matali Games modules, and presents physics-driven, completely dynamic light sources, real-time object scaling with destruction, a lighting model simulating global illumination (GI) in some aspects, comprehensive support for Wayland on Linux, and more.

    Posted by komires on Jun 3rd, 2025
    What is Matali Physics?
    Matali Physics is an advanced, modern, multi-platform, high-performance 3d physics environment intended for games, VR, AR, physics-based simulations and robotics. Matali Physics consists of the advanced 3d physics engine Matali Physics Core and other physics-driven modules that all together provide comprehensive simulation of physical phenomena and physics-based modeling of both real and imaginary objects.
    What's new in version 6.9?

    Physics-driven, completely dynamic light sources. The introduced solution allows for processing hundreds of movable, long-range and shadow-casting light sources, where each source can be assigned logic that controls its behavior and changes light parameters, volumetric effects parameters and others;
    Real-time object scaling with destruction. All groups of physics objects and groups of physics objects with constraints may be subject to a destruction process during real-time scaling, allowing group members to break off at different sizes;
    Lighting model simulating global illumination (GI) in some aspects. Based on our own research and development work, processed in real time, ready for dynamic scenes, fast on mobile devices, and not based on lightmaps, light probes, baked lights, etc.;
    Comprehensive support for Wayland on Linux. The latest version allows Matali Physics SDK users to create advanced, high-performance, physics-based, Vulkan-based games for modern Linux distributions where Wayland is the main display server protocol;
    Other improvements and fixes; the complete list is available on the History webpage.

    What platforms does Matali Physics support?

    Android
    Android TV
    *BSD
    iOS
    iPadOS
    Linux
    macOS
    Steam Deck
    tvOS
    UWP (Desktop, Xbox Series X/S)
    Windows (Classic, GDK, Handheld consoles)

    What are the benefits of using Matali Physics?

    Physics simulation, graphics, sound and music integrated into one total multimedia solution where creating complex interactions and behaviors is common and relatively easy
    Composed of dedicated modules that do not require additional licences and fees
    Supports fully dynamic and destructible scenes
    Supports physics-based behavioral animations
    Supports physical AI, object motion and state change control
    Supports physics-based GUI
    Supports physics-based particle effects
    Supports multi-scene physics simulation and scene combining
    Supports physics-based photo mode
    Supports physics-driven sound
    Supports physics-driven music
    Supports debug visualization
    Fully serializable and deserializable
    Available for all major mobile, desktop and TV platforms
    New features on request
    Dedicated technical support
    Regular updates and fixes

    If you have questions related to the latest version and the use of Matali Physics environment as a game creation solution, please do not hesitate to contact us.
  • The 2025 Complete Splunk Beginner Bundle is now 25% off

    Deal

     When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works.

    The 2025 Complete Splunk Beginner Bundle is now 25% off

    Neowin Deals
    ·

    Jun 14, 2025 16:00 EDT

    Today's highlighted deal comes via the Neowin Deals store, where you can save 75% on The 2025 Complete Splunk Beginner Bundle.
    Splunk is a powerful data platform used to gather information from multiple sources and index it for efficient access. You can then use collected data to create visualizations, analytics, and a variety of automated and security-related functions. With its web-style interface, Splunk is easy to use and is utilized by many companies worldwide.
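    For readers unfamiliar with Splunk, the sketch below shows what a basic programmatic search looks like using the official splunk-sdk Python package; the host, credentials, index name, and query are placeholder assumptions for illustration only and are not tied to the bundle's course material.

```python
# Minimal sketch: run a one-shot search against a Splunk instance using the
# official splunk-sdk package (pip install splunk-sdk).
# The host, credentials, index and query below are placeholder assumptions.
import splunklib.client as client
import splunklib.results as results

service = client.connect(
    host="localhost",    # assumed Splunk management endpoint
    port=8089,
    username="admin",
    password="changeme",
)

# Example SPL query: count error events from the last 24 hours, grouped by host.
query = "search index=main error earliest=-24h | stats count by host"
stream = service.jobs.oneshot(query)

# ResultsReader yields result rows as dicts (and diagnostic messages as Message objects).
for item in results.ResultsReader(stream):
    if isinstance(item, dict):
        print(item.get("host"), item.get("count"))
```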
    What's Included:

    Splunk Fundamentals for Effective Management of SOC and SIEM

    Oak Academy, 38 Lessons (3.5h), Lifetime, $20.00 Value
    Splunk | Splunk Core Certified User Certification Prep Lab

    Oak Academy, 63 Lessons (6h), Lifetime, $20.00 Value
    Splunk | Splunk Core Certified Power User SPLK 1002 Prep

    Oak Academy, 53 Lessons (5.5h), Lifetime, $20.00 Value
    Splunk | Splunk Enterprise Certified Admin Certification Prep

    Oak Academy, 68 Lessons (8.5h), Lifetime, $20.00 Value
    Requirements

    Basic understanding of IT and networking concepts
    Familiarity with Linux and Windows operating systems
    A computer with internet access for hands-on practice

    Good to Know

    Length of time users can access this course: lifetime
    Access options: desktop or mobile
    Redemption deadline: redeem your code within 30 days of purchase
    Experience level required: all levels
    Certificate of Completion ONLY
    Updates included
    Closed captioning NOT available
    NOT downloadable for offline viewing

    Learn more about our Lifetime deals here!

    Lifetime access to this 2025 Complete Splunk Beginner Bundle normally costs $80, but this deal can be yours for just $19.99; that's a saving of $60. For full terms, specifications, and info, click the link below.
    Although priced in U.S. dollars, this deal is available for digital purchase worldwide.

    We post these because we earn commission on each sale so as not to rely solely on advertising, which many of our readers block. It all helps toward paying staff reporters, servers and hosting costs.
    Other ways to support Neowin

    Whitelist Neowin by not blocking our ads

    Create a free member account to see fewer ads

    Make a donation to support our day-to-day running costs

    Subscribe to Neowin - for $14 a year, or $28 a year for an ad-free experience

    Disclosure: Neowin benefits from revenue of each sale made through our branded deals site powered by StackCommerce.
