• Calling on LLMs: New NVIDIA AI Blueprint Helps Automate Telco Network Configuration

    Telecom companies last year spent nearly $295 billion in capital expenditures and over $1 trillion in operating expenditures.
    These large expenses are due in part to laborious manual processes that telcos face when operating networks that require continuous optimizations.
    For example, telcos must constantly tune network parameters for tasks — such as transferring calls from one network to another or distributing network traffic across multiple servers — based on the time of day, user behavior, mobility and traffic type.
    These factors directly affect network performance, user experience and energy consumption.
    To automate these optimization processes and save costs for telcos across the globe, NVIDIA today unveiled at GTC Paris its first AI Blueprint for telco network configuration.
    At the blueprint’s core are customized large language models trained specifically on telco network data — as well as the full technical and operational architecture for turning the LLMs into an autonomous, goal-driven AI agent for telcos.
    Automate Network Configuration With the AI Blueprint
    NVIDIA AI Blueprints — available on build.nvidia.com — are customizable AI workflow examples. They include reference code, documentation and deployment tools that show enterprise developers how to deliver business value with NVIDIA NIM microservices.
    The AI Blueprint for telco network configuration — built with BubbleRAN 5G solutions and datasets — enables developers, network engineers and telecom providers to automatically optimize the configuration of network parameters using agentic AI.
    This can streamline operations, reduce costs and significantly improve service quality by embedding continuous learning and adaptability directly into network infrastructures.
    Traditionally, network configurations required manual intervention or followed rigid rules to adapt to dynamic network conditions. These approaches limited adaptability and increased operational complexities, costs and inefficiencies.
    The new blueprint helps shift telco operations from relying on static, rules-based systems to operations based on dynamic, AI-driven automation. It enables developers to build advanced, telco-specific AI agents that make real-time, intelligent decisions and autonomously balance trade-offs — such as network speed versus interference, or energy savings versus utilization — without human input.
    Powered and Deployed by Industry Leaders
    Trained on 5G data generated by BubbleRAN, and deployed on the BubbleRAN 5G O-RAN platform, the blueprint provides telcos with insight on how to set various parameters to reach performance goals, like achieving a certain bitrate while choosing an acceptable signal-to-noise ratio — a measure that impacts voice quality and thus user experience.
    With the new AI Blueprint, network engineers can confidently set initial parameter values and update them as demanded by continuous network changes.
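    To make that closed loop concrete, below is a minimal Python sketch of a goal-driven tuning cycle in the spirit of the blueprint: read KPIs, ask a recommendation step (in the blueprint, the customized LLM) for new parameter values, apply them and repeat. All function names, parameters and thresholds here are illustrative placeholders, not the blueprint's actual API.

```python
# Hypothetical sketch of a goal-driven configuration loop; not the blueprint's API.
from dataclasses import dataclass

@dataclass
class Goal:
    min_bitrate_mbps: float   # target downlink bitrate
    min_snr_db: float         # acceptable signal-to-noise ratio

def get_kpis() -> dict:
    """Stand-in for a telemetry read from the RAN (e.g. via an O-RAN interface)."""
    return {"bitrate_mbps": 42.0, "snr_db": 14.5}

def recommend_config(kpis: dict, goal: Goal) -> dict:
    """Stand-in for the LLM-backed recommendation step; here a trivial heuristic."""
    cfg = {"tx_power_dbm": 20, "handover_margin_db": 3}
    if kpis["snr_db"] < goal.min_snr_db:
        cfg["tx_power_dbm"] += 2          # trade energy for signal quality
    if kpis["bitrate_mbps"] < goal.min_bitrate_mbps:
        cfg["handover_margin_db"] -= 1    # keep users on the faster cell longer
    return cfg

def apply_config(cfg: dict) -> None:
    """Stand-in for pushing parameters to the network (e.g. a NETCONF or REST call)."""
    print("applying", cfg)

goal = Goal(min_bitrate_mbps=50.0, min_snr_db=15.0)
for _ in range(3):                        # a real agent would run continuously
    kpis = get_kpis()
    apply_config(recommend_config(kpis, goal))
```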
    Norway-based Telenor Group, which serves over 200 million customers globally, is the first telco to integrate the AI Blueprint for telco network configuration as part of its initiative to deploy intelligent, autonomous networks that meet the performance and agility demands of 5G and beyond.
    “The blueprint is helping us address configuration challenges and enhance quality of service during network installation,” said Knut Fjellheim, chief technology innovation officer at Telenor Maritime. “Implementing it is part of our push toward network automation and follows the successful deployment of agentic AI for real-time network slicing in a private 5G maritime use case.”
    Industry Partners Deploy Other NVIDIA-Powered Autonomous Network Technologies
    The AI Blueprint for telco network configuration is just one of many announcements at NVIDIA GTC Paris showcasing how the telecom industry is using agentic AI to make autonomous networks a reality.
    Beyond the blueprint, leading telecom companies and solutions providers are tapping into NVIDIA accelerated computing, software and microservices to provide breakthrough innovations poised to vastly improve networks and communications services — accelerating the progress to autonomous networks and improving customer experiences.
    NTT DATA is powering its agentic platform for telcos with NVIDIA accelerated compute and the NVIDIA AI Enterprise software platform. Its first agentic use case is focused on network alarms management, where NVIDIA NIM microservices help automate and power observability, troubleshooting, anomaly detection and resolution with closed loop ticketing.
    Tata Consultancy Services is delivering agentic AI solutions for telcos built on NVIDIA DGX Cloud, using NVIDIA AI Enterprise to develop, fine-tune and integrate large telco models into AI agent workflows. These range from billing and revenue assurance and autonomous network management to hybrid edge-cloud distributed inference.
    For example, the company’s anomaly management agentic AI model provides real-time detection and resolution of network anomalies and service performance optimization. This increases business agility and improves operational efficiency by up to 40% by eliminating human-intensive toil, overhead and cross-departmental silos.
    Prodapt has introduced an autonomous operations workflow for networks, powered by NVIDIA AI Enterprise, that offers agentic AI capabilities to support autonomous telecom networks. AI agents can autonomously monitor networks, detect anomalies in real time, initiate diagnostics, analyze root causes of issues using historical data and correlation techniques, automatically execute corrective actions, and generate, enrich and assign incident tickets through integrated ticketing systems.
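    As a toy illustration of the kind of closed loop described above — flag an anomalous KPI sample, then open an enriched incident ticket — consider the Python sketch below. The z-score rule, thresholds and create_ticket helper are assumptions for demonstration only, not Prodapt's or NVIDIA's actual interfaces.

```python
# Illustrative anomaly-detection-to-ticket loop; all interfaces are hypothetical.
from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, z_threshold: float = 3.0) -> bool:
    """Flag the latest sample if it sits more than z_threshold std devs from the mean."""
    if len(history) < 10 or stdev(history) == 0:
        return False
    z = abs(latest - mean(history)) / stdev(history)
    return z > z_threshold

def create_ticket(metric: str, value: float, context: dict) -> dict:
    """Stand-in for an integration with a real ticketing system."""
    return {"summary": f"Anomaly on {metric}: {value}", "context": context}

latency_ms = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 12.3, 11.7, 12.0, 12.1]
latest = 48.9
if is_anomalous(latency_ms, latest):
    ticket = create_ticket("cell_latency_ms", latest, {"site": "example-site-01"})
    print(ticket)
```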
    Accenture announced its new portfolio of agentic AI solutions for telecommunications through its AI Refinery platform, built on NVIDIA AI Enterprise software and accelerated computing.
    The first available solution, the NOC Agentic App, boosts network operations center tasks by using a generative AI-driven, nonlinear agentic framework to automate processes such as incident and fault management, root cause analysis and configuration planning. Using the Llama 3.1 70B NVIDIA NIM microservice and the AI Refinery Distiller Framework, the NOC Agentic App orchestrates networks of intelligent agents for faster, more efficient decision-making.
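    For developers curious how an agent reaches such a model, NVIDIA-hosted NIM microservices expose an OpenAI-compatible API; a minimal call to a Llama 3.1 70B endpoint might look like the sketch below. The base URL, model ID and environment variable reflect NVIDIA's hosted API catalog at the time of writing and should be checked against current NIM documentation; the alarm text is invented.

```python
# Minimal sketch: query a Llama 3.1 70B NIM via its OpenAI-compatible endpoint.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # or a self-hosted NIM URL
    api_key=os.environ["NVIDIA_API_KEY"],
)

resp = client.chat.completions.create(
    model="meta/llama-3.1-70b-instruct",
    messages=[
        {"role": "system", "content": "You are a NOC assistant. Summarize alarms and propose a next action."},
        {"role": "user", "content": "ALARM: cell-4217 PRB utilization 97% for 15 min; neighbor cell-4218 at 35%."},
    ],
    temperature=0.2,
    max_tokens=300,
)
print(resp.choices[0].message.content)
```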
    Infosys is announcing its agentic autonomous operations platform, Infosys Smart Network Assurance (ISNA), designed to accelerate telecom operators’ journeys toward fully autonomous network operations.
    ISNA helps address long-standing operational challenges for telcos — such as limited automation and high average time to repair — with an integrated, AI-driven platform that reduces operational costs by up to 40% and shortens fault resolution times by up to 30%. NVIDIA NIM and NeMo microservices enhance the platform’s reasoning and hallucination-detection capabilities, reduce latency and increase accuracy.
    Get started with the new blueprint today.
    Learn more about the latest AI advancements for telecom and other industries at NVIDIA GTC Paris, running through Thursday, June 12, at VivaTech, including a keynote from NVIDIA founder and CEO Jensen Huang and a special address from Ronnie Vasishta, senior vice president of telecom at NVIDIA. Plus, hear from industry leaders in a panel session with Orange, Swisscom, Telenor and NVIDIA.
  • NVIDIA CEO Drops the Blueprint for Europe’s AI Boom

    At GTC Paris — held alongside VivaTech, Europe’s largest tech event — NVIDIA founder and CEO Jensen Huang delivered a clear message: Europe isn’t just adopting AI — it’s building it.
    “We now have a new industry, an AI industry, and it’s now part of the new infrastructure, called intelligence infrastructure, that will be used by every country, every society,” Huang said, addressing an audience gathered online and at the iconic Dôme de Paris.
    From exponential inference growth to quantum breakthroughs, and from infrastructure to industry, agentic AI to robotics, Huang outlined how the region is laying the groundwork for an AI-powered future.

    A New Industrial Revolution
    At the heart of this transformation, Huang explained, are systems like GB200 NVL72 — “one giant GPU” and NVIDIA’s most powerful AI platform yet — now in full production and powering everything from sovereign models to quantum computing.
    “This machine was designed to be a thinking machine, a thinking machine, in the sense that it reasons, it plans, it spends a lot of time talking to itself,” Huang said, walking the audience through the size and scale of these machines and their performance.
    At GTC Paris, Huang showed audience members the innards of some of NVIDIA’s latest hardware.
    There’s more coming, with Huang saying NVIDIA’s partners are now producing 1,000 GB200 systems a week, “and this is just the beginning.” He walked the audience through available systems, from the tiny NVIDIA DGX Spark to rack-mounted RTX PRO Servers.
    Huang explained that NVIDIA is working to help countries use technologies like these to build both AI infrastructure — services built for third parties to use and innovate on — and AI factories, which companies build for their own use, to generate revenue.
    NVIDIA is partnering with European governments, telcos and cloud providers to deploy NVIDIA technologies across the region. NVIDIA is also expanding its network of technology centers across Europe — including new hubs in Finland, Germany, Spain, Italy and the U.K. — to accelerate skills development and quantum growth.
    Quantum Meets Classical
    Europe’s quantum ambitions just got a boost.
    The NVIDIA CUDA-Q platform is live on Denmark’s Gefion supercomputer, opening new possibilities for hybrid AI and quantum engineering. In addition, Huang announced that CUDA-Q is now available on NVIDIA Grace Blackwell systems.
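    For readers new to CUDA-Q, a minimal example of the hybrid programming model it enables — define a quantum kernel in Python, then sample it on a CPU, GPU-accelerated or quantum backend — might look like the sketch below (assuming the cudaq Python package is installed; backends are selected with cudaq.set_target()).

```python
# Minimal CUDA-Q sketch: build and sample a two-qubit Bell circuit.
import cudaq

@cudaq.kernel
def bell():
    qubits = cudaq.qvector(2)
    h(qubits[0])                   # put the first qubit in superposition
    x.ctrl(qubits[0], qubits[1])   # entangle the pair
    mz(qubits)                     # measure both qubits

counts = cudaq.sample(bell, shots_count=1000)
print(counts)  # expect roughly equal counts of '00' and '11'
```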
    Across the continent, NVIDIA is partnering with supercomputing centers and quantum hardware builders to advance hybrid quantum-AI research and accelerate quantum error correction.
    “Quantum computing is reaching an inflection point,” Huang said. “We are within reach of being able to apply quantum computing, quantum classical computing, in areas that can solve some interesting problems in the coming years.”
    Sovereign Models, Smarter Agents
    European developers want more control over their models. Enter NVIDIA Nemotron, designed to help build large language models tuned to local needs.
    “And so now you know that you have access to an enhanced open model that is still open, that is top of the leader chart,” Huang said.
    These models will be coming to Perplexity, a reasoning search engine, enabling secure, multilingual AI deployment across Europe.
    “You can now ask and get questions answered in the language, in the culture, in the sensibility of your country,” Huang said.
    Huang explained how NVIDIA is helping countries across Europe build AI infrastructure.
    Every company will build its own agents, Huang said. To help create those agents, Huang introduced a suite of agentic AI blueprints, including an Agentic AI Safety blueprint for enterprises and governments.
    The new NVIDIA NeMo Agent toolkit and NVIDIA AI Blueprint for building data flywheels further accelerate the development of safe, high-performing AI agents.
    To help deploy these agents, NVIDIA is partnering with European governments, telcos and cloud providers to deploy the DGX Cloud Lepton platform across the region, providing instant access to accelerated computing capacity.
    “One model architecture, one deployment, and you can run it anywhere,” Huang said, adding that Lepton is now integrated with Hugging Face, giving developers direct access to global compute.
    The Industrial Cloud Goes Live
    AI isn’t just virtual. It’s powering physical systems, too, sparking a new industrial revolution.
    “We’re working on industrial AI with one company after another,” Huang said, describing work to build digital twins based on the NVIDIA Omniverse platform with companies across the continent.
    Huang explained that everything he showed during his keynote was “computer simulation, not animation” and that it looks beautiful because “it turns out the world is beautiful, and it turns out math is beautiful.”
    To further this work, Huang announced NVIDIA is launching the world’s first industrial AI cloud — to be built in Germany — to help Europe’s manufacturers simulate, automate and optimize at scale.
    “Soon, everything that moves will be robotic,” Huang said. “And the car is the next one.”
    NVIDIA DRIVE, NVIDIA’s full-stack autonomous vehicle (AV) platform, is now in production to accelerate the large-scale deployment of safe, intelligent transportation.
    And to show what’s coming next, Huang was joined on stage by Grek, a pint-sized robot, as Huang talked about how NVIDIA partnered with DeepMind and Disney to build Newton, the world’s most advanced physics training engine for robotics.
    The Next Wave
    The next wave of AI has begun — and it’s exponential, Huang explained.
    “We have physical robots, and we have information robots. We call them agents,” Huang said. “The technology necessary to teach a robot to manipulate, to simulate — and of course, the manifestation of an incredible robot — is now right in front of us.”
    This new era of AI is being driven by a surge in inference workloads. “The number of people using inference has gone from 8 million to 800 million — 100x in just a couple of years,” Huang said.
    To meet this demand, Huang emphasized the need for a new kind of computer: “We need a special computer designed for thinking, designed for reasoning. And that’s what Blackwell is — a thinking machine.”
    Huang and Grek, as he explained how AI is driving advancements in robotics.
    These Blackwell-powered systems will live in a new class of data centers — AI factories — built to generate tokens, the raw material of modern intelligence.
    “These AI factories are going to generate tokens,” Huang said, turning to Grek with a smile. “And these tokens are going to become your food, little Grek.”
    With that, the keynote closed on a bold vision: a future powered by sovereign infrastructure, agentic AI, robotics — and exponential inference — all built in partnership with Europe.
    Watch the NVIDIA GTC Paris keynote from Huang at VivaTech and explore GTC Paris sessions.
  • Hexagon Taps NVIDIA Robotics and AI Software to Build and Deploy AEON, a New Humanoid

    As a global labor shortage leaves 50 million positions unfilled across industries like manufacturing and logistics, Hexagon — a global leader in measurement technologies — is developing humanoid robots that can lend a helping hand.
    Industrial sectors depend on skilled workers to perform a variety of error-prone tasks, including operating high-precision scanners for reality capture — the process of capturing digital data to replicate the real world in simulation.
    At the Hexagon LIVE Global conference, Hexagon’s robotics division today unveiled AEON — a new humanoid robot built in collaboration with NVIDIA that’s engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support. Hexagon plans to deploy AEON across automotive, transportation, aerospace, manufacturing, warehousing and logistics.
    Future use cases for AEON include:

    Reality capture, which involves automatic planning and then scanning of assets, industrial spaces and environments to generate 3D models. The captured data is then used for advanced visualization and collaboration in the Hexagon Digital Reality (HxDR) platform powering Hexagon Reality Cloud Studio (RCS).
    Manipulation tasks, such as sorting and moving parts in various industrial and manufacturing settings.
    Part inspection, which includes checking parts for defects or ensuring adherence to specifications.
    Industrial operations, including highly dexterous technical tasks like machinery operations, teleoperation and scanning parts using high-end scanners.

    “The age of general-purpose robotics has arrived, due to technological advances in simulation and physical AI,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “Hexagon’s new AEON humanoid embodies the integration of NVIDIA’s three-computer robotics platform and is making a significant leap forward in addressing industry-critical challenges.”

    Using NVIDIA’s Three Computers to Develop AEON 
    To build AEON, Hexagon used NVIDIA’s three computers for developing and deploying physical AI systems. They include AI supercomputers to train and fine-tune powerful foundation models; the NVIDIA Omniverse platform, running on NVIDIA OVX servers, for testing and optimizing these models in simulation environments using real and physically based synthetic data; and NVIDIA IGX Thor robotic computers to run the models.
    Hexagon is exploring using NVIDIA accelerated computing to post-train the NVIDIA Isaac GR00T N1.5 open foundation model to improve robot reasoning and policies, and tapping Isaac GR00T-Mimic to generate vast amounts of synthetic motion data from a few human demonstrations.
    AEON learns many of its skills through simulations powered by the NVIDIA Isaac platform. Hexagon uses NVIDIA Isaac Sim, a reference robotic simulation application built on Omniverse, to simulate complex robot actions like navigation, locomotion and manipulation. These skills are then refined using reinforcement learning in NVIDIA Isaac Lab, an open-source framework for robot learning.
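    Conceptually, this workflow is a simulate-then-learn loop run across many parallel environments. The toy Python sketch below illustrates that loop at miniature scale, using a standard Gymnasium environment as a stand-in for a locomotion task; it is not Isaac Sim or Isaac Lab code, and the random-search "training" is only a placeholder for real reinforcement learning.

```python
# Toy simulate-then-learn loop (stand-in for GPU-parallel RL in Isaac Lab).
# Requires: pip install gymnasium numpy
import gymnasium as gym
import numpy as np

env = gym.make("Pendulum-v1")          # placeholder for a locomotion environment
rng = np.random.default_rng(0)
best_params, best_return = None, -np.inf

# Random search over a linear policy: action = W @ observation.
for _ in range(50):
    W = rng.normal(size=(env.action_space.shape[0], env.observation_space.shape[0]))
    obs, _ = env.reset(seed=0)
    ep_return = 0.0
    for _ in range(200):
        action = np.clip(W @ obs, env.action_space.low, env.action_space.high)
        obs, reward, terminated, truncated, _ = env.step(action)
        ep_return += reward
        if terminated or truncated:
            break
    if ep_return > best_return:
        best_params, best_return = W, ep_return

print(f"best episode return after random search: {best_return:.1f}")
```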


    This simulation-first approach enabled Hexagon to fast-track its robotic development, allowing AEON to master core locomotion skills in just 2-3 weeks — rather than 5-6 months — before real-world deployment.
    In addition, AEON taps into NVIDIA Jetson Orin onboard computers to autonomously move, navigate and perform its tasks in real time, enhancing its speed and accuracy while operating in complex and dynamic environments. Hexagon is also planning to upgrade AEON with NVIDIA IGX Thor to enable functional safety for collaborative operation.
    “Our goal with AEON was to design an intelligent, autonomous humanoid that addresses the real-world challenges industrial leaders have shared with us over the past months,” said Arnaud Robert, president of Hexagon’s robotics division. “By leveraging NVIDIA’s full-stack robotics and simulation platforms, we were able to deliver a best-in-class humanoid that combines advanced mechatronics, multimodal sensor fusion and real-time AI.”
    Data Comes to Life Through Reality Capture and Omniverse Integration 
    AEON will be piloted in factories and warehouses to scan everything from small precision parts and automotive components to large assembly lines and storage areas.

    Captured data comes to life in RCS, a platform that allows users to collaborate, visualize and share reality-capture data by tapping into HxDR and NVIDIA Omniverse running in the cloud. This removes the constraint of local infrastructure.
    “Digital twins offer clear advantages, but adoption has been challenging in several industries,” said Lucas Heinzle, vice president of research and development at Hexagon’s robotics division. “AEON’s sophisticated sensor suite enables the integration of reality data capture with NVIDIA Omniverse, streamlining workflows for our customers and moving us closer to making digital twins a mainstream tool for collaboration and innovation.”
    AEON’s Next Steps
    By adopting the OpenUSD framework and developing on Omniverse, Hexagon can generate high-fidelity digital twins from scanned data — establishing a data flywheel to continuously train AEON.
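    To illustrate the OpenUSD side of that flywheel, the sketch below assembles a small digital-twin stage that references scanned geometry, using the open-source pxr Python bindings (pip install usd-core). The file paths and prim names are hypothetical, not Hexagon's actual asset structure.

```python
# Minimal OpenUSD sketch: a digital-twin stage referencing a scanned asset.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("factory_twin.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.z)
UsdGeom.SetStageMetersPerUnit(stage, 1.0)

# Root transform for the facility, referencing a (hypothetical) scan export.
factory = UsdGeom.Xform.Define(stage, "/Factory")
scan = stage.DefinePrim("/Factory/ScannedHall")
scan.GetReferences().AddReference("./scans/assembly_hall.usd")  # captured mesh/point cloud

stage.SetDefaultPrim(factory.GetPrim())
stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString()[:400])
```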
    This latest work with Hexagon is helping shape the future of physical AI — delivering scalable, efficient solutions to address the challenges faced by industries that depend on capturing real-world data.
    Watch the Hexagon LIVE keynote, explore presentations and read more about AEON.
    All imagery courtesy of Hexagon.
    #hexagon #taps #nvidia #robotics #software
    Hexagon Taps NVIDIA Robotics and AI Software to Build and Deploy AEON, a New Humanoid
    As a global labor shortage leaves 50 million positions unfilled across industries like manufacturing and logistics, Hexagon — a global leader in measurement technologies — is developing humanoid robots that can lend a helping hand. Industrial sectors depend on skilled workers to perform a variety of error-prone tasks, including operating high-precision scanners for reality capture — the process of capturing digital data to replicate the real world in simulation. At the Hexagon LIVE Global conference, Hexagon’s robotics division today unveiled AEON — a new humanoid robot built in collaboration with NVIDIA that’s engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support. Hexagon plans to deploy AEON across automotive, transportation, aerospace, manufacturing, warehousing and logistics. Future use cases for AEON include: Reality capture, which involves automatic planning and then scanning of assets, industrial spaces and environments to generate 3D models. The captured data is then used for advanced visualization and collaboration in the Hexagon Digital Realityplatform powering Hexagon Reality Cloud Studio. Manipulation tasks, such as sorting and moving parts in various industrial and manufacturing settings. Part inspection, which includes checking parts for defects or ensuring adherence to specifications. Industrial operations, including highly dexterous technical tasks like machinery operations, teleoperation and scanning parts using high-end scanners. “The age of general-purpose robotics has arrived, due to technological advances in simulation and physical AI,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “Hexagon’s new AEON humanoid embodies the integration of NVIDIA’s three-computer robotics platform and is making a significant leap forward in addressing industry-critical challenges.” Using NVIDIA’s Three Computers to Develop AEON  To build AEON, Hexagon used NVIDIA’s three computers for developing and deploying physical AI systems. They include AI supercomputers to train and fine-tune powerful foundation models; the NVIDIA Omniverse platform, running on NVIDIA OVX servers, for testing and optimizing these models in simulation environments using real and physically based synthetic data; and NVIDIA IGX Thor robotic computers to run the models. Hexagon is exploring using NVIDIA accelerated computing to post-train the NVIDIA Isaac GR00T N1.5 open foundation model to improve robot reasoning and policies, and tapping Isaac GR00T-Mimic to generate vast amounts of synthetic motion data from a few human demonstrations. AEON learns many of its skills through simulations powered by the NVIDIA Isaac platform. Hexagon uses NVIDIA Isaac Sim, a reference robotic simulation application built on Omniverse, to simulate complex robot actions like navigation, locomotion and manipulation. These skills are then refined using reinforcement learning in NVIDIA Isaac Lab, an open-source framework for robot learning. This simulation-first approach enabled Hexagon to fast-track its robotic development, allowing AEON to master core locomotion skills in just 2-3 weeks — rather than 5-6 months — before real-world deployment. In addition, AEON taps into NVIDIA Jetson Orin onboard computers to autonomously move, navigate and perform its tasks in real time, enhancing its speed and accuracy while operating in complex and dynamic environments. 
  • NVIDIA and Partners Highlight Next-Generation Robotics, Automation and AI Technologies at Automatica

    From the heart of Germany’s automotive sector to manufacturing hubs across France and Italy, Europe is embracing industrial AI and advanced AI-powered robotics to address labor shortages, boost productivity and fuel sustainable economic growth.
    Robotics companies are developing humanoid robots and collaborative systems that integrate AI into real-world manufacturing applications. Supported by a $200 billion investment initiative and coordinated efforts from the European Commission, Europe is positioning itself at the forefront of the next wave of industrial automation, powered by AI.
    This momentum is on full display at Automatica — Europe’s premier conference on advancements in robotics, machine vision and intelligent manufacturing — taking place this week in Munich, Germany.
    NVIDIA and its ecosystem of partners and customers are showcasing next-generation robots, automation and AI technologies designed to accelerate the continent’s leadership in smart manufacturing and logistics.
    NVIDIA Technologies Boost Robotics Development 
    Central to advancing robotics development is Europe’s first industrial AI cloud, announced at NVIDIA GTC Paris at VivaTech earlier this month. The Germany-based AI factory, featuring 10,000 NVIDIA GPUs, provides European manufacturers with secure, sovereign and centralized AI infrastructure for industrial workloads. It will support applications ranging from design and engineering to factory digital twins and robotics.
    To help accelerate humanoid development, NVIDIA released NVIDIA Isaac GR00T N1.5 — an open foundation model for humanoid robot reasoning and skills. This update enhances the model’s adaptability and ability to follow instructions, significantly improving its performance in material handling and manufacturing tasks.
    To help post-train GR00T N1.5, NVIDIA has also released the Isaac GR00T-Dreams blueprint — a reference workflow for generating vast amounts of synthetic trajectory data from a small number of human demonstrations — enabling robots to generalize across behaviors and adapt to new environments with minimal human demonstration data.
    In addition, early developer previews of NVIDIA Isaac Sim 5.0 and Isaac Lab 2.2 — open-source robot simulation and learning frameworks optimized for NVIDIA RTX PRO 6000 workstations — are now available on GitHub.
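    For readers who want a concrete sense of the data-amplification idea behind GR00T-Dreams-style workflows, the short Python sketch below perturbs a handful of recorded demonstration trajectories into many synthetic variants for policy training. It is a conceptual illustration only — the function name, array shapes and noise model are assumptions, not the blueprint's actual interface.

        # Toy illustration of demonstration amplification: a few recorded trajectories
        # are perturbed into many synthetic variants for policy training.
        # This is NOT the GR00T-Dreams API; names and shapes are illustrative only.
        import numpy as np

        def amplify_demonstrations(demos, variants_per_demo=100, noise_scale=0.01, seed=0):
            """demos: list of (T, D) arrays, each a recorded joint/end-effector trajectory.
            Returns len(demos) * variants_per_demo perturbed copies."""
            rng = np.random.default_rng(seed)
            synthetic = []
            for demo in demos:
                for _ in range(variants_per_demo):
                    # Temporally correlated, low-amplitude noise so the perturbed
                    # motions stay close to the original demonstration.
                    noise = rng.normal(0.0, noise_scale, size=demo.shape)
                    noise = np.cumsum(noise, axis=0) / np.sqrt(np.arange(1, len(demo) + 1))[:, None]
                    synthetic.append(demo + noise)
            return synthetic

        # Example: 5 demonstrations of a 7-DoF arm, 200 timesteps each -> 500 synthetic ones.
        demos = [np.random.rand(200, 7) for _ in range(5)]
        print(len(amplify_demonstrations(demos)))  # 500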
    Image courtesy of Wandelbots.
    Robotics Leaders Tap NVIDIA Simulation Technology to Develop and Deploy Humanoids and More 
    Robotics developers and solutions providers across the globe are integrating NVIDIA’s three computers to train, simulate and deploy robots.
    NEURA Robotics, a German robotics company and pioneer for cognitive robots, unveiled the third generation of its humanoid, 4NE1, designed to assist humans in domestic and professional environments through advanced cognitive capabilities and humanlike interaction. 4NE1 is powered by GR00T N1 and was trained in Isaac Sim and Isaac Lab before real-world deployment.
    NEURA Robotics is also presenting Neuraverse, a digital twin and interconnected ecosystem for robot training, skills and applications, fully compatible with NVIDIA Omniverse technologies.
    Delta Electronics, a global leader in power management and smart green solutions, is debuting two next-generation collaborative robots: D-Bot Mar and D-Bot 2 in 1 — both trained using Omniverse and Isaac Sim technologies and libraries. These cobots are engineered to transform intralogistics and optimize production flows.
    Wandelbots, the creator of the Wandelbots NOVA software platform for industrial robotics, is partnering with SoftServe, a global IT consulting and digital services provider, to scale simulation-first automation using NVIDIA Isaac Sim, enabling virtual validation and real-world deployment with maximum impact.
    Cyngn, a pioneer in autonomous mobile robotics, is integrating its DriveMod technology into Isaac Sim to enable large-scale, high-fidelity virtual testing of advanced autonomous operation. Purpose-built for industrial applications, DriveMod is already deployed on vehicles such as the Motrec MT-160 Tugger and BYD Forklift, delivering sophisticated automation to material handling operations.
    Doosan Robotics, a company specializing in AI robotic solutions, will showcase its “sim to real” solution, built using NVIDIA Isaac Sim and cuRobo, demonstrating how to seamlessly transfer tasks from simulation to real robots across a wide range of applications — from manufacturing to service industries.
    Franka Robotics has integrated Isaac GR00T N1.5 into a dual-arm Franka Research 3 (FR3) robot for robotic control. The integration of GR00T N1.5 allows the system to interpret visual input, understand task context and autonomously perform complex manipulation — without the need for task-specific programming or hardcoded logic.
    Image courtesy of Franka Robotics.
    Hexagon, the global leader in measurement technologies, launched its new humanoid, dubbed AEON. With its unique locomotion system and multimodal sensor fusion, and powered by NVIDIA’s three-computer solution, AEON is engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support.
    Intrinsic, a software and AI robotics company, is integrating Intrinsic Flowstate with Omniverse and OpenUSD for advanced visualization and digital twins that can be used in many industrial use cases. The company is also using NVIDIA foundation models to enhance robot capabilities like grasp planning through AI and simulation technologies.
    SCHUNK, a global leader in gripping systems and automation technology, is showcasing its innovative grasping kit powered by the NVIDIA Jetson AGX Orin module. The kit intelligently detects objects and calculates optimal grasping points. Schunk is also demonstrating seamless simulation-to-reality transfer using IGS Virtuous software — built on Omniverse technologies — to control a real robot through simulation in a pick-and-place scenario.
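    As a rough illustration of what “calculating optimal grasping points” can involve, the sketch below scores random antipodal point pairs on a segmented object point cloud and keeps the best candidate. It is a generic, hypothetical example, not a representation of SCHUNK's kit or any Jetson-specific API.

        # Minimal sketch of grasp-point selection on an object point cloud:
        # sample candidate point pairs and rank them by an antipodal score.
        import numpy as np

        def best_antipodal_grasp(points, normals, max_width=0.08, n_candidates=500, seed=0):
            """points: (N, 3) surface points in meters; normals: (N, 3) unit outward normals.
            Returns the indices (i, j) of the highest-scoring pair, or None."""
            rng = np.random.default_rng(seed)
            best, best_score = None, -np.inf
            for _ in range(n_candidates):
                i, j = rng.integers(0, len(points), size=2)
                if i == j:
                    continue
                axis = points[j] - points[i]
                width = np.linalg.norm(axis)
                if width > max_width or width < 1e-6:
                    continue  # skip candidates wider than the gripper or degenerate pairs
                axis = axis / width
                # Antipodal score: outward normals should oppose each other along the grasp axis.
                score = float(-normals[i] @ axis + normals[j] @ axis)
                if score > best_score:
                    best, best_score = (int(i), int(j)), score
            return best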
    Universal Robots is showcasing UR15, its fastest cobot yet. Powered by the UR AI Accelerator — developed with NVIDIA and running on Jetson AGX Orin using CUDA-accelerated Isaac libraries — UR15 helps set a new standard for industrial automation.

    Vention, a full-stack software and hardware automation company, launched its Machine Motion AI, built on CUDA-accelerated Isaac libraries and powered by Jetson. Vention is also expanding its lineup of robotic offerings by adding the FR3 robot from Franka Robotics to its ecosystem, enhancing its solutions for academic and research applications.
    Image courtesy of Vention.
    Learn more about the latest robotics advancements by joining NVIDIA at Automatica, running through Friday, June 27. 
  • HPE and NVIDIA Debut AI Factory Stack to Power Next Industrial Shift

    To speed up AI adoption across industries, HPE and NVIDIA today launched new AI factory offerings at HPE Discover in Las Vegas.
    The new lineup includes everything from modular AI factory infrastructure and HPE’s AI-ready RTX PRO Servers, to the next generation of HPE’s turnkey AI platform, HPE Private Cloud AI. The goal: give enterprises a framework to build and scale generative, agentic and industrial AI.
    The NVIDIA AI Computing by HPE portfolio is now among the broadest in the market.
    The portfolio combines NVIDIA Blackwell accelerated computing, NVIDIA Spectrum-X Ethernet and NVIDIA BlueField-3 networking technologies, NVIDIA AI Enterprise software and HPE’s full portfolio of servers, storage, services and software. This now includes HPE OpsRamp Software, a validated observability solution for the NVIDIA Enterprise AI Factory, and HPE Morpheus Enterprise Software for orchestration. The result is a pre-integrated, modular infrastructure stack to help teams get AI into production faster.
    This includes the next-generation HPE Private Cloud AI, co-engineered with NVIDIA and validated as part of the NVIDIA Enterprise AI Factory framework. This full-stack, turnkey AI factory solution will offer HPE ProLiant Compute DL380a Gen12 servers with the new NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.
    These new NVIDIA RTX PRO Servers from HPE provide a universal data center platform for a wide range of enterprise AI and industrial AI use cases, and are now available to order from HPE. HPE Private Cloud AI includes the latest NVIDIA AI Blueprints, including the NVIDIA AI-Q Blueprint for AI agent creation and workflows.
    HPE also announced a new NVIDIA HGX B300 system, the HPE Compute XD690, built with NVIDIA Blackwell Ultra GPUs. It’s the latest entry in the NVIDIA AI Computing by HPE lineup and is expected to ship in October.
    In Japan, KDDI is working with HPE to build NVIDIA AI infrastructure to accelerate global adoption.
    The HPE-built KDDI system will be based on the NVIDIA GB200 NVL72 platform, built on the NVIDIA Grace Blackwell architecture, at the KDDI Osaka Sakai Data Center.
    To accelerate AI for financial services, HPE will co-test agentic AI workflows built on Accenture’s AI Refinery with NVIDIA, running on HPE Private Cloud AI. Initial use cases include sourcing, procurement and risk analysis.
    HPE said it’s adding 26 new partners to its “Unleash AI” ecosystem to support more NVIDIA AI use cases. The company now offers more than 70 packaged AI workloads, from fraud detection and video analytics to sovereign AI and cybersecurity.
    Security and governance were a focus, too. HPE Private Cloud AI supports air-gapped management, multi-tenancy and post-quantum cryptography. HPE’s try-before-you-buy program lets customers test the system in Equinix data centers before purchase. HPE also introduced new programs, including AI Acceleration Workshops with NVIDIA, to help scale AI deployments.

    Watch the keynote: HPE CEO Antonio Neri announced the news from the Las Vegas Sphere on Tuesday at 9 a.m. PT. Register for the livestream and watch the replay.
    Explore more: Learn how NVIDIA and HPE build AI factories for every industry. Visit the partner page.
  • BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4

    By TREVOR HOGG
    Images courtesy of Prime Video.

    For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”

    When Splinter splits in two, the cloning effect was inspired by cellular mitosis.

    “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!”
    —Stephan Fleet, VFX Supervisor

    A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. “We have John Griffith, who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of the Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”

    Multiple plates were shot to enable Simon Pegg to phase through the actor lying in a hospital bed.

    Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Huey. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, ‘I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’ And I get exploded with blood. I wanted to see what it was like, and it’s intense.”

    “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.”
    —Stephan Fleet, VFX Supervisor

    The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be, so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a human, you tend to want to give it human gestures and eyebrows. Erik Kripke said, ‘No. We have to find things that an octopus could do that conveys the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.”

    A building is replaced by a massive crowd attending a rally being held by Homelander.

    In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter totter. The Deep was in one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.”

    In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around.

    “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.”
    —Stephan Fleet, VFX Supervisor

    Sheep and chickens embark on a violent rampage courtesy of Compound V, with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,” Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in a green suit gently tossing a chicken. We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it in this narrow corridor of fencing. When they run, I always equated it as the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.” The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”

    The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes.

    Once injected with Compound V, Hugh Campbell Sr. develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.” The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.”
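    As a toy approximation of the treatment Fleet describes — high-frequency X-axis jitter combined with a light opacity mix — the Python sketch below composites a foreground element over a background plate with per-frame horizontal shake and reduced opacity. The frame format and parameter values are assumptions for illustration, not the show's actual compositing pipeline.

        # Rough sketch of a 'phasing' composite: per-frame X-axis jitter plus a
        # light opacity mix over the background plate. Frames are assumed to be
        # matched (H, W, 3) uint8 arrays; values here are illustrative only.
        import numpy as np

        def phase_composite(fg_frames, bg_frames, max_shift=6, opacity=0.85, seed=0):
            rng = np.random.default_rng(seed)
            out = []
            for fg, bg in zip(fg_frames, bg_frames):
                shift = int(rng.integers(-max_shift, max_shift + 1))
                jittered = np.roll(fg, shift, axis=1)  # high-frequency horizontal vibration
                # Light opacity mix so the background reads through the phasing character.
                blended = opacity * jittered.astype(float) + (1 - opacity) * bg.astype(float)
                out.append(blended.astype(np.uint8))
            return out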

    Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination.

    Homelander breaks a mirror, which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Anthony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals. “This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Anthony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clear mirror and gave Anthony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.”

    “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.”
    —Stephan Fleet, VFX Supervisor

    Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution.

    A different spin on the bloodbath occurs during a fight when a drugged Frenchie hallucinates as Kimiko Miyashiro goes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.”

    Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4.

    The moment when Splinter splits in two was achieved heavily in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. One clone comes out to the right and the other pulls backwards.” What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker. “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it. We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.”
    #bouncing #rubber #duckies #flying #sheep
    BOUNCING FROM RUBBER DUCKIES AND FLYING SHEEP TO CLONES FOR THE BOYS SEASON 4
    By TREVOR HOGG Images courtesy of Prime Video. For those seeking an alternative to the MCU, Prime Video has two offerings of the live-action and animated variety that take the superhero genre into R-rated territory where the hands of the god-like figures get dirty, bloodied and severed. “The Boys is about the intersection of celebrity and politics using superheroes,” states Stephan Fleet, VFX Supervisor on The Boys. “Sometimes I see the news and I don’t even know we can write to catch up to it! But we try. Invincible is an intense look at an alternate DC Universe that has more grit to the superhero side of it all. On one hand, I was jealous watching Season 1 of Invincible because in animation you can do things that you can’t do in real life on a budget.” Season 4 does not tone down the blood, gore and body count. Fleet notes, “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” When Splintersplits in two, the cloning effect was inspired by cellular mitosis. “The writers almost have this dialogue with us. Sometimes, they’ll write in the script, ‘And Fleet will come up with a cool visual effect for how to kill this person.’ Or, ‘Chhiu, our fight coordinator, will make an awesome fight.’ It is a frequent topic of conversation. We’re constantly trying to be inventive and create new ways to kill people!” —Stephan Fleet, VFX Supervisor A total of 1,600 visual effects shots were created for the eight episodes by ILM, Pixomondo, MPC Toronto, Spin VFX, DNEG, Untold Studios, Luma Pictures and Rocket Science VFX. Previs was a critical part of the process. “We have John Griffith, who owns a small company called CNCPT out of Texas, and he does wonderful Unreal Engine level previs,” Fleet remarks. “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” Founding Director of Federal Bureau of Superhuman Affairs, Victoria Neuman, literally gets ripped in half by two tendrils coming out of Compound V-enhanced Billy Butcher, the leader of superhero resistance group The Boys. “The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” Multiple plates were shot to enable Simon Pegg to phase through the actor laying in a hospital bed. Testing can get rather elaborate. “For that end scene with Butcher’s tendrils, the room was two stories, and we were able to put the camera up high along with a bunch of blood cannons,” Fleet recalls. “When the body rips in half and explodes, there is a practical component. We rained down a bunch of real blood and guts right in front of Huey. It’s a known joke that we like to douse Jack Quaid with blood as much as possible! In this case, the special effects team led by Hudson Kenny needed to test it the day before, and I said, “I’ll be the guinea pig for the test.’ They covered the whole place with plastic like it was a Dexter kill room because you don’t want to destroy the set. 
I’m standing there in a white hazmat suit with goggles on, covered from head to toe in plastic and waiting as they’re tweaking all of these things. It sounds like World War II going on. They’re on walkie talkies to each other, and then all of a sudden, it’s ‘Five, four, three, two, one…’  And I get exploded with blood. I wanted to see what it was like, and it’s intense.” “On set, we have a cartoon of what is going to be done, and you’ll be amazed, specifically for action and heavy visual effects stuff, how close those shots are to the previs when we finish.” —Stephan Fleet, VFX Supervisor The Deep has a love affair with an octopus called Ambrosius, voiced by Tilda Swinton. “It’s implied bestiality!” Fleet laughs. “I would call it more of a romance. What was fun from my perspective is that I knew what the look was going to be, so then it’s about putting in the details and the animation. One of the instincts that you always have when you’re making a sea creature that talks to a humanyou tend to want to give it human gestures and eyebrows. Erik Kripkesaid, ‘No. We have to find things that an octopus could do that conveys the same emotion.’ That’s when ideas came in, such as putting a little The Deep toy inside the water tank. When Ambrosius is trying to have an intimate moment or connect with him, she can wrap a tentacle around that. My favorite experience doing Ambrosius was when The Deep is reading poetry to her on a bed. CG creatures touching humans is one of the more complicated things to do and make look real. Ambrosius’ tentacles reach for his arm, and it becomes an intimate moment. More than touching the skin, displacing the bedsheet as Ambrosius moved ended up becoming a lot of CG, and we had to go back and forth a few times to get that looking right; that turned out to be tricky.” A building is replaced by a massive crowd attending a rally being held by Homelander. In a twisted form of sexual foreplay, Sister Sage has The Deep perform a transorbital lobotomy on her. “Thank you, Amazon for selling lobotomy tools as novelty items!” Fleet chuckles. “We filmed it with a lobotomy tool on set. There is a lot of safety involved in doing something like that. Obviously, you don’t want to put any performer in any situation where they come close to putting anything real near their eye. We created this half lobotomy tool and did this complicated split screen with the lobotomy tool on a teeter totter. The Deep wasin one shot and Sister Sage reacted in the other shot. To marry the two ended up being a lot of CG work. Then there are these close-ups which are full CG. I always keep a dummy head that is painted gray that I use all of the time for reference. In macrophotography I filmed this lobotomy tool going right into the eye area. I did that because the tool is chrome, so it’s reflective and has ridges. It has an interesting reflective property. I was able to see how and what part of the human eye reflects onto the tool. A lot of that shot became about realistic reflections and lighting on the tool. Then heavy CG for displacing the eye and pushing the lobotomy tool into it. That was one of the more complicated sequences that we had to achieve.” In order to create an intimate moment between Ambrosius and The Deep, a toy version of the superhero was placed inside of the water tank that she could wrap a tentacle around. 
“The word that we like to use on this show is ‘grounded,’ and I like to say ‘grounded’ with an asterisk in this day and age because we’re grounded until we get to killing people in the craziest ways. In this case, having someone floating in the air and being ripped in half by two tendrils was all CG.” —Stephan Fleet, VFX Supervisor Sheep and chickens embark on a violent rampage courtesy of Compound V with the latter piercing the chest of a bodyguard belonging to Victoria Neuman. “Weirdly, that was one of our more traditional shots,’ Fleet states. “What is fun about that one is I asked for real chickens as reference. The chicken flying through his chest is real. It’s our chicken wrangler in green suit gently tossing a chicken. We blended two real plates together with some CG in the middle.” A connection was made with a sci-fi classic. “The sheep kill this bull, and we shot it is in this narrow corridor of fencing. When they run, I always equated it as the Trench Run in Star Wars and looked at the sheep as TIE fighters or X-wings coming at them.” The scene was one of the scarier moments for the visual effects team. Fleet explains, “When I read the script, I thought this could be the moment where we jump the shark. For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.” The sheep injected with Compound V develop the ability to fly and were shot in an imperfect manner to help ground the scenes. Once injected with Compound V, Hugh Campbell Sr.develops the ability to phase through objects, including human beings. “We called it the Bro-nut because his name in the script is Wall Street Bro,” Fleet notes. “That was a complicated motion control shot, repeating the move over and over again. We had to shoot multiple plates of Simon Pegg and the guy in the bed. Special effects and prosthetics created a dummy guy with a hole in his chest with practical blood dripping down. It was meshing it together and getting the timing right in post. On top of that, there was the CG blood immediately around Simon Pegg.” The phasing effect had to avoid appearing as a dissolve. “I had this idea of doing high-frequency vibration on the X axis loosely based on how The Flash vibrates through walls. You want everything to have a loose motivation that then helps trigger the visuals. We tried not to overcomplicate that because, ultimately, you want something like that to be quick. If you spend too much time on phasing, it can look cheesy. In our case, it was a lot of false walls. Simon Pegg is running into a greenscreen hole which we plug in with a wall or coming out of one. I went off the actor’s action, and we added a light opacity mix with some X-axis shake.” Providing a different twist to the fights was the replacement of spurting blood with photoreal rubber duckies during a drug-induced hallucination. Homelanderbreaks a mirror which emphasizes his multiple personality disorder. “The original plan was that special effects was going to pre-break a mirror, and we were going to shoot Anthony Starr moving his head doing all of the performances in the different parts of the mirror,” Fleet reveals. 
“This was all based on a photo that my ex-brother-in-law sent me. He was walking down a street in Glendale, California, came across a broken mirror that someone had thrown out, and took a photo of himself where he had five heads in the mirror. We get there on the day, and I’m realizing that this is really complicated. Anthony has to do these five different performances, and we have to deal with infinite mirrors. At the last minute, I said, ‘We have to do this on a clean mirror.’ We did it on a clear mirror and gave Anthony different eyelines. The mirror break was all done in post, and we were able to cheat his head slightly and art-direct where the break crosses his chin. Editorial was able to do split screens for the timing of the dialogue.” “For the shots where the sheep are still and scream to the camera, Untold Studios did a bunch of R&D and came up with baboon teeth. I tried to keep anything real as much as possible, but, obviously, when sheep are flying, they have to be CG. I call it the Battlestar Galactica theory, where I like to shake the camera, overshoot shots and make it sloppy when they’re in the air so you can add motion blur. Comedy also helps sell visual effects.” —Stephan Fleet, VFX Supervisor Initially, the plan was to use a practical mirror, but creating a digital version proved to be the more effective solution. A different spin on the bloodbath occurs during a fight when a drugged Frenchie (Tomer Capone) hallucinates as Kimiko Miyashiro (Karen Fukuhara) goes on a killing spree. “We went back and forth with a lot of different concepts for what this hallucination would be,” Fleet remarks. “When we filmed it, we landed on Frenchie having a synesthesia moment where he’s seeing a lot of abstract colors flying in the air. We started getting into that in post and it wasn’t working. We went back to the rubber duckies, which goes back to the story of him in the bathtub. What’s in the bathtub? Rubber duckies, bubbles and water. There was a lot of physics and logic required to figure out how these rubber duckies could float out of someone’s neck. We decided on bubbles when Kimiko hits people’s heads. At one point, we had water when she got shot, but it wasn’t working, so we killed it. We probably did about 100 different versions. We got really detailed with our rubber duckie modeling because we didn’t want it to look cartoony. That took a long time.” Ambrosius, voiced by Tilda Swinton, gets a lot more screentime in Season 4. The moment when Splinter (Rob Benedict) splits in two was achieved heavily in CG. “Erik threw out the words ‘cellular mitosis’ early on as something he wanted to use,” Fleet states. “We shot Rob Benedict on a greenscreen doing all of the different performances for the clones that pop out. It was a crazy amount of CG work with Houdini and particle and skin effects. We previs’d the sequence so we had specific actions. One clone comes out to the right and the other pulls backwards.” What tends to go unnoticed by many is Splinter’s clones setting up for a press conference being held by Firecracker (Valorie Curry). “It’s funny how no one brings up the 22-hour motion control shot that we had to do with Splinter on the stage, which was the most complicated shot!” Fleet observes. “We have this sweeping long shot that brings you into the room and follows Splinter as he carries a container to the stage and hands it off to a clone, and then you reveal five more of them interweaving each other and interacting with all of these objects. It’s like a minute-long dance. First off, you have to choreograph it.
We previs’d it, but then you need to get people to do it. We hired dancers and put different colored armbands on them. The camera is like another performer, and a metronome is going, which enables you to find a pace. That took about eight hours of rehearsal. Then Rob has to watch each one of their performances and mimic it to the beat. When he is handing off a box of cables, it’s to a double who is going to have to be erased and be him on the other side. They have to be almost perfect in their timing and lineup in order to take it over in visual effects and make it work.”
  • Street Fighter Movie Adds Dan And Balrog Actors, Confirms Akuma Casting - Report

    The new Street Fighter movie has added a few more names to the cast. Earlier this week, rapper-turned-actor Curtis "50 Cent" Jackson teased that he will play Balrog in the film. That rumor now appears to be confirmed, and another performer has signed up to play Dan Hibiki, one of the weakest fighters in the Street Fighter universe. According to Deadline, comedian Andrew Schulz will play Dan in the film. That's appropriate, since Dan is largely a comic relief character who gets played for laughs. This will mark Dan's first-ever appearance in live-action media. Schulz has previously appeared in The Underdoggs and the remake of White Men Can’t Jump, as well as the second season of Netflix's sitcom Tires. He is also the host of Flagrant Pod, a popular comedy podcast, and his most recent comedy special was streamed on Netflix. Jackson's casting as Balrog was confirmed in a subsequent report by The Hollywood Reporter. THR went a step further by confirming the roles of a few previously cast actors, including Andrew Koji as Ryu, Noah Centineo as Ken, Jason Momoa as Blanka, and Orville Peck as Vega. Additionally, the outlet notes that Joe "Roman Reigns" Anoa’i, a longtime WWE superstar and former World Champion, will play Akuma, one of the film's primary villains. Continue Reading at GameSpot
  • HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE

    By TREVOR HOGG

    Images courtesy of Warner Bros. Pictures.

    Rather than a world constructed around photorealistic pixels, a video game created by Markus Persson has taken the boxier 3D voxel route, which has become its signature aesthetic, and sparked an international phenomenon that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess in creating the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise under the direction of Production VFX Supervisor Dan Lemmon.

    “[A]s the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.”
    —Talia Finlayson, Creative Technologist, Disguise

    Interior and exterior environments had to be created, such as the shop owned by Steve (Jack Black).

    “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” The project provided new opportunities. “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”

    Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.”
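    The cleanup-and-ingest step Finlayson describes can be pictured with a short Blender-Python sketch. This is only an illustrative sketch, not the Disguise pipeline: the decimation ratio, merge threshold, placeholder file path and FBX handoff to Unreal Engine are assumptions made for the example.

    import bpy

    # Assumed example: tidy up ingested set geometry before handing it to Unreal Engine.
    # Ratios, thresholds and the FBX handoff are illustrative, not production values.
    MERGE_DISTANCE = 0.0001   # merge vertices closer than this (scene units)
    DECIMATE_RATIO = 0.5      # keep roughly half the polygons

    for obj in [o for o in bpy.context.selected_objects if o.type == 'MESH']:
        bpy.context.view_layer.objects.active = obj

        # Merge duplicate vertices left over from the CAD/Rhino export.
        bpy.ops.object.mode_set(mode='EDIT')
        bpy.ops.mesh.select_all(action='SELECT')
        bpy.ops.mesh.remove_doubles(threshold=MERGE_DISTANCE)
        bpy.ops.object.mode_set(mode='OBJECT')

        # Reduce polygon count so the scene stays responsive in-engine.
        mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
        mod.ratio = DECIMATE_RATIO
        bpy.ops.object.modifier_apply(modifier=mod.name)

    # Export the cleaned selection for Unreal Engine (path is a placeholder).
    bpy.ops.export_scene.fbx(filepath="/tmp/set_piece_clean.fbx", use_selection=True)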

    A virtual exploration of Steve’s shop in Midport Village.

    Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. “Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.”

    “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”
    —Laura Bell, Creative Technologist, Disguise

    Among the buildings that had to be created for Midport Village was Steve’s (Jack Black) Lava Chicken Shack.

    Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.” At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.”

    Flexibility was critical. “A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.” Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younis [VAD Art Director] adapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!”
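    Bell’s reliance on Blender’s Remesh modifier for the blocky geometry can be sketched as below. The modifier and its BLOCKS mode are real Blender features, but the specific settings are assumptions chosen only to illustrate the voxel-style rebuild she describes.

    import bpy

    # Assumed example: rebuild the active mesh as axis-aligned blocks, the look that
    # suits voxel-style Minecraft geometry. The octree depth is an illustrative value.
    obj = bpy.context.active_object

    remesh = obj.modifiers.new(name="Remesh", type='REMESH')
    remesh.mode = 'BLOCKS'                  # cube-shaped output instead of smooth surfaces
    remesh.octree_depth = 6                 # higher depth = smaller blocks, heavier mesh
    remesh.use_remove_disconnected = False  # keep floating chunks of the model

    # Bake the modifier into the mesh so it exports cleanly.
    bpy.ops.object.modifier_apply(modifier=remesh.name)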

    A virtual study and final still of the cast members standing outside of the Lava Chicken Shack.

    “We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.”
    —Talia Finlayson, Creative Technologist, Disguise

    The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks. “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.”

    Virtually conceptualizing the layout of Midport Village.

    Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.” Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay George [VP Tech] and I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.”

    An example of the virtual and final version of the Woodland Mansion.

    “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.”
    —Laura Bell, Creative Technologist, Disguise

    Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.”

    Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment.

    Doing a virtual scale study of the Mountainside.

    Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.”
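    The camera-path handoff Bell mentions amounts to per-frame camera transforms recorded on stage and passed to the visual effects vendors. The snippet below is a purely hypothetical illustration of that kind of record; the article does not specify the format or field names Disguise actually used.

    import json

    # Hypothetical example of the kind of per-frame camera data a Simulcam setup
    # could hand to post-production. Field names and values are illustrative only.
    camera_track = [
        {
            "frame": 1001,
            "position": [2.31, -4.87, 1.62],   # stage coordinates
            "rotation": [0.0, 12.5, 94.0],     # degrees (roll, pitch, yaw)
            "focal_length_mm": 35.0,
        },
        {
            "frame": 1002,
            "position": [2.33, -4.85, 1.62],
            "rotation": [0.0, 12.4, 93.8],
            "focal_length_mm": 35.0,
        },
    ]

    with open("simulcam_camera_track.json", "w") as fh:
        json.dump(camera_track, fh, indent=2)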

    Piglots cause mayhem during the Wingsuit Chase.

    Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods.

    “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols [VAD Supervisor], Pat Younis, Jake Tuck [Unreal Artist] and Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.”
    #how #disguise #built #out #virtual
    HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE
    By TREVOR HOGG Images courtesy of Warner Bros. Pictures. Rather than a world constructed around photorealistic pixels, a video game created by Markus Persson has taken the boxier 3D voxel route, which has become its signature aesthetic, and sparked an international phenomenon that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess in creating the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise under the direction of Production VFX Supervisor Dan Lemmon. “s the Senior Unreal Artist within the Virtual Art Departmenton Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” —Talia Finlayson, Creative Technologist, Disguise Interior and exterior environments had to be created, such as the shop owned by Steve. “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Departmenton Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” The project provided new opportunities. “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.” Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.” A virtual exploration of Steve’s shop in Midport Village. Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. 
“Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.” “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.” —Laura Bell, Creative Technologist, Disguise Among the buildings that had to be created for Midport Village was Steve’sLava Chicken Shack. Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.” At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.” Flexibility was critical. “A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.” Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younisadapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. 
On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!” A virtual study and final still of the cast members standing outside of the Lava Chicken Shack. “We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.” —Talia Finlayson, Creative Technologist, Disguise The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks. “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.” Virtually conceptualizing the layout of Midport Village. Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.” Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay Georgeand I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.” An example of the virtual and final version of the Woodland Mansion. “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.” —Laura Bell, Creative Technologist, Disguise Extensive detail was given to the center of the sets where the main action unfolds. 
“For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.” Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment. Doing a virtual scale study of the Mountainside. Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.” Piglots cause mayhem during the Wingsuit Chase. Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods. “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols, Pat Younis, Jake Tuckand Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.” #how #disguise #built #out #virtual
    WWW.VFXVOICE.COM
    HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE
    By TREVOR HOGG Images courtesy of Warner Bros. Pictures. Rather than a world constructed around photorealistic pixels, a video game created by Markus Persson has taken the boxier 3D voxel route, which has become its signature aesthetic, and sparked an international phenomenon that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess in creating the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise under the direction of Production VFX Supervisor Dan Lemmon. “[A]s the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” —Talia Finlayson, Creative Technologist, Disguise Interior and exterior environments had to be created, such as the shop owned by Steve (Jack Black). “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” The project provided new opportunities. “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.” Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.” A virtual exploration of Steve’s shop in Midport Village. Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. 
“Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.” “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.” —Laura Bell, Creative Technologist, Disguise Among the buildings that had to be created for Midport Village was Steve’s (Jack Black) Lava Chicken Shack. Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.” At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.” Flexibility was critical. “A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.” Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younis [VAD Art Director] adapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. 
On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!”

A virtual study and final still of the cast members standing outside of the Lava Chicken Shack.

“We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.”
—Talia Finlayson, Creative Technologist, Disguise

The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks.

“I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.”

Virtually conceptualizing the layout of Midport Village.

Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.”

Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay George [VP Tech] and I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.”

An example of the virtual and final version of the Woodland Mansion.

“Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels.
We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.”
—Laura Bell, Creative Technologist, Disguise

Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.”

Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment.

Doing a virtual scale study of the Mountainside.

Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.”

Piglots cause mayhem during the Wingsuit Chase.

Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods.

“One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols [VAD Supervisor], Pat Younis, Jake Tuck [Unreal Artist] and Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.”

There was another challenge that had more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour.
Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.”
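The re-mesh-heavy Blender pass Bell describes above maps to a fairly small script. The following is a minimal sketch, not production code from the film, and the asset name "Midport_Archway" plus all parameter values are hypothetical: it shows a duplicate-vertex cleanup on an ingested mesh, followed by a Remesh modifier in Blocks mode (which rebuilds the surface from axis-aligned cubes, a natural fit for voxel-style geometry) and an optional Decimate pass to keep the asset light for real-time use in Unreal Engine.

```python
# Hypothetical sketch of a voxel-friendly cleanup pass in Blender (bpy).
# Asset name and parameter values are illustrative, not from the production.
import bpy

obj = bpy.data.objects["Midport_Archway"]   # hypothetical ingested asset
bpy.context.view_layer.objects.active = obj
obj.select_set(True)

# Cleanup: merge duplicate vertices left over from the DCC-to-DCC ingest.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.remove_doubles(threshold=0.001)
bpy.ops.object.mode_set(mode='OBJECT')

# Remesh in Blocks mode: rebuilds the surface out of axis-aligned cubes,
# matching the blocky Minecraft look.
remesh = obj.modifiers.new(name="Remesh", type='REMESH')
remesh.mode = 'BLOCKS'
remesh.octree_depth = 6                     # resolution of the block grid
remesh.use_remove_disconnected = False

# Optional: a Decimate pass to keep polycounts real-time friendly in Unreal.
decimate = obj.modifiers.new(name="Decimate", type='DECIMATE')
decimate.ratio = 0.5
```

Applied to an imported Rhino or game-extracted mesh, a pass along these lines keeps the geometry clean and block-faithful before it is exported to Unreal Engine for layout and review.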
  • Watch Ben Starr Date Himself In Date Everything

    Since his breakthrough role in Final Fantasy XVI, fans can't get enough of Ben Starr. Neither can he: in a video for GameSpot, the voice actor decided to play through Date Everything by only dating his own character. The surreal and comedic sandbox dating sim lets you literally date everything, thanks to a pair of magical glasses called Dateviators, which transform everyday household objects into dateable characters with their own stories. Each one is brought to life by a huge roster of voice actors, including Ashly Burch, Matthew Mercer, Laura Bailey, Felicia Day, Steve Blum and Ashley Johnson, as well as the game's lead designer and veteran voice actor Ray Chase of Final Fantasy XV fame. Despite there being more than 100 dateable objects, "resident video game narcissist" Starr has opted to only date himself. Starr actually voices multiple characters, as he plays a personified door called Dorian, and there are 17 variations of Dorian throughout the house. There's Front Dorian, who wears a little hat; Back Dorian, which the actor said he recorded "facing away from the microphone with my hand over my mouth"; as well as Trap Dorian, who happens to wear a lot less clothes than the rest. Continue Reading at GameSpot
  • So, Strasbourg has officially joined the world of Hollywood with the grand opening of Ex Persona, a motion capture studio that’s just a hop, skip, and a jump from the city center. Because, of course, what every aspiring actor needs is a high-tech studio where they can perfectly simulate the art of standing still while looking vaguely excited.

    The studio comes equipped with fancy Vicon cameras, so I can only imagine the thrill of seeing your every awkward movement captured in stunning detail. Finally, you can bring your most cringe-worthy dance moves to life—because who wouldn't want their most embarrassing moments immortalized in 3D?

    Let’s just hope the talent they attract has more personality than their studio name suggests!

    #MotionCapture #ExPersona #Strasbourg
    Motion Capture: the Ex Persona studio opens its doors
    A few days ago, a new motion capture studio opened its doors in Strasbourg: Ex Persona. Located 10 minutes from the city center (and thus from the train station) to make the stage easy for talent to reach, Ex Persona has a p