• Calling on LLMs: New NVIDIA AI Blueprint Helps Automate Telco Network Configuration

    Telecom companies last year spent nearly $295 billion in capital expenditures and over $1 trillion in operating expenditures.
    These large expenses are due in part to laborious manual processes that telcos face when operating networks that require continuous optimizations.
    For example, telcos must constantly tune network parameters for tasks — such as transferring calls from one network to another or distributing network traffic across multiple servers — based on the time of day, user behavior, mobility and traffic type.
    These factors directly affect network performance, user experience and energy consumption.
    To automate these optimization processes and save costs for telcos across the globe, NVIDIA today unveiled at GTC Paris its first AI Blueprint for telco network configuration.
    At the blueprint’s core are customized large language models trained specifically on telco network data — as well as the full technical and operational architecture for turning the LLMs into an autonomous, goal-driven AI agent for telcos.
    Automate Network Configuration With the AI Blueprint
    NVIDIA AI Blueprints — available on build.nvidia.com — are customizable AI workflow examples. They include reference code, documentation and deployment tools that show enterprise developers how to deliver business value with NVIDIA NIM microservices.
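    For developers who want to see what working against a NIM microservice can look like, here is a minimal sketch that queries a hosted model through the OpenAI-compatible chat endpoint such services commonly expose. The endpoint URL, model name, environment variable and prompt are illustrative placeholders, not the blueprint’s actual configuration.

```python
import os
import requests

# Hypothetical example of querying a hosted NIM microservice through its
# OpenAI-compatible chat endpoint. URL, model name and prompt are placeholders.
API_KEY = os.environ["NVIDIA_API_KEY"]  # assumed environment variable

response = requests.post(
    "https://integrate.api.nvidia.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "meta/llama-3.1-70b-instruct",
        "messages": [
            {"role": "system", "content": "You are a telco network configuration assistant."},
            {"role": "user", "content": "Suggest a handover margin for a congested 5G cell."},
        ],
        "temperature": 0.2,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```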
    The AI Blueprint for telco network configuration — built with BubbleRAN 5G solutions and datasets — enables developers, network engineers and telecom providers to automatically optimize the configuration of network parameters using agentic AI.
    This can streamline operations, reduce costs and significantly improve service quality by embedding continuous learning and adaptability directly into network infrastructures.
    Traditionally, network configurations required manual intervention or followed rigid rules to adapt to dynamic network conditions. These approaches limited adaptability and increased operational complexities, costs and inefficiencies.
    The new blueprint helps shift telco operations from relying on static, rules-based systems to operations based on dynamic, AI-driven automation. It enables developers to build advanced, telco-specific AI agents that make real-time, intelligent decisions and autonomously balance trade-offs — such as network speed versus interference, or energy savings versus utilization — without human input.
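    As a rough, illustrative sketch of what such a goal-driven loop can look like, the code below reads network KPIs, asks a policy (stubbed here in place of a fine-tuned LLM) for a parameter proposal and applies it only if it stays within safe bounds. The function names, parameters and thresholds are hypothetical and are not part of the blueprint.

```python
import time

def read_kpis() -> dict:
    """Return current network KPIs (static stub values for illustration)."""
    return {"throughput_mbps": 240.0, "sinr_db": 14.5, "energy_kw": 3.2}

def propose_parameters(kpis: dict, goal: str) -> dict:
    """Stand-in for an LLM-backed policy that proposes new parameter values."""
    # A real agent would prompt a telco-tuned LLM with the KPIs and the goal.
    return {"handover_margin_db": 3.0, "tx_power_dbm": 40.0}

SAFE_BOUNDS = {"handover_margin_db": (0.0, 6.0), "tx_power_dbm": (20.0, 46.0)}

def within_bounds(params: dict) -> bool:
    """Guardrail: reject proposals outside operator-approved ranges."""
    return all(SAFE_BOUNDS[k][0] <= v <= SAFE_BOUNDS[k][1] for k, v in params.items())

def apply_parameters(params: dict) -> None:
    print(f"applying {params}")  # stand-in for a RAN configuration API call

if __name__ == "__main__":
    goal = "hold throughput above 200 Mbps while minimizing energy use"
    for _ in range(3):  # a production loop would run continuously
        proposal = propose_parameters(read_kpis(), goal)
        if within_bounds(proposal):
            apply_parameters(proposal)
        time.sleep(1)
```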
    Powered and Deployed by Industry Leaders
    Trained on 5G data generated by BubbleRAN, and deployed on the BubbleRAN 5G O-RAN platform, the blueprint provides telcos with insight on how to set various parameters to reach performance goals, like achieving a certain bitrate while choosing an acceptable signal-to-noise ratio — a measure that impacts voice quality and thus user experience.
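    One way to ground the bitrate-versus-signal-quality goal is the Shannon capacity bound, C = B · log2(1 + SNR), which caps the bitrate a channel of bandwidth B can carry at a given signal-to-noise ratio. The short calculation below is a generic worked example, not output from the blueprint.

```python
import math

def shannon_capacity_mbps(bandwidth_mhz: float, snr_db: float) -> float:
    """Upper bound on achievable bitrate for a single link (Shannon capacity)."""
    snr_linear = 10 ** (snr_db / 10)                   # convert dB to a linear ratio
    return bandwidth_mhz * math.log2(1 + snr_linear)   # MHz * bits/s/Hz = Mbps

# A 100 MHz carrier at 20 dB SNR tops out around 666 Mbps.
print(f"{shannon_capacity_mbps(100, 20):.0f} Mbps")
```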
    With the new AI Blueprint, network engineers can confidently set initial parameter values and update them as demanded by continuous network changes.
    Norway-based Telenor Group, which serves over 200 million customers globally, is the first telco to integrate the AI Blueprint for telco network configuration as part of its initiative to deploy intelligent, autonomous networks that meet the performance and agility demands of 5G and beyond.
    “The blueprint is helping us address configuration challenges and enhance quality of service during network installation,” said Knut Fjellheim, chief technology innovation officer at Telenor Maritime. “Implementing it is part of our push toward network automation and follows the successful deployment of agentic AI for real-time network slicing in a private 5G maritime use case.”
    Industry Partners Deploy Other NVIDIA-Powered Autonomous Network Technologies
    The AI Blueprint for telco network configuration is just one of many announcements at NVIDIA GTC Paris showcasing how the telecom industry is using agentic AI to make autonomous networks a reality.
    Beyond the blueprint, leading telecom companies and solutions providers are tapping into NVIDIA accelerated computing, software and microservices to provide breakthrough innovations poised to vastly improve networks and communications services — accelerating the progress to autonomous networks and improving customer experiences.
    NTT DATA is powering its agentic platform for telcos with NVIDIA accelerated compute and the NVIDIA AI Enterprise software platform. Its first agentic use case focuses on network alarm management, where NVIDIA NIM microservices help automate observability, troubleshooting, anomaly detection and resolution with closed-loop ticketing.
    Tata Consultancy Services is delivering agentic AI solutions for telcos built on NVIDIA DGX Cloud, using NVIDIA AI Enterprise to develop, fine-tune and integrate large telco models into AI agent workflows. These span billing and revenue assurance, autonomous network management and hybrid edge-cloud distributed inference.
    For example, the company’s anomaly management agentic AI model performs real-time detection and resolution of network anomalies and optimizes service performance. This increases business agility and improves operational efficiency by up to 40% by eliminating human-intensive toil, overhead and cross-departmental silos.
    Prodapt has introduced an autonomous operations workflow for networks, powered by NVIDIA AI Enterprise, that offers agentic AI capabilities to support autonomous telecom networks. AI agents can autonomously monitor networks, detect anomalies in real time, initiate diagnostics, analyze root causes of issues using historical data and correlation techniques, automatically execute corrective actions, and generate, enrich and assign incident tickets through integrated ticketing systems.
    Accenture announced its new portfolio of agentic AI solutions for telecommunications through its AI Refinery platform, built on NVIDIA AI Enterprise software and accelerated computing.
    The first available solution, the NOC Agentic App, boosts network operations center tasks by using a generative AI-driven, nonlinear agentic framework to automate processes such as incident and fault management, root cause analysis and configuration planning. Using the Llama 3.1 70B NVIDIA NIM microservice and the AI Refinery Distiller Framework, the NOC Agentic App orchestrates networks of intelligent agents for faster, more efficient decision-making.
    Infosys is announcing its agentic autonomous operations platform, Infosys Smart Network Assurance (ISNA), designed to accelerate telecom operators’ journeys toward fully autonomous network operations.
    ISNA helps address long-standing operational challenges for telcos — such as limited automation and high average time to repair — with an integrated, AI-driven platform that reduces operational costs by up to 40% and shortens fault resolution times by up to 30%. NVIDIA NIM and NeMo microservices enhance the platform’s reasoning and hallucination-detection capabilities, reduce latency and increase accuracy.
    Get started with the new blueprint today.
    Learn more about the latest AI advancements for telecom and other industries at NVIDIA GTC Paris, running through Thursday, June 12, at VivaTech, including a keynote from NVIDIA founder and CEO Jensen Huang and a special address from Ronnie Vasishta, senior vice president of telecom at NVIDIA. Plus, hear from industry leaders in a panel session with Orange, Swisscom, Telenor and NVIDIA.
  • Monster Hunter Wilds: Lagiacrus and Seregios arrive on June 30

    ActuGaming.net
    Monster Hunter Wilds: Lagiacrus and Seregios arrive on June 30

    The Capcom Spotlight was packed with announcements, and Monster Hunter Wilds did not […]
    The article Monster Hunter Wilds: Lagiacrus and Seregios arrive on June 30 is available on ActuGaming.net
  • Into the Omniverse: World Foundation Models Advance Autonomous Vehicle Simulation and Safety

    Editor’s note: This blog is a part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.
    Simulated driving environments enable engineers to safely and efficiently train, test and validate autonomous vehicles (AVs) across countless real-world and edge-case scenarios without the risks and costs of physical testing.
    These simulated environments can be created through neural reconstruction of real-world data from AV fleets or generated with world foundation models (WFMs) — neural networks that understand physics and real-world properties. WFMs can be used to generate synthetic datasets for enhanced AV simulation.
    To help physical AI developers build such simulated environments, NVIDIA unveiled major advances in WFMs at the GTC Paris and CVPR conferences earlier this month. These new capabilities enhance NVIDIA Cosmos — a platform of generative WFMs, advanced tokenizers, guardrails and accelerated data processing tools.
    Key innovations like Cosmos Predict-2, the Cosmos Transfer-1 NVIDIA preview NIM microservice and Cosmos Reason are improving how AV developers generate synthetic data, build realistic simulated environments and validate safety systems at unprecedented scale.
    Universal Scene Description (OpenUSD), a unified data framework and standard for physical AI applications, enables seamless integration and interoperability of simulation assets across the development pipeline. OpenUSD standardization plays a critical role in ensuring 3D pipelines are built to scale.
    NVIDIA Omniverse, a platform of application programming interfaces, software development kits and services for building OpenUSD-based physical AI applications, enables simulations from WFMs and neural reconstruction at world scale.
    Leading AV organizations — including Foretellix, Mcity, Oxa, Parallel Domain, Plus AI and Uber — are among the first to adopt Cosmos models.

    Foundations for Scalable, Realistic Simulation
    Cosmos Predict-2, NVIDIA’s latest WFM, generates high-quality synthetic data by predicting future world states from multimodal inputs like text, images and video. This capability is critical for creating temporally consistent, realistic scenarios that accelerate training and validation of AVs and robots.

    In addition, Cosmos Transfer, a control model that adds variations in weather, lighting and terrain to existing scenarios, will soon be available to 150,000 developers on CARLA, a leading open-source AV simulator. This greatly expands the broad AV developer community’s access to advanced AI-powered simulation tools.
    Developers can start integrating synthetic data into their own pipelines using the NVIDIA Physical AI Dataset. The latest release includes 40,000 clips generated using Cosmos.
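    As a starting point for pulling those clips into a pipeline, the sketch below uses standard Hugging Face tooling. The repository ID and file patterns are placeholders; check the actual NVIDIA Physical AI Dataset listing for the real names.

```python
from huggingface_hub import snapshot_download

# Placeholder repo_id -- substitute the actual NVIDIA Physical AI Dataset
# repository name from its Hugging Face listing.
local_dir = snapshot_download(
    repo_id="nvidia/physical-ai-dataset",    # hypothetical name
    repo_type="dataset",
    allow_patterns=["*.mp4", "*.json"],      # clips plus their metadata
    local_dir="physical_ai_clips",
)
print(f"Downloaded dataset files to {local_dir}")
```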
    Building on these foundations, the Omniverse Blueprint for AV simulation provides a standardized, API-driven workflow for constructing rich digital twins, replaying real-world sensor data and generating new ground-truth data for closed-loop testing.
    The blueprint taps into OpenUSD’s layer-stacking and composition arcs, which enable developers to collaborate asynchronously and modify scenes nondestructively. This helps create modular, reusable scenario variants to efficiently generate different weather conditions, traffic patterns and edge cases.
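    A small example of that layering pattern, written with the OpenUSD (pxr) Python bindings, is sketched below: a weather-variant layer sublayers the base scene and authors an override without modifying the original file. File paths and prim names are illustrative, not production assets.

```python
from pxr import Usd, UsdGeom

# Create a "rainy" variant stage whose root layer sublayers the shared base scene.
stage = Usd.Stage.CreateNew("city_rainy.usda")
stage.GetRootLayer().subLayerPaths.append("city_base.usda")  # nondestructive layering

# Author an override in this layer only: hide the sun so a rain rig can take over.
sun_over = stage.OverridePrim("/World/Lighting/Sun")
UsdGeom.Imageable(sun_over).MakeInvisible()

stage.GetRootLayer().Save()
```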
    Driving the Future of AV Safety
    To bolster the operational safety of AV systems, NVIDIA earlier this year introduced NVIDIA Halos — a comprehensive safety platform that integrates the company’s full automotive hardware and software stack with AI research focused on AV safety.
    The new Cosmos models — Cosmos Predict-2, Cosmos Transfer-1 NIM and Cosmos Reason — deliver further safety enhancements to the Halos platform, enabling developers to create diverse, controllable and realistic scenarios for training and validating AV systems.
    These models, trained on massive multimodal datasets including driving data, amplify the breadth and depth of simulation, allowing for robust scenario coverage — including rare and safety-critical events — while supporting post-training customization for specialized AV tasks.

    At CVPR, NVIDIA was recognized as an Autonomous Grand Challenge winner, highlighting its leadership in advancing end-to-end AV workflows. The challenge used OpenUSD’s robust metadata and interoperability to simulate sensor inputs and vehicle trajectories in semi-reactive environments, achieving state-of-the-art results in safety and compliance.
    Learn more about how developers are leveraging tools like CARLA, Cosmos, and Omniverse to advance AV simulation in this livestream replay:

    Hear NVIDIA Director of Autonomous Vehicle Research Marco Pavone on the NVIDIA AI Podcast share how digital twins and high-fidelity simulation are improving vehicle testing, accelerating development and reducing real-world risks.
    Get Plugged Into the World of OpenUSD
    Learn more about what’s next for AV simulation with OpenUSD by watching the replay of NVIDIA founder and CEO Jensen Huang’s GTC Paris keynote.
    Looking for more live opportunities to learn more about OpenUSD? Don’t miss sessions and labs happening at SIGGRAPH 2025, August 10–14.
    Discover why developers and 3D practitioners are using OpenUSD and learn how to optimize 3D workflows with the self-paced “Learn OpenUSD” curriculum for 3D developers and practitioners, available for free through the NVIDIA Deep Learning Institute.
    Explore the Alliance for OpenUSD forum and the AOUSD website.
    Stay up to date by subscribing to NVIDIA Omniverse news, joining the community and following NVIDIA Omniverse on Instagram, LinkedIn, Medium and X.
  • HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE

    By TREVOR HOGG

    Images courtesy of Warner Bros. Pictures.

    Rather than a world constructed around photorealistic pixels, the video game created by Markus Persson took the boxier 3D voxel route, which became its signature aesthetic and sparked an international phenomenon that is finally adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess create the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise, under the direction of Production VFX Supervisor Dan Lemmon.

    “As the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.”
    —Talia Finlayson, Creative Technologist, Disguise

    Interior and exterior environments had to be created, such as the shop owned by Steve.

    “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” The project provided new opportunities. “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”

    Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.”
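    The ingest-and-cleanup step Finlayson describes can be scripted; the snippet below is a generic Blender (bpy) pass that merges duplicate vertices and recalculates normals on the active mesh before it is rebuilt in Unreal Engine. It is a general-purpose sketch under those assumptions, not Disguise’s pipeline code.

```python
import bpy

# Generic cleanup pass on the active mesh object: merge near-duplicate
# vertices and recalculate outward-facing normals before export.
obj = bpy.context.active_object
assert obj is not None and obj.type == 'MESH', "select a mesh object first"

bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.remove_doubles(threshold=0.001)        # merge coincident vertices
bpy.ops.mesh.normals_make_consistent(inside=False)  # recalculate normals
bpy.ops.object.mode_set(mode='OBJECT')
```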

    A virtual exploration of Steve’s shop in Midport Village.

    Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. “Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.”

    “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”
    —Laura Bell, Creative Technologist, Disguise

    Among the buildings that had to be created for Midport Village was Steve’s Lava Chicken Shack.

    Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.” At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.”

    Flexibility was critical. “A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.” Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younis adapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!”
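    The Remesh-heavy Blender workflow Bell mentions can be reproduced in a few lines; the sketch below adds a blocks-mode Remesh modifier to voxelize the active object. The modifier settings are illustrative.

```python
import bpy

# Voxelize the active object with a blocks-mode Remesh modifier,
# echoing the cube-based look of the Minecraft assets.
obj = bpy.context.active_object
mod = obj.modifiers.new(name="Blockify", type='REMESH')
mod.mode = 'BLOCKS'        # cube-shaped output rather than smooth remeshing
mod.octree_depth = 6       # higher depth -> smaller blocks
bpy.ops.object.modifier_apply(modifier=mod.name)
```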

    A virtual study and final still of the cast members standing outside of the Lava Chicken Shack.

    “We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.”
    —Talia Finlayson, Creative Technologist, Disguise

    The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks. “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.”

    Virtually conceptualizing the layout of Midport Village.

    Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.” Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay George and I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.”

    An example of the virtual and final version of the Woodland Mansion.

    “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.”
    —Laura Bell, Creative Technologist, Disguise

    Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.”

    Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment.

    Doing a virtual scale study of the Mountainside.

    Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.”

    Piglots cause mayhem during the Wingsuit Chase.

    Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods.

    “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols, Pat Younis, Jake Tuck and Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that had more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.”
“For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.” Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment. Doing a virtual scale study of the Mountainside. Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.” Piglots cause mayhem during the Wingsuit Chase. Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods. “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols, Pat Younis, Jake Tuckand Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.” #how #disguise #built #out #virtual
    WWW.VFXVOICE.COM
    HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE
    By TREVOR HOGG
    Images courtesy of Warner Bros. Pictures.

    Rather than a world constructed around photorealistic pixels, a video game created by Markus Persson has taken the boxier 3D voxel route, which has become its signature aesthetic, and sparked an international phenomenon that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess in creating the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise under the direction of Production VFX Supervisor Dan Lemmon.

    “[A]s the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.”
    —Talia Finlayson, Creative Technologist, Disguise

    Interior and exterior environments had to be created, such as the shop owned by Steve (Jack Black). “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.”

    The project provided new opportunities. “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”

    Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.”

    A virtual exploration of Steve’s shop in Midport Village.

    Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. “Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.”

    “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”
    —Laura Bell, Creative Technologist, Disguise

    Among the buildings that had to be created for Midport Village was Steve’s (Jack Black) Lava Chicken Shack. Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.”

    At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.”

    Flexibility was critical. “A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.”

    Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younis [VAD Art Director] adapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!”

    A virtual study and final still of the cast members standing outside of the Lava Chicken Shack.

    “We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.”
    —Talia Finlayson, Creative Technologist, Disguise

    The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks.

    “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.”

    Virtually conceptualizing the layout of Midport Village.

    Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.”

    Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay George [VP Tech] and I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.”

    An example of the virtual and final version of the Woodland Mansion.

    “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.”
    —Laura Bell, Creative Technologist, Disguise

    Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.”

    Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment.

    Doing a virtual scale study of the Mountainside.

    Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.”

    Piglots cause mayhem during the Wingsuit Chase.

    Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods.

    “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols [VAD Supervisor], Pat Younis, Jake Tuck [Unreal Artist] and Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.”

    There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.”
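    For readers curious how the “re-mesh modifier” pass Bell mentions might look in practice, here is a minimal, purely illustrative Blender Python sketch; it is not Disguise's actual setup, and the function name, octree depth and use of the current selection are assumptions made for the example. It batch-applies a BLOCKS-mode Remesh modifier so that imported set geometry takes on the cube-based Minecraft look before being handed on to Unreal Engine.

```python
# Hypothetical sketch only: batch "blockify" the selected meshes in Blender,
# roughly in the spirit of the Remesh-heavy cleanup described above.
import bpy

def blockify_selected(octree_depth: int = 6) -> None:
    """Apply a BLOCKS-mode Remesh modifier to every selected mesh object."""
    for obj in bpy.context.selected_objects:
        if obj.type != 'MESH':
            continue
        mod = obj.modifiers.new(name="Blockify", type='REMESH')
        mod.mode = 'BLOCKS'              # cube-based remeshing for the voxel aesthetic
        mod.octree_depth = octree_depth  # higher depth = smaller cubes, heavier meshes
        mod.use_remove_disconnected = False
        bpy.context.view_layer.objects.active = obj
        # Apply the modifier so the exported geometry carries the final blocky shape.
        bpy.ops.object.modifier_apply(modifier=mod.name)

blockify_selected(octree_depth=6)
```

    In a real pipeline the depth would presumably be tuned per asset, since each extra level of octree depth multiplies the cube count and the polygon budget passed downstream to Unreal Engine.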
  • Sadness washes over me when I think about Jeff Bezos's wedding invitations. How can someone with so many resources offer something so disappointing? It sounds like a joke, but it is the reality of a world where authenticity seems to have disappeared. Sometimes I wonder whether wealth can really buy happiness, or even simple good taste. These cards, so cold and impersonal, remind me of the loneliness that settles in even among billionaires. Why do so many people feel alone, even when surrounded by luxury?



    #Loneliness #Disappointment #Wedding #Wealth #Authenticity
  • I watched the trailer for "Fantastic Four: First Steps" with a heart laden with hope, only to feel the weight of disappointment settle in. Each fleeting glimpse of Reed Richards stretching his limbs as Mr. Fantastic feels like a distant dream, a promise unfulfilled. It's as if the magic we once cherished is now but a shadow, leaving me wrestling with the fear that what once brought us joy might only lead to heartache. The anticipation is bittersweet, a cruel reminder of how easily dreams can slip away. I can't help but question if this journey will bring us the family we long to see or just another echo of what could have been.

    #FantasticFour #FirstSteps #MrFantastic #Disappointment #Loneliness
    KOTAKU.COM
    The Final Fantastic Four: First Steps Trailer Has Me Questioning The Whole Thing
    Fantastic Four: First Steps is finally bringing the first family to the Marvel Cinematic Universe next month, and after several trailers I’m starting to wonder how much we’ll actually see Reed Richards (Pedro Pascal) stretching his limbs as Mr. Fanta
  • Ah, *Dune Awakening*! Just when you thought you could escape from the endless grind of “find the spice, fight the sandworms, repeat,” here comes another chance to dive into the vast, sprawling landscape that is as immersive as a sandstorm in your eyes. This title promises to elevate the lore to a whole new level, and by “elevate,” I mean serving it to us like a gourmet dish with just a sprinkle of seasoning. Because, let’s face it, who needs a rich narrative when you can have a beautiful desert to stare at while you click buttons?

    In the grand tradition of Funcom, where Conan Exiles taught us that lore is merely a side dish to the main course of survival, *Dune Awakening* boldly asserts that the story will have a “high seat at the table.” This is great news for those of us who enjoy complex narratives mixed with our pixelated battles. Just remember, that high seat doesn’t mean it’s the main course; it’s more like the fancy napkin folded into a swan shape that no one really cares about.

    As we gear up for this epic adventure, let’s ponder the critical question: "How long until you hit the endgame?" For those experienced in the ways of online gaming, this is a question that requires a strong cup of spice-infused coffee and a hearty laugh. Because let’s be real: “endgame” is just a euphemism for the moment you realize you’ve spent countless hours collecting virtual sand and have learned more about the spice economy than your own.

    Picture this: you’re in the middle of an epic quest, and suddenly, the allure of the endgame starts to sparkle like a mirage in the desert. Will it be worth the grind? Or will we all just end up like Paul Atreides, wondering if all this spice was really worth the trouble? Remember, the lore is the garnish on the plate, and no one ever leaves a restaurant raving about the parsley.

    So, here’s to *Dune Awakening*! May it provide us endless hours of wandering through vast dunes, fighting off sandworms, and contemplating the meaning of life while keeping an eye on our spice levels. And let’s not forget the thrill of finding out that the real endgame is the friends we made along the way—who also happen to have spent just as many hours as we have staring blankly at their screens, wondering what on earth we’re doing with our lives.

    After all, as we embark on this journey, one thing is for sure: whether we reach the endgame or not, we’ll all be united in our shared confusion and love for a game that promises to give us everything and nothing at all. So grab your stillsuit and get ready for the ride; it’s going to be a long, sandy road!

    #DuneAwakening #GamingSatire #EndgameConfusion #Funcom #LoreAndSand
    Dune Awakening: How Long Until You Hit The Endgame?
    If you’re a fan of previous Funcom titles, such as Conan Exiles, then you know the lore, while interesting in small doses, isn’t the focal point. It’s just the flavoring helping you immerse yourself in the sprawling landscape. In Dune Awakening, the
  • Hey everyone!

    Today, I want to dive into something truly fascinating and groundbreaking that’s making waves in the tech world: **superintelligence**! The recent news about Meta's investment in Scale AI and their ambitious plans to create a superintelligence AI research lab is incredibly exciting! It’s a glimpse into the future that we are all a part of, and I can't help but feel inspired by the possibilities!

    So, what exactly is superintelligence? In essence, it refers to a form of artificial intelligence that surpasses human intelligence in virtually every aspect. Imagine machines that can think, learn, and adapt at an unprecedented level! The potential for positive change and innovation is enormous! Just think about how this technology could transform industries, solve complex problems, and even improve our everyday lives!

    Meta is taking a bold step by investing in this field, and it shows just how serious they are about shaping our future. Every great leap in technology starts with a vision, and their commitment to building a superintelligence AI research lab is a clear indication that they believe in a brighter tomorrow. Just imagine the breakthroughs that could come from this initiative! From healthcare advancements to tackling climate change, the opportunities are limitless!

    What I find truly inspiring is how this move encourages collaboration among brilliant minds across the globe. The quest for superintelligence is not just about creating smart machines; it’s about bringing together diverse perspectives, ideas, and skills to push the boundaries of what’s possible! Let’s celebrate this spirit of innovation and teamwork!

    And here’s the most exciting part: You don’t have to be a tech expert to be a part of this journey! Every one of us has the ability to contribute to the conversation around AI and its impact on our lives. Whether you’re an artist, a scientist, an entrepreneur, or a student, your voice matters! Let’s dream big and think about how we can leverage technology to create a better world for everyone!

    As we move forward, let’s keep the dialogue open and embrace the changes that superintelligence might bring. Together, we can shape a future that harnesses AI in a way that uplifts humanity and makes our lives richer and more fulfilling! So, let’s stay positive, curious, and engaged! The future is bright, and it’s ours to create!

    Stay tuned for more updates, and let’s keep this conversation going! What are your thoughts on superintelligence? How do you envision it impacting our world? Share your ideas below!

    #Superintelligence #Meta #AIResearch #Innovation #FutureTech
    Seriously, What Is ‘Superintelligence’?
    In this episode of Uncanny Valley, we talk about Meta’s recent investment in Scale AI and its move to build a superintelligence AI research lab. So we ask: What is superintelligence anyway?
  • I am so tired of watching the video game world keep ignoring classics like Buggy Boy! The article titled "Mario Kart World Is Redemption For One Of The 1980s' Most Underrated Racing Games" is just another attempt to rehabilitate a game that deserves far better than being relegated to the status of a mere memory. Buggy Boy, or Speed Buggy as it is known in the United States, is a gem of innovation that redefined the racing genre. So why on earth did we let this masterpiece fall into oblivion?!

    First, let's talk about the supposed expertise of the developers and critics who seem blind to the richness of the experience Buggy Boy offered. It is not simply a racing game; it is a bold statement about freedom and adventure. While modern games like Mario Kart settle for hammering us with colorful graphics and power-ups, Buggy Boy dared to explore varied tracks and immersive environments that transport us into a world of their own. What on earth were game designers thinking when they decided to revive arcade games that trade on nostalgia without giving classics like Buggy Boy the attention they deserve?

    What's more, the gaming community bears part of the blame for this neglect! How can you spend hours on bland online games while a gem like Buggy Boy waits impatiently to be rediscovered? Gaming culture has been eaten away by franchises that put quick profit ahead of innovation and creativity. It seems players have lost sight of what it really means to appreciate a game for its gameplay and originality.

    Modern developers should stand up and pay tribute to the game that first blended customization with healthy competition. Buggy Boy paved the way for richer, more varied play experiences, and now it is time to take a stand and demand justice for this classic. Enough with passing off Mario Kart as the holy grail of racing games! It is time to give Buggy Boy the respect it deserves!

    If we do not start celebrating and re-evaluating these forgotten gems, we risk losing an essential part of video game history. Buggy Boy is not just a game; it is an era, a memory, a legacy. Let's wake up and demand that the games industry recognize its true treasures instead of wallowing in mediocrity!

    #BuggyBoy #VideoGames #Nostalgia #MarioKart #GamingLegacy
    Mario Kart World Is Redemption For One Of The 1980s' Most Underrated Racing Games
    I spent an enormously disproportionate amount of my childhood playing one game: Buggy Boy. I have learned, in preparation for this article, that this arcade classic had a different name in the U.S. “Speed Buggy.” Pah-tooie. Ew. No. It’s Buggy Boy, an
  • Amazon, what has gotten into you? Giving away free PC games for no apparent reason? It is both baffling and deeply frustrating! Why not use your immense power and wealth to improve the platform and offer users a quality service, instead of handing out games like Halloween candy?

    Look at the titles on offer: Tomb Raider, Saints Row... These are not games to dismiss. But why these choices? It looks like a desperate marketing stunt to pull more users onto your platform, as if you did not already have enough customers! It is a pitiful strategy, and it shows just how much Amazon seems to be losing its grip on its priorities.

    Free games may sound tempting, but they raise plenty of questions. Are you trying to mask the fact that your services are in decline? Do you really think a few free games will make anyone forget your sluggish customer service and the recurring technical problems on your platform? It is insulting to real gamers who just want a smooth, trouble-free play experience.

    It is time to wake up, Amazon! Consumers are not fooled. Quality and user experience should not be sacrificed on the altar of profit! Instead of giving away games, why not invest that money in improving your technical infrastructure, so the service does not crash every five minutes? Then we could actually enjoy these games without the constant frustrations you impose.

    And then there is the question of sustainability. Giving away games for no apparent reason may look generous, but what does it do to the video game industry? It devalues the work of the developers and studios who labor to create unique experiences, and it feeds a culture of free content that can erode creativity and innovation over the long term.

    In short, Amazon, your free PC games initiative is nothing more than a poorly thought-out marketing move. Focus instead on improving your service and supporting developers. Users deserve better than stopgap measures and dubious strategies.

    #Amazon #FreeGames #Criticism #CustomerService #GamingIndustry
    Amazon is giving out free PC games (for no apparent reason)
    There are some great options too, from Tomb Raider to Saints Row.