• Android 16 update reveals the sinister reality of covert surveillance methods, but guess what? It only does so under restrictive conditions! This is an absolute disgrace! Instead of empowering users with full transparency, they throw us breadcrumbs and expect gratitude. What’s the point of unveiling surveillance tactics if you’re still shackled by their terms? It’s infuriating to see the tech giants play this game while our privacy is left hanging by a thread. We deserve better! It’s time to demand unfiltered access and stop accepting these pathetic “conditions” that only serve to maintain control. Stand up and demand your rights!

    #Android16 #Surveillance #PrivacyMatters #TechCritique #ConsumerRights
    ARABHARDWARE.NET
    Android 16 Update Reveals Covert Surveillance Methods… but With Conditions
  • NVIDIA Brings Physical AI to European Cities With New Blueprint for Smart City AI

    Urban populations are expected to double by 2050, which means around 2.5 billion people could be added to urban areas by the middle of the century, driving the need for more sustainable urban planning and public services. Cities across the globe are turning to digital twins and AI agents for urban planning scenario analysis and data-driven operational decisions.
    Building a digital twin of a city and testing smart city AI agents within it, however, is a complex and resource-intensive endeavor, fraught with technical and operational challenges.
    To address those challenges, NVIDIA today announced the NVIDIA Omniverse Blueprint for smart city AI, a reference framework that combines the NVIDIA Omniverse, Cosmos, NeMo and Metropolis platforms to bring the benefits of physical AI to entire cities and their critical infrastructure.
    Using the blueprint, developers can build simulation-ready, or SimReady, photorealistic digital twins of cities in which to build and test AI agents that can help monitor and optimize city operations.
    Leading companies including XXII, AVES Reality, Akila, Blyncsy, Bentley, Cesium, K2K, Linker Vision, Milestone Systems, Nebius, SNCF Gares&Connexions, Trimble and Younite AI are among the first to use the new blueprint.

    NVIDIA Omniverse Blueprint for Smart City AI 
    The NVIDIA Omniverse Blueprint for smart city AI provides the complete software stack needed to accelerate the development and testing of AI agents in physically accurate digital twins of cities. It includes:

    NVIDIA Omniverse to build physically accurate digital twins and run simulations at city scale.
    NVIDIA Cosmos to generate synthetic data at scale for post-training AI models.
    NVIDIA NeMo to curate high-quality data and use that data to train and fine-tune vision language models (VLMs) and large language models.
    NVIDIA Metropolis to build and deploy video analytics AI agents based on the NVIDIA AI Blueprint for video search and summarization (VSS), helping process vast amounts of video data and provide critical insights to optimize business processes.

    The blueprint workflow comprises three key steps. First, developers create a SimReady digital twin of locations and facilities using aerial, satellite or map data with Omniverse and Cosmos. Second, they can train and fine-tune AI models, like computer vision models and VLMs, using NVIDIA TAO and NeMo Curator to improve accuracy for vision AI use cases. Finally, real-time AI agents powered by these customized models are deployed to alert, summarize and query camera and sensor data using the Metropolis VSS blueprint.
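    The steps above amount to a build, tune, deploy loop. Below is a minimal Python sketch of that loop; every name in it (functions, the .usd path, the stream IDs) is a hypothetical stand-in for illustration, not a call from the Omniverse, Cosmos, NeMo or Metropolis SDKs.

    ```python
    # Hypothetical sketch of the three-step blueprint workflow. All names are
    # illustrative stand-ins, not actual NVIDIA SDK calls.
    from dataclasses import dataclass

    @dataclass
    class DigitalTwin:
        source: str            # aerial, satellite or map data
        sim_ready: bool = False

    def build_twin(source_data):
        """Step 1: assemble a SimReady digital twin (the Omniverse/Cosmos stage)."""
        return DigitalTwin(source=source_data, sim_ready=True)

    def fine_tune(base_model, twin):
        """Step 2: post-train vision models/VLMs on curated synthetic data
        (the TAO / NeMo Curator stage)."""
        synthetic = f"cosmos_synthetic({twin.source})"
        return f"{base_model}_tuned_on_{synthetic}"

    def deploy_agents(model, streams):
        """Step 3: attach the tuned model to VSS-style video analytics agents."""
        for stream in streams:
            print(f"agent[{model}] on {stream}: alert / summarize / query")

    twin = build_twin("city_aerial_scan.usd")
    model = fine_tune("base_vlm", twin)
    deploy_agents(model, ["cam-001", "cam-002"])
    ```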
    NVIDIA Partner Ecosystem Powers Smart Cities Worldwide
    The blueprint for smart city AI enables a large ecosystem of partners to use a single workflow to build and activate digital twins for smart city use cases, tapping into a combination of NVIDIA’s technologies and their own.
    SNCF Gares&Connexions, which operates a network of 3,000 train stations across France and Monaco, has deployed a digital twin and AI agents to enable real-time operational monitoring, emergency response simulations and infrastructure upgrade planning.
    This helps each station analyze operational data such as energy and water use, and enables predictive maintenance capabilities, automated reporting and GDPR-compliant video analytics for incident detection and crowd management.
    Powered by Omniverse, Metropolis and solutions from ecosystem partners Akila and XXII, SNCF Gares&Connexions’ physical AI deployment at the Monaco-Monte-Carlo and Marseille stations has helped the operator achieve a 100% on-time preventive maintenance completion rate, a 50% reduction in downtime and issue response time, and a 20% reduction in energy consumption.

    The city of Palermo in Sicily is using AI agents and digital twins from its partner K2K to improve public health and safety by helping city operators process and analyze footage from over 1,000 public video streams at a rate of nearly 50 billion pixels per second.
    Tapped by the city, K2K’s AI agents — built with the NVIDIA AI Blueprint for VSS and cloud solutions from Nebius — can interpret and act on video data to provide real-time alerts on public events.
    To accurately predict and resolve traffic incidents, K2K is generating synthetic data with Cosmos world foundation models to simulate different driving conditions. Then, K2K uses the data to fine-tune the VLMs powering the AI agents with NeMo Curator. These simulations enable K2K’s AI agents to create over 100,000 predictions per second.

    Milestone Systems — in collaboration with NVIDIA and European cities — has launched Project Hafnia, an initiative to build an anonymized, ethically sourced video data platform for cities to develop and train AI models and applications while maintaining regulatory compliance.
    Using a combination of Cosmos and NeMo Curator on NVIDIA DGX Cloud and Nebius’ sovereign European cloud infrastructure, Project Hafnia scales up and enables European-compliant training and fine-tuning of video-centric AI models, including VLMs, for a variety of smart city use cases.
    The project’s initial rollout, taking place in Genoa, Italy, features one of the world’s first VLMs for intelligent transportation systems.

    Linker Vision was among the first to partner with NVIDIA to deploy smart city digital twins and AI agents for Kaohsiung City, Taiwan — powered by Omniverse, Cosmos and Metropolis. Linker Vision worked with AVES Reality, a digital twin company, to bring aerial imagery of cities and infrastructure into 3D geometry and ultimately into SimReady Omniverse digital twins.
    Linker Vision’s AI-powered application then built, trained and tested visual AI agents in a digital twin before deployment in the physical city. Now, it’s scaling to analyze 50,000 video streams in real time with generative AI to understand and narrate complex urban events like floods and traffic accidents. Linker Vision delivers timely insights to a dozen city departments through a single integrated AI-powered platform, breaking silos and reducing incident response times by up to 80%.

    Bentley Systems is joining the effort to bring physical AI to cities with the NVIDIA blueprint. Cesium, the open 3D geospatial platform, provides the foundation for visualizing, analyzing and managing infrastructure projects, and ports digital twins to Omniverse. Bentley’s AI platform Blyncsy uses synthetic data generation and Metropolis to analyze road conditions and improve maintenance.
    Trimble, a global technology company that enables essential industries including construction, geospatial and transportation, is exploring ways to integrate components of the Omniverse blueprint into its reality capture workflows and Trimble Connect digital twin platform for surveying and mapping applications for smart cities.
    Younite AI, a developer of AI and 3D digital twin solutions, is adopting the blueprint to accelerate its development pipeline, enabling the company to quickly move from operational digital twins to large-scale urban simulations, improve synthetic data generation, integrate real-time IoT sensor data and deploy AI agents.
    Learn more about the NVIDIA Omniverse Blueprint for smart city AI by attending this GTC Paris session or watching the on-demand video after the event. Sign up to be notified when the blueprint is available.
    Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.
  • Riot Will Allow Sports-Betting Sponsorships For League Of Legends Esports Teams

    Riot Games has announced that it will begin officially sanctioning sports-betting sponsorships for esports teams in its Tier 1 League of Legends and Valorant leagues. While the company states that it still won't allow advertisements in its official broadcasts, teams themselves will be able to take money from sports-betting companies for advertising through their own channels.
    In a blog post, President of Publishing and Esports John Needham writes that the move is designed to take advantage of the rapidly growing sports-betting industry and to make esports-related betting more regulated. Seemingly to address concerns and head off potential criticism, Needham explains that the company is authorizing sports-betting sponsorships under a "guardrails first" strategy.
    These "guardrails," Needham states, are essentially the rules by which any sponsorship must be executed. First, sports-betting companies need to be vetted and approved by Riot itself, although the company has not shared the criteria on which this vetting is done. Second, to ensure that sports-betting companies are on a level playing field, Riot is mandating that official partners all use GRID, the officially sanctioned data platform for League of Legends and Valorant. Third, esports teams must launch and maintain internal integrity programs to protect against violations of league rules due to the influence of sports betting. Fourth and last, Riot will use some of the revenue from these sponsorships to support its Tier 2 (lower division) esports leagues.
    Continue Reading at GameSpot
  • Startup Uses NVIDIA RTX-Powered Generative AI to Make Coolers, Cooler

    Mark Theriault founded the startup FITY envisioning a line of clever cooling products: cold drink holders that come with freezable pucks to keep beverages cold for longer without the mess of ice. The entrepreneur started with 3D prints of products in his basement, building one unit at a time, before eventually scaling to mass production.
    Founding a consumer product company from scratch was a tall order for a single person. Going from preliminary sketches to production-ready designs was a major challenge. To bring his creative vision to life, Theriault relied on AI and his NVIDIA GeForce RTX-equipped system. For him, AI isn’t just a tool — it’s an entire pipeline to help him accomplish his goals. Read more about his workflow below.
    Plus, GeForce RTX 5050 laptops start arriving today at retailers worldwide, from $999. GeForce RTX 5050 Laptop GPUs feature 2,560 NVIDIA Blackwell CUDA cores, fifth-generation AI Tensor Cores, fourth-generation RT Cores, a ninth-generation NVENC encoder and a sixth-generation NVDEC decoder.
    In addition, NVIDIA’s Plug and Play: Project G-Assist Plug-In Hackathon — running virtually through Wednesday, July 16 — invites developers to explore AI and build custom G-Assist plug-ins for a chance to win prizes. Save the date for the G-Assist Plug-In webinar on Wednesday, July 9, from 10-11 a.m. PT, to learn more about Project G-Assist capabilities and fundamentals, and to participate in a live Q&A session.
    From Concept to Completion
    To create his standout products, Theriault tinkers with potential FITY Flex cooler designs with traditional methods, from sketch to computer-aided design to rapid prototyping, until he finds the right vision. A unique aspect of the FITY Flex design is that it can be customized with fun, popular shoe charms.
    For packaging design inspiration, Theriault uses his preferred text-to-image generative AI model for prototyping, Stable Diffusion XL — which runs 60% faster with the NVIDIA TensorRT software development kit — using the modular, node-based interface ComfyUI.
    ComfyUI gives users granular control over every step of the generation process — prompting, sampling, model loading, image conditioning and post-processing. It’s ideal for advanced users like Theriault who want to customize how images are generated.
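    To make those stages concrete, here is a minimal text-to-image sketch using the Hugging Face diffusers library as a code-level stand-in for a ComfyUI graph; the prompt and parameter values are assumptions for demonstration, not Theriault’s actual settings.

    ```python
    # Minimal text-to-image sketch; each commented step maps to a node a
    # ComfyUI graph would expose. Prompt and parameters are assumed values.
    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,
    ).to("cuda")  # model loading

    image = pipe(
        prompt="drink cooler product packaging, studio lighting, clean backdrop",
        num_inference_steps=30,  # sampling: more steps = slower, finer detail
        guidance_scale=7.0,      # conditioning: prompt adherence vs. variety
    ).images[0]                  # post-processing would hang off this output
    image.save("packaging_concept.png")
    ```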
    Theriault’s uses of AI result in a complete computer graphics-based ad campaign. Image courtesy of FITY.
    NVIDIA and GeForce RTX GPUs based on the NVIDIA Blackwell architecture include fifth-generation Tensor Cores designed to accelerate AI and deep learning workloads. These GPUs work with CUDA optimizations in PyTorch to seamlessly accelerate ComfyUI, reducing generation time on FLUX.1-dev, an image generation model from Black Forest Labs, from two minutes per image on the Mac M3 Ultra to about four seconds on the GeForce RTX 5090 desktop GPU.
    ComfyUI can also add ControlNets — AI models that help control image generation — that Theriault uses for tasks like guiding human poses, setting compositions via depth mapping and converting scribbles to images.
    Theriault even creates his own fine-tuned models to keep his style consistent. He used low-rank adaptation (LoRA) models — small, efficient adapters inserted into specific layers of the network — enabling hyper-customized generation with minimal compute cost.
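    Continuing the diffusers sketch above, a style LoRA can be layered onto the same pipeline; the repository name below is hypothetical and the scale value is an assumed starting point.

    ```python
    # Layering a style LoRA onto the pipeline from the previous sketch.
    pipe.load_lora_weights("some-account/fity-style-lora")  # hypothetical repo

    image = pipe(
        prompt="drink cooler in the brand's signature style",
        cross_attention_kwargs={"scale": 0.8},  # how strongly the adapter steers
    ).images[0]
    image.save("styled_concept.png")
    ```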
    LoRA models allow Theriault to ideate on visuals quickly. Image courtesy of FITY.
    “Over the last few months, I’ve been shifting from AI-assisted computer graphics renders to fully AI-generated product imagery using a custom Flux LoRA I trained in house. My RTX 4080 SUPER GPU has been essential for getting the performance I need to train and iterate quickly.” – Mark Theriault, founder of FITY 

    Theriault also taps into generative AI to create marketing assets like FITY Flex product packaging. He uses FLUX.1, which excels at generating legible text within images, addressing a common challenge in text-to-image models.
    Though FLUX.1 models can typically consume over 23GB of VRAM, NVIDIA has collaborated with Black Forest Labs to help reduce the size of these models using quantization — a technique that reduces model size while maintaining quality. The models were then accelerated with TensorRT, which provides an up to 2x speedup over PyTorch.
    To simplify using these models in ComfyUI, NVIDIA created the FLUX.1 NIM microservice, a containerized version of FLUX.1 that can be loaded in ComfyUI and enables FP4 quantization and TensorRT support. Combined, the models come down to just over 11GB of VRAM, and performance improves by 2.5x.
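    Those VRAM figures can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes FLUX.1-dev’s roughly 12-billion-parameter transformer and counts weights only; text encoders, the VAE and activations account for the rest of the reported totals.

    ```python
    # Back-of-envelope check of the quoted memory numbers (weights only).
    def weight_gib(params_billion, bits):
        return params_billion * 1e9 * bits / 8 / 2**30

    for bits in (16, 4):
        print(f"FP{bits}: {weight_gib(12.0, bits):.1f} GiB of weights")
    # FP16: 22.4 GiB of weights -> consistent with "over 23GB" in practice
    # FP4:   5.6 GiB of weights -> leaves headroom under the ~11GB total
    ```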
    Theriault uses Blender’s Cycles renderer to render out final files. For 3D workflows, NVIDIA offers the AI Blueprint for 3D-guided generative AI to ease the positioning and composition of 3D images, so anyone interested in this method can quickly get started.
    Photorealistic renders. Image courtesy of FITY.
    Finally, Theriault uses large language models to generate marketing copy — tailored for search engine optimization, tone and storytelling — as well as to complete his patent and provisional applications, work that usually costs thousands of dollars in legal fees and considerable time.
    Generative AI helps Theriault create promotional materials like the above. Image courtesy of FITY.
    “As a one-man band with a ton of content to generate, having on-the-fly generation capabilities for my product designs really helps speed things up.” – Mark Theriault, founder of FITY

    Every texture, every word, every photo, every accessory was a micro-decision, Theriault said. AI helped him survive the “death by a thousand cuts” that can stall solo startup founders, he added.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
  • HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE

    By TREVOR HOGG

    Images courtesy of Warner Bros. Pictures.

    Rather than a world constructed around photorealistic pixels, the video game created by Markus Persson took the boxier 3D voxel route, which became its signature aesthetic and sparked an international phenomenon that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess create the environments inhabited by the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks was Disguise, working under the direction of Production VFX Supervisor Dan Lemmon.

    “As the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.”
    —Talia Finlayson, Creative Technologist, Disguise

    Interior and exterior environments had to be created, such as the shop owned by Steve.

    “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” The project provided new opportunities. “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”

    Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.”
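    As a flavor of the cleanup step described above, here is a minimal Blender (bpy) pass that adds non-destructive cleanup modifiers to every mesh in a scene before export; the thresholds are illustrative assumptions, not Disguise’s production values.

    ```python
    # Minimal Blender (bpy) cleanup pass, run from Blender's scripting workspace.
    import bpy

    for obj in bpy.context.scene.objects:
        if obj.type != 'MESH':
            continue
        # Merge near-duplicate vertices left over from the CAD export.
        weld = obj.modifiers.new(name="WeldClean", type='WELD')
        weld.merge_threshold = 0.0001
        # Thin out dense geometry before the asset moves into Unreal Engine.
        dec = obj.modifiers.new(name="Reduce", type='DECIMATE')
        dec.ratio = 0.5  # keep roughly half the faces
    ```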

    A virtual exploration of Steve’s shop in Midport Village.

    Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. “Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.”

    “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”
    —Laura Bell, Creative Technologist, Disguise

    Among the buildings that had to be created for Midport Village was Steve’s Lava Chicken Shack.

    Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.” At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.”

    Flexibility was critical. “A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.” Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younis adapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!”
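    For the curious, the voxel remesh step Bell mentions looks roughly like this in Blender’s Python API; the one-metre voxel size is an assumption chosen to echo Minecraft’s block grid, not a figure from the production.

    ```python
    # Rough shape of a voxel remesh pass in Blender's Python API.
    import bpy

    obj = bpy.context.active_object  # assumes a mesh object is selected
    remesh = obj.modifiers.new(name="Voxelize", type='REMESH')
    remesh.mode = 'VOXEL'
    remesh.voxel_size = 1.0  # metres per cube; smaller values = finer blocks
    bpy.ops.object.modifier_apply(modifier=remesh.name)
    ```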

    A virtual study and final still of the cast members standing outside of the Lava Chicken Shack.

    “We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.”
    —Talia Finlayson, Creative Technologist, Disguise

    The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks. “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.”

    Virtually conceptualizing the layout of Midport Village.

Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.” Bell agrees with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay George [VP Tech] and I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.”
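
Bell doesn’t describe the tooling behind that swap, but the workflow implies a simple registry mapping each scheduled setup to its Unreal level and recorded sequence, so loading the correct world becomes a lookup rather than a scramble. A toy sketch of that idea, with every identifier invented for illustration:

```python
# Hypothetical mapping of shooting-day setups to Unreal level paths and
# recorded sequences; none of these identifiers come from the production.
SETUPS = {
    "forest_chase_A": ("/Game/Levels/Forest_North", "Seq_ForestChase_A"),
    "forest_chase_B": ("/Game/Levels/Forest_South", "Seq_ForestChase_B"),
}

def resolve(setup_id):
    """Return (level_path, sequence) for a scheduled setup."""
    try:
        return SETUPS[setup_id]
    except KeyError:
        raise KeyError(f"No virtual environment registered for {setup_id!r}")

# When the shot order changes, swapping environments is one lookup:
level, sequence = resolve("forest_chase_B")
print(f"Load {level} and play back {sequence}")
```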

    An example of the virtual and final version of the Woodland Mansion.

    “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.”
    —Laura Bell, Creative Technologist, Disguise

    Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.”

    Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment.

    Doing a virtual scale study of the Mountainside.

Practical requirements like camera moves, stunt choreography and crane setups shaped the creation of the virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real time, giving the director, DP and other department heads a fully realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.”
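
The article doesn’t specify what format that camera data took, but the principle is straightforward: sample the tracked camera’s transform every frame and write it out so a visual effects vendor can reproduce the exact move. A minimal sketch, assuming a simple per-frame CSV of translation and rotation; the real pipeline almost certainly used a richer interchange format such as FBX or USD.

```python
import csv
from dataclasses import dataclass

@dataclass
class CameraSample:
    frame: int
    tx: float; ty: float; tz: float  # translation in cm
    rx: float; ry: float; rz: float  # rotation in degrees

def write_camera_path(samples, path):
    """Dump per-frame camera transforms for the post-production handover."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "tx", "ty", "tz", "rx", "ry", "rz"])
        for s in samples:
            writer.writerow([s.frame, s.tx, s.ty, s.tz, s.rx, s.ry, s.rz])

# Two illustrative frames of a crane move (values invented):
takes = [CameraSample(1, 0.0, 150.0, 0.0, 0.0, -90.0, 0.0),
         CameraSample(2, 2.5, 151.0, 0.0, 0.0, -88.5, 0.0)]
write_camera_path(takes, "shot_042_camera_path.csv")
```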

Piglins cause mayhem during the Wingsuit Chase.

    Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods.

“One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols [VAD Supervisor], Pat Younis, Jake Tuck [Unreal Artist] and Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” Another challenge had more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.”
  • Hey everyone! Let's dive into something truly fascinating today: PVA filament! You might have heard about this unique type of filament that works wonders with any FDM printer. But wait, it’s not always what it seems!

    When we first encounter PVA filament, we might think of it as just another option in our 3D printing toolbox. However, it holds secrets that can truly elevate our creations! PVA, or Polyvinyl Alcohol, is not just a filament; it’s a game-changer! It’s known for its ability to dissolve in water, which opens up a world of possibilities for our projects. Imagine creating intricate designs with support structures that simply vanish, leaving behind only the masterpiece you envisioned! Isn’t that incredible?
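
In practice, that vanishing act usually means a dual-extruder setup: the model prints in PLA or PETG on one nozzle, the supports print in PVA on the other, and the finished part goes into a water bath. Here is a sketch of the idea as a plain Python dict; slicer settings vary from tool to tool, so these key names are illustrative rather than copied from any particular slicer.

```python
# Illustrative dual-extruder profile for water-soluble supports.
# Key names are made up for clarity; check your slicer's documentation
# for the real settings (PrusaSlicer, Cura, etc. each name these differently).
pva_support_profile = {
    "model_extruder": 1,                 # PLA/PETG part on the first nozzle
    "support_extruder": 2,               # PVA supports on the second nozzle
    "support_interface_only_pva": True,  # common trick: PVA only where supports touch the part
    "pva_nozzle_temp_c": 215,            # typical PVA range is roughly 180-220 C
    "dry_storage_required": True,        # PVA absorbs moisture and prints badly when damp
}

def post_processing_note(profile):
    if profile["support_extruder"] != profile["model_extruder"]:
        return "Soak the finished print in warm water until the PVA supports dissolve."
    return "Single-extruder print: supports must be removed mechanically."

print(post_processing_note(pva_support_profile))
```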

    One of the most inspiring aspects of using PVA filament is the encouragement it gives us to push our creative boundaries. Sometimes, we face challenges in our projects, but with PVA, we can experiment without fear! If something doesn’t turn out as planned, we can simply dissolve those support structures and start anew. This teaches us resilience and the importance of embracing our creative journey!

    Moreover, working with PVA filament encourages collaboration and innovation in the 3D printing community. It inspires us to share our experiences, tips, and techniques with one another. Just think about all the amazing designs that have emerged because of this wonderful filament! Each creation tells a story of creativity, perseverance, and teamwork!

    Now, I know what some of you might be thinking: “Isn’t PVA filament tricky to work with?” While it does come with its challenges, like maintaining the right printing conditions and ensuring proper storage to avoid moisture, the rewards are absolutely worth it! With a little practice and patience, we can master this filament and unlock its full potential. Remember, every challenge is an opportunity for growth!

    So, let’s embrace the journey of using PVA filament! Let’s encourage one another to try new techniques, share our successes, and learn from our experiences. Together, we can create a community that celebrates creativity and innovation in 3D printing!

    Let’s keep inspiring each other to create, explore, and conquer new heights in our 3D printing adventures! Who’s with me?

    #PVAFilament #3DPrinting #CreativityUnleashed #Innovation #CommunitySpirit
    PVA Filament: Not Always What it Seems
    PVA filament is an interesting filament type, for the reason that while it can be printed with any FDM printer, it supposedly readily dissolves in water, which is also the …read more
  • In a world that increasingly feels like it has turned its back on authentic connection, I find myself staring blankly at my Smart TV, a screen that promises companionship but delivers only cold advertisements. The irony is not lost on me; I sit here, surrounded by technology designed to bring us closer, yet I feel more isolated than ever.

    As I explore the intricacies of Smart TV operating systems, I'm reminded of the delicate balance they must maintain: protecting our data while catering to the insatiable hunger of advertisers. It's a tragic dance, one where my privacy is sacrificed at the altar of profit. Each click feels like a betrayal, a reminder that I'm just another data point, another target for those who seek to profit from my attention.

    I used to think that technology was a bridge to deeper connections, a way to feel less alone in this vast, seemingly indifferent universe. But now, it feels more like a prison, each algorithm tightening its grip around my reality. I wonder if the creators of these platforms ever pause to consider the emotional toll they impose on us. Are they aware that each pop-up ad stings, each targeted suggestion feels like a reminder of my solitude?

    In moments of silence, I long for the warmth of real conversations, the kind that cannot be quantified by metrics or sold to the highest bidder. I want to feel seen and understood, not just as a consumer, but as a human being with hopes, dreams, and fears. Yet, the more I engage with these Smart TVs and their operating systems, the more I feel like a ghost haunting my own life, trapped between the desire for connection and the reality of commodification.

    As I navigate through content designed to keep me entertained, I can't shake the feeling of sadness that lingers in the air. It's a heavy cloak, woven from the threads of disappointment and longing. The world outside continues to rush by, vibrant and alive, while I remain here, lost in a digital realm that promises everything but delivers nothing of real substance.

    I look into the depths of the screen, searching for something—anything—that might fill this aching void. Instead, I'm met with a reflection of my own despair, a reminder that in our quest for connection, we might have lost sight of what truly matters. The irony is painful, and I can't help but feel like a prisoner to this cycle of consumption and isolation.

    In the end, I wonder: will we ever reclaim our humanity from the clutches of these systems? Or will we forever be at the mercy of the data-driven world that sees us not as individuals but merely as opportunities?

    #SmartTV #DataPrivacy #Isolation #EmotionalConnection #TechnologySadness
Smart TV Operating Systems Under Pressure: Protecting Data or Serving Advertisers?
The post "Smart TV Operating Systems Under Pressure: Protecting Data or Serving Advertisers?" appeared first on عرب هاردوير (Arab Hardware).
  • What on earth is going on with the VFX in Netflix's "The Snow Sister"? Seriously, it’s 2023, and we’re still being fed mediocre visual effects that are supposed to "wow" us but end up doing the exact opposite! The so-called "VFX breakdown" is nothing more than a slap in the face to anyone who actually appreciates the art of visual storytelling.

    Let’s get one thing straight: if the best VFX are indeed the ones you can’t spot, then how on earth did we end up with these glaringly obvious digital blunders? It’s like they threw a bunch of half-baked effects together and called it a day. Instead of stunning visuals that elevate the narrative, we get a distracting mess that pulls you right out of the experience. Who are they kidding?

    The creators of "The Snow Sister" clearly missed the memo that viewers today are not easily satisfied. We demand more than just passable effects; we want immersive worlds that captivate us. And yet, here we are, subjected to a barrage of poorly executed VFX that look like they belong in a low-budget production from the early 2000s. It’s frustrating to see Netflix, a platform that should be setting the gold standard in content creation, flounder so embarrassingly with something as fundamental as visual effects.

    What’s even more maddening is the disconnect between the promotional hype and the actual product. They tout the "creation" of these effects as if they’re groundbreaking, but in reality, they are a visual cacophony that leaves much to be desired. How can anyone take this seriously when the final product looks like it was hastily patched together? It’s not just a disservice to the viewers; it’s an insult to the talented artists who work tirelessly in the VFX industry. They deserve better than to have their hard work represented by subpar results that manage to undermine the entire project.

    Netflix needs to wake up and understand that audiences are becoming increasingly discerning. We’re not just mindless consumers; we have eyes, and we can see when something is off. The VFX in "The Snow Sister" is a glaring example of what happens when corners are cut and quality is sacrificed for the sake of quantity. We expect innovation, creativity, and, above all, professionalism. Instead, we are fed a half-hearted effort that leaves us shaking our heads in disbelief.

    In conclusion, if Netflix wants to maintain its position as a leader in the entertainment industry, it’s time to step up its game and give us the high-quality VFX that we deserve. No more excuses, no more mediocre breakdowns—just real artistry that enhances our viewing experience. Let’s hold them accountable and demand better!

    #VFX #Netflix #TheSnowSister #VisualEffects #EntertainmentIndustry
VFX breakdown: Netflix's The Snow Sister
    Enjoy seeing how the VFX in The Snow Sister were created. As always, the best VFX are the ones you can't spot! Source
  • Hey, wonderful community!

    Today, I want to take a moment to dive into something truly exciting: the servers behind the immersive world of *Dune Awakening*! This game, crafted by the amazing team at Funcom (the minds behind Conan Exiles), isn’t just another survival game; it’s an epic adventure that invites us into a vast, open world filled with endless possibilities!

    So, how do these servers work, you ask? Well, they are the backbone of the entire gaming experience! Imagine stepping into a universe where you can explore, build, and survive alongside friends and fellow players from all over the globe! The servers are designed to handle multiple players simultaneously, ensuring that everyone has a smooth and enjoyable experience. Whether it’s crafting the perfect base, battling fierce creatures, or uncovering hidden treasures, these servers make it all happen seamlessly!

    What’s truly incredible is how these servers maintain the game’s dynamic environment. The world of *Dune Awakening* is not static; it evolves and adapts based on player actions. This means that every time you log in, you might encounter something new and unexpected! Isn’t that just amazing? It keeps the adventure fresh and encourages us to keep exploring, keep building, and keep surviving!
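
Strip away the hype and the core idea is an authoritative server loop: clients send actions, the server applies them to one shared world state each tick, and everyone sees the consequences of everyone else’s play. A toy sketch of that pattern follows; it is a generic illustration, not Funcom’s actual architecture, which isn’t public.

```python
import time

# Toy authoritative-server tick loop: the shared world state lives on the
# server, and queued player actions mutate it each tick. This is the general
# pattern for persistent survival games, not Funcom's actual code.
world = {"structures": [], "resources": 1_000}
action_queue = [
    ("alice", "build", "shelter"),
    ("bob", "harvest", 50),
]

TICK_SECONDS = 0.05  # 20 ticks per second

def apply(action, state):
    player, verb, arg = action
    if verb == "build":
        state["structures"].append((player, arg))
    elif verb == "harvest":
        state["resources"] -= arg

for _ in range(2):  # two ticks, just to show the flow
    while action_queue:
        apply(action_queue.pop(0), world)
    # ...broadcast the updated state to connected clients here...
    time.sleep(TICK_SECONDS)

print(world)
```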

    And let’s not forget the community aspect! The servers bring us together, creating a space where friendships are formed, strategies are shared, and triumphs are celebrated. Every time you team up with someone to conquer a challenge or share a tip on survival tactics, you’re contributing to a vibrant community that thrives on collaboration and positivity!

    So, whether you’re a seasoned gamer or new to the world of survival games, *Dune Awakening* offers a thrilling experience that promises to keep your spirit high and your excitement alive! Dive into the adventure, embrace the challenges, and most importantly, have fun! Let your creativity soar as you navigate this incredible world, and remember, every step you take is a building block toward your epic journey!

    Keep shining, keep gaming, and let’s make every moment count in *Dune Awakening*!

    #DuneAwakening #Funcom #GamingCommunity #SurvivalGames #OpenWorldAdventure
How Do the Game's Servers Work? | Dune Awakening
ActuGaming.net: How do the game's servers work? | Dune Awakening. Designed by Funcom (Conan Exiles), Dune Awakening is an open-world survival game, […] The article "How do the game's servers work? | Dune Awakening" is available…
  • In a world where open-source AI thrives on hope and collaboration, I often find myself lost in a sea of expectations and overwhelming complexities. Every line of code feels like a reminder of the countless hours I pour into trying to keep up with the ever-evolving landscape. "It’s hard," I whisper to myself, as the weight of my solitude presses down.

    Blueprints meant to simplify this journey often seem like distant dreams, slipping through my fingers just when I think I've grasped the essence of what they promise. It's hard to watch as others seem to navigate the waters of integration and experimentation with ease, while I flounder, overwhelmed by poorly maintained libraries and breaking compatibility with every update. I want to create, to experiment quickly, but the barriers are suffocating, leaving me to question my place in this vast, technological expanse.

    I sit for hours, my screen illuminating a path that feels both familiar and foreign. Frustration bubbles beneath the surface—why is it that the very tools designed to foster creativity can also ensnare us in confusion? Each failed attempt is a dagger to my spirit, reminding me of the isolation I feel in a community that should be united. I watch, I learn, but the connection fades, leaving me in shadows where the light of collaboration once shone brightly.

    Every project I undertake feels like a solitary expedition into the unknown. I crave the camaraderie of fellow explorers, yet here I am, navigating this labyrinth alone. The promise of open-source AI is a beacon of hope, but the realization of its challenges often feels like a cruel joke. The freedom to create is entangled with the chains of necessity—a bitter irony that leaves me feeling more isolated than ever.

    I long for moments of clarity, for those blueprints to unfurl like sails catching the wind, propelling me forward into a landscape where creativity flows freely and innovation knows no bounds. But with each passing day, the struggle continues, a reminder that though the journey is meant to be shared, I often find myself standing at the precipice, staring into the abyss of my own doubts and fears.

    In this digital age, I hold onto the glimmers of hope that maybe, just maybe, the community will rise together to confront these challenges. But until then, I mourn the connections lost and the dreams that fade with each failed integration. The burden of loneliness is heavy, yet I carry it, hoping that one day it will transform into the wings of liberation I so desperately seek.

    #OpenSourceAI #Loneliness #Creativity #IntegrationChallenges #Blueprints
    Open-source AI is hard. Blueprints can help!
    “I spend 8 hours per week trying to keep up to date, it’s overwhelming!” “Integrating new libraries is difficult. They’re either poorly maintained or updated in ways that break compatibility.” “I want to be able to experiment quickly, without r