• Modular Sci-Fi Interiors With Just 6 Materials – Built for Blender [$]

    If you’ve ever tried to build a sci-fi corridor or control room in Blender and found yourself knee-deep in kitbash chaos or juggling too many materials, this might save you some serious time. Dallas (a new creator working with 3D Tudor’s Starving Artist Campaign) just dropped a modular sci-fi interior kit built for artists like [...]
    Source
  • HPE and NVIDIA Debut AI Factory Stack to Power Next Industrial Shift

    To speed up AI adoption across industries, HPE and NVIDIA today launched new AI factory offerings at HPE Discover in Las Vegas.
    The new lineup includes everything from modular AI factory infrastructure and HPE’s AI-ready RTX PRO Servers, to the next generation of HPE’s turnkey AI platform, HPE Private Cloud AI. The goal: give enterprises a framework to build and scale generative, agentic and industrial AI.
    The NVIDIA AI Computing by HPE portfolio is now among the broadest in the market.
    The portfolio combines NVIDIA Blackwell accelerated computing, NVIDIA Spectrum-X Ethernet and NVIDIA BlueField-3 networking technologies, NVIDIA AI Enterprise software and HPE’s full portfolio of servers, storage, services and software. This now includes HPE OpsRamp Software, a validated observability solution for the NVIDIA Enterprise AI Factory, and HPE Morpheus Enterprise Software for orchestration. The result is a pre-integrated, modular infrastructure stack to help teams get AI into production faster.
    This includes the next-generation HPE Private Cloud AI, co-engineered with NVIDIA and validated as part of the NVIDIA Enterprise AI Factory framework. This full-stack, turnkey AI factory solution will offer HPE ProLiant Compute DL380a Gen12 servers with the new NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs.
    These new NVIDIA RTX PRO Servers from HPE provide a universal data center platform for a wide range of enterprise AI and industrial AI use cases, and are now available to order from HPE. HPE Private Cloud AI includes the latest NVIDIA AI Blueprints, including the NVIDIA AI-Q Blueprint for AI agent creation and workflows.
    HPE also announced a new NVIDIA HGX B300 system, the HPE Compute XD690, built with NVIDIA Blackwell Ultra GPUs. It’s the latest entry in the NVIDIA AI Computing by HPE lineup and is expected to ship in October.
    In Japan, KDDI is working with HPE to build NVIDIA AI infrastructure to accelerate global adoption.
    The HPE-built KDDI system will be based on the NVIDIA GB200 NVL72 platform, built on the NVIDIA Grace Blackwell architecture, at the KDDI Osaka Sakai Data Center.
    To accelerate AI for financial services, HPE will co-test agentic AI workflows built on Accenture’s AI Refinery with NVIDIA, running on HPE Private Cloud AI. Initial use cases include sourcing, procurement and risk analysis.
    HPE said it’s adding 26 new partners to its “Unleash AI” ecosystem to support more NVIDIA AI use cases. The company now offers more than 70 packaged AI workloads, from fraud detection and video analytics to sovereign AI and cybersecurity.
    Security and governance were a focus, too. HPE Private Cloud AI supports air-gapped management, multi-tenancy and post-quantum cryptography. HPE’s try-before-you-buy program lets customers test the system in Equinix data centers before purchase. HPE also introduced new programs, including AI Acceleration Workshops with NVIDIA, to help scale AI deployments.

    Watch the keynote: HPE CEO Antonio Neri announced the news from the Las Vegas Sphere on Tuesday at 9 a.m. PT. Register for the livestream and watch the replay.
    Explore more: Learn how NVIDIA and HPE build AI factories for every industry. Visit the partner page.
    Source: BLOGS.NVIDIA.COM
  • Into the Omniverse: World Foundation Models Advance Autonomous Vehicle Simulation and Safety

    Editor’s note: This blog is a part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.
    Simulated driving environments enable engineers to safely and efficiently train, test and validate autonomous vehicles (AVs) across countless real-world and edge-case scenarios without the risks and costs of physical testing.
    These simulated environments can be created through neural reconstruction of real-world data from AV fleets or generated with world foundation models (WFMs) — neural networks that understand physics and real-world properties. WFMs can be used to generate synthetic datasets for enhanced AV simulation.
    To help physical AI developers build such simulated environments, NVIDIA unveiled major advances in WFMs at the GTC Paris and CVPR conferences earlier this month. These new capabilities enhance NVIDIA Cosmos — a platform of generative WFMs, advanced tokenizers, guardrails and accelerated data processing tools.
    Key innovations like Cosmos Predict-2, the Cosmos Transfer-1 NVIDIA preview NIM microservice and Cosmos Reason are improving how AV developers generate synthetic data, build realistic simulated environments and validate safety systems at unprecedented scale.
    Universal Scene Description (OpenUSD), a unified data framework and standard for physical AI applications, enables seamless integration and interoperability of simulation assets across the development pipeline. OpenUSD standardization plays a critical role in ensuring 3D pipelines are built to scale.
    NVIDIA Omniverse, a platform of application programming interfaces, software development kits and services for building OpenUSD-based physical AI applications, enables simulations from WFMs and neural reconstruction at world scale.
    Leading AV organizations — including Foretellix, Mcity, Oxa, Parallel Domain, Plus AI and Uber — are among the first to adopt Cosmos models.

    Foundations for Scalable, Realistic Simulation
    Cosmos Predict-2, NVIDIA’s latest WFM, generates high-quality synthetic data by predicting future world states from multimodal inputs like text, images and video. This capability is critical for creating temporally consistent, realistic scenarios that accelerate training and validation of AVs and robots.

    In addition, Cosmos Transfer, a control model that adds variations in weather, lighting and terrain to existing scenarios, will soon be available to 150,000 developers on CARLA, a leading open-source AV simulator. This greatly expands the broad AV developer community’s access to advanced AI-powered simulation tools.
    Developers can start integrating synthetic data into their own pipelines using the NVIDIA Physical AI Dataset. The latest release includes 40,000 clips generated using Cosmos.
    Building on these foundations, the Omniverse Blueprint for AV simulation provides a standardized, API-driven workflow for constructing rich digital twins, replaying real-world sensor data and generating new ground-truth data for closed-loop testing.
    The blueprint taps into OpenUSD’s layer-stacking and composition arcs, which enable developers to collaborate asynchronously and modify scenes nondestructively. This helps create modular, reusable scenario variants to efficiently generate different weather conditions, traffic patterns and edge cases.
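    Conceptually, layer stacking resolves each scene attribute from the strongest layer that expresses an opinion about it, so a variant layer can override weather or traffic without touching the base scene. A minimal plain-Python sketch of that resolution rule follows — a conceptual illustration only, not the actual `pxr.Usd` API, with hypothetical attribute names:

```python
def compose(*layers):
    """Resolve attributes across layers, strongest (first argument) winning,
    mirroring how a USD layer stack resolves opinions nondestructively."""
    composed = {}
    for layer in reversed(layers):  # apply weakest first so stronger layers overwrite
        composed.update(layer)
    return composed

# Base scene plus a nondestructive "rainy rush hour" variant layer.
base_scene = {"weather": "clear", "traffic_density": 0.2, "time_of_day": "noon"}
rain_variant = {"weather": "rain", "traffic_density": 0.9}

scenario = compose(rain_variant, base_scene)
# The variant's opinions win; attributes it leaves alone fall through to the base.
```

    Because the base layer is never mutated, any number of scenario variants (weather, traffic, edge cases) can reuse it concurrently.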
    Driving the Future of AV Safety
    To bolster the operational safety of AV systems, NVIDIA earlier this year introduced NVIDIA Halos — a comprehensive safety platform that integrates the company’s full automotive hardware and software stack with AI research focused on AV safety.
    The new Cosmos models — Cosmos Predict-2, Cosmos Transfer-1 NIM and Cosmos Reason — deliver further safety enhancements to the Halos platform, enabling developers to create diverse, controllable and realistic scenarios for training and validating AV systems.
    These models, trained on massive multimodal datasets including driving data, amplify the breadth and depth of simulation, allowing for robust scenario coverage — including rare and safety-critical events — while supporting post-training customization for specialized AV tasks.

    At CVPR, NVIDIA was recognized as an Autonomous Grand Challenge winner, highlighting its leadership in advancing end-to-end AV workflows. The challenge used OpenUSD’s robust metadata and interoperability to simulate sensor inputs and vehicle trajectories in semi-reactive environments, achieving state-of-the-art results in safety and compliance.
    Learn more about how developers are leveraging tools like CARLA, Cosmos, and Omniverse to advance AV simulation in this livestream replay:

    Hear NVIDIA Director of Autonomous Vehicle Research Marco Pavone on the NVIDIA AI Podcast share how digital twins and high-fidelity simulation are improving vehicle testing, accelerating development and reducing real-world risks.
    Get Plugged Into the World of OpenUSD
    Learn more about what’s next for AV simulation with OpenUSD by watching the replay of NVIDIA founder and CEO Jensen Huang’s GTC Paris keynote.
    Looking for more live opportunities to learn about OpenUSD? Don’t miss sessions and labs happening at SIGGRAPH 2025, August 10–14.
    Discover why developers and 3D practitioners are using OpenUSD and learn how to optimize 3D workflows with the self-paced “Learn OpenUSD” curriculum for 3D developers and practitioners, available for free through the NVIDIA Deep Learning Institute.
    Explore the Alliance for OpenUSD forum and the AOUSD website.
    Stay up to date by subscribing to NVIDIA Omniverse news, joining the community and following NVIDIA Omniverse on Instagram, LinkedIn, Medium and X.
    Source: BLOGS.NVIDIA.COM
  • Startup Uses NVIDIA RTX-Powered Generative AI to Make Coolers, Cooler

    Mark Theriault founded the startup FITY envisioning a line of clever cooling products: cold drink holders that come with freezable pucks to keep beverages cold for longer without the mess of ice. The entrepreneur started with 3D prints of products in his basement, building one unit at a time, before eventually scaling to mass production.
    Founding a consumer product company from scratch was a tall order for a single person. Going from preliminary sketches to production-ready designs was a major challenge. To bring his creative vision to life, Theriault relied on AI and his NVIDIA GeForce RTX-equipped system. For him, AI isn’t just a tool — it’s an entire pipeline that helps him accomplish his goals. Read about his workflow below.
    Plus, GeForce RTX 5050 laptops start arriving today at retailers worldwide. GeForce RTX 5050 Laptop GPUs feature 2,560 NVIDIA Blackwell CUDA cores, fifth-generation AI Tensor Cores, fourth-generation RT Cores, a ninth-generation NVENC encoder and a sixth-generation NVDEC decoder.
    In addition, NVIDIA’s Plug and Play: Project G-Assist Plug-In Hackathon — running virtually through Wednesday, July 16 — invites developers to explore AI and build custom G-Assist plug-ins for a chance to win prizes. Save the date for the G-Assist Plug-In webinar on Wednesday, July 9, from 10-11 a.m. PT, to learn more about Project G-Assist capabilities and fundamentals, and to participate in a live Q&A session.
    From Concept to Completion
    To create his standout products, Theriault tinkers with potential FITY Flex cooler designs with traditional methods, from sketch to computer-aided design to rapid prototyping, until he finds the right vision. A unique aspect of the FITY Flex design is that it can be customized with fun, popular shoe charms.
    For packaging design inspiration, Theriault uses his preferred text-to-image generative AI model for prototyping, Stable Diffusion XL — which runs 60% faster with the NVIDIA TensorRT software development kit — using the modular, node-based interface ComfyUI.
    ComfyUI gives users granular control over every step of the generation process — prompting, sampling, model loading, image conditioning and post-processing. It’s ideal for advanced users like Theriault who want to customize how images are generated.
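    The node-based idea can be illustrated with a toy graph executor in plain Python — a conceptual sketch, not ComfyUI’s actual API, with hypothetical stage names. Each node is a function whose inputs are the outputs of upstream nodes, so every stage of the pipeline can be swapped, reordered or inspected independently:

```python
def run_graph(nodes, outputs=None):
    """Evaluate a dict of {name: (func, [input node names])} in dependency order,
    caching each node's result so shared upstream nodes run only once."""
    cache = {}

    def eval_node(name):
        if name not in cache:
            func, deps = nodes[name]
            cache[name] = func(*(eval_node(d) for d in deps))
        return cache[name]

    targets = outputs or list(nodes)
    return {name: eval_node(name) for name in targets}

# Hypothetical stages standing in for prompt -> conditioning -> sampler -> postprocess.
graph = {
    "prompt": (lambda: "drink cooler on ice", []),
    "conditioning": (lambda p: f"cond({p})", ["prompt"]),
    "sample": (lambda c: f"latent[{c}]", ["conditioning"]),
    "postprocess": (lambda img: img.upper(), ["sample"]),
}
result = run_graph(graph, outputs=["postprocess"])
```

    Swapping the sampler or conditioning node changes one entry in the dict without disturbing the rest of the graph — the same property that makes node-based interfaces attractive for fine-grained control.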
    Theriault’s use of AI results in a complete computer graphics-based ad campaign. Image courtesy of FITY.
    NVIDIA GeForce RTX GPUs based on the NVIDIA Blackwell architecture include fifth-generation Tensor Cores designed to accelerate AI and deep learning workloads. These GPUs work with CUDA optimizations in PyTorch to seamlessly accelerate ComfyUI, reducing generation time on FLUX.1-dev, an image generation model from Black Forest Labs, from two minutes per image on the Mac M3 Ultra to about four seconds on the GeForce RTX 5090 desktop GPU.
    ComfyUI can also add ControlNets — AI models that help control image generation — that Theriault uses for tasks like guiding human poses, setting compositions via depth mapping and converting scribbles to images.
    Theriault even creates his own fine-tuned models to keep his style consistent. He used low-rank adaptation (LoRA) models — small, efficient adapters inserted into specific layers of the network — enabling hyper-customized generation with minimal compute cost.
    LoRA models allow Theriault to ideate on visuals quickly. Image courtesy of FITY.
    “Over the last few months, I’ve been shifting from AI-assisted computer graphics renders to fully AI-generated product imagery using a custom Flux LoRA I trained in house. My RTX 4080 SUPER GPU has been essential for getting the performance I need to train and iterate quickly.” – Mark Theriault, founder of FITY 

    Theriault also taps into generative AI to create marketing assets like FITY Flex product packaging. He uses FLUX.1, which excels at generating legible text within images, addressing a common challenge in text-to-image models.
    Though FLUX.1 models can typically consume over 23GB of VRAM, NVIDIA has collaborated with Black Forest Labs to help reduce the size of these models using quantization — a technique that reduces model size while maintaining quality. The models were then accelerated with TensorRT, which provides an up to 2x speedup over PyTorch.
    To simplify using these models in ComfyUI, NVIDIA created the FLUX.1 NIM microservice, a containerized version of FLUX.1 that can be loaded in ComfyUI and enables FP4 quantization and TensorRT support. Combined, the models come down to just over 11GB of VRAM, and performance improves by 2.5x.
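    The VRAM figures quoted above line up with simple bytes-per-parameter arithmetic. A back-of-the-envelope sketch, assuming roughly 12 billion parameters for a FLUX.1-class transformer (an illustrative figure, not an official spec):

```python
def weight_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB for a model at a given precision."""
    return n_params * bits_per_param / 8 / 1e9

N = 12e9  # assumed parameter count, for illustration only

fp16 = weight_gb(N, 16)  # 16-bit weights
fp4 = weight_gb(N, 4)    # 4-bit (FP4) quantized weights
print(f"FP16: {fp16:.1f} GB, FP4: {fp4:.1f} GB")
```

    Under these assumptions the transformer weights alone drop from about 24 GB at FP16 to about 6 GB at FP4; the rest of the roughly 11 GB footprint cited above would come from the other pipeline components (text encoders, VAE) and activations.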
    Theriault uses Blender’s Cycles renderer to render out final files. For 3D workflows, NVIDIA offers the AI Blueprint for 3D-guided generative AI to ease the positioning and composition of 3D images, so anyone interested in this method can quickly get started.
    Photorealistic renders. Image courtesy of FITY.
    Finally, Theriault uses large language models to generate marketing copy — tailored for search engine optimization, tone and storytelling — as well as to complete his patent and provisional applications, work that usually costs thousands of dollars in legal fees and considerable time.
    Generative AI helps Theriault create promotional materials like the above. Image courtesy of FITY.
    “As a one-man band with a ton of content to generate, having on-the-fly generation capabilities for my product designs really helps speed things up.” – Mark Theriault, founder of FITY

    Every texture, every word, every photo, every accessory was a micro-decision, Theriault said. AI helped him survive the “death by a thousand cuts” that can stall solo startup founders, he added.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
  • HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE

    By TREVOR HOGG

    Images courtesy of Warner Bros. Pictures.

    Rather than a world constructed around photorealistic pixels, a video game created by Markus Persson has taken the boxier 3D voxel route, which has become its signature aesthetic, and sparked an international phenomenon that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess in creating the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise under the direction of Production VFX Supervisor Dan Lemmon.

    “As the Senior Unreal Artist within the Virtual Art Department on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.”
    —Talia Finlayson, Creative Technologist, Disguise

    Interior and exterior environments had to be created, such as the shop owned by Steve.

    “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Department on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” The project provided new opportunities. “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”

    Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.”

    A virtual exploration of Steve’s shop in Midport Village.

    Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. “Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.”

    “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.”
    —Laura Bell, Creative Technologist, Disguise

    Among the buildings that had to be created for Midport Village was Steve’s Lava Chicken Shack.

    Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.” At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.”

    Flexibility was critical. “A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.” Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younis adapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!”

    A virtual study and final still of the cast members standing outside of the Lava Chicken Shack.

    “We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.”
    —Talia Finlayson, Creative Technologist, Disguise

    The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks. “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.”

    Virtually conceptualizing the layout of Midport Village.

    Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.” Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay George and I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.”

    An example of the virtual and final version of the Woodland Mansion.

    “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.”
    —Laura Bell, Creative Technologist, Disguise

    Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.”

    Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment.

    Doing a virtual scale study of the Mountainside.

    Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.”

    Piglins cause mayhem during the Wingsuit Chase.

    Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods.

    “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols, Pat Younis, Jake Tuck and Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.”
Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.” —Talia Finlayson, Creative Technologist, Disguise The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks. “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.” Virtually conceptualizing the layout of Midport Village. Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.” Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. 
There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay Georgeand I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.” An example of the virtual and final version of the Woodland Mansion. “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.” —Laura Bell, Creative Technologist, Disguise Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.” Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment. Doing a virtual scale study of the Mountainside. 
Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.” Piglots cause mayhem during the Wingsuit Chase. Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods. “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols, Pat Younis, Jake Tuckand Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. 
I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.” #how #disguise #built #out #virtual
    WWW.VFXVOICE.COM
    HOW DISGUISE BUILT OUT THE VIRTUAL ENVIRONMENTS FOR A MINECRAFT MOVIE
    By TREVOR HOGG Images courtesy of Warner Bros. Pictures. Rather than a world constructed around photorealistic pixels, a video game created by Markus Persson has taken the boxier 3D voxel route, which has become its signature aesthetic, and sparked an international phenomenon that finally gets adapted into a feature with the release of A Minecraft Movie. Brought onboard to help filmmaker Jared Hess in creating the environments that the cast of Jason Momoa, Jack Black, Sebastian Hansen, Emma Myers and Danielle Brooks find themselves inhabiting was Disguise under the direction of Production VFX Supervisor Dan Lemmon. “[A]s the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” —Talia Finlayson, Creative Technologist, Disguise Interior and exterior environments had to be created, such as the shop owned by Steve (Jack Black). “Prior to working on A Minecraft Movie, I held more technical roles, like serving as the Virtual Production LED Volume Operator on a project for Apple TV+ and Paramount Pictures,” notes Talia Finlayson, Creative Technologist for Disguise. “But as the Senior Unreal Artist within the Virtual Art Department (VAD) on Minecraft, I experienced the full creative workflow. What stood out most was how deeply the VAD was embedded across every stage of production. We weren’t working in isolation. From the production designer and director to the VFX supervisor and DP, the VAD became a hub for collaboration.” The project provided new opportunities. 
“I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance,” notes Laura Bell, Creative Technologist for Disguise. “But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.” Set designs originally created by the art department in Rhinoceros 3D were transformed into fully navigable 3D environments within Unreal Engine. “These scenes were far more than visualizations,” Finlayson remarks. “They were interactive tools used throughout the production pipeline. We would ingest 3D models and concept art, clean and optimize geometry using tools like Blender, Cinema 4D or Maya, then build out the world in Unreal Engine. This included applying materials, lighting and extending environments. These Unreal scenes we created were vital tools across the production and were used for a variety of purposes such as enabling the director to explore shot compositions, block scenes and experiment with camera movement in a virtual space, as well as passing along Unreal Engine scenes to the visual effects vendors so they could align their digital environments and set extensions with the approved production layouts.” A virtual exploration of Steve’s shop in Midport Village. Certain elements have to be kept in mind when constructing virtual environments. “When building virtual environments, you need to consider what can actually be built, how actors and cameras will move through the space, and what’s safe and practical on set,” Bell observes. “Outside the areas where strict accuracy is required, you want the environments to blend naturally with the original designs from the art department and support the story, creating a space that feels right for the scene, guides the audience’s eye and sets the right tone. 
Things like composition, lighting and small environmental details can be really fun to work on, but also serve as beautiful additions to help enrich a story.” “I’ve always loved the physicality of working with an LED volume, both for the immersion it provides and the way that seeing the environment helps shape an actor’s performance. But for A Minecraft Movie, we used Simulcam instead, and it was an incredible experience to live-composite an entire Minecraft world in real-time, especially with nothing on set but blue curtains.” —Laura Bell, Creative Technologist, Disguise Among the buildings that had to be created for Midport Village was Steve’s (Jack Black) Lava Chicken Shack. Concept art was provided that served as visual touchstones. “We received concept art provided by the amazing team of concept artists,” Finlayson states. “Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging. Sometimes we would also help the storyboard artists by sending through images of the Unreal Engine worlds to help them geographically position themselves in the worlds and aid in their storyboarding.” At times, the video game assets came in handy. “Exteriors often involved large-scale landscapes and stylized architectural elements, which had to feel true to the Minecraft world,” Finlayson explains. “In some cases, we brought in geometry from the game itself to help quickly block out areas. For example, we did this for the Elytra Flight Chase sequence, which takes place through a large canyon.” Flexibility was critical. 
“A key technical challenge we faced was ensuring that the Unreal levels were built in a way that allowed for fast and flexible iteration,” Finlayson remarks. “Since our environments were constantly being reviewed by the director, production designer, DP and VFX supervisor, we needed to be able to respond quickly to feedback, sometimes live during a review session. To support this, we had to keep our scenes modular and well-organized; that meant breaking environments down into manageable components and maintaining clean naming conventions. By setting up the levels this way, we could make layout changes, swap assets or adjust lighting on the fly without breaking the scene or slowing down the process.” Production schedules influence the workflows, pipelines and techniques. “No two projects will ever feel exactly the same,” Bell notes. “For example, Pat Younis [VAD Art Director] adapted his typical VR setup to allow scene reviews using a PS5 controller, which made it much more comfortable and accessible for the director. On a more technical side, because everything was cubes and voxels, my Blender workflow ended up being way heavier on the re-mesh modifier than usual, definitely not something I’ll run into again anytime soon!” A virtual study and final still of the cast members standing outside of the Lava Chicken Shack. “We received concept art provided by the amazing team of concept artists. Not only did they send us 2D artwork, but they often shared the 3D models they used to create those visuals. These models were incredibly helpful as starting points when building out the virtual environments in Unreal Engine; they gave us a clear sense of composition and design intent. Storyboards were also a key part of the process and were constantly being updated as the project evolved. 
Having access to the latest versions allowed us to tailor the virtual environments to match camera angles, story beats and staging.” —Talia Finlayson, Creative Technologist, Disguise The design and composition of virtual environments tended to remain consistent throughout principal photography. “The only major design change I can recall was the removal of a second story from a building in Midport Village to allow the camera crane to get a clear shot of the chicken perched above Steve’s lava chicken shack,” Finlayson remarks. “I would agree that Midport Village likely went through the most iterations,” Bell responds. “The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled. I remember rebuilding the stairs leading up to the rampart five or six times, using different configurations based on the physically constructed stairs. This was because there were storyboarded sequences of the film’s characters, Henry, Steve and Garrett, being chased by piglins, and the action needed to match what could be achieved practically on set.” Virtually conceptualizing the layout of Midport Village. Complex virtual environments were constructed for the final battle and the various forest scenes throughout the movie. “What made these particularly challenging was the way physical set pieces were repurposed and repositioned to serve multiple scenes and locations within the story,” Finlayson reveals. “The same built elements had to appear in different parts of the world, so we had to carefully adjust the virtual environments to accommodate those different positions.” Bell is in agreement with her colleague. “The forest scenes were some of the more complex environments to manage. It could get tricky, particularly when the filming schedule shifted. 
There was one day on set where the order of shots changed unexpectedly, and because the physical sets looked so similar, I initially loaded a different perspective than planned. Fortunately, thanks to our workflow, Lindsay George [VP Tech] and I were able to quickly open the recorded sequence in Unreal Engine and swap out the correct virtual environment for the live composite without any disruption to the shoot.” An example of the virtual and final version of the Woodland Mansion. “Midport Village likely went through the most iterations. The archway, in particular, became a visual anchor across different levels. We often placed it off in the distance to help orient both ourselves and the audience and show how far the characters had traveled.” —Laura Bell, Creative Technologist, Disguise Extensive detail was given to the center of the sets where the main action unfolds. “For these areas, we received prop layouts from the prop department to ensure accurate placement and alignment with the physical builds,” Finlayson explains. “These central environments were used heavily for storyboarding, blocking and department reviews, so precision was essential. As we moved further out from the practical set, the environments became more about blocking and spatial context rather than fine detail. We worked closely with Production Designer Grant Major to get approval on these extended environments, making sure they aligned with the overall visual direction. We also used creatures and crowd stand-ins provided by the visual effects team. These gave a great sense of scale and placement during early planning stages and allowed other departments to better understand how these elements would be integrated into the scenes.” Cast members Sebastian Hansen, Danielle Brooks and Emma Myers stand in front of the Earth Portal Plateau environment. Doing a virtual scale study of the Mountainside. 
Practical requirements like camera moves, stunt choreography and crane setups had an impact on the creation of virtual environments. “Sometimes we would adjust layouts slightly to open up areas for tracking shots or rework spaces to accommodate key action beats, all while keeping the environment feeling cohesive and true to the Minecraft world,” Bell states. “Simulcam bridged the physical and virtual worlds on set, overlaying Unreal Engine environments onto live-action scenes in real-time, giving the director, DP and other department heads a fully-realized preview of shots and enabling precise, informed decisions during production. It also recorded critical production data like camera movement paths, which was handed over to the post-production team to give them the exact tracks they needed, streamlining the visual effects pipeline.” Piglins cause mayhem during the Wingsuit Chase. Virtual versions of the exterior and interior of the Safe House located in the Enchanted Woods. “One of the biggest challenges for me was managing constant iteration while keeping our environments clean, organized and easy to update,” Finlayson notes. “Because the virtual sets were reviewed regularly by the director and other heads of departments, feedback was often implemented live in the room. This meant the environments had to be flexible. But overall, this was an amazing project to work on, and I am so grateful for the incredible VAD team I was a part of – Heide Nichols [VAD Supervisor], Pat Younis, Jake Tuck [Unreal Artist] and Laura. Everyone on this team worked so collaboratively, seamlessly and in such a supportive way that I never felt like I was out of my depth.” There was another challenge that is more to do with familiarity. “Having a VAD on a film is still a relatively new process in production,” Bell states. “There were moments where other departments were still learning what we did and how to best work with us. That said, the response was overwhelmingly positive. 
I remember being on set at the Simulcam station and seeing how excited people were to look at the virtual environments as they walked by, often stopping for a chat and a virtual tour. Instead of seeing just a huge blue curtain, they were stoked to see something Minecraft and could get a better sense of what they were actually shooting.”
  • Best Sectional Sofas Under $1000 That Look Way More Expensive

    Finding the perfect sectional that looks luxe but doesn’t blow the budget? Easier said than done. But good news—these stunning sofas clock in under $1000 and still bring the designer vibes. Think deep seats, boucle textures, cloud-like comfort, and silhouettes that steal the spotlight. Whether you’re after something cozy for movie nights or statement-worthy for your living room, these picks overdeliver without overspending.

    Curry Velvet Cloud Sectional

    Buy on Amazon

    Sleek, warm, and plush—this curry-hued modular sofa feels straight out of a luxe loft. With a deep-seat double-layer cushion and movable ottoman, it’s made for lounging in style. The mustard-curry color means it can create a head-turning statement in any space.

    Nargis Lamb Wool Sectional with Chaise

    Buy on Wayfair

    This one screams cozy minimalism. The ribbed texture, ivory lamb wool fabric, and boxy silhouette make it feel far more expensive than it is. Perfect for modern, neutral-toned spaces. And equally amazing for lounging on a comfy piece!

    U-Shaped Boucle Cloud Sectional

    Buy on Wayfair

    All the cloud couch vibes for less. The tufted seats, soft boucle fabric, and 5-seater layout make this one a crowd-pleaser for big families or binge-watchers. Such cloud pieces are actually priced much higher, so this one’s a steal for folks looking to add a designer-like piece at a competitive price.
    WWW.HOME-DESIGNING.COM
  • IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029


    By John P. Mello Jr.
    June 11, 2025 5:00 AM PT

    IBM unveiled its plan to build IBM Quantum Starling, shown in this rendering. Starling is expected to be the first large-scale, fault-tolerant quantum system.

    IBM revealed Tuesday its roadmap for bringing a large-scale, fault-tolerant quantum computer, IBM Quantum Starling, online by 2029, which is significantly earlier than many technologists thought possible.
    The company predicts that when its new Starling computer is up and running, it will be capable of performing 20,000 times more operations than today’s quantum computers — a computational state so vast it would require the memory of more than a quindecillion of the world’s most powerful supercomputers to represent.
    “IBM is charting the next frontier in quantum computing,” Big Blue CEO Arvind Krishna said in a statement. “Our expertise across mathematics, physics, and engineering is paving the way for a large-scale, fault-tolerant quantum computer — one that will solve real-world challenges and unlock immense possibilities for business.”
    IBM’s plan to deliver a fault-tolerant quantum system by 2029 is ambitious but not implausible, especially given the rapid pace of its quantum roadmap and past milestones, observed Ensar Seker, CISO at SOCRadar, a threat intelligence company in Newark, Del.
    “They’ve consistently met or exceeded their qubit scaling goals, and their emphasis on modularity and error correction indicates they’re tackling the right challenges,” he told TechNewsWorld. “However, moving from thousands to millions of physical qubits with sufficient fidelity remains a steep climb.”
    A qubit is the fundamental unit of information in quantum computing, capable of representing a zero, a one, or both simultaneously due to quantum superposition. In practice, fault-tolerant quantum computers use clusters of physical qubits working together to form a logical qubit — a more stable unit designed to store quantum information and correct errors in real time.
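    The amplitude picture behind that definition can be sketched in a few lines. This is a toy numerical illustration (not code from IBM or any quantum SDK): a single qubit is modeled as two complex amplitudes, and the squared magnitudes give the probabilities of measuring 0 or 1.

```python
import math

# Toy model: a qubit state is a pair of complex amplitudes (alpha, beta)
# with |alpha|^2 + |beta|^2 = 1. Measurement yields 0 with probability
# |alpha|^2 and 1 with probability |beta|^2.
def measurement_probabilities(alpha: complex, beta: complex) -> tuple:
    p0 = abs(alpha) ** 2
    p1 = abs(beta) ** 2
    return p0, p1

# An equal superposition: the qubit is "both 0 and 1" until measured.
alpha = beta = 1 / math.sqrt(2)
p0, p1 = measurement_probabilities(alpha, beta)
```

    Here both outcomes come out at probability 0.5, which is the sense in which a qubit in superposition represents zero and one simultaneously.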
    Realistic Roadmap
    Luke Yang, an equity analyst with Morningstar Research Services in Chicago, believes IBM’s roadmap is realistic. “The exact scale and error correction performance might still change between now and 2029, but overall, the goal is reasonable,” he told TechNewsWorld.
    “Given its reliability and professionalism, IBM’s bold claim should be taken seriously,” said Enrique Solano, co-CEO and co-founder of Kipu Quantum, a quantum algorithm company with offices in Berlin and Karlsruhe, Germany.
    “Of course, it may also fail, especially when considering the unpredictability of hardware complexities involved,” he told TechNewsWorld, “but companies like IBM exist for such challenges, and we should all be positively impressed by its current achievements and promised technological roadmap.”
    Tim Hollebeek, vice president of industry standards at DigiCert, a global digital security company, added: “IBM is a leader in this area, and not normally a company that hypes their news. This is a fast-moving industry, and success is certainly possible.”
    “IBM is attempting to do something that no one has ever done before and will almost certainly run into challenges,” he told TechNewsWorld, “but at this point, it is largely an engineering scaling exercise, not a research project.”
    “IBM has demonstrated consistent progress, has committed billion over five years to quantum computing, and the timeline is within the realm of technical feasibility,” noted John Young, COO of Quantum eMotion, a developer of quantum random number generator technology, in Saint-Laurent, Quebec, Canada.
    “That said,” he told TechNewsWorld, “fault-tolerant in a practical, industrial sense is a very high bar.”
    Solving the Quantum Error Correction Puzzle
    To make a quantum computer fault-tolerant, errors need to be corrected so large workloads can be run without faults. In a quantum computer, errors are reduced by clustering physical qubits to form logical qubits, which have lower error rates than the underlying physical qubits.
    “Error correction is a challenge,” Young said. “Logical qubits require thousands of physical qubits to function reliably. That’s a massive scaling issue.”
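The trade-off Young describes can be sketched numerically. A commonly cited heuristic for distance-d error-correcting codes is that the logical error rate falls exponentially in d once physical qubits are below a threshold error rate; the constants below are illustrative placeholders, not IBM's figures:

```python
# Illustrative error-suppression model: a common heuristic is
#   p_logical ~ A * (p_phys / p_threshold) ** ((d + 1) // 2)
# where d is the code distance (larger d = more physical qubits per logical qubit).
# A and p_threshold here are assumed values for illustration only.
def logical_error_rate(p_phys, d, p_threshold=1e-2, A=0.1):
    return A * (p_phys / p_threshold) ** ((d + 1) // 2)

p_phys = 1e-3  # assumed physical error rate, below threshold
for d in (3, 11, 25):
    print(d, logical_error_rate(p_phys, d))
```

Each increase in distance multiplies the qubit count but divides the logical error rate by orders of magnitude, which is why useful machines need thousands of physical qubits per logical qubit yet can, in principle, run very long computations.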
    IBM explained in its announcement that creating increasing numbers of logical qubits capable of executing quantum circuits with as few physical qubits as possible is critical to quantum computing at scale. Until today, a clear path to building such a fault-tolerant system without unrealistic engineering overhead has not been published.

    Alternative and previous gold-standard, error-correcting codes present fundamental engineering challenges, IBM continued. To scale, they would require an unfeasible number of physical qubits to create enough logical qubits to perform complex operations — necessitating impractical amounts of infrastructure and control electronics. This renders them unlikely to be implemented beyond small-scale experiments and devices.
    In two research papers released with its roadmap, IBM detailed how it will overcome the challenges of building the large-scale, fault-tolerant architecture needed for a quantum computer.
    One paper outlines the use of quantum low-density parity check (qLDPC) codes to reduce physical qubit overhead. The other describes methods for decoding errors in real time using conventional computing.
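The overhead reduction from qLDPC codes comes down to simple arithmetic. The figures below are approximate values reported publicly for IBM's qLDPC work and for standard surface codes; treat them as ballpark numbers for illustration:

```python
# Rough overhead comparison between code families (approximate figures).
# Surface code: roughly 2*d**2 physical qubits (data + check) per logical qubit.
d = 12
surface_per_logical = 2 * d * d   # 288 physical qubits for ONE logical qubit

# IBM's reported qLDPC "bivariate bicycle" code encodes 12 logical qubits
# in 144 data + 144 check = 288 physical qubits at comparable distance.
qldpc_per_logical = (144 + 144) / 12   # 24 physical qubits per logical qubit

print(surface_per_logical / qldpc_per_logical)  # ~12x fewer physical qubits
```

That order-of-magnitude saving in qubits, wiring, and control electronics is what IBM argues makes a large-scale machine an engineering project rather than an impossibility.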
    According to IBM, a practical fault-tolerant quantum architecture must:

    Suppress enough errors for useful algorithms to succeed
    Prepare and measure logical qubits during computation
    Apply universal instructions to logical qubits
    Decode measurements from logical qubits in real time and guide subsequent operations
    Scale modularly across hundreds or thousands of logical qubits
    Be efficient enough to run meaningful algorithms using realistic energy and infrastructure resources
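The real-time decoding requirement in the list above can be illustrated with the simplest possible code: a 3-bit repetition code decoded by majority vote. Real fault-tolerant decoders (the subject of IBM's second paper) process continuous syndrome streams under tight latency budgets; this toy only shows why redundancy plus fast classical decoding suppresses errors:

```python
import random

# Toy repetition code: protect one classical bit against independent
# bit-flips by sending three copies and taking a majority vote.
def transmit(bit, p_flip=0.1):
    # Each copy is flipped independently with probability p_flip.
    return [bit ^ (random.random() < p_flip) for _ in range(3)]

def decode(copies):
    # Majority vote: the "decoder" guesses the value most copies agree on.
    return int(sum(copies) >= 2)

random.seed(0)
trials = 100_000
errors = sum(decode(transmit(0)) != 0 for _ in range(trials))
# Unprotected error rate: p = 0.1.
# Protected: 3*p**2 - 2*p**3, analytically about 0.028 for p = 0.1.
print(errors / trials)
```

Decoding fails only when two or more copies flip, so the error rate drops from p to roughly 3p²; quantum codes achieve an analogous suppression, but the decoder must keep pace with the hardware in real time.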

    Aside from the technological challenges that quantum computer makers are facing, there may also be some market challenges. “Locating suitable use cases for quantum computers could be the biggest challenge,” Morningstar’s Yang maintained.
    “Only certain computing workloads, such as random circuit sampling [RCS], can fully unleash the computing power of quantum computers and show their advantage over the traditional supercomputers we have now,” he said. “However, workloads like RCS are not very commercially useful, and we believe commercial relevance is one of the key factors that determine the total market size for quantum computers.”
    Q-Day Approaching Faster Than Expected
    For years now, organizations have been told they need to prepare for “Q-Day” — the day a quantum computer will be able to crack all the encryption they use to keep their data secure. This IBM announcement suggests the window for action to protect data may be closing faster than many anticipated.
    “This absolutely adds urgency and credibility to the security expert guidance on post-quantum encryption being factored into their planning now,” said Dave Krauthamer, field CTO of QuSecure, maker of quantum-safe security solutions, in San Mateo, Calif.
    “IBM’s move to create a large-scale fault-tolerant quantum computer by 2029 is indicative of the timeline collapsing,” he told TechNewsWorld. “A fault-tolerant quantum computer of this magnitude could be well on the path to crack asymmetric ciphers sooner than anyone thinks.”

    “Security leaders need to take everything connected to post-quantum encryption as a serious measure and work it into their security plans now — not later,” he said.
    Roger Grimes, a defense evangelist with KnowBe4, a security awareness training provider in Clearwater, Fla., pointed out that IBM is just the latest in a surge of quantum companies announcing computational breakthroughs expected within the next few years.
    “It leads to the question of whether the U.S. government’s original PQC [post-quantum cryptography] preparation date of 2030 is still a safe date,” he told TechNewsWorld.
    “It’s starting to feel a lot more risky for any company to wait until 2030 to be prepared against quantum attacks. It also flies in the face of the latest cybersecurity EO [executive order] that relaxed PQC preparation rules as compared to Biden’s last EO on PQC standards, which told U.S. agencies to transition to PQC ASAP.”
    “Most U.S. companies are doing zero to prepare for Q-Day attacks,” he declared. “The latest executive order seems to tell U.S. agencies — and indirectly, all U.S. businesses — that they have more time to prepare. It’s going to cause even more agencies and businesses to be less prepared during a time when it seems multiple quantum computing companies are making significant progress.”
    “It definitely feels that something is going to give soon,” he said, “and if I were a betting man, and I am, I would bet that most U.S. companies are going to be unprepared for Q-Day on the day Q-Day becomes a reality.”

    John P. Mello Jr. has been an ECT News Network reporter since 2003. His areas of focus include cybersecurity, IT issues, privacy, e-commerce, social media, artificial intelligence, big data and consumer electronics. He has written and edited for numerous publications, including the Boston Business Journal, the Boston Phoenix, Megapixel.Net and Government Security News. Email John.

    Source: WWW.TECHNEWSWORLD.COM, “IBM Plans Large-Scale Fault-Tolerant Quantum Computer by 2029”
  • Sienna Net-Zero Home / billionBricks

    Houses, Sustainability • Quezon City, Philippines

    Architects: billionBricks
    Area: 45 m²
    Year: 2024
    Photographs: Ron Mendoza, Mark Twain C, BB team
    Text description provided by the architects. Built to address homelessness and climate change, the Sienna Net-Zero Home is a self-sustaining, solar-powered, cost-efficient, and compact housing solution. This climate-responsive and affordable home, located in Quezon City, Philippines, represents a revolutionary vision for social housing through its integration of thoughtful design, sustainability, and energy self-sufficiency.

    Designed with the unique tropical climate of the Philippines in mind, the Sienna Home prioritizes natural ventilation, passive cooling, and rainwater management to enhance indoor comfort and reduce reliance on artificial cooling systems. The compact 4.5 m x 5.1 m floor plan has been meticulously optimized for functionality, offering a flexible layout that grows and adapts with the families living in it.

    A key architectural feature is billionBricks' innovative Powershade technology: an advanced solar roofing system that serves multiple purposes. Beyond generating clean, renewable energy, it acts as a protective heat barrier, reducing indoor temperatures and improving thermal comfort. Unlike conventional solar panels, Powershade seamlessly integrates with the home's structure, providing reliable energy generation while doubling as a durable roof. This makes the Sienna Home energy-positive, meaning it produces more electricity than it consumes, lowering utility costs and promoting long-term energy independence. Excess power can also be stored or sold back to the grid, creating an additional financial benefit for homeowners.

    When multiple Sienna Homes are built together, the Powershade roofing solution transcends its role as an individual energy source and becomes a utility-scale solar rooftop farm, capable of powering essential community facilities and generating additional income. This shared energy infrastructure fosters a sense of collective empowerment, enabling residents to actively participate in a sustainable and financially rewarding energy ecosystem.

    The Sienna Home is built using lightweight prefabricated components, allowing for rapid on-site assembly while maintaining durability and structural integrity. This modular approach enables scalability, making it an ideal prototype for large-scale, cost-effective housing developments. The design also allows for future expansions, giving homeowners the flexibility to adapt their living spaces over time.

    Adhering to BP 220 social housing regulations, the unit features a 3-meter front setback and a 2-meter rear setback, ensuring proper ventilation, safety, and community-friendly spaces. Additionally, corner units include a 1.5-meter offset, enhancing privacy and accessibility within neighborhood layouts. Beyond providing a single-family residence, the Sienna Home is designed to function within a larger sustainable community model, integrating shared green spaces, pedestrian pathways, and decentralized utilities. By promoting energy independence and environmental resilience, the project sets a new precedent for affordable yet high-quality housing solutions in rapidly urbanizing regions.

    The Sienna Home in Quezon City serves as a blueprint for future developments, proving that low-cost housing can be both architecturally compelling and socially transformative. By rethinking traditional housing models, billionBricks is pioneering a future where affordability and sustainability are seamlessly integrated.
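The "energy-positive" claim is, at bottom, a daily energy balance: roof generation must exceed household consumption. The article gives no performance figures, so every number below is a hypothetical placeholder chosen only to show the shape of the calculation:

```python
# Back-of-envelope net-positive check with HYPOTHETICAL numbers.
# The project publishes no performance data; these values are placeholders.
roof_area_m2 = 4.5 * 5.1        # footprint from the stated floor plan
stc_irradiance_kw_m2 = 1.0      # standard test irradiance (1 kW/m^2)
panel_efficiency = 0.18         # assumed module efficiency
peak_sun_hours = 4.5            # assumed daily peak-sun hours for Metro Manila

daily_generation_kwh = roof_area_m2 * stc_irradiance_kw_m2 * panel_efficiency * peak_sun_hours
daily_consumption_kwh = 6.0     # assumed modest household load

surplus_kwh = daily_generation_kwh - daily_consumption_kwh
print(round(daily_generation_kwh, 1), round(surplus_kwh, 1))
```

Under these assumed inputs the roof generates more than the home consumes, and the surplus is what could be stored, sold to the grid, or pooled across a cluster of homes as described above.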

    Published on June 15, 2025. Cite: "Sienna Net-Zero Home / billionBricks" 14 Jun 2025. ArchDaily. ISSN 0719-8884
    #sienna #netzero #home #billionbricks
    Sienna Net-Zero Home / billionBricks
    Houses, Sustainability • Quezon City, Philippines
    Architects: billionBricks
    Area: 45 m²
    Year: 2024
    Photographs: © Ron Mendoza, Mark Twain C, BB team

    Text description provided by the architects. Built to address homelessness and climate change, the Sienna Net-Zero Home is a self-sustaining, solar-powered, cost-efficient, and compact housing solution. This climate-responsive and affordable home, located in Quezon City, Philippines, represents a revolutionary vision for social housing through its integration of thoughtful design, sustainability, and energy self-sufficiency.

    Designed with the unique tropical climate of the Philippines in mind, the Sienna Home prioritizes natural ventilation, passive cooling, and rainwater management to enhance indoor comfort and reduce reliance on artificial cooling systems. The compact 4.5 m x 5.1 m floor plan has been meticulously optimized for functionality, offering a flexible layout that grows and adapts with the families living in it.

    A key architectural feature is billionBricks' innovative PowerShade technology, an advanced solar roofing system that serves multiple purposes. Beyond generating clean, renewable energy, it acts as a protective heat barrier, reducing indoor temperatures and improving thermal comfort. Unlike conventional solar panels, PowerShade integrates seamlessly with the home's structure, providing reliable energy generation while doubling as a durable roof. This makes the Sienna Home energy-positive, meaning it produces more electricity than it consumes, lowering utility costs and promoting long-term energy independence. Excess power can also be stored or sold back to the grid, creating an additional financial benefit for homeowners.

    When multiple Sienna Homes are built together, the PowerShade roofing solution transcends its role as an individual energy source and becomes a utility-scale solar rooftop farm, capable of powering essential community facilities and generating additional income. This shared energy infrastructure fosters a sense of collective empowerment, enabling residents to participate actively in a sustainable and financially rewarding energy ecosystem.

    The Sienna Home is built from lightweight prefabricated components, allowing rapid on-site assembly while maintaining durability and structural integrity. This modular approach enables scalability, making it an ideal prototype for large-scale, cost-effective housing developments. The design also allows for future expansions, giving homeowners the flexibility to adapt their living spaces over time.

    Adhering to BP 220 social housing regulations, the unit features a 3-meter front setback and a 2-meter rear setback, ensuring proper ventilation, safety, and community-friendly spaces. Additionally, corner units include a 1.5-meter offset, enhancing privacy and accessibility within neighborhood layouts. Beyond providing a single-family residence, the Sienna Home is designed to function within a larger sustainable community model, integrating shared green spaces, pedestrian pathways, and decentralized utilities. By promoting energy independence and environmental resilience, the project sets a new precedent for affordable yet high-quality housing solutions in rapidly urbanizing regions.

    The Sienna Home in Quezon City serves as a blueprint for future developments, proving that low-cost housing can be both architecturally compelling and socially transformative. By rethinking traditional housing models, billionBricks is pioneering a future where affordability and sustainability are seamlessly integrated.

    Published on June 15, 2025. Cite: "Sienna Net-Zero Home / billionBricks" 14 Jun 2025. ArchDaily. ISSN 0719-8884
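    The "energy-positive" claim above is a simple balance: generation minus consumption, with any surplus stored or exported. A minimal sketch of that arithmetic, using made-up placeholder figures (not billionBricks data):

    ```python
    # Illustrative net-energy balance for a small solar-roofed home.
    # The kWh figures below are invented placeholders, not project data.

    def net_energy_kwh(generated_kwh: float, consumed_kwh: float) -> float:
        """Positive result = surplus that can be stored or exported to the grid."""
        return generated_kwh - consumed_kwh

    # Hypothetical month: the roof generates 320 kWh, the household uses 250 kWh.
    surplus = net_energy_kwh(320.0, 250.0)
    print(f"Monthly surplus: {surplus:.0f} kWh")  # surplus available to store or sell
    ```

    An energy-positive home is simply one where this balance stays above zero over a billing period.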
    WWW.ARCHDAILY.COM
  • 432 Park Avenue by Rafael Viñoly Architects: Minimalism in the New York Skyline

    432 Park Avenue | © Halkin Mason Photography, Courtesy of Rafael Viñoly Architects
    Located in Midtown Manhattan, 432 Park Avenue is a prominent figure in the evolution of supertall residential towers. Completed in 2015, this 1,396-foot-high building by Rafael Viñoly Architects asserts a commanding presence over the city’s skyline. Its minimalist form and rigorous geometry have sparked considerable debate within the architectural community, marking it as a significant and controversial addition to New York City’s built environment.

    432 Park Avenue Technical Information

    Architects: Rafael Viñoly Architects
    Location: Midtown Manhattan, New York City, USA
    Gross Area: 38,344 m² | 412,637 sq ft
    Project Years: 2011 – 2015
    Photographs: © Halkin Mason Photography, Courtesy of Rafael Viñoly Architects

    It’s a building designed for the enjoyment of its occupants, not for the delight of its creator.
    – Rafael Viñoly

    432 Park Avenue Photographs

    © Halkin Mason Photography, Courtesy of Rafael Viñoly Architects

    Courtesy of Rafael Viñoly Architects

    Courtesy of Rafael Viñoly Architects

    Courtesy of Rafael Viñoly Architects

    Courtesy of Rafael Viñoly Architects
    Design Intent and Conceptual Framework
    At the heart of 432 Park Avenue’s design lies a commitment to pure geometry. The square, an elemental form, defines every aspect of the building, from its floor plate to its overall silhouette. This strict adherence to geometry speaks to Viñoly’s rationalist sensibilities and interest in stripping architecture to its fundamental components. The tower’s proportions, with its height-to-width ratio of roughly 1:15, transform this simple geometry into a monumental presence. This conceptual rigor positions the building as an object of formal clarity and a deliberate statement within the city’s varied skyline.
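    The quoted ~1:15 height-to-width ratio can be sanity-checked from the tower's dimensions. Note that only the 1,396 ft height appears in this article; the 93 ft plan width used below is an assumption based on the commonly cited square footprint:

    ```python
    # Rough check of the ~1:15 height-to-width ratio quoted above.
    # height_ft comes from the article; width_ft is an assumed footprint edge
    # (the widely reported 93 ft square plan), not a figure from this text.

    height_ft = 1396.0
    width_ft = 93.0  # assumed

    slenderness = height_ft / width_ft
    print(f"Height-to-width ratio ≈ 1:{slenderness:.0f}")
    ```

    Under that assumed width, the ratio works out to roughly 1:15, matching the article's figure.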
    The design’s minimalism extends beyond the building’s shape, reflecting Viñoly’s pursuit of a refined and disciplined expression. Eschewing decorative flourishes, the tower’s form directly responds to programmatic needs and structural imperatives. This disciplined approach underpins the project’s ambition to redefine the experience of vertical living, asserting that luxury in residential design can emerge from formal simplicity and a mastery of proportion.
    Spatial Organization and Interior Volumes
    The interior organization of 432 Park Avenue reveals an equally uncompromising commitment to clarity and openness. Each residential floor is free of interior columns, a testament to the structural ingenuity of the concrete exoskeleton. This column-free arrangement grants unobstructed floor plans and expansive panoramic views of the city, the rivers, and beyond. Floor-to-ceiling windows, measuring nearly 10 feet in height, accentuate the sense of openness and lightness within each residence.
    The tower’s slender core houses the vertical circulation and mechanical systems, ensuring the perimeter remains uninterrupted. This core placement allows for generous living spaces that maximize privacy and connection to the urban landscape. The interplay between structural precision and panoramic transparency shapes the experience of inhabiting these spaces. The result is a sequence of interiors that privilege intimacy and vastness, anchoring domestic life within an architectural expression of purity.
    Materiality, Structural Clarity, and Detailing
    Material choices in 432 Park Avenue reinforce the project’s disciplined approach. The building’s exposed concrete frame, treated as structure and façade, lends the tower a stark yet refined character. The grid of square windows, systematically repeated across the height of the building, becomes a defining feature of its visual identity. This modular repetition establishes a rhythmic order and speaks to the building’s underlying structural logic.
    High-strength concrete enables the tower’s slender profile and exceptional height while imparting a tactile materiality that resists the glassy anonymity typical of many contemporary towers. The restrained palette and attention to detail emphasize the tectonic clarity of the building’s assembly. By treating the structure itself as an architectural finish, Viñoly’s design elevates the material expression of concrete into a fundamental element of the building’s identity.
    Urban and Cultural Significance
    As one of the tallest residential buildings in the Western Hemisphere, 432 Park Avenue has significantly altered the Manhattan skyline. Its unwavering verticality and minimal ornamentation create a dialogue with the city’s diverse architectural heritage, juxtaposing a severe abstraction against a backdrop of historic and contemporary towers.
    432 Park Avenue occupies a distinctive place in the ongoing narrative of New York City’s architectural evolution. Its reductive form, structural clarity, and spatial generosity offer a compelling study of the power of minimalism at an urban scale.
    432 Park Avenue Plans

    Floor Plans | © Rafael Viñoly Architects

    Floor Plans | © Rafael Viñoly Architects

    Floor Plans | © Rafael Viñoly Architects

    Floor Plans | © Rafael Viñoly Architects
    432 Park Avenue Image Gallery

    © Rafael Viñoly Architects

    About Rafael Viñoly Architects
    Rafael Viñoly (1944–2023), a Uruguayan-born architect, founded Rafael Viñoly Architects in New York City in 1983. After studies in Buenos Aires and early practice in Argentina, he relocated to the U.S. and built a global firm with offices in cities including London, Palo Alto, and Abu Dhabi. Renowned for large-scale, function-driven projects such as the Tokyo International Forum, the Cleveland Museum of Art expansion, and 432 Park Avenue, the firm is praised for combining structural clarity, context-sensitive design, and institutional rigor across six continents.
    Credits and Additional Notes

    Client: Macklowe Properties and CIM Group
    Design Team: Rafael Viñoly (Architect), Deborah Berke Partners (Interior Design of residential units), Bentel & Bentel (Amenity Spaces Design)
    Structural Engineer: WSP Cantor Seinuk
    Mechanical, Electrical, and Plumbing Engineers: Jaros, Baum & Bolles (JB&B)
    Construction Manager: Lendlease
    Height: 1,396 feet (425.5 meters)
    Number of Floors: 96 stories
    Construction Years: 2011–2015
    #park #avenue #rafael #viñoly #architects
    ARCHEYES.COM
  • Znamy sie completes a coastal-inspired patisserie in Warsaw

    Japanese architect Shigeru Ban has created the Blue Ocean Dome for the Osaka-Kansai Expo 2025, addressing the urgent issue of marine plastic pollution and raising crucial awareness about it. The pavilion stands out with its innovative design, comprising three distinct dome types: Dome A, Dome B, and Dome C. Each dome is specifically crafted to host captivating installations and dynamic exhibitions, promising an unforgettable experience for all visitors throughout the event.

    Image © Taiki Fukao

    The project was commissioned by Zero Emissions Research and Initiatives, a global network of creative minds seeking solutions to the ever-increasing problems of the world. Rather than outright rejecting plastic, the pavilion inspires deep reflection on how we use and manage materials, highlighting our critical responsibility to make sustainable choices for the future. The Blue Ocean Dome merges traditional and modern materials, like bamboo, paper, and carbon fiber reinforced plastic, to unlock new and innovative architectural possibilities.

    Dome A, serving as the striking entrance, is expertly crafted from laminated bamboo. This innovative design not only showcases the beauty of bamboo but also tackles the pressing issue of abandoned bamboo groves in Japan, which pose a risk to land stability due to their shallow root systems. Utilizing raw bamboo for structural purposes is often difficult; however, through advanced processing, it is transformed into thin, laminated boards that boast strength even greater than that of conventional wood. These boards have been skillfully fashioned into a remarkable 19-meter dome, drawing inspiration from traditional Japanese bamboo hats. This project brilliantly turns an environmental challenge into a sustainable architectural solution, highlighting the potential of bamboo as a valuable resource.

    Dome B stands as the central and largest structure of its kind, boasting a remarkable diameter of 42 meters. It is primarily constructed from Carbon Fiber Reinforced Polymer (CFRP), a cutting-edge material revered for its extraordinary strength-to-weight ratio: four times stronger than steel yet only one-fifth the weight. While CFRP is predominantly seen in industries such as aerospace and automotive due to its high cost, its application in architecture is pioneering. In this project, the choice of CFRP was not just advantageous; it was essential. The primary goal was to minimize the foundation weight on the reclaimed land of the Expo site, making sustainability a top priority. To mitigate the environmental consequences of deep foundation piles, the structure had to be lighter than the soil excavated for its foundation. CFRP not only met this stringent requirement but also ensured the dome's structural integrity, showcasing a perfect marriage of innovation and environmental responsibility.

    Dome C, with its impressive 19-meter diameter, is crafted entirely from paper tubes that are 100% recyclable after use. Its innovative design features a three-dimensional truss structure, connected by elegant wooden spheres, evoking the beauty of molecular structures.

    To champion sustainability and minimize waste following the six-month Expo, the entire Blue Ocean Dome pavilion has been meticulously designed for effortless disassembly and relocation. It is anchored by a robust steel foundation system and boasts a modular design that allows it to be conveniently packed into standard shipping containers. After the Expo concludes, the pavilion will be transported to the Maldives, where it will be transformed into a resort facility, breathing new life into its design and purpose.

    Recently, Shigeru Ban's Paper Log House was revealed at Philip Johnson's Glass House venue. In addition, Ban installed his Paper Partition Shelters for the victims of the Turkey-Syria earthquake in the Mersin and Hatay provinces of Turkey.

    All images © Hiroyuki Hirai unless otherwise stated.
    > via Shigeru Ban Architects
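    The CFRP claim above ("four times stronger than steel yet only one-fifth the weight") implies a specific-strength advantage that is easy to work out. A quick sketch using those two stated factors; the steel baseline values are generic textbook figures, not project data:

    ```python
    # Specific-strength comparison using the article's stated factors:
    # CFRP ~4x steel's strength at ~1/5 steel's weight.
    # Steel baseline values are generic illustrative figures.

    steel_strength_mpa = 400.0   # generic structural steel tensile strength
    steel_density = 7850.0       # kg/m^3, typical for steel

    cfrp_strength_mpa = steel_strength_mpa * 4   # per the article
    cfrp_density = steel_density / 5             # per the article

    steel_specific = steel_strength_mpa / steel_density
    cfrp_specific = cfrp_strength_mpa / cfrp_density
    print(f"CFRP specific strength ≈ {cfrp_specific / steel_specific:.0f}x steel's")
    ```

    Whatever baseline is chosen, the two stated factors multiply to a roughly twentyfold specific-strength advantage, which is why a lighter-than-excavated-soil structure was feasible at this span.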
    #znamy #sie #completes #coastalinspired #patisserie
    WORLDARCHITECTURE.ORG
Japanese architect Shigeru Ban has created the Blue Ocean Dome (BOD) for the Osaka-Kansai Expo 2025, addressing the urgent issue of marine plastic pollution and raising awareness about it.

The pavilion stands out with its innovative design, comprising three distinct domes: Dome A, Dome B, and Dome C. Each dome is specifically crafted to host installations and dynamic exhibitions throughout the event.

Image © Taiki Fukao

The project was commissioned by the Zero Emissions Research and Initiatives (ZERI), a global network of creative minds seeking solutions to the ever-increasing problems of the world. Rather than outright rejecting plastic, the pavilion invites reflection on how we use and manage materials, highlighting our responsibility to make sustainable choices for the future.

The BOD merges traditional and modern materials, such as bamboo, paper, and carbon fiber reinforced plastic (CFRP), to unlock new architectural possibilities.

Dome A, serving as the striking entrance, is crafted from laminated bamboo. The design not only showcases the beauty of bamboo but also tackles the pressing issue of abandoned bamboo groves in Japan, which pose a risk to land stability due to their shallow root systems. Using raw bamboo for structural purposes is difficult; through advanced processing, however, it is transformed into thin laminated boards that are even stronger than conventional wood. These boards have been fashioned into a 19-meter dome inspired by traditional Japanese bamboo hats.
The project turns an environmental challenge into a sustainable architectural solution, highlighting the potential of bamboo as a valuable resource.

Dome B is the central and largest structure, with a diameter of 42 meters. It is primarily constructed from carbon fiber reinforced polymer (CFRP), a material valued for its extraordinary strength-to-weight ratio: roughly four times stronger than steel at only one-fifth the weight. While CFRP is predominantly used in industries such as aerospace and automotive due to its high cost, its application in architecture is pioneering.

In this project, the choice of CFRP was not just advantageous; it was essential. The primary goal was to minimize the foundation load on the reclaimed land of the Expo site. To mitigate the environmental consequences of deep foundation piles, the structure had to be lighter than the soil excavated for its foundation. CFRP met this stringent requirement while ensuring the dome's structural integrity.

Dome C, with its 19-meter diameter, is built entirely from paper tubes that are 100% recyclable after use. Its design features a three-dimensional truss structure connected by wooden spheres, evoking molecular structures.

To minimize waste after the six-month Expo, the entire BOD pavilion has been designed for easy disassembly and relocation. It is anchored by a steel foundation system, and its modular design allows it to be packed into standard shipping containers.
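The strength-to-weight comparison above can be sanity-checked with generic handbook material properties. The figures below are typical reference values for structural steel and unidirectional CFRP laminate, not data from the project:

```python
# Rough sanity check of the CFRP-vs-steel strength-to-weight claim,
# using typical handbook values (assumptions, not project data).

STEEL_DENSITY = 7850      # kg/m^3, structural steel
STEEL_STRENGTH = 500e6    # Pa, typical tensile strength of structural steel

CFRP_DENSITY = 1600       # kg/m^3, typical CFRP laminate
CFRP_STRENGTH = 2000e6    # Pa, typical unidirectional CFRP tensile strength

weight_ratio = CFRP_DENSITY / STEEL_DENSITY      # ~0.2 -> "one-fifth the weight"
strength_ratio = CFRP_STRENGTH / STEEL_STRENGTH  # ~4   -> "four times stronger"

# Specific strength (strength per unit density) combines both effects:
specific_steel = STEEL_STRENGTH / STEEL_DENSITY
specific_cfrp = CFRP_STRENGTH / CFRP_DENSITY

print(f"weight ratio:   {weight_ratio:.2f}")
print(f"strength ratio: {strength_ratio:.1f}")
print(f"specific strength advantage: {specific_cfrp / specific_steel:.0f}x")
```

With these assumed values, CFRP comes out around twenty times stronger per kilogram than steel, which is why a CFRP dome can satisfy the unusual constraint of weighing less than the soil excavated for its own foundation.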
After the Expo concludes, the pavilion will be transported to the Maldives, where it will be transformed into a resort facility, giving its design a second life.

Recently, Shigeru Ban's Paper Log House was revealed at Philip Johnson's Glass House venue. In addition, Ban installed his Paper Partition Shelters (PPS) for the victims of the Turkey-Syria earthquake in the Mersin and Hatay provinces of Turkey.

All images © Hiroyuki Hirai unless otherwise stated.

via Shigeru Ban Architects