• Animate Faster with V-Ray, Anima & Vantage

    Get started with the V-Ray ArchViz Collection → https://bit.ly/AnimateFaster

    V-Ray delivers world-class renders. But when your projects call for animated people, camera movement, or fast client feedback—traditional workflows can slow you down. In this video, we’ll show you how to combine V-Ray, Anima, and Chaos Vantage to create dynamic, animated scenes—and explore them in real time.

    ---------------------------------------------------------------------------
    Imagine. Design. Believe.
    Chaos provides world-class visualization solutions helping you share ideas, optimize workflows and create immersive experiences. Everything you need to visualize your ideas from start to finish. From architecture and VFX to product design and e-commerce, Chaos empowers creators to bring their projects to life.

    Our industry-leading tools, including V-Ray, Enscape, and Corona, are built for architects, designers, AEC professionals, and CG artists. Whether you’re crafting photorealistic visuals, immersive real-time experiences, or cinematic VFX, Chaos delivers the power and flexibility to render anything.

    Explore Chaos products → https://bit.ly/ExploreChaos
    Learn more & get free tutorials → https://bit.ly/ChaosWebinars
    Subscribe for the latest updates!

    Follow us:
    LinkedIn: https://bit.ly/ChaosLinkedIn
    Instagram: https://bit.ly/ChaosIG
    Facebook: https://bit.ly/Chaos_Facebook
    #Chaos #V-Ray #3DRendering #Visualization
  • ⚡ @ChaosGroup Enscape & @Autodesk VRED just leveled up with DLSS 4! 🚀 Enjoy sharper visuals + faster performance for real-time design & 3D visualization. 🔥 Learn more: https://www.nvidia.com/en-us/geforce/news/125-dlss-4-multi-frame-gen-games-more-announced-computex-2025/#:~:text=Chaos%20Enscape%20%26%20Autodesk,NVIDIA%20Studio
    #chaosgroup #enscape #autodesk #vred
  • NVIDIA and Microsoft Advance Development on RTX AI PCs

    Generative AI is transforming PC software into breakthrough experiences — from digital humans to writing assistants, intelligent agents and creative tools.
    NVIDIA RTX AI PCs are powering this transformation with technology that makes it simpler to get started experimenting with generative AI and unlock greater performance on Windows 11.
    NVIDIA TensorRT has been reimagined for RTX AI PCs, combining industry-leading TensorRT performance with just-in-time, on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs.
    Announced at Microsoft Build, TensorRT for RTX is natively supported by Windows ML — a new inference stack that provides app developers with both broad hardware compatibility and state-of-the-art performance.
    For developers looking for AI features ready to integrate, NVIDIA software development kits (SDKs) offer a wide array of options, from NVIDIA DLSS to multimedia enhancements like NVIDIA RTX Video. This month, top software applications from Autodesk, Bilibili, Chaos, LM Studio and Topaz Labs are releasing updates to unlock RTX AI features and acceleration.
    AI enthusiasts and developers can easily get started with AI using NVIDIA NIM — prepackaged, optimized AI models that can run in popular apps like AnythingLLM, Microsoft VS Code and ComfyUI. Releasing this week, the FLUX.1-schnell image generation model will be available as a NIM microservice, and the popular FLUX.1-dev NIM microservice has been updated to support more RTX GPUs.
    Those looking for a simple, no-code way to dive into AI development can tap into Project G-Assist — the RTX PC AI assistant in the NVIDIA app — to build plug-ins to control PC apps and peripherals using natural language AI. New community plug-ins are now available, including Google Gemini web search, Spotify, Twitch, IFTTT and SignalRGB.
    Accelerated AI Inference With TensorRT for RTX
    Today’s AI PC software stack requires developers to compromise on performance or invest in custom optimizations for specific hardware.
    Windows ML was built to solve these challenges. It is powered by ONNX Runtime and seamlessly connects to an optimized AI execution layer provided and maintained by each hardware manufacturer.
    For GeForce RTX GPUs, Windows ML automatically uses the TensorRT for RTX inference library for high performance and rapid deployment. Compared with DirectML, TensorRT delivers over 50% faster performance for AI workloads on PCs (performance measured on a GeForce RTX 5090).
    Windows ML also delivers quality-of-life benefits for developers. It can automatically select the right hardware — GPU, CPU or NPU — to run each AI feature, and download the execution provider for that hardware, removing the need to package those files into the app. This allows for the latest TensorRT performance optimizations to be delivered to users as soon as they’re ready.
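The hardware-selection behavior described above can be sketched as a simple priority fallback. This is a hedged illustration, not the Windows ML API: the provider strings follow ONNX Runtime conventions (Windows ML is powered by ONNX Runtime), and the hardware tags and the NPU provider name are hypothetical stand-ins for the real hardware probe.

```python
# Sketch of the fallback selection an inference stack performs when
# choosing where to run an AI feature. Provider names follow ONNX
# Runtime conventions; the hardware tags are hypothetical.

# Preference order: fastest available backend first, CPU as the safety net.
PROVIDER_PREFERENCE = [
    ("gpu_rtx", "TensorrtExecutionProvider"),  # GeForce RTX: TensorRT for RTX
    ("gpu",     "DmlExecutionProvider"),       # other GPUs: DirectML
    ("npu",     "NpuExecutionProvider"),       # hypothetical NPU provider
    ("cpu",     "CPUExecutionProvider"),       # always available
]

def pick_execution_provider(available: set[str]) -> str:
    """Return the first preferred provider the machine supports."""
    for hardware, provider in PROVIDER_PREFERENCE:
        if hardware in available:
            return provider
    return "CPUExecutionProvider"

# On a machine with a GeForce RTX GPU, TensorRT for RTX is chosen:
print(pick_execution_provider({"gpu_rtx", "cpu"}))  # TensorrtExecutionProvider
print(pick_execution_provider({"cpu"}))             # CPUExecutionProvider
```

The point of the pattern is that the app never hard-codes a backend; the stack probes the machine and picks the best match, falling back to CPU when nothing faster is present.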
    TensorRT, a library originally built for data centers, has been redesigned for RTX AI PCs. Instead of pre-generating TensorRT engines and packaging them with the app, TensorRT for RTX uses just-in-time, on-device engine building to optimize how the AI model is run for the user’s specific RTX GPU in mere seconds. And the library’s packaging has been streamlined, reducing its file size by 8x.
    TensorRT for RTX is available to developers through the Windows ML preview today, and will be available as a standalone SDK at NVIDIA Developer in June.
    Developers can learn more in the TensorRT for RTX launch blog or Microsoft’s Windows ML blog.
    Expanding the AI Ecosystem on Windows 11 PCs
    Developers looking to add AI features or boost app performance can tap into a broad range of NVIDIA SDKs. These include NVIDIA CUDA and TensorRT for GPU acceleration; NVIDIA DLSS and Optix for 3D graphics; NVIDIA RTX Video and Maxine for multimedia; and NVIDIA Riva and ACE for generative AI.
    Top applications are releasing updates this month to enable unique features using these NVIDIA SDKs, including:

    LM Studio, which released an update to its app to upgrade to the latest CUDA version, increasing performance by over 30%.
    Topaz Labs, which is releasing a generative AI video model to enhance video quality, accelerated by CUDA.
    Chaos Enscape and Autodesk VRED, which are adding DLSS 4 for faster performance and better image quality.
    Bilibili, which is integrating NVIDIA Broadcast features such as Virtual Background to enhance the quality of livestreams.

    NVIDIA looks forward to continuing to work with Microsoft and top AI app developers to help them accelerate their AI features on RTX-powered machines through the Windows ML and TensorRT integration.
    Local AI Made Easy With NIM Microservices and AI Blueprints
    Getting started with developing AI on PCs can be daunting. AI developers and enthusiasts have to select a model from the more than 1.2 million available on Hugging Face, quantize it into a format that runs well on PC, find and install all the dependencies to run it, and more.
    NVIDIA NIM makes it easy to get started by providing a curated list of AI models, prepackaged with all the files needed to run them and optimized to achieve full performance on RTX GPUs. And since they’re containerized, the same NIM microservice can be run seamlessly across PCs or the cloud.
    NVIDIA NIM microservices are available to download through build.nvidia.com or through top AI apps like Anything LLM, ComfyUI and AI Toolkit for Visual Studio Code.
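Because NIM microservices are containerized HTTP services, calling one locally reduces to a plain REST request. The sketch below is assumption-heavy: the port, the `/v1/infer` path, and the payload fields are hypothetical placeholders, not the documented FLUX.1 NIM schema, which lives on build.nvidia.com.

```python
import json

# Hypothetical request builder for a locally running NIM microservice.
# The endpoint path and payload field names are illustrative placeholders;
# the real schema for a given NIM is documented on build.nvidia.com.

NIM_URL = "http://localhost:8000/v1/infer"  # assumed local container port

def build_image_request(prompt: str, steps: int = 4) -> str:
    """Serialize a minimal image-generation request as JSON."""
    payload = {
        "prompt": prompt,
        "steps": steps,   # FLUX.1-schnell targets few-step generation
        "width": 1024,
        "height": 1024,
    }
    return json.dumps(payload)

body = build_image_request("a sunlit atrium, architectural rendering")
print(json.loads(body)["steps"])  # 4

# Sending it would then be a single POST, e.g. with the requests library:
#   requests.post(NIM_URL, data=body,
#                 headers={"Content-Type": "application/json"})
```

Since the same container runs on a PC or in the cloud, only `NIM_URL` would change between the two deployments.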
    During COMPUTEX, NVIDIA will release the FLUX.1-schnell NIM microservice — an image generation model from Black Forest Labs for fast image generation — and update the FLUX.1-dev NIM microservice to add compatibility for a wide range of GeForce RTX 50 and 40 Series GPUs.
    These NIM microservices enable faster performance with TensorRT and quantized models. On NVIDIA Blackwell GPUs, they run over twice as fast as running them natively, thanks to FP4 and RTX optimizations.
    AI developers can also jumpstart their work with NVIDIA AI Blueprints — sample workflows and projects using NIM microservices.
    NVIDIA last month released the NVIDIA AI Blueprint for 3D-guided generative AI, a powerful way to control composition and camera angles of generated images by using a 3D scene as a reference. Developers can modify the open-source blueprint for their needs or extend it with additional functionality.
    New Project G-Assist Plug-Ins and Sample Projects Now Available
    NVIDIA recently released Project G-Assist as an experimental AI assistant integrated into the NVIDIA app. G-Assist enables users to control their GeForce RTX system using simple voice and text commands, offering a more convenient interface compared to manual controls spread across numerous legacy control panels.
    Developers can also use Project G-Assist to easily build plug-ins, test assistant use cases and publish them through NVIDIA’s Discord and GitHub.
    The Project G-Assist Plug-in Builder — a ChatGPT-based app that allows no-code or low-code development with natural language commands — makes it easy to start creating plug-ins. These lightweight, community-driven add-ons use straightforward JSON definitions and Python logic.
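As a concrete, heavily hedged illustration of the "JSON definition plus Python logic" shape described above: the manifest fields, function signature, and dispatch below are assumptions for illustration, not the published G-Assist plug-in schema; NVIDIA's GitHub samples define the real format.

```python
import json

# Hypothetical sketch of a G-Assist-style plug-in: a JSON definition
# declaring a callable function, paired with the Python logic that
# implements it. Field names and dispatch shape are assumptions.

MANIFEST = json.loads("""
{
  "name": "room_lights",
  "description": "Adjust room lighting by natural-language command",
  "functions": [
    {"name": "set_brightness", "parameters": {"level": "integer (0-100)"}}
  ]
}
""")

def set_brightness(level: int) -> str:
    """Plug-in logic: clamp the request and report what was applied."""
    level = max(0, min(100, level))
    return f"brightness set to {level}%"

# Minimal dispatch: route a parsed command to the declared function.
HANDLERS = {"set_brightness": set_brightness}

def dispatch(function: str, **kwargs) -> str:
    if function not in {f["name"] for f in MANIFEST["functions"]}:
        raise ValueError(f"undeclared function: {function}")
    return HANDLERS[function](**kwargs)

print(dispatch("set_brightness", level=150))  # brightness set to 100%
```

The split mirrors what the article describes: the JSON tells the assistant what the plug-in can do, and the Python decides how it does it.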
    New open-source plug-in samples are available now on GitHub, showcasing diverse ways on-device AI can enhance PC and gaming workflows. They include:

    Gemini: The existing Gemini plug-in that uses Google’s cloud-based free-to-use large language model has been updated to include real-time web search capabilities.
    IFTTT: A plug-in that lets users create automations across hundreds of compatible endpoints to trigger IoT routines — such as adjusting room lights or smart shades, or pushing the latest gaming news to a mobile device.
    Discord: A plug-in that enables users to easily share game highlights or messages directly to Discord servers without disrupting gameplay.

    Explore the GitHub repository for more examples — including hands-free music control via Spotify, livestream status checks with Twitch, and more.

    Companies are adopting AI as the new PC interface. For example, SignalRGB is developing a G-Assist plug-in that enables unified lighting control across multiple manufacturers. Users will soon be able to install this plug-in directly from the SignalRGB app.
    Starting this week, the AI community will also be able to use G-Assist as a custom component in Langflow — enabling users to integrate function-calling capabilities in low-code or no-code workflows, AI applications and agentic flows.
    Enthusiasts interested in developing and experimenting with Project G-Assist plug-ins are invited to join the NVIDIA Developer Discord channel to collaborate, share creations and gain support.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
    #nvidia #microsoft #advance #development #rtx
    NVIDIA and Microsoft Advance Development on RTX AI PCs
    Generative AI is transforming PC software into breakthrough experiences — from digital humans to writing assistants, intelligent agents and creative tools. NVIDIA RTX AI PCs are powering this transformation with technology that makes it simpler to get started experimenting with generative AI and unlock greater performance on Windows 11. NVIDIA TensorRT has been reimagined for RTX AI PCs, combining industry-leading TensorRT performance with just-in-time, on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. Announced at Microsoft Build, TensorRT for RTX is natively supported by Windows ML — a new inference stack that provides app developers with both broad hardware compatibility and state-of-the-art performance. For developers looking for AI features ready to integrate, NVIDIA software development kitsoffer a wide array of options, from NVIDIA DLSS to multimedia enhancements like NVIDIA RTX Video. This month, top software applications from Autodesk, Bilibili, Chaos, LM Studio and Topaz Labs are releasing updates to unlock RTX AI features and acceleration. AI enthusiasts and developers can easily get started with AI using NVIDIA NIM — prepackaged, optimized AI models that can run in popular apps like AnythingLLM, Microsoft VS Code and ComfyUI. Releasing this week, the FLUX.1-schnell image generation model will be available as a NIM microservice, and the popular FLUX.1-dev NIM microservice has been updated to support more RTX GPUs. Those looking for a simple, no-code way to dive into AI development can tap into Project G-Assist — the RTX PC AI assistant in the NVIDIA app — to build plug-ins to control PC apps and peripherals using natural language AI. New community plug-ins are now available, including Google Gemini web search, Spotify, Twitch, IFTTT and SignalRGB. 
Accelerated AI Inference With TensorRT for RTX Today’s AI PC software stack requires developers to compromise on performance or invest in custom optimizations for specific hardware. Windows ML was built to solve these challenges. Windows ML is powered by ONNX Runtime and seamlessly connects to an optimized AI execution layer provided and maintained by each hardware manufacturer. For GeForce RTX GPUs, Windows ML automatically uses the TensorRT for RTX inference library for high performance and rapid deployment. Compared with DirectML, TensorRT delivers over 50% faster performance for AI workloads on PCs. TensorRT delivers over 50% faster performance for AI workloads on PCs than DirectML. Performance measured on GeForce RTX 5090. Windows ML also delivers quality-of-life benefits for developers. It can automatically select the right hardware — GPU, CPU or NPU — to run each AI feature, and download the execution provider for that hardware, removing the need to package those files into the app. This allows for the latest TensorRT performance optimizations to be delivered to users as soon as they’re ready. TensorRT performance optimizations are delivered to users as soon as they’re ready. TensorRT, a library originally built for data centers, has been redesigned for RTX AI PCs. Instead of pre-generating TensorRT engines and packaging them with the app, TensorRT for RTX uses just-in-time, on-device engine building to optimize how the AI model is run for the user’s specific RTX GPU in mere seconds. And the library’s packaging has been streamlined, reducing its file size significantly by 8x. TensorRT for RTX is available to developers through the Windows ML preview today, and will be available as a standalone SDK at NVIDIA Developer in June. Developers can learn more in the TensorRT for RTX launch blog or Microsoft’s Windows ML blog. 
Expanding the AI Ecosystem on Windows 11 PCs Developers looking to add AI features or boost app performance can tap into a broad range of NVIDIA SDKs. These include NVIDIA CUDA and TensorRT for GPU acceleration; NVIDIA DLSS and Optix for 3D graphics; NVIDIA RTX Video and Maxine for multimedia; and NVIDIA Riva and ACE for generative AI. Top applications are releasing updates this month to enable unique features using these NVIDIA SDKs, including: LM Studio, which released an update to its app to upgrade to the latest CUDA version, increasing performance by over 30%. Topaz Labs, which is releasing a generative AI video model to enhance video quality, accelerated by CUDA. Chaos Enscape and Autodesk VRED, which are adding DLSS 4 for faster performance and better image quality. Bilibili, which is integrating NVIDIA Broadcast features such as Virtual Background to enhance the quality of livestreams. NVIDIA looks forward to continuing to work with Microsoft and top AI app developers to help them accelerate their AI features on RTX-powered machines through the Windows ML and TensorRT integration. Local AI Made Easy With NIM Microservices and AI Blueprints Getting started with developing AI on PCs can be daunting. AI developers and enthusiasts have to select from over 1.2 million AI models on Hugging Face, quantize it into a format that runs well on PC, find and install all the dependencies to run it, and more. NVIDIA NIM makes it easy to get started by providing a curated list of AI models, prepackaged with all the files needed to run them and optimized to achieve full performance on RTX GPUs. And since they’re containerized, the same NIM microservice can be run seamlessly across PCs or the cloud. NVIDIA NIM microservices are available to download through build.nvidia.com or through top AI apps like Anything LLM, ComfyUI and AI Toolkit for Visual Studio Code. 
During COMPUTEX, NVIDIA will release the FLUX.1-schnell NIM microservice — an image generation model from Black Forest Labs for fast image generation — and update the FLUX.1-dev NIM microservice to add compatibility for a wide range of GeForce RTX 50 and 40 Series GPUs. These NIM microservices enable faster performance with TensorRT and quantized models. On NVIDIA Blackwell GPUs, they run over twice as fast as running them natively, thanks to FP4 and RTX optimizations. The FLUX.1-schnell NIM microservice runs over twice as fast as on NVIDIA Blackwell GPUs with FP4 and RTX optimizations. AI developers can also jumpstart their work with NVIDIA AI Blueprints — sample workflows and projects using NIM microservices. NVIDIA last month released the NVIDIA AI Blueprint for 3D-guided generative AI, a powerful way to control composition and camera angles of generated images by using a 3D scene as a reference. Developers can modify the open-source blueprint for their needs or extend it with additional functionality. New Project G-Assist Plug-Ins and Sample Projects Now Available NVIDIA recently released Project G-Assist as an experimental AI assistant integrated into the NVIDIA app. G-Assist enables users to control their GeForce RTX system using simple voice and text commands, offering a more convenient interface compared to manual controls spread across numerous legacy control panels. Developers can also use Project G-Assist to easily build plug-ins, test assistant use cases and publish them through NVIDIA’s Discord and GitHub. The Project G-Assist Plug-in Builder — a ChatGPT-based app that allows no-code or low-code development with natural language commands — makes it easy to start creating plug-ins. These lightweight, community-driven add-ons use straightforward JSON definitions and Python logic. New open-source plug-in samples are available now on GitHub, showcasing diverse ways on-device AI can enhance PC and gaming workflows. 
They include: Gemini: The existing Gemini plug-in that uses Google’s cloud-based free-to-use large language model has been updated to include real-time web search capabilities. IFTTT: A plug-in that lets users create automations across hundreds of compatible endpoints to trigger IoT routines — such as adjusting room lights or smart shades, or pushing the latest gaming news to a mobile device. Discord: A plug-in that enables users to easily share game highlights or messages directly to Discord servers without disrupting gameplay. Explore the GitHub repository for more examples — including hands-free music control via Spotify, livestream status checks with Twitch, and more. Companies are adopting AI as the new PC interface. For example, SignalRGB is developing a G-Assist plug-in that enables unified lighting control across multiple manufacturers. Users will soon be able to install this plug-in directly from the SignalRGB app. SignalRGB’s G-Assist plug-in will soon enable unified lighting control across multiple manufacturers. Starting this week, the AI community will also be able to use G-Assist as a custom component in Langflow — enabling users to integrate function-calling capabilities in low-code or no-code workflows, AI applications and agentic flows. The G-Assist custom component in Langflow will soon enable users to integrate function-calling capabilities. Enthusiasts interested in developing and experimenting with Project G-Assist plug-ins are invited to join the NVIDIA Developer Discord channel to collaborate, share creations and gain support. Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations.  Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter. 
Follow NVIDIA Workstation on LinkedIn and X.  See notice regarding software product information. #nvidia #microsoft #advance #development #rtx
    BLOGS.NVIDIA.COM
    NVIDIA and Microsoft Advance Development on RTX AI PCs
    Generative AI is transforming PC software into breakthrough experiences — from digital humans to writing assistants, intelligent agents and creative tools. NVIDIA RTX AI PCs are powering this transformation with technology that makes it simpler to get started experimenting with generative AI and unlock greater performance on Windows 11. NVIDIA TensorRT has been reimagined for RTX AI PCs, combining industry-leading TensorRT performance with just-in-time, on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. Announced at Microsoft Build, TensorRT for RTX is natively supported by Windows ML — a new inference stack that provides app developers with both broad hardware compatibility and state-of-the-art performance. For developers looking for AI features ready to integrate, NVIDIA software development kits (SDKs) offer a wide array of options, from NVIDIA DLSS to multimedia enhancements like NVIDIA RTX Video. This month, top software applications from Autodesk, Bilibili, Chaos, LM Studio and Topaz Labs are releasing updates to unlock RTX AI features and acceleration. AI enthusiasts and developers can easily get started with AI using NVIDIA NIM — prepackaged, optimized AI models that can run in popular apps like AnythingLLM, Microsoft VS Code and ComfyUI. Releasing this week, the FLUX.1-schnell image generation model will be available as a NIM microservice, and the popular FLUX.1-dev NIM microservice has been updated to support more RTX GPUs. Those looking for a simple, no-code way to dive into AI development can tap into Project G-Assist — the RTX PC AI assistant in the NVIDIA app — to build plug-ins to control PC apps and peripherals using natural language AI. New community plug-ins are now available, including Google Gemini web search, Spotify, Twitch, IFTTT and SignalRGB. 
Accelerated AI Inference With TensorRT for RTX Today’s AI PC software stack requires developers to compromise on performance or invest in custom optimizations for specific hardware. Windows ML was built to solve these challenges. Windows ML is powered by ONNX Runtime and seamlessly connects to an optimized AI execution layer provided and maintained by each hardware manufacturer. For GeForce RTX GPUs, Windows ML automatically uses the TensorRT for RTX inference library for high performance and rapid deployment. Compared with DirectML, TensorRT delivers over 50% faster performance for AI workloads on PCs. TensorRT delivers over 50% faster performance for AI workloads on PCs than DirectML. Performance measured on GeForce RTX 5090. Windows ML also delivers quality-of-life benefits for developers. It can automatically select the right hardware — GPU, CPU or NPU — to run each AI feature, and download the execution provider for that hardware, removing the need to package those files into the app. This allows for the latest TensorRT performance optimizations to be delivered to users as soon as they’re ready. TensorRT performance optimizations are delivered to users as soon as they’re ready. TensorRT, a library originally built for data centers, has been redesigned for RTX AI PCs. Instead of pre-generating TensorRT engines and packaging them with the app, TensorRT for RTX uses just-in-time, on-device engine building to optimize how the AI model is run for the user’s specific RTX GPU in mere seconds. And the library’s packaging has been streamlined, reducing its file size significantly by 8x. TensorRT for RTX is available to developers through the Windows ML preview today, and will be available as a standalone SDK at NVIDIA Developer in June. Developers can learn more in the TensorRT for RTX launch blog or Microsoft’s Windows ML blog. 
Expanding the AI Ecosystem on Windows 11 PCs Developers looking to add AI features or boost app performance can tap into a broad range of NVIDIA SDKs. These include NVIDIA CUDA and TensorRT for GPU acceleration; NVIDIA DLSS and Optix for 3D graphics; NVIDIA RTX Video and Maxine for multimedia; and NVIDIA Riva and ACE for generative AI. Top applications are releasing updates this month to enable unique features using these NVIDIA SDKs, including: LM Studio, which released an update to its app to upgrade to the latest CUDA version, increasing performance by over 30%. Topaz Labs, which is releasing a generative AI video model to enhance video quality, accelerated by CUDA. Chaos Enscape and Autodesk VRED, which are adding DLSS 4 for faster performance and better image quality. Bilibili, which is integrating NVIDIA Broadcast features such as Virtual Background to enhance the quality of livestreams. NVIDIA looks forward to continuing to work with Microsoft and top AI app developers to help them accelerate their AI features on RTX-powered machines through the Windows ML and TensorRT integration. Local AI Made Easy With NIM Microservices and AI Blueprints Getting started with developing AI on PCs can be daunting. AI developers and enthusiasts have to select from over 1.2 million AI models on Hugging Face, quantize it into a format that runs well on PC, find and install all the dependencies to run it, and more. NVIDIA NIM makes it easy to get started by providing a curated list of AI models, prepackaged with all the files needed to run them and optimized to achieve full performance on RTX GPUs. And since they’re containerized, the same NIM microservice can be run seamlessly across PCs or the cloud. NVIDIA NIM microservices are available to download through build.nvidia.com or through top AI apps like Anything LLM, ComfyUI and AI Toolkit for Visual Studio Code. 
During COMPUTEX, NVIDIA will release the FLUX.1-schnell NIM microservice — an image generation model from Black Forest Labs for fast image generation — and update the FLUX.1-dev NIM microservice to add compatibility for a wide range of GeForce RTX 50 and 40 Series GPUs. These NIM microservices enable faster performance with TensorRT and quantized models. On NVIDIA Blackwell GPUs, they run over twice as fast as running them natively, thanks to FP4 and RTX optimizations. The FLUX.1-schnell NIM microservice runs over twice as fast as on NVIDIA Blackwell GPUs with FP4 and RTX optimizations. AI developers can also jumpstart their work with NVIDIA AI Blueprints — sample workflows and projects using NIM microservices. NVIDIA last month released the NVIDIA AI Blueprint for 3D-guided generative AI, a powerful way to control composition and camera angles of generated images by using a 3D scene as a reference. Developers can modify the open-source blueprint for their needs or extend it with additional functionality. New Project G-Assist Plug-Ins and Sample Projects Now Available NVIDIA recently released Project G-Assist as an experimental AI assistant integrated into the NVIDIA app. G-Assist enables users to control their GeForce RTX system using simple voice and text commands, offering a more convenient interface compared to manual controls spread across numerous legacy control panels. Developers can also use Project G-Assist to easily build plug-ins, test assistant use cases and publish them through NVIDIA’s Discord and GitHub. The Project G-Assist Plug-in Builder — a ChatGPT-based app that allows no-code or low-code development with natural language commands — makes it easy to start creating plug-ins. These lightweight, community-driven add-ons use straightforward JSON definitions and Python logic. New open-source plug-in samples are available now on GitHub, showcasing diverse ways on-device AI can enhance PC and gaming workflows. 
They include:

- Gemini: The existing Gemini plug-in, which uses Google's free-to-use cloud-based large language model, has been updated to include real-time web search capabilities.
- IFTTT: A plug-in that lets users create automations across hundreds of compatible endpoints to trigger IoT routines — such as adjusting room lights or smart shades, or pushing the latest gaming news to a mobile device.
- Discord: A plug-in that enables users to easily share game highlights or messages directly to Discord servers without disrupting gameplay.

Explore the GitHub repository for more examples — including hands-free music control via Spotify, livestream status checks with Twitch, and more.

Companies are adopting AI as the new PC interface. For example, SignalRGB is developing a G-Assist plug-in that enables unified lighting control across multiple manufacturers. Users will soon be able to install this plug-in directly from the SignalRGB app.

Starting this week, the AI community will also be able to use G-Assist as a custom component in Langflow — enabling users to integrate function-calling capabilities in low-code or no-code workflows, AI applications and agentic flows.

Enthusiasts interested in developing and experimenting with Project G-Assist plug-ins are invited to join the NVIDIA Developer Discord channel to collaborate, share creations and gain support.

Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations.

Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
Follow NVIDIA Workstation on LinkedIn and X.  See notice regarding software product information.
  • State of ArchViz Webinar: How AI Is Changing Architectural Visualization
    The world of architectural visualization is evolving rapidly, driven by new technologies and shifting industry demands.
    To capture the latest trends, Architizer and Chaos surveyed more than 1,000 design professionals worldwide, uncovering key insights into the challenges and opportunities shaping the field.
    The results are now available in the free-to-download State of Architectural Visualization 2024-25 Report — a must-read for architects, designers, and visualization specialists looking to stay ahead of the curve.
    In an engaging webinar hosted by Architizer and Chaos, Roderick Bates, Director of Corporate Development at Chaos, took to our virtual stage to explore the findings of the 2024-25 survey and report — and what they mean for the future of architectural visualization.
    Read on to discover the key points from Bates’ captivating presentation.
    Shifting Demographics and Global Reach
    Bates began by highlighting the wide-reaching, diverse nature of the survey respondents, which helps to substantiate and increase confidence in the findings.
    This year’s survey reflected a wider range of voices than ever before, with participants hailing from more than 75 countries.
    While 40% were U.S.-based, there was a significant increase in participation from the EU, UK, Oceania, and Asia.
    Firm size varied as well, with 64% of responses coming from small firms with fewer than 20 employees, alongside a healthy 14% representation from firms with 100+ employees.
    “This kind of spread means that the data speaks to everyone in the AEC industry — from freelancers to global studios,” Bates emphasized.
    “All those different stakeholders have different needs, and the survey helps us understand them.”
    Chaos and Architizer’s annual industry reports include “The Future of Architectural Visualization” (2023), “The State of AI in Architecture” (2024), and “The State of Architectural Visualization” (2024-2025).
    Download the latest report for free here.
    Bates emphasized that this year’s survey data has already shaped real-world strategy for Chaos: “This is information that’s incredibly valuable to us as a company when we think about what products to develop — large-scale initiatives, M&A, and more.” The report has been used in internal product roadmaps and even supported due diligence in recent acquisitions.
    Acceleration and Specialization in AI
    AI emerged as a central theme in both the report and the webinar — a fact that should come as no surprise given the rapid emergence of this technology in recent years.
    According to the data, 56% of respondents are now actively using AI tools in their workflows, up dramatically from last year's survey.
    35% are using AI to generate quick variations, while 44% are generating concept images and ideas early in design.
    These numbers appear to show that a “maturing” process is underway when it comes to AI usage within architecture.
    “People are no longer AI-hesitant,” Bates said.
    “We’re seeing a lot of firms experimenting with it, and the number of people fully invested — who say it’s part of their workflow — is growing.”
    Chaos recently acquired EvolveLAB and is developing integrated AI tools for its suite of architectural visualization applications.
    Roderick’s webinar demonstrated a range of potential applications for AI, including this animated construction timelapse.
    Chaos’s response to this evolving landscape is characteristically dynamic — they have already developed and launched tailor-made AI tools like the AI Enhancer in Enscape, which can instantly improve the realism of renderings, and have acquired EvolveLAB, which creates high-quality AI-driven tools for visualization, modeling and project documentation.
    “These aren’t generic solutions anymore,” commented Bates.
    “These are tools trained on architectural datasets, made specifically for architectural visualization.”
    Benefits of AI cited in the survey included faster workflows, enhanced creativity, and lower costs — "this sounds like a CFO's dream right here," remarked Bates.
    He went on to present examples of AI in action, from a sketch being transformed into a rendering, to instant material variations for a contemporary interior.
    Standardization, Integration, and the Path Forward
    The webinar also tackled some of the biggest roadblocks identified in the report: integration friction, lack of standardization, and concerns around quality control.
    “Architectural firms thrive on consistency,” said Bates, “and AI’s variability can be a headache.”
    To address this, Chaos is building standardized prompt libraries and working toward seamless integration across its visualization tools.
    “You shouldn’t have to redo work,” Bates emphasized.
    “If you’re in our ecosystem — or bringing in data from other platforms — it should just work.”
    Additionally, sustainability was highlighted as another challenging yet high-potential area within visualization workflows, based on survey feedback.
    As Bates explained, tools like Enscape Impact now offer rapid building performance simulations integrated directly into the design environment, requiring just a small number of key inputs.
    “It almost gets to the level of an AI prompt,” he noted.
    Architects are seeking even more automation and ease of use within this niche, signaling a strong demand for sustainability tools that are faster, smarter, and more intuitive — underscoring a major opportunity for future innovation.
    The webinar concluded with a lively Q&A, with AI predictably at the center of the debate.
    Some viewers expressed apprehension around the rapid adoption of these tools, while others pointed to the promise of AI’s efficiencies, freeing them up to focus more on design ideation.
    Whichever side of the argument you currently land on, one thing is certain: the State of Architectural Visualization report offers invaluable insight into the industry today — and where it is heading tomorrow.
    To learn more and download your free copy of the report, click here, and learn more about Chaos’s latest developments in architectural visualization here.
    The post State of ArchViz Webinar: How AI Is Changing Architectural Visualization appeared first on Journal.
    Source: https://architizer.com/blog/inspiration/industry/state-of-architectural-visualization-webinar-chaos/
  • Chaos has previewed new AI technologies in development at Chaos Next, its new 'AI lab'. They range from 'chat-driven material creation' in Chaos Cosmos to a new style transfer system for changing the look of Enscape renders.