• Plug and Play: Build a G-Assist Plug-In Today

    Project G-Assist — available through the NVIDIA App — is an experimental AI assistant that helps tune, control and optimize NVIDIA GeForce RTX systems.
    NVIDIA’s Plug and Play: Project G-Assist Plug-In Hackathon — running virtually through Wednesday, July 16 — invites the community to explore AI and build custom G-Assist plug-ins for a chance to win prizes and be featured on NVIDIA social media channels.

    G-Assist allows users to control their RTX GPU and other system settings using natural language, thanks to a small language model that runs on device. It can be used from the NVIDIA Overlay in the NVIDIA App without needing to tab out or switch programs. Users can expand its capabilities via plug-ins and even connect it to agentic frameworks such as Langflow.
    Below, find popular G-Assist plug-ins, hackathon details and tips to get started.
    Plug-In and Win
    Join the hackathon by registering and checking out the curated technical resources.
    G-Assist plug-ins can be built in several ways, including with Python for rapid development, with C++ for performance-critical apps and with custom system interactions for hardware and operating system automation.
For those who prefer vibe coding, the G-Assist Plug-In Builder — a ChatGPT-based app that allows no-code or low-code development with natural language commands — makes it easy for enthusiasts to start creating plug-ins.
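For a taste of the Python route, below is a minimal sketch of a plug-in entry point. The line-delimited JSON protocol, the field names and the get_weather command are illustrative assumptions rather than the official interface; the sample plug-ins and documentation on GitHub define the actual pipe-based communication and the manifest.json schema.

```python
# plugin.py -- a hypothetical, minimal G-Assist plug-in skeleton.
# The JSON-per-line protocol and field names below are assumptions
# for illustration; see NVIDIA's sample plug-ins on GitHub for the
# actual pipe-based interface.
import json
import sys


def get_weather(params: dict) -> dict:
    """Illustrative command handler; 'city' is a made-up parameter."""
    city = params.get("city", "unknown")
    return {"success": True, "message": f"Weather lookup for {city} goes here."}


COMMANDS = {"get_weather": get_weather}


def main() -> None:
    # Read one JSON request per line and answer on stdout.
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        request = json.loads(line)
        handler = COMMANDS.get(request.get("command"))
        response = (
            handler(request.get("params", {}))
            if handler
            else {"success": False, "message": "Unknown command."}
        )
        print(json.dumps(response), flush=True)


if __name__ == "__main__":
    main()
```

Pairing a handler table with a read-dispatch-respond loop keeps each new command a one-function addition.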
To submit an entry, participants must provide a GitHub repository including a source code file (plugin.py), requirements.txt, manifest.json, config.json (if applicable), a plug-in executable file and a README.
    Then, submit a video — between 30 seconds and two minutes — showcasing the plug-in in action.
    Finally, hackathoners must promote their plug-in using #AIonRTXHackathon on a social media channel: Instagram, TikTok or X. Submit projects via this form by Wednesday, July 16.
Judges will assess plug-ins based on three main criteria: 1) innovation and creativity, 2) technical execution and integration, reviewing technical depth, G-Assist integration and scalability, and 3) usability and community impact, meaning how easy the plug-in is to access and use.
    Winners will be selected on Wednesday, Aug. 20. First place will receive a GeForce RTX 5090 laptop, second place a GeForce RTX 5080 GPU and third a GeForce RTX 5070 GPU. These top three will also be featured on NVIDIA’s social media channels, get the opportunity to meet the NVIDIA G-Assist team and earn an NVIDIA Deep Learning Institute self-paced course credit.
Project G-Assist requires a GeForce RTX 50, 40 or 30 Series Desktop GPU with at least 12GB of VRAM, a Windows 11 or 10 operating system, a compatible CPU (Intel Pentium G Series, Core i3, i5, i7 or higher; AMD FX, Ryzen 3, 5, 7, 9, Threadripper or higher), specific disk space requirements and a recent GeForce Game Ready Driver or NVIDIA Studio Driver.
Plug-In(spiration)
Explore open-source plug-in samples available on GitHub, which showcase the diverse ways on-device AI can enhance PC and gaming workflows.

    Popular plug-ins include:

Google Gemini: Enables real-time search-based queries using Google Search integration and large language model-based queries using Gemini capabilities, all from the convenience of the NVIDIA App Overlay without needing to switch programs.
    Discord: Enables users to easily share game highlights or messages directly to Discord servers without disrupting gameplay.
    IFTTT: Lets users create automations across hundreds of compatible endpoints to trigger IoT routines — such as adjusting room lights and smart shades, or pushing the latest gaming news to a mobile device.
    Spotify: Lets users control Spotify using simple voice commands or the G-Assist interface to play favorite tracks and manage playlists.
Twitch: Checks whether a Twitch streamer is currently live and retrieves detailed stream information such as titles, games, view counts and more; a sketch of this kind of lookup follows the list.
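To make that pattern concrete, here is a hedged sketch of the kind of external call a Twitch plug-in wraps: a live-status lookup against the public Twitch Helix API. The client ID and token are placeholders you must supply from a registered Twitch application, and the code is not taken from the official plug-in's source.

```python
# Hypothetical core of an "is this streamer live?" check, the kind
# of call a Twitch plug-in wraps. Requires a registered Twitch app;
# CLIENT_ID and APP_TOKEN are placeholders you must supply.
import requests

CLIENT_ID = "your-client-id"   # placeholder
APP_TOKEN = "your-app-token"   # placeholder OAuth app access token


def is_live(user_login: str) -> bool:
    resp = requests.get(
        "https://api.twitch.tv/helix/streams",
        params={"user_login": user_login},
        headers={"Client-Id": CLIENT_ID, "Authorization": f"Bearer {APP_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    # Helix returns one entry per matching live stream; an empty
    # list means the channel is offline.
    return bool(resp.json().get("data"))
```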

Get G-Assist(ance)
    Join the NVIDIA Developer Discord channel to collaborate, share creations and gain support from fellow AI enthusiasts and NVIDIA staff.
Save the date for NVIDIA’s How to Build a G-Assist Plug-In webinar on Wednesday, July 9, from 10-11 a.m. PT, to learn more about Project G-Assist capabilities, discover the fundamentals of building, testing and deploying Project G-Assist plug-ins, and participate in a live Q&A session.
    Explore NVIDIA’s GitHub repository, which provides everything needed to get started developing with G-Assist, including sample plug-ins, step-by-step instructions and documentation for building custom functionalities.
    Learn more about the ChatGPT Plug-In Builder to transform ideas into functional G-Assist plug-ins with minimal coding. The tool uses OpenAI’s custom GPT builder to generate plug-in code and streamline the development process.
    NVIDIA’s technical blog walks through the architecture of a G-Assist plug-in, using a Twitch integration as an example. Discover how plug-ins work, how they communicate with G-Assist and how to build them from scratch.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
  • Into the Omniverse: World Foundation Models Advance Autonomous Vehicle Simulation and Safety

    Editor’s note: This blog is a part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.
Simulated driving environments enable engineers to safely and efficiently train, test and validate autonomous vehicles (AVs) across countless real-world and edge-case scenarios without the risks and costs of physical testing.
These simulated environments can be created through neural reconstruction of real-world data from AV fleets or generated with world foundation models (WFMs) — neural networks that understand physics and real-world properties. WFMs can be used to generate synthetic datasets for enhanced AV simulation.
    To help physical AI developers build such simulated environments, NVIDIA unveiled major advances in WFMs at the GTC Paris and CVPR conferences earlier this month. These new capabilities enhance NVIDIA Cosmos — a platform of generative WFMs, advanced tokenizers, guardrails and accelerated data processing tools.
    Key innovations like Cosmos Predict-2, the Cosmos Transfer-1 NVIDIA preview NIM microservice and Cosmos Reason are improving how AV developers generate synthetic data, build realistic simulated environments and validate safety systems at unprecedented scale.
Universal Scene Description (OpenUSD), a unified data framework and standard for physical AI applications, enables seamless integration and interoperability of simulation assets across the development pipeline. OpenUSD standardization plays a critical role in ensuring 3D pipelines are built to scale.
    NVIDIA Omniverse, a platform of application programming interfaces, software development kits and services for building OpenUSD-based physical AI applications, enables simulations from WFMs and neural reconstruction at world scale.
    Leading AV organizations — including Foretellix, Mcity, Oxa, Parallel Domain, Plus AI and Uber — are among the first to adopt Cosmos models.

    Foundations for Scalable, Realistic Simulation
    Cosmos Predict-2, NVIDIA’s latest WFM, generates high-quality synthetic data by predicting future world states from multimodal inputs like text, images and video. This capability is critical for creating temporally consistent, realistic scenarios that accelerate training and validation of AVs and robots.

    In addition, Cosmos Transfer, a control model that adds variations in weather, lighting and terrain to existing scenarios, will soon be available to 150,000 developers on CARLA, a leading open-source AV simulator. This greatly expands the broad AV developer community’s access to advanced AI-powered simulation tools.
    Developers can start integrating synthetic data into their own pipelines using the NVIDIA Physical AI Dataset. The latest release includes 40,000 clips generated using Cosmos.
    Building on these foundations, the Omniverse Blueprint for AV simulation provides a standardized, API-driven workflow for constructing rich digital twins, replaying real-world sensor data and generating new ground-truth data for closed-loop testing.
    The blueprint taps into OpenUSD’s layer-stacking and composition arcs, which enable developers to collaborate asynchronously and modify scenes nondestructively. This helps create modular, reusable scenario variants to efficiently generate different weather conditions, traffic patterns and edge cases.
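A small example helps here. The sketch below, written against the open-source usd-core Python package, composes one base layer and one weather layer nondestructively; the file names, prim path and visibility_km attribute are invented for illustration and are not part of the blueprint itself.

```python
# A minimal sketch of OpenUSD layer stacking with the usd-core
# package (pip install usd-core). File names, the prim path and the
# visibility_km attribute are illustrative, not blueprint schema.
from pxr import Sdf, Usd

base = Sdf.Layer.CreateNew("scenario_base.usda")   # scenario authored once
rain = Sdf.Layer.CreateNew("weather_rain.usda")    # rain-only overrides

stage = Usd.Stage.CreateInMemory()
root = stage.GetRootLayer()
# Sublayer order sets strength: the rain layer's opinions override
# the base nondestructively, and neither file modifies the other.
root.subLayerPaths.append(rain.identifier)
root.subLayerPaths.append(base.identifier)

# Author the default conditions in the base layer...
stage.SetEditTarget(stage.GetEditTargetForLocalLayer(base))
sky = stage.DefinePrim("/World/Sky", "Xform")
sky.CreateAttribute("visibility_km", Sdf.ValueTypeNames.Float).Set(20.0)

# ...and the rainy override in the variant layer.
stage.SetEditTarget(stage.GetEditTargetForLocalLayer(rain))
sky.GetAttribute("visibility_km").Set(2.0)

# The composed stage resolves to the stronger (rain) layer: 2.0.
print(sky.GetAttribute("visibility_km").Get())
```

Because each variant lives in its own layer, a fleet of scenarios can share one base file while swapping weather or traffic layers in and out.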
    Driving the Future of AV Safety
    To bolster the operational safety of AV systems, NVIDIA earlier this year introduced NVIDIA Halos — a comprehensive safety platform that integrates the company’s full automotive hardware and software stack with AI research focused on AV safety.
The new Cosmos models — Cosmos Predict-2, Cosmos Transfer-1 NIM and Cosmos Reason — deliver further safety enhancements to the Halos platform, enabling developers to create diverse, controllable and realistic scenarios for training and validating AV systems.
    These models, trained on massive multimodal datasets including driving data, amplify the breadth and depth of simulation, allowing for robust scenario coverage — including rare and safety-critical events — while supporting post-training customization for specialized AV tasks.

    At CVPR, NVIDIA was recognized as an Autonomous Grand Challenge winner, highlighting its leadership in advancing end-to-end AV workflows. The challenge used OpenUSD’s robust metadata and interoperability to simulate sensor inputs and vehicle trajectories in semi-reactive environments, achieving state-of-the-art results in safety and compliance.
Learn more about how developers are leveraging tools like CARLA, Cosmos and Omniverse to advance AV simulation in this livestream replay:

    Hear NVIDIA Director of Autonomous Vehicle Research Marco Pavone on the NVIDIA AI Podcast share how digital twins and high-fidelity simulation are improving vehicle testing, accelerating development and reducing real-world risks.
    Get Plugged Into the World of OpenUSD
    Learn more about what’s next for AV simulation with OpenUSD by watching the replay of NVIDIA founder and CEO Jensen Huang’s GTC Paris keynote.
    Looking for more live opportunities to learn more about OpenUSD? Don’t miss sessions and labs happening at SIGGRAPH 2025, August 10–14.
    Discover why developers and 3D practitioners are using OpenUSD and learn how to optimize 3D workflows with the self-paced “Learn OpenUSD” curriculum for 3D developers and practitioners, available for free through the NVIDIA Deep Learning Institute.
    Explore the Alliance for OpenUSD forum and the AOUSD website.
    Stay up to date by subscribing to NVIDIA Omniverse news, joining the community and following NVIDIA Omniverse on Instagram, LinkedIn, Medium and X.
  • Startup Uses NVIDIA RTX-Powered Generative AI to Make Coolers, Cooler

    Mark Theriault founded the startup FITY envisioning a line of clever cooling products: cold drink holders that come with freezable pucks to keep beverages cold for longer without the mess of ice. The entrepreneur started with 3D prints of products in his basement, building one unit at a time, before eventually scaling to mass production.
Founding a consumer product company from scratch was a tall order for a single person. Going from preliminary sketches to production-ready designs was a major challenge. To bring his creative vision to life, Theriault relied on AI and his NVIDIA GeForce RTX-equipped system. For him, AI isn’t just a tool — it’s an entire pipeline to help him accomplish his goals. Read more about his workflow below.
Plus, GeForce RTX 5050 laptops start arriving today at retailers worldwide, from $999. GeForce RTX 5050 Laptop GPUs feature 2,560 NVIDIA Blackwell CUDA cores, fifth-generation AI Tensor Cores, fourth-generation RT Cores, a ninth-generation NVENC encoder and a sixth-generation NVDEC decoder.
In addition, NVIDIA’s Plug and Play: Project G-Assist Plug-In Hackathon — running virtually through Wednesday, July 16 — invites developers to explore AI and build custom G-Assist plug-ins for a chance to win prizes. Save the date for the G-Assist Plug-In webinar on Wednesday, July 9, from 10-11 a.m. PT, to learn more about Project G-Assist capabilities and fundamentals, and to participate in a live Q&A session.
    From Concept to Completion
    To create his standout products, Theriault tinkers with potential FITY Flex cooler designs with traditional methods, from sketch to computer-aided design to rapid prototyping, until he finds the right vision. A unique aspect of the FITY Flex design is that it can be customized with fun, popular shoe charms.
    For packaging design inspiration, Theriault uses his preferred text-to-image generative AI model for prototyping, Stable Diffusion XL — which runs 60% faster with the NVIDIA TensorRT software development kit — using the modular, node-based interface ComfyUI.
    ComfyUI gives users granular control over every step of the generation process — prompting, sampling, model loading, image conditioning and post-processing. It’s ideal for advanced users like Theriault who want to customize how images are generated.
    Theriault’s uses of AI result in a complete computer graphics-based ad campaign. Image courtesy of FITY.
NVIDIA GeForce RTX GPUs based on the NVIDIA Blackwell architecture include fifth-generation Tensor Cores designed to accelerate AI and deep learning workloads. These GPUs work with CUDA optimizations in PyTorch to seamlessly accelerate ComfyUI, reducing generation time on FLUX.1-dev, an image generation model from Black Forest Labs, from two minutes per image on the Mac M3 Ultra to about four seconds on the GeForce RTX 5090 desktop GPU.
    ComfyUI can also add ControlNets — AI models that help control image generation — that Theriault uses for tasks like guiding human poses, setting compositions via depth mapping and converting scribbles to images.
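The following sketch shows the depth-mapping case using Hugging Face diffusers rather than ComfyUI’s node graph; the checkpoints named are published community models, while the prompt and input image are placeholders.

```python
# A hedged sketch of depth-conditioned generation with a ControlNet.
# Model IDs are published Hugging Face checkpoints; the prompt and
# depth image are illustrative placeholders.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

depth_map = load_image("packaging_depth.png")  # placeholder depth image
image = pipe(
    "drink cooler on a kitchen counter, product photography",
    image=depth_map,
    # How strongly the depth map constrains the composition.
    controlnet_conditioning_scale=0.7,
).images[0]
image.save("controlled_concept.png")
```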
Theriault even creates his own fine-tuned models to keep his style consistent. He used low-rank adaptation (LoRA) models — small, efficient adapters inserted into specific layers of the network — enabling hyper-customized generation with minimal compute cost.
    LoRA models allow Theriault to ideate on visuals quickly. Image courtesy of FITY.
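As a rough illustration of that workflow, the diffusers snippet below loads a hypothetical locally trained LoRA on top of Stable Diffusion XL; the adapter path and prompt are placeholders, not Theriault’s actual assets.

```python
# A minimal sketch of the LoRA pattern described above; the adapter
# path is a placeholder for a locally trained style LoRA.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# The adapter is a small file of low-rank weight deltas that slots
# into the base model's attention layers, steering its style.
pipe.load_lora_weights("path/to/brand_style_lora")  # placeholder path

image = pipe(
    "studio shot of a flexible drink cooler with shoe charms",
    num_inference_steps=30,
).images[0]
image.save("brand_concept.png")
```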
    “Over the last few months, I’ve been shifting from AI-assisted computer graphics renders to fully AI-generated product imagery using a custom Flux LoRA I trained in house. My RTX 4080 SUPER GPU has been essential for getting the performance I need to train and iterate quickly.” – Mark Theriault, founder of FITY 

    Theriault also taps into generative AI to create marketing assets like FITY Flex product packaging. He uses FLUX.1, which excels at generating legible text within images, addressing a common challenge in text-to-image models.
    Though FLUX.1 models can typically consume over 23GB of VRAM, NVIDIA has collaborated with Black Forest Labs to help reduce the size of these models using quantization — a technique that reduces model size while maintaining quality. The models were then accelerated with TensorRT, which provides an up to 2x speedup over PyTorch.
    To simplify using these models in ComfyUI, NVIDIA created the FLUX.1 NIM microservice, a containerized version of FLUX.1 that can be loaded in ComfyUI and enables FP4 quantization and TensorRT support. Combined, the models come down to just over 11GB of VRAM, and performance improves by 2.5x.
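A quick back-of-envelope calculation makes those memory figures plausible, assuming the published parameter count of roughly 12 billion for FLUX.1-dev’s transformer and that quantization changes only the bytes stored per weight:

```python
# Rough VRAM arithmetic for the quantization claim above. The 12B
# parameter count is FLUX.1-dev's published transformer size
# (approximate); only bytes-per-weight changes under quantization.
params = 12e9
fp16_gb = params * 2.0 / 2**30   # 16-bit weights: ~22 GB
fp4_gb = params * 0.5 / 2**30    # 4-bit weights:  ~5.6 GB
print(f"FP16 weights ~= {fp16_gb:.1f} GB, FP4 weights ~= {fp4_gb:.1f} GB")
# The gap between ~5.6 GB of FP4 weights and the ~11 GB pipeline
# figure is plausibly the text encoders, VAE and activations.
```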
Theriault uses Blender’s Cycles renderer to produce final files. For 3D workflows, NVIDIA offers the AI Blueprint for 3D-guided generative AI to ease the positioning and composition of 3D images, so anyone interested in this method can quickly get started.
    Photorealistic renders. Image courtesy of FITY.
    Finally, Theriault uses large language models to generate marketing copy — tailored for search engine optimization, tone and storytelling — as well as to complete his patent and provisional applications, work that usually costs thousands of dollars in legal fees and considerable time.
    Generative AI helps Theriault create promotional materials like the above. Image courtesy of FITY.
    “As a one-man band with a ton of content to generate, having on-the-fly generation capabilities for my product designs really helps speed things up.” – Mark Theriault, founder of FITY

    Every texture, every word, every photo, every accessory was a micro-decision, Theriault said. AI helped him survive the “death by a thousand cuts” that can stall solo startup founders, he added.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
  • Just when you thought your game assets couldn’t get any more stylized, SideFX drops Project Skylark like a magician pulling a rabbit from a hat. Now you can download free Houdini tools that promise to turn your 3D buildings into architectural masterpieces and your clouds into fluffy, Instagrammable puffs. Who knew procedural generators could make you feel like a real artist without the need for actual talent?

    So, grab your free tools and let the world believe your game is a work of art, while you sit back and enjoy the virtual applause. Remember, it’s not about the destination; it’s about pretending you know what you’re doing along the way!

    #HoudiniTools #GameAssets #ProjectSkylark #3
    Download free Houdini tools from SideFX’s Project Skylark
    Get custom tools for creating stylized game assets, including procedural generators for 3D buildings, bridges and clouds.
Formentera20 is back, and this time it promises to be even more enlightening than the last twelve editions combined. Can you feel the excitement in the air? From October 2 to 4, 2025, the idyllic shores of Formentera will serve as the perfect backdrop for our favorite gathering of digital wizards, creativity gurus, and communication mavens. Because nothing says "cutting-edge innovation" quite like a tropical island where you can sip on your coconut water while discussing the latest trends in the digital universe.

This year’s theme? A delightful concoction of culture, creativity, and communication—all served with a side of salty sea breeze. Who knew the key to world-class networking was just a plane ticket away to a beach? Forget about conference rooms; there’s nothing like a sun-kissed beach to inspire groundbreaking ideas. Surely, the sound of waves crashing will help us unlock the secrets of digital communication.

    And let’s not overlook the stellar lineup of speakers they've assembled. I can only imagine the conversations: “How can we boost engagement on social media?” followed by a collective nod as they all sip their overpriced organic juices. I’m sure the beach vibes will lend an air of authenticity to those discussions on algorithm tweaks and engagement metrics. Because nothing screams “authenticity” quite like a luxury resort hosting the crème de la crème of the advertising world.

    Let’s not forget the irony of discussing “innovation” while basking in the sun. Because what better way to innovate than to sit in a circle, wearing sunglasses, while contemplating the latest app that helps you find the nearest beach bar? It’s the dream, isn’t it? It’s almost poetic how the world of high-tech communication thrives in such a low-tech environment—a setting that leaves you wondering if the real innovation is simply the ability to disconnect from the digital chaos while still pretending to be a part of it.

    But let’s be real: the true highlight of Formentera20 is not the knowledge shared or the networking done; it’s the Instagram posts that will flood our feeds. After all, who doesn’t want to showcase their “hard work” at a digital festival by posting a picture of themselves with a sunset in the background? It’s all about branding, darling.

    So, mark your calendars! Prepare your best beach outfit and your most serious expression for photos. Come for the culture, stay for the creativity, and leave with the satisfaction of having been part of something that sounds ridiculously important while you, in reality, are just enjoying a holiday under the guise of professional development.

    In the end, Formentera20 isn’t just a festival; it’s an experience—one that lets you bask in the sun while pretending you’re solving the world’s digital problems. Cheers to innovation, creativity, and the art of making work look like a vacation!

    #Formentera20 #digitalculture #creativity #communication #innovation
    Formentera20 announces the speakers for its 12th edition: digital culture, creativity and communication by the sea
    From October 2 to 4, 2025, the island of Formentera will once again become a meeting point for professionals from the digital, creative and strategic fields. The Formentera20 festival will celebrate its twelfth edition with a lineup that, one year
  • I don’t know, it seems there’s a new Instagram account that’s gotten popular. It’s called something like “an inside joke between designers” or something along those lines. Malika Favre and George Wu are behind it; I suppose they’re bringing a bit of fun to our feeds. I’m not sure we really need it, but here we are.

    The account has started drawing in more people, which is interesting, although I sometimes wonder whether all these things that go viral are really necessary. I mean, there are so many profiles on Instagram that, honestly, it feels a bit overwhelming. But at the same time, it’s one of those places where people seem to enjoy the aesthetic and the humor these two designers offer.

    The idea of an inside joke turning into something bigger is a bit… cliché, isn’t it? But it seems to have worked for them. Maybe that’s what people want to see in their feeds: something light that makes them laugh a little, even in a minimalist way. I’m not entirely convinced, but well, that’s what keeps Instagram spinning.

    So, if you get a little bored while scrolling through your feeds, you could take a look at this account. I won’t promise it’s amazing, but at least it’s something different. Although sometimes the fun seems to be in the scrolling itself, not necessarily in what you find.

    So there you have it, one more account to follow, if you’re interested. I don’t have high expectations, but hey, who knows? Maybe you’ll find something to laugh at. Or maybe you’ll keep the same straight face as always, like me.

    #design #humor #Instagram #jokes #MalikaFavre
    How an inside joke between designers became a cult Instagram account
    Malika Favre and George Wu bring the fun back to our feeds.
  • Herman Miller, the iconic brand of chairs that cost as much as a small car, has decided to team up with two New York artists. Yes, you heard that right, two artists! What better way to turn an everyday object, like an ergonomic office chair, into a work of art? Because, let’s be honest, who doesn’t dream of spending hours working while admiring a piece that could just as easily be on display in a museum?

    Picture the scene: you’re sitting in your new “artist” chair, answering emails at 2 a.m., but with the feeling that your back is protected. That’s the pinnacle of modern luxury! Who needs a tropical vacation when you can nestle into the comfort of a chair that shouts at you every minute: “Look how elegant I am, you should take a photo for Instagram”?

    These New York artists must have spent hours designing these marvels. Maybe they even took yoga classes to make sure every curve of the chair is not only aesthetic but also good for your posture. After all, who needs good ergonomics when you can have a chair that looks like a modern sculpture, right?

    And then there’s the price. Of course, there’s nothing better than a chair that lets you sit comfortably while wrecking your budget for the month. But look on the bright side: at least you’ll have a beautiful piece to show your visitors, to prove you have good taste… even if you have to eat instant noodles for a few weeks.

    In the end, this partnership between Herman Miller and these New York artists is proof that art and comfort can coexist. But at what price? The answer, my friends, lies in the number of broken backs and lightened wallets left weeping.

    So, if you’re ready to invest in a chair that could just as well be a throne for a king (or queen) of remote work, go ahead and dive into this ocean of creativity. Just don’t forget to take a break to admire your ergonomic masterpiece. Who knows, maybe one day it will be displayed in a museum, to the delight of all humanity.

    #HermanMiller #ArtChairs #Ergonomics #Design #Lifestyle
  • Ah, the enchanting world of "Beautiful Accessibility"—where design meets a sweet sprinkle of dignity and a dollop of empathy. Isn’t it just delightful how we’ve collectively decided that making things accessible should also be aesthetically pleasing? Because, clearly, having a ramp that doesn’t double as a modern art installation would be just too much to ask.

    Gone are the days when accessibility was seen as a dull, clunky afterthought. Now, we’re on a quest to make sure that every wheelchair ramp looks like it was sculpted by Michelangelo himself. Who needs functionality when you can have a piece of art that also serves as a means of entry? You know, it’s almost like we’re saying, “Why should people who need help have to sacrifice beauty for practicality?”

    Let’s talk about that “rigid, rough, and unfriendly” stereotype of accessibility. Sure, it’s easy to dismiss these concerns. Just slap a coat of trendy paint on a handrail and voilà! You’ve got a “beautifully accessible” structure that’s just as likely to send someone flying off the side as it is to help them reach the door. But hey, at least it’s pretty to look at as they tumble—right?

    And let’s not overlook the underlying question: for whom are we really designing? Is it for the people who need accessibility, or is it for the fleeting approval of the Instagram crowd? If it’s the latter, then congratulations! You’re on the fast track to a trend that will inevitably fade faster than last season’s fashion. Remember, folks, the latest hashtag isn’t ‘#AccessibilityForAll’; it’s ‘#AccessibilityIsTheNewBlack,’ and we all know how long that lasts in the fickle world of social media.

    Now, let’s sprinkle in some empathy, shall we? Because nothing says “I care” quite like a designer who has spent five minutes contemplating the plight of those who can’t navigate the “avant-garde” staircase that serves no purpose other than to look chic in a photo. Empathy is key, but please, let’s not take it too far. After all, who has time to engage deeply with real human needs when there’s a dazzling design competition to win?

    So, as we stand at the crossroads of functionality and aesthetics, let’s all raise a glass to the idea of "Beautiful Accessibility." May it forever remain beautifully ironic and, of course, aesthetically pleasing—after all, what’s more dignified than a thoughtfully designed ramp that looks like it belongs in a museum, even if it makes getting into that museum a bit of a challenge?

    #BeautifulAccessibility #DesignWithEmpathy #AccessibilityMatters #DignityInDesign #IronyInAccessibility
    Beautiful accessibility: designing for dignity and building with empathy
    More than a technique or a best-practices guide, beautiful accessibility is an attitude. It means reflecting on and questioning why, how and for whom we design. Accessibility is often perceived as something rigid, rough and unfriendly, aesthetically
  • Ah, the magical world of 3D printing! Who would have thought that the secrets of crafting quality cosplay props could be unlocked with just a printer and a little patience? It’s almost like we’re living in a sci-fi movie, but instead of flying cars and robot servants, we get to print our own Spider-Man masks and Thor's hammers. Because, let’s face it, who needs actual craftsmanship when you have a 3D printer and a dash of delusion?

    Picture this: You walk into a convention, proudly wearing your freshly printed Spider-Man mask—its edges rough and its colors a little off, reminiscent of the last time you tried your hand at a DIY project. You can almost hear the gasps of admiration from fellow cosplayers, or maybe that’s just them trying to suppress their laughter. But hey, you saved a ton of time with that “minimal post-processing”! Who knew that “minimal” could also mean “looks like it was chewed up by a printer that’s had one too many?”

    And let’s not forget about Thor’s hammer, Mjölnir. Because nothing says “God of Thunder” quite like a clunky piece of plastic that could double as a doorstop. The best part? You can claim it’s a unique interpretation of Asgardian craftsmanship. Who needs authenticity when you have the power of 3D printing? Just make sure to avoid any actual thunderstorms; after all, we wouldn’t want your new prop to melt in the rain, or worse, be mistaken for a water gun!

    Now, if you’re worried about how long it takes to print your masterpiece, fear not! You can always get lost in the mesmerizing whirl of the print head, contemplating the deeper meaning of life while waiting for hours to see if your creation will actually resemble the image you downloaded from the internet. Spoiler alert: it probably won’t, but that’s part of the fun, right?

    Oh, and let’s not forget the joy of explaining to your friends that you “crafted” these pieces with care, while they’re blissfully unaware that you merely pressed a few buttons and hoped for the best. After all, why invest time in traditional crafting techniques when you can embrace the magic of technology?

    So, grab your 3D printer and let your imagination run wild! Who needs actual skills when you can print your dreams, layer by layer, with a side of mediocre results? Just remember, in the world of cosplay, it’s not about the journey; it’s about how many likes you can get on that Instagram post of you holding your half-finished Thor’s hammer like it’s the Holy Grail of cosplay.

    #3DPrinting #CosplayProps #SpiderMan #ThorsHammer #DIYDelusions
    How to 3D print cosplay props: From Spider-Man masks to Thor's hammer
    Start crafting quality cosplay props with minimal post-processing.