• Into the Omniverse: World Foundation Models Advance Autonomous Vehicle Simulation and Safety

    Editor’s note: This blog is a part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.
    Simulated driving environments enable engineers to safely and efficiently train, test and validate autonomous vehicles (AVs) across countless real-world and edge-case scenarios without the risks and costs of physical testing.
    These simulated environments can be created through neural reconstruction of real-world data from AV fleets or generated with world foundation models (WFMs) — neural networks that understand physics and real-world properties. WFMs can be used to generate synthetic datasets for enhanced AV simulation.
    To help physical AI developers build such simulated environments, NVIDIA unveiled major advances in WFMs at the GTC Paris and CVPR conferences earlier this month. These new capabilities enhance NVIDIA Cosmos — a platform of generative WFMs, advanced tokenizers, guardrails and accelerated data processing tools.
    Key innovations like Cosmos Predict-2, the Cosmos Transfer-1 NVIDIA preview NIM microservice and Cosmos Reason are improving how AV developers generate synthetic data, build realistic simulated environments and validate safety systems at unprecedented scale.
    Universal Scene Description (OpenUSD), a unified data framework and standard for physical AI applications, enables seamless integration and interoperability of simulation assets across the development pipeline. OpenUSD standardization plays a critical role in ensuring 3D pipelines are built to scale.
    NVIDIA Omniverse, a platform of application programming interfaces, software development kits and services for building OpenUSD-based physical AI applications, enables simulations from WFMs and neural reconstruction at world scale.
    Leading AV organizations — including Foretellix, Mcity, Oxa, Parallel Domain, Plus AI and Uber — are among the first to adopt Cosmos models.

    Foundations for Scalable, Realistic Simulation
    Cosmos Predict-2, NVIDIA’s latest WFM, generates high-quality synthetic data by predicting future world states from multimodal inputs like text, images and video. This capability is critical for creating temporally consistent, realistic scenarios that accelerate training and validation of AVs and robots.

    In addition, Cosmos Transfer, a control model that adds variations in weather, lighting and terrain to existing scenarios, will soon be available to 150,000 developers on CARLA, a leading open-source AV simulator. This greatly expands the broad AV developer community’s access to advanced AI-powered simulation tools.
    Developers can start integrating synthetic data into their own pipelines using the NVIDIA Physical AI Dataset. The latest release includes 40,000 clips generated using Cosmos.
    Building on these foundations, the Omniverse Blueprint for AV simulation provides a standardized, API-driven workflow for constructing rich digital twins, replaying real-world sensor data and generating new ground-truth data for closed-loop testing.
    The blueprint taps into OpenUSD’s layer-stacking and composition arcs, which enable developers to collaborate asynchronously and modify scenes nondestructively. This helps create modular, reusable scenario variants to efficiently generate different weather conditions, traffic patterns and edge cases.
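    The nondestructive override semantics described above can be sketched conceptually: each layer holds only sparse "opinions" about scene attributes, and composing a layer stack resolves every attribute to the opinion from the strongest layer that expresses one. The toy Python below is purely an illustration of that resolution rule, not the actual OpenUSD API (the real implementation lives in the `pxr` modules, and the prim paths and attribute names here are hypothetical).

    ```python
    # Conceptual sketch of OpenUSD-style layer stacking (NOT the real pxr API).
    # Each layer maps prim paths to sparse attribute "opinions"; composition
    # resolves each attribute to the strongest layer's opinion.

    def compose(layer_stack):
        """Compose layers ordered strongest-first, as in a USD layer stack."""
        scene = {}
        # Walk weakest-to-strongest so stronger opinions overwrite weaker ones.
        for layer in reversed(layer_stack):
            for prim_path, attrs in layer.items():
                scene.setdefault(prim_path, {}).update(attrs)
        return scene

    # Base scene description (hypothetical attribute names for illustration).
    base = {
        "/World/Env":     {"weather": "clear", "time_of_day": "noon"},
        "/World/Traffic": {"density": "medium"},
    }

    # A reusable variant layer: it overrides only what it needs to,
    # leaving the base layer untouched (nondestructive editing).
    rainy_rush_hour = {
        "/World/Env":     {"weather": "rain"},
        "/World/Traffic": {"density": "heavy"},
    }

    composed = compose([rainy_rush_hour, base])  # strongest layer first
    print(composed["/World/Env"])      # weather overridden, time_of_day inherited
    print(composed["/World/Traffic"])  # density overridden
    ```

    Because variants are just sparse override layers, swapping `rainy_rush_hour` for a different weather or traffic layer yields a new scenario without duplicating or editing the base scene, which is the property the blueprint exploits for scenario generation.
    
    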
    Driving the Future of AV Safety
    To bolster the operational safety of AV systems, NVIDIA earlier this year introduced NVIDIA Halos — a comprehensive safety platform that integrates the company’s full automotive hardware and software stack with AI research focused on AV safety.
    The new Cosmos models — Cosmos Predict-2, Cosmos Transfer-1 NIM and Cosmos Reason — deliver further safety enhancements to the Halos platform, enabling developers to create diverse, controllable and realistic scenarios for training and validating AV systems.
    These models, trained on massive multimodal datasets including driving data, amplify the breadth and depth of simulation, allowing for robust scenario coverage — including rare and safety-critical events — while supporting post-training customization for specialized AV tasks.

    At CVPR, NVIDIA was recognized as an Autonomous Grand Challenge winner, highlighting its leadership in advancing end-to-end AV workflows. The challenge used OpenUSD’s robust metadata and interoperability to simulate sensor inputs and vehicle trajectories in semi-reactive environments, achieving state-of-the-art results in safety and compliance.
    Learn more about how developers are using tools like CARLA, Cosmos and Omniverse to advance AV simulation in this livestream replay:

    On the NVIDIA AI Podcast, hear NVIDIA Director of Autonomous Vehicle Research Marco Pavone share how digital twins and high-fidelity simulation are improving vehicle testing, accelerating development and reducing real-world risks.
    Get Plugged Into the World of OpenUSD
    Learn more about what’s next for AV simulation with OpenUSD by watching the replay of NVIDIA founder and CEO Jensen Huang’s GTC Paris keynote.
    Looking for live opportunities to learn more about OpenUSD? Don’t miss sessions and labs happening at SIGGRAPH 2025, August 10–14.
    Discover why developers and 3D practitioners are using OpenUSD and learn how to optimize 3D workflows with the self-paced “Learn OpenUSD” curriculum, available for free through the NVIDIA Deep Learning Institute.
    Explore the Alliance for OpenUSD forum and the AOUSD website.
    Stay up to date by subscribing to NVIDIA Omniverse news, joining the community and following NVIDIA Omniverse on Instagram, LinkedIn, Medium and X.
    BLOGS.NVIDIA.COM
  • Monitoring and Support Engineer at Keyword Studios

    Monitoring and Support Engineer
    Keyword Studios — Pasig City, Metro Manila, Philippines

    We are seeking an experienced Monitoring and Support Engineer to support the technology initiatives of the IT Infrastructure team at Keywords. The Monitoring and Support Engineer will be responsible for follow-the-sun monitoring of IT infrastructure, prompt reaction to all infrastructure incidents, and primary resolution of infrastructure incidents and support requests.

    Responsibilities
    Full scope of tasks, including but not limited to:
    - Ensure that all incidents are handled within SLAs.
    - Perform initial troubleshooting of infrastructure incidents, restore services and escalate to level 3 experts if necessary.
    - Ensure maximum network and service availability through proactive monitoring.
    - Ensure all incident and alert tickets contain detailed technical information.
    - Participate in problem management processes.
    - Ensure that all incidents and critical alerts are documented and escalated if necessary.
    - Ensure effective communication to customers about incidents and outages.
    - Identify opportunities for process improvement and efficiency enhancements.
    - Contribute to documentation that reduces BAU support activities by ensuring the Service Desks have adequate knowledge articles to close support tickets at level 1.
    - Participate in reporting on monitored data and incidents on company infrastructure.
    - Implement best practices and lessons learned from initiatives and projects to optimize future outcomes.

    Requirements
    - Bachelor’s degree in a relevant technical field or equivalent experience.
    - Understanding of IT infrastructure technologies, standards and trends.
    - Technical background with 3+ years’ experience in an IT operations role delivering IT infrastructure support, monitoring and incident management.
    - Technical knowledge of the Microsoft stack, Windows networking, Active Directory and Exchange.
    - Technical knowledge of network, storage and server equipment, virtualization and production setups.
    - Exceptional communication and presentation skills, with the ability to articulate technical concepts to non-technical audiences.
    - Strong analytical and problem-solving skills.
    - Strong customer service orientation.

    Benefits
    - Great Place to Work certified for 4 consecutive years
    - Flexible work arrangement
    - Global exposure
  • Apple WWDC 2025: News and analysis

    Apple’s Worldwide Developers Conference 2025 saw a range of announcements that offered a glimpse into the future of Apple’s software design and artificial intelligence (AI) strategy, highlighted by a new design language called Liquid Glass and by Apple Intelligence news.

    Liquid Glass is designed to add translucency and dynamic movement to Apple’s user interface across iPhones, iPads, Macs, Apple Watches, and Apple TVs. The overhaul aims to make interactions with elements like buttons and sidebars adapt contextually.

    However, the real news of WWDC could be what we didn’t see. Analysts had high expectations for Apple’s AI strategy, and while Apple Intelligence was talked about, many market watchers reported that it lacked the innovation that has come from Google’s and Microsoft’s generative AI (genAI) rollouts.

    The question of whether Apple is playing catch-up lingered at WWDC 2025, and comments from Apple execs about delays to a significant AI overhaul for Siri were apparently interpreted as a setback by investors, leading to a negative reaction and drop in stock price.

    Follow this page for Computerworld‘s coverage of WWDC25.

    WWDC25 news and analysis

    Apple’s AI Revolution: Insights from WWDC

    June 13, 2025: At Apple’s big developer event, developers were served a feast of AI-related updates, including APIs that let them use Apple Intelligence in their apps and ChatGPT-augmentation from within Xcode. As a development environment, Apple has secured its future, with Macs forming the most computationally performant systems you can affordably purchase for the job.

    For developers, Apple’s tools get a lot better for AI

    June 12, 2025: Apple announced one important AI update at WWDC this week: the introduction of support for third-party large language models (LLMs) such as ChatGPT from within Xcode. It’s a big step that should benefit developers, accelerating app development.

    WWDC 25: What’s new for Apple and the enterprise?

    June 11, 2025: Beyond its new Liquid Glass UI and other major improvements across its operating systems, Apple introduced a host of changes, tweaks, and enhancements for IT admins at WWDC 2025.

    What we know so far about Apple’s Liquid Glass UI

    June 10, 2025: What Apple has tried to achieve with Liquid Glass is to bring together the optical quality of glass and the fluidity of liquid to emphasize transparency and lighting when using your devices. 

    WWDC first look: How Apple is improving its ecosystem

    June 9, 2025: While the new user interface design Apple execs highlighted at this year’s Worldwide Developers Conference might have been a bit of an eye-candy distraction, Apple’s enterprise users were not forgotten.

    Apple infuses AI into the Vision Pro

    June 8, 2025: Sluggish sales of Apple’s Vision Pro mixed reality headset haven’t dampened the company’s enthusiasm for advancing the device’s 3D computing experience, which now incorporates AI to deliver richer context and experiences.

    WWDC: Apple is about to unlock international business

    June 4, 2025: One of the more exciting pre-WWDC rumors is that Apple is preparing to make language problems go away by implementing focused artificial intelligence in Messages, which will apparently be able to translate incoming and outgoing messages on the fly. 
    WWW.COMPUTERWORLD.COM
    Apple WWDC 2025: News and analysis
    Apple’s Worldwide Developers Conference 2025 saw a range of announcements that offered a glimpse into the future of Apple’s software design and artificial intelligence (AI) strategy, highlighted by a new design language called Liquid Glass and by Apple Intelligence news. Liquid Glass is designed to add translucency and dynamic movement to Apple’s user interface across iPhones, iPads, Macs, Apple Watches, and Apple TVs. The overhaul aims to make interactions with elements like buttons and sidebars adapt contextually. However, the real news of WWDC could be what we didn’t see. Analysts had high expectations for Apple’s AI strategy, and while Apple Intelligence was talked about, many market watchers reported that it lacked the innovation that has come from Google’s and Microsoft’s generative AI (genAI) rollouts. The question of whether Apple is playing catch-up lingered at WWDC 2025, and comments from Apple execs about delays to a significant AI overhaul for Siri were apparently interpreted as a setback by investors, leading to a negative reaction and a drop in stock price. Follow this page for Computerworld’s coverage of WWDC25.
    WWDC25 news and analysis
    Apple’s AI Revolution: Insights from WWDC
    June 13, 2025: At Apple’s big developer event, developers were served a feast of AI-related updates, including APIs that let them use Apple Intelligence in their apps and ChatGPT augmentation from within Xcode. As a development environment, Apple has secured its future, with Macs forming the most computationally performant systems you can affordably purchase for the job.
    For developers, Apple’s tools get a lot better for AI
    June 12, 2025: Apple announced one important AI update at WWDC this week: the introduction of support for third-party large language models (LLMs) such as ChatGPT from within Xcode. It’s a big step that should benefit developers, accelerating app development.
    WWDC 25: What’s new for Apple and the enterprise?
    June 11, 2025: Beyond its new Liquid Glass UI and other major improvements across its operating systems, Apple introduced a horde of changes, tweaks, and enhancements for IT admins at WWDC 2025.
    What we know so far about Apple’s Liquid Glass UI
    June 10, 2025: What Apple has tried to achieve with Liquid Glass is to bring together the optical quality of glass and the fluidity of liquid to emphasize transparency and lighting when using your devices.
    WWDC first look: How Apple is improving its ecosystem
    June 9, 2025: While the new user interface design Apple execs highlighted at this year’s Worldwide Developers Conference (WWDC) might have been a bit of an eye-candy distraction, Apple’s enterprise users were not forgotten.
    Apple infuses AI into the Vision Pro
    June 8, 2025: Sluggish sales of Apple’s Vision Pro mixed reality headset haven’t dampened the company’s enthusiasm for advancing the device’s 3D computing experience, which now incorporates AI to deliver richer context and experiences.
    WWDC: Apple is about to unlock international business
    June 4, 2025: One of the more exciting pre-WWDC rumors is that Apple is preparing to make language problems go away by implementing focused artificial intelligence in Messages, which will apparently be able to translate incoming and outgoing messages on the fly.
  • Chaos Corona 13 — New features

    Get started with Corona →

    Learn everything about the new Corona 13 features from our release blog post:

    It’s here! The latest version of Corona provides a new set of artist-friendly features that make perfect renders and speedy animations more accessible and enjoyable than ever. From toon shading to GPU-accelerated animations and AI-powered image enhancements, Corona 13 goes beyond photorealism with more creative control and faster workflows for 3D artists and visualizers.
    #chaos #corona #new #features
    WWW.YOUTUBE.COM
    Chaos Corona 13 — New features
    🚀 Get started with Corona → https://bit.ly/chaos_corona Learn everything about the new Corona 13 features from our release blog post: https://www.chaos.com/blog/corona-13 It’s here! The latest version of Corona provides a new set of artist-friendly features that make perfect renders and speedy animations more accessible and enjoyable than ever. From toon shading to GPU-accelerated animations and AI-powered image enhancements, Corona 13 goes beyond photorealism with more creative control and faster workflows for 3D artists and visualizers.
  • VFXShow 296: Mission: Impossible – The Final Reckoning

    Ethan Hunt and the IMF team race against time to find a rogue artificial intelligence that can destroy mankind.
    AI, IMF & VFX: A Mission Worth Rendering
    In the latest episode of The VFXShow podcast, hosts Matt Wallin, Jason Diamond, and Mike Seymour reunite to dissect the spectacle, story, and seamless visual effects of Mission: Impossible – The Final Reckoning.
    As the eighth entry in the franchise, this chapter serves as a high-stakes, high-altitude crescendo to Tom Cruise’s nearly 30-year run as Ethan Hunt, the relentless agent of the Impossible Mission Force.
    Cruise Control: When Practical Meets Pixel
    While the narrative revolves around the existential threat of a rogue AI known as The Entity, the real heart of the film lies in its bold commitment to visceral, real-world action. The VFX team discusses how Cruise’s ongoing devotion to doing his own death-defying stunts, from leaping between biplanes to diving into the wreckage of a sunken submarine, paradoxically increases the importance of invisible VFX. From seamless digital stitching to background replacements and subtle physics enhancements, the effects work had to serve the story without ever betraying the sense of raw, in-camera danger.
    Matt, Jason, and Mike explore how VFX in this film plays a critical supporting role, cleaning up stunts, compositing dangerous sequences, and selling the illusion of globe-spanning chaos.
    Whether it’s simulating the collapse of a Cold War-era submarine, managing intricate water dynamics in Ethan’s deep-sea dive, or integrating AI-driven visualisations of nuclear catastrophe, the film leans heavily on sophisticated post work to make Cruise’s practical stunts feel even more grounded and believable.
    The team also reflects on the thematic evolution of the franchise. While the plot may twist through layers of espionage, betrayal, and digital apocalypse, including face-offs with Gabriel, doomsday cults, and geopolitical brinkmanship, it is not the team’s favourite MI film. And yet, they note, even as the story veers into sci-fi territory with sentient algorithms and bunker-bound AI traps, the VFX never overshadows the tactile performance at the film’s centre.
    Falling, Flying, Faking It Beautifully
    For fans of the franchise, visual effects, or just adrenaline-fueled cinema, this episode offers a thoughtful cinematic critique on how modern VFX artistry and old-school stuntwork can coexist to save a film that has lost its driving narrative direction.
    This week in our lineup:
    Matt Wallin  @mattwallin  www.mattwallin.com
    Follow Matt on Mastodon: @[email protected]
    Jason Diamond  @jasondiamond  www.thediamondbros.com
    Mike Seymour  @mikeseymour  www.fxguide.com
    Special thanks to Matt Wallin for the editing & production of the show with help from Jim Shen.
    #vfxshow #mission #impossible #final #reckoning
    WWW.FXGUIDE.COM
    VFXShow 296: Mission: Impossible – The Final Reckoning