• Into the Omniverse: World Foundation Models Advance Autonomous Vehicle Simulation and Safety

    Editor’s note: This blog is a part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.
    Simulated driving environments enable engineers to safely and efficiently train, test and validate autonomous vehicles (AVs) across countless real-world and edge-case scenarios without the risks and costs of physical testing.
    These simulated environments can be created through neural reconstruction of real-world data from AV fleets or generated with world foundation models (WFMs) — neural networks that understand physics and real-world properties. WFMs can be used to generate synthetic datasets for enhanced AV simulation.
    To help physical AI developers build such simulated environments, NVIDIA unveiled major advances in WFMs at the GTC Paris and CVPR conferences earlier this month. These new capabilities enhance NVIDIA Cosmos — a platform of generative WFMs, advanced tokenizers, guardrails and accelerated data processing tools.
    Key innovations like Cosmos Predict-2, the Cosmos Transfer-1 NVIDIA preview NIM microservice and Cosmos Reason are improving how AV developers generate synthetic data, build realistic simulated environments and validate safety systems at unprecedented scale.
    Universal Scene Description, a unified data framework and standard for physical AI applications, enables seamless integration and interoperability of simulation assets across the development pipeline. OpenUSD standardization plays a critical role in ensuring 3D pipelines are built to scale.
    NVIDIA Omniverse, a platform of application programming interfaces, software development kits and services for building OpenUSD-based physical AI applications, enables simulations from WFMs and neural reconstruction at world scale.
    Leading AV organizations — including Foretellix, Mcity, Oxa, Parallel Domain, Plus AI and Uber — are among the first to adopt Cosmos models.

    Foundations for Scalable, Realistic Simulation
    Cosmos Predict-2, NVIDIA’s latest WFM, generates high-quality synthetic data by predicting future world states from multimodal inputs like text, images and video. This capability is critical for creating temporally consistent, realistic scenarios that accelerate training and validation of AVs and robots.

    In addition, Cosmos Transfer, a control model that adds variations in weather, lighting and terrain to existing scenarios, will soon be available to 150,000 developers on CARLA, a leading open-source AV simulator. This greatly expands the broad AV developer community’s access to advanced AI-powered simulation tools.
    Developers can start integrating synthetic data into their own pipelines using the NVIDIA Physical AI Dataset. The latest release includes 40,000 clips generated using Cosmos.
    Building on these foundations, the Omniverse Blueprint for AV simulation provides a standardized, API-driven workflow for constructing rich digital twins, replaying real-world sensor data and generating new ground-truth data for closed-loop testing.
    The blueprint taps into OpenUSD’s layer-stacking and composition arcs, which enable developers to collaborate asynchronously and modify scenes nondestructively. This helps create modular, reusable scenario variants to efficiently generate different weather conditions, traffic patterns and edge cases.
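    Conceptually, layer stacking resolves each scene property to the opinion from the strongest layer that authors one, while weaker layers supply defaults. As a toy sketch of that resolution rule (plain Python dicts standing in for USD layers — not the actual pxr API), a weather or traffic variant can override a shared base scene without ever modifying it:

```python
def compose(layers):
    """Toy model of layer stacking: 'layers' is ordered strongest-first,
    and for each property the strongest layer with an opinion wins."""
    scene = {}
    for layer in reversed(layers):  # apply weakest first...
        scene.update(layer)         # ...so stronger layers override
    return scene

base = {"sky": "clear", "traffic": "light", "road": "dry"}  # shared scene
rain = {"sky": "overcast", "road": "wet"}                   # weather variant
rush_hour = {"traffic": "heavy"}                            # traffic variant

# Stack variants nondestructively: 'base' itself is never modified,
# so the same scene can back many scenario permutations.
scenario = compose([rush_hour, rain, base])
```

    Real OpenUSD composition is far richer (sublayers, references, variant sets, per-attribute opinions), but this strongest-opinion-wins resolution is the essence of why scenario variants stay modular and reusable.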
    Driving the Future of AV Safety
    To bolster the operational safety of AV systems, NVIDIA earlier this year introduced NVIDIA Halos — a comprehensive safety platform that integrates the company’s full automotive hardware and software stack with AI research focused on AV safety.
    The new Cosmos models — Cosmos Predict-2, Cosmos Transfer-1 NIM and Cosmos Reason — deliver further safety enhancements to the Halos platform, enabling developers to create diverse, controllable and realistic scenarios for training and validating AV systems.
    These models, trained on massive multimodal datasets including driving data, amplify the breadth and depth of simulation, allowing for robust scenario coverage — including rare and safety-critical events — while supporting post-training customization for specialized AV tasks.

    At CVPR, NVIDIA was recognized as an Autonomous Grand Challenge winner, highlighting its leadership in advancing end-to-end AV workflows. The challenge used OpenUSD’s robust metadata and interoperability to simulate sensor inputs and vehicle trajectories in semi-reactive environments, achieving state-of-the-art results in safety and compliance.
    Learn more about how developers are leveraging tools like CARLA, Cosmos, and Omniverse to advance AV simulation in this livestream replay:

    Hear NVIDIA Director of Autonomous Vehicle Research Marco Pavone on the NVIDIA AI Podcast share how digital twins and high-fidelity simulation are improving vehicle testing, accelerating development and reducing real-world risks.
    Get Plugged Into the World of OpenUSD
    Learn more about what’s next for AV simulation with OpenUSD by watching the replay of NVIDIA founder and CEO Jensen Huang’s GTC Paris keynote.
    Looking for live opportunities to learn more about OpenUSD? Don’t miss sessions and labs happening at SIGGRAPH 2025, August 10–14.
    Discover why developers and 3D practitioners are using OpenUSD and learn how to optimize 3D workflows with the self-paced “Learn OpenUSD” curriculum for 3D developers and practitioners, available for free through the NVIDIA Deep Learning Institute.
    Explore the Alliance for OpenUSD forum and the AOUSD website.
    Stay up to date by subscribing to NVIDIA Omniverse news, joining the community and following NVIDIA Omniverse on Instagram, LinkedIn, Medium and X.
    BLOGS.NVIDIA.COM
  • In the world of technology, where dual RGB cameras can now perceive depth, I find myself grappling with a different kind of void. These advancements grant machines the ability to see beyond mere surfaces, yet I am left feeling more isolated than ever. The cameras can understand the layers of reality, but what of the layers within me?

    Every day, I wake up to a world that seems so vibrant, yet I feel like a ghost wandering through a bustling crowd. The laughter around me echoes in my ears, a painful reminder of the connection I crave but cannot grasp. Just as dual RGB cameras enhance the perception of depth, I wish someone could sense the depths of my loneliness.

    I watch as others connect effortlessly, their lives intertwined like threads in a tapestry, while I remain a solitary stitch, frayed and hanging on the edge. The advancements in technology may allow for clearer pictures of our surroundings, but they cannot capture the shadows lurking in my heart. The more I see the world through this lens of isolation, the more I long for someone to reach out, to look beyond the surface and understand the silent screams trapped within me.

    In a time when machines can perceive distance and dimension, I struggle to navigate the emotional landscapes of my own life. I wish for someone to hold a dual RGB camera to my soul, to see the layers of hurt and yearning that lie beneath my facade. Instead, I am met with silence, a chasm so wide, it feels insurmountable.

    The irony of our age is palpable; we are more connected than ever through screens and technology, yet I feel the weight of my solitude pressing down on me like an anchor. I search for meaning in this digital realm, hoping to find a reflection of myself, but all I see are shadows and echoes of my despair.

    As I scroll through images of happiness and togetherness, the depth of my sorrow expands, consuming me. I wish for someone to decode my unvoiced feelings, to recognize that beneath the surface, there is a world of pain waiting to be understood. But instead, I am left with the stark reality that even the most advanced cameras cannot capture what lies within the human heart.

    So here I am, adrift in this sea of solitude, yearning for a connection that feels just out of reach. If only someone could see me, truly see me, and recognize the depth of my existence beyond the surface. Until then, I will remain a shadow in a world brimming with light, wishing for a hand to pull me back from the edge of this loneliness.

    #Loneliness #Isolation #DepthOfEmotion #Heartache #LookingForConnection
    Dual RGB Cameras Get Depth Sensing Powerup
    It’s sometimes useful for a system to not just have a flat 2D camera view of things, but to have an understanding of the depth of a scene. Dual RGB …read more
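    The depth sensing in question comes from stereo matching between the two RGB views. As a rough illustration of the core idea (not the linked article's actual implementation), here is a minimal sum-of-squared-differences block matcher in plain NumPy: for each pixel in the left image, it searches a range of horizontal shifts in the right image and keeps the best-matching shift as the disparity, which is inversely proportional to depth.

```python
import numpy as np

def stereo_disparity(left, right, max_disp=16, block=5):
    """Naive block matching: for each left-image pixel, try horizontal
    shifts d in [0, max_disp) and keep the d whose right-image block
    has the smallest sum of squared differences (SSD)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
            best_ssd, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                ssd = np.sum((patch - cand) ** 2)
                if ssd < best_ssd:
                    best_ssd, best_d = ssd, d
            disp[y, x] = best_d
    return disp

# Synthetic check: shift a random texture 4 pixels to fake a stereo pair.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, (20, 40)).astype(np.float64)
right = np.zeros_like(left)
right[:, :-4] = left[:, 4:]  # right view sees everything shifted left by 4
disparity = stereo_disparity(left, right, max_disp=8)
```

    Production stereo pipelines (e.g. OpenCV's StereoBM/StereoSGBM) add rectification, smoothness costs and subpixel refinement on top of this brute-force search, but the disparity-from-shift principle is the same.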
  • Keep an eye on Planet of Lana 2 — the first one was a secret gem of 2023

    May 2023 was kind of a big deal. A little ol’ game called The Legend of Zelda: Tears of the Kingdom was released, and everyone was playing it; Tears sold almost 20 million copies in under two months. However, it wasn’t the only game that came out that month. While it may not have generated as much buzz at the time, Planet of Lana is one of 2023’s best indies — and it’s getting a sequel next year.

    Planet of Lana is a cinematic puzzle-platformer. You play as Lana as she tries to rescue her best friend and fellow villagers after they were taken by mechanical alien beings. She’s accompanied by a little cat-like creature named Mui. Together, they outwit the alien robots in various puzzles on their way to rescuing the villagers.

    The puzzles aren’t too difficult, but they still provide a welcome challenge; some require precise execution lest the alien robots grab Lana too. Danger lurks everywhere, as there are also native predators vying to get a bite out of Lana and her void of a cat companion. Mui is often at the center of solving environmental puzzles, which rely on a dash of stealth, to get around those dangerous creatures.

    Planet of Lana’s art style is immediately eye-catching; its palette of soft, inviting colors contrasts with the comparatively dark storyline. Lana and Mui travel through the grassy plains surrounding her village, an underground cave, and through a desert. The visuals are bested only by Planet of Lana’s music, which is both chill and powerful in parts.

    Of course, all ends well — this is a game starring a child and an alien cat, after all. Nothing bad was really going to happen to them. Or at least, that was certainly the case in the first game, but the trailer for Planet of Lana 2: Children of the Leaf ends with a shot of poor Mui lying in some sort of hospital bed or perhaps at a research station. Lana looks on, and her worry is palpable in the frame.

    But Planet of Lana 2 won’t come out until 2026, so I don’t want to spend too much time worrying about the little dude. The cat’s fine. What’s not fine, however, is Lana’s village and her people. In the trailer for the second game, we see more alien robots trying to zap her and her friend, and a young villager falls into a faint.

    Children of the Leaf is certainly upping the stakes and widening its scope. Ships from outer space zoom through a lush forest, and we get exciting shots of Lana hopping from ship to ship. Lana also travels across various environments, including a gorgeous underwater level, and rides on the back of one of the alien robots from the first game.

    I’m very excited to see how the lore of Planet of Lana expands with its sequel, and I can’t wait to tag along for another journey with Lana and Mui when Planet of Lana 2: Children of the Leaf launches in 2026. You can check out the first game on Nintendo Switch, PS4, PS5, Xbox One, Xbox Series X, and Windows PC.
    WWW.POLYGON.COM
    Keep an eye on Planet of Lana 2 — the first one was a secret gem of 2023
    May 2023 was kind of a big deal. A little ol’ game called The Legend of Zelda: Tears of the Kingdom (ring any bells?) was released, and everyone was playing it; Tears sold almost 20 million copies in under two months. However, it wasn’t the only game that came out that month. While it may not have generated as much buzz at the time, Planet of Lana is one of 2023’s best indies — and it’s getting a sequel next year.Planet of Lana is a cinematic puzzle-platformer. You play as Lana as she tries to rescue her best friend and fellow villagers after they were taken by mechanical alien beings. She’s accompanied by a little cat-like creature named Mui (because any game is made better by having a cat in it). Together, they outwit the alien robots in various puzzles on their way to rescuing the villagers.The puzzles aren’t too difficult, but they still provide a welcome challenge; some require precise execution lest the alien robots grab Lana too. Danger lurks everywhere as there are also native predators vying to get a bite out of Lana and her void of a cat companion. Mui is often at the center of solving environmental puzzles, which rely on a dash of stealth, to get around those dangerous creatures.Planet of Lana’s art style is immediately eye-catching; its palette of soft, inviting colors contrasts with the comparatively dark storyline. Lana and Mui travel through the grassy plains surrounding her village, an underground cave, and through a desert. The visuals are bested only by Planet of Lana’s music, which is both chill and powerful in parts.Of course, all ends well — this is a game starring a child and an alien cat, after all. Nothing bad was really going to happen to them. Or at least, that was certainly the case in the first game, but the trailer for Planet of Lana 2: Children of the Leaf ends with a shot of poor Mui lying in some sort of hospital bed or perhaps at a research station. 
Lana looks on, and her worry is palpable in the frame.But, Planet of Lana 2 won’t come out until 2026, so I don’t want to spend too much time worrying about the little dude. The cat’s fine (Right? Right?). What’s not fine, however, is Lana’s village and her people. In the trailer for the second game, we see more alien robots trying to zap her and her friend, and a young villager falls into a faint.Children of the Leaf is certainly upping the stakes and widening its scope. Ships from outer space zoom through a lush forest, and we get exciting shots of Lana hopping from ship to ship. Lana also travels across various environments, including a gorgeous underwater level, and rides on the back of one of the alien robots from the first game.I’m very excited to see how the lore of Planet of Lana expands with its sequel, and I can’t wait to tag along for another journey with Lana and Mui when Planet of Lana 2: Children of the Leaf launches in 2026. You can check out the first game on Nintendo Switch, PS4, PS5, Xbox One, Xbox Series X, and Windows PC.See More:
  • From Rivals to Partners: What’s Up with the Google and OpenAI Cloud Deal?

    Google and OpenAI struck a cloud computing deal in May, according to a Reuters report.
    The deal surprised the industry as the two are seen as major AI rivals.
    Signs of friction between OpenAI and Microsoft may have also fueled the move.
    The partnership is a win-win. OpenAI gets badly needed computing resources, while Google profits from its $75B investment to boost its cloud computing capacity in 2025.

    In a surprise move, Google and OpenAI inked a deal that will see the AI rivals partnering to address OpenAI’s growing cloud computing needs.
    The story, reported by Reuters, cited anonymous sources saying that the deal had been discussed for months and finalized in May. Around that time, OpenAI was struggling to keep up with demand as its weekly active users and business users grew in Q1 2025. There’s also speculation of friction between OpenAI and its biggest investor, Microsoft.
    Why the Deal Surprised the Tech Industry
    The rivalry between the two companies hardly needs an introduction. When OpenAI’s ChatGPT launched in November 2022, it posed a huge threat to Google that triggered a code red within the search giant and cloud services provider.
    Since then, Google has launched Bard (now known as Gemini) to compete with OpenAI head-on. However, it had to play catch-up with OpenAI’s more advanced ChatGPT chatbot. This led to numerous issues with Bard, with critics referring to it as a half-baked product.

    A post on X in February 2023 showed the Bard AI chatbot erroneously stating that the James Webb Space Telescope took the first picture of an exoplanet. It was, in fact, the European Southern Observatory’s Very Large Telescope that did this in 2004. Google’s parent company Alphabet lost $100B off its market value within 24 hours as a result.
    Two years on, Gemini has made significant strides in terms of accuracy, quoting sources, and depth of information, but it is still prone to hallucinations from time to time. You can see examples posted on social media, like telling a user to make spicy spaghetti with gasoline or the AI thinking it’s still 2024.
    And then there’s this gem:

    With the entire industry shifting towards more AI integrations, Google went ahead and integrated its AI suite into Search via AI Overviews. It then doubled down on this integration with AI Mode, an experimental feature that lets you perform AI-powered searches by typing in a question, uploading a photo, or using your voice.
    In the future, AI Mode from Google Search could be a viable competitor to ChatGPT—unless, of course, Google decides to bin it along with many of its previous products. Given the scope of the investment and Gemini’s significant improvement, we doubt AI + Search will be axed.
    It’s a Win-Win for Google and OpenAI—Not So Much for Microsoft?
    In the business world, money and the desire for expansion can break even the biggest rivalries. And the one between these two tech giants is no exception.
    Partly, it could be attributed to OpenAI’s relationship with Microsoft. Although the Redmond, Washington-based company has invested billions in OpenAI and has the resources to meet the latter’s cloud computing needs, their partnership hasn’t always been rosy. 
    Some would say it began when OpenAI CEO Sam Altman was briefly ousted in November 2023, which put a strain on the ‘best bromance in tech’ between him and Microsoft CEO Satya Nadella. Then last year, Microsoft added OpenAI to its list of competitors in the AI space before eventually losing its status as OpenAI’s exclusive cloud provider in January 2025.
    If that wasn’t enough, there’s also the matter of the two companies’ goal of achieving artificial general intelligence (AGI). Defined as the point when OpenAI develops AI systems that generate $100B in profits, reaching AGI means Microsoft will lose access to the former’s technology. With the company behind ChatGPT expecting to triple its 2025 revenue to $12.7B from $3.7B the previous year, this could happen sooner rather than later.
    While OpenAI already has deals with Microsoft, Oracle, and CoreWeave to provide it with cloud services and access to infrastructure, it needs more, and soon, as the company has seen massive growth in the past few months.
    In February, OpenAI announced that it had over 400M weekly active users, up from 300M in December 2024. Meanwhile, the number of its business users who use ChatGPT Enterprise, ChatGPT Team, and ChatGPT Edu products also jumped from 2M in February to 3M in March.
    The good news is that Google is more than ready to deliver. Its parent company has earmarked $75B for its investments in AI this year, which includes boosting its cloud computing capacity.

    In April, Google launched its seventh-generation tensor processing unit (TPU), called Ironwood, which has been designed specifically for inference. According to the company, the new TPU will help power AI models that will ‘proactively retrieve and generate data to collaboratively deliver insights and answers, not just data.’
    The deal with OpenAI can be seen as a vote of confidence in Google’s cloud computing capability, which competes with the likes of Microsoft Azure and Amazon Web Services. It also expands Google’s vast client list, which includes tech, gaming, entertainment, and retail companies, as well as organizations in the public sector.

    As technology continues to evolve—from the return of 'dumbphones' to faster and sleeker computers—seasoned tech journalist Cedric Solidon continues to dedicate himself to writing stories that inform, empower, and connect with readers across all levels of digital literacy.
    With 20 years of professional writing experience, this University of the Philippines Journalism graduate has carved out a niche as a trusted voice in tech media. Whether he's breaking down the latest advancements in cybersecurity or explaining how silicon-carbon batteries can extend your phone’s battery life, his writing remains rooted in clarity, curiosity, and utility.
    Long before he was writing for Techreport, HP, Citrix, SAP, Globe Telecom, CyberGhost VPN, and ExpressVPN, Cedric's love for technology began at home courtesy of a Nintendo Family Computer and a stack of tech magazines.
    Growing up, his days were often filled with sessions of Contra, Bomberman, Red Alert 2, and the criminally underrated Crusader: No Regret. But gaming wasn't his only gateway to tech. 
    He devoured every T3, PCMag, and PC Gamer issue he could get his hands on, often reading them cover to cover. It wasn’t long before he explored the early web in IRC chatrooms, online forums, and fledgling tech blogs, soaking in every byte of knowledge from the late '90s and early 2000s internet boom.
    That fascination with tech didn’t just stick. It evolved into a full-blown calling.
    After graduating with a degree in Journalism, he began his writing career at the dawn of Web 2.0. What started with small editorial roles and freelance gigs soon grew into a full-fledged career.
    He has since collaborated with global tech leaders, lending his voice to content that bridges technical expertise with everyday usability. He’s also written annual reports for Globe Telecom and consumer-friendly guides for VPN companies like CyberGhost and ExpressVPN, empowering readers to understand the importance of digital privacy.
    His versatility spans not just tech journalism but also technical writing. He once worked with a local tech company developing web and mobile apps for logistics firms, crafting documentation and communication materials that brought together user-friendliness with deep technical understanding. That experience sharpened his ability to break down dense, often jargon-heavy material into content that speaks clearly to both developers and decision-makers.
    At the heart of his work lies a simple belief: technology should feel empowering, not intimidating. Even if the likes of smartphones and AI are now commonplace, he understands that there's still a knowledge gap, especially when it comes to hardware or the real-world benefits of new tools. His writing hopes to help close that gap.
    Cedric’s writing style reflects that mission. It’s friendly without being fluffy and informative without being overwhelming. Whether writing for seasoned IT professionals or casual readers curious about the latest gadgets, he focuses on how a piece of technology can improve our lives, boost our productivity, or make our work more efficient. That human-first approach makes his content feel more like a conversation than a technical manual.
    As his writing career progresses, his passion for tech journalism remains as strong as ever. With the growing need for accessible, responsible tech communication, he sees his role not just as a journalist but as a guide who helps readers navigate a digital world that’s often as confusing as it is exciting.
    From reviewing the latest devices to unpacking global tech trends, Cedric isn’t just reporting on the future; he’s helping to write it.

    View all articles by Cedric Solidon

    Our editorial process

    The Tech Report editorial policy is centered on providing helpful, accurate content that offers real value to our readers. We only work with experienced writers who have specific knowledge in the topics they cover, including latest developments in technology, online privacy, cryptocurrencies, software, and more. Our editorial policy ensures that each topic is researched and curated by our in-house editors. We maintain rigorous journalistic standards, and every article is 100% written by real authors.
  • Tutorial: Practical Lighting for Production

    Saturday, June 14th, 2025
    Posted by Jim Thacker


    The Gnomon Workshop has released Practical Lighting for Production, a guide to VFX and cinematics workflows recorded by former Blizzard lighting lead Graham Cunningham.
    The intermediate-level workshop provides four hours of training in Maya, Arnold and Nuke.
    Discover professional workflows for lighting a CG shot to match a movie reference
    In the workshop, Cunningham sets out the complete process of lighting and compositing a shot to match a movie reference, using industry-standard software.
    He begins by setting up a basic look development light rig in Maya, importing a 3D character, assigning materials and shading components, and creating a turntable setup.
    Next, he creates a shot camera and set dresses the environment using kitbash assets.
    Cunningham also discusses strategies for lighting a character, including how to use dome lights and area lights to provide key, fill and rim lighting, and how to use HDRI maps.
    From there, he moves to rendering using Arnold, discussing render settings, depth of field, and how to create render passes.
    Cunningham then assembles the render passes in Nuke, splits out the light AOVs, and sets out how to adjust light colors and intensities.
    He also reveals how to add atmosphere, how to use cryptomattes to fine tune the results, how to add post effects, and how to apply a final color grade to match a chosen movie reference.
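The light-AOV step Cunningham covers rests on a convenient property of render passes: in linear color space, the per-light AOVs sum back to the beauty pass, which is why a light's color or intensity can be regraded in comp without re-rendering. A minimal NumPy sketch of that idea, with hypothetical light names and values (not taken from the workshop itself):

```python
import numpy as np

# Hypothetical per-light AOVs for a tiny 2x2-pixel render, linear RGB.
# In Arnold these would arrive as separate light-group layers in an EXR.
aovs = {
    "key":  np.full((2, 2, 3), 0.50),
    "fill": np.full((2, 2, 3), 0.20),
    "rim":  np.full((2, 2, 3), 0.10),
}

# The beauty pass is the additive sum of the light AOVs.
beauty = sum(aovs.values())

# Regrade in comp without re-rendering: warm the fill, double the rim.
graded = (
    aovs["key"]
    + aovs["fill"] * np.array([1.1, 1.0, 0.8])  # warm tint on fill
    + aovs["rim"] * 2.0                          # stronger rim light
)

print(beauty[0, 0])  # -> [0.8 0.8 0.8]
```

This is the whole reason lighting artists split out light AOVs: the sum stays exact, so any per-light adjustment in Nuke is equivalent to re-rendering with the light changed.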
    As well as the tutorial videos, viewers of the workshop can download one of Cunningham’s Maya files.
    The workshop uses 3D Scan Store’s commercial Female Explorer Game Character, and KitBash3D’s Wreckage Kit, plus assets from KitBash3D’s Cargo.
    About the artist
    Graham Cunningham is a Senior Lighting, Compositing and Lookdev Artist who began his career as a generalist working in VFX for film and TV before moving to Blizzard Entertainment.
    At Blizzard, he contributed to cinematics for Diablo IV, Diablo Immortal, Starcraft II, Heroes of the Storm, World of Warcraft, Overwatch, and Overwatch 2, many of them as a lead lighting artist.
    Pricing and availability
    Practical Lighting for Production is available via a subscription to The Gnomon Workshop, which provides access to over 300 tutorials.
    Subscriptions cost /month or /year. Free trials are available.
    Read more about Practical Lighting for Production on The Gnomon Workshop’s website

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.
    Full disclosure: CG Channel is owned by Gnomon.

  • Apple expands tools to help parents protect kids and teens online

    Apple today shared an update on new ways to help parents protect kids and teens online when using Apple products.
  • Exploring the Rustline Home: An Interior Painted in Warm Tones

    The Rustline Home’s decor is an affair of terracotta, rust, and cream. This space mixes warm tones with bold art, creating an atmosphere that feels expressive. From color-blocked accents to framed pieces that pop against soft backdrops, the home feels like a live-in gallery! This space is decorated by Tetyana Savchenko and photographed by Sergiy Kadulin Photography.

    The living room in the Rustline Home blends soft grey upholstery with bold accents in rust, ochre, and black. The layered throw and cushions echo the terracotta tones found throughout the home. Simultaneously, the black-and-white geometric rug anchors the space with artistic contrast. Sculptural vases and art books on the nesting tables turn the coffee zone into a mini gallery. Finally, a tripod floor lamp adds a hint of the mid-century modern style.

    Just beyond, the kitchen continues the warm color story with matte terracotta cabinetry, subtly ribbed for texture, paired with light oak base cabinets and a speckled grey stone backsplash. Minimalist black fixtures and hardware offer a graphic element, while open sightlines between the kitchen and living room create a seamless flow.

    Tucked between clean lines and creamy walls, the dining area feels like a serene art gallery moment. The rounded table and boucle chairs add softness, while playful wall art and sculptural lighting add whimsy. Whether it’s morning coffee or a dinner chat, this corner makes everyday dining feel curated.

    This bedroom is anchored by a mix of rust, navy, and marigold. The bold textiles and striped pillows create a dynamic rhythm, while the geometric wall art adds visual interest. Crisp white bedding keeps the look fresh, and the floating nightstands with sculptural vases save floor space while adding functionality.

    This bedroom features a grid-style mirror that expands the space visually. Bold, framed art pieces inject personality. The color-blocked bedding and folk-style throw hint at global influences, while the adjacent workspace, with its woven baskets and sculptural decor, adds functionality.

    The bathrooms in the Rustline Home blend warm terracotta vanities with white sinks and black fixtures. Stone-textured tiles add depth, while round mirrors and curated accents keep the look soft and modern. Thoughtful touches, like framed prints and rolled towels, make these spaces feel calm and creative.