• Step Inside the Vault: The ‘Borderlands’ Series Arrives on GeForce NOW

    GeForce NOW is throwing open the vault doors to welcome the legendary Borderlands series to the cloud.
    Whether a seasoned Vault Hunter or new to the mayhem of Pandora, prepare to experience the high-octane action and humor that define the series, which includes Borderlands Game of the Year Enhanced, Borderlands 2, Borderlands 3 and Borderlands: The Pre-Sequel.
    Members can explore it all before the highly anticipated Borderlands 4 arrives in the cloud at launch.
    In addition, leap into the flames and save the day in the pulse-pounding FBC: Firebreak from Remedy Entertainment on GeForce NOW.
    It’s all part of the 13 new games in the cloud this week, including the latest Genshin Impact update and advanced access for REMATCH.
    Plus, GeForce NOW’s Summer Sale is still in full swing. For a limited time, get 40% off a six-month GeForce NOW Performance membership — perfect for diving into role-playing game favorites like the Borderlands series or any of the 2,200 titles in the platform’s cloud gaming library.
    Vault Hunters Assemble
    Gear up for a world where loot is king and chaos is always just a trigger pull away. The Borderlands series is known for its wild humor, outrageous characters and nonstop action — and now, its chaotic adventures can be streamed on GeForce NOW.
    Welcome to Pandora.
    Members revisiting the classics or jumping in for the first time can start with Borderlands Game of the Year Enhanced, the original mayhem-fueled classic now polished and packed with downloadable content. The title brings Pandora to life with a fresh coat of paint, crazy loot and the same iconic humor that started it all.
    New worlds, same chaos.
    In Borderlands 2, Handsome Jack steals the show with his mix of charm and villainy. This sequel cranks up the fun and insanity with unforgettable characters and a zany storyline. For more laughs and even wilder chaos, Borderlands 3 delivers the biggest loot explosion yet, with new worlds to explore. Face off against the Calypso twins and enjoy nonstop action.
    The rise of Handsome Jack.
    The adventure blasts off with Borderlands: The Pre-Sequel, revealing how Handsome Jack became so handsome. The game throws in zero gravity, moon boots and enough sarcasm to fuel a spaceship.
    Jump in with GeForce NOW and get ready to laugh, loot and blast through Pandora, all from the cloud. With instant access and seamless streaming at up to 4K resolution with an Ultimate membership, enter the chaos of Borderlands anytime, anywhere. No downloads, no waiting.
    Suit Up, Clean Up
    The Oldest House needs you.
    Step into the shoes of the Federal Bureau of Control’s elite first responders in the highly anticipated three-player co-op first-person shooter FBC: Firebreak. Taking place six years after Control, the game is set in the Oldest House — under siege by reality-warping threats. It’s up to players to restore order before chaos wins.
    Equip unique Crisis Kits packed with weapons, specialized tools and paranatural augments, like a garden gnome that summons a thunderstorm or a piggy bank that spews coins. As each mission, or “Job,” drops players into unpredictable environments with shifting objectives, bizarre crises and wacky enemies, teamwork and quick thinking are key.
    Jump into the fray with friends and stream it on GeForce NOW instantly across devices. Experience the mind-bending action and stunning visuals powered by cloud streaming. Contain the chaos, save the Oldest House and enjoy a new kind of co-op adventure, all from the cloud.
    No Rules Included
    Score big laughs in the cloud.
    REMATCH gives soccer a bold twist, transforming the classic sport into a fast-paced, third-person action experience where every player controls a single athlete on the field.
    With no fouls, offsides or breaks, matches are nonstop and skills-based, demanding quick reflexes and seamless teamwork. Dynamic role-switching lets players jump between attack, defense and goalkeeping, while seasonal updates and various multiplayer modes keep the competition fresh and the action intense.
    Where arcade flair meets tactical depth, REMATCH is football, unleashed. Get instant access to the soccer pitch by streaming the title on GeForce NOW and jump into the action wherever the match calls.
    Time To Game
    Skirk has arrived.
    Genshin Impact’s next major update launches this week, and members can stream the latest adventures from Teyvat at GeForce quality on any device. Version 5.7 includes the new playable characters Skirk and Dahlia — as well as fresh story quests and the launch of a Stygian Onslaught combat mode.
    Look for the following games available to stream in the cloud this week:

    REMATCH (New release on Steam, Xbox, available on PC Game Pass, June 16)
    Broken Arrow (New release on Steam, June 19)
    Crime Simulator (New release on Steam, June 17)
    Date Everything! (New release on Steam, June 17)
    FBC: Firebreak (New release on Steam, Xbox, available on PC Game Pass, June 17)
    Lost in Random: The Eternal Die (New release on Steam, Xbox, available on PC Game Pass, June 17)
    Architect Life: A House Design Simulator (New release on Steam, June 19)
    Borderlands Game of the Year Enhanced (Steam)
    Borderlands 2 (Steam, Epic Games Store)
    Borderlands 3 (Steam, Epic Games Store)
    Borderlands: The Pre-Sequel (Steam, Epic Games Store)
    METAL EDEN Demo (Steam)
    Torque Drift 2 (Epic Games Store)

    What are you planning to play this weekend? Let us know on X or in the comments below.

    What's a gaming achievement you'll never forget?
    — NVIDIA GeForce NOW (@NVIDIAGFN) June 18, 2025
  • Air-Conditioning Can Help the Power Grid instead of Overloading It

    June 13, 2025 | 6 min read

    Air-Conditioning Can Surprisingly Help the Power Grid during Extreme Heat

    Switching on air-conditioning during extreme heat doesn’t have to make us feel guilty—it can actually boost power grid reliability and help bring more renewable energy online.

    By Johanna Mathieu & The Conversation US
    Image credit: depotpro/Getty Images

    The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

    As summer arrives, people are turning on air conditioners in most of the U.S. But if you’re like me, you always feel a little guilty about that. Past generations managed without air conditioning – do I really need it? And how bad is it to use all this electricity for cooling in a warming world?

    If I leave my air conditioner off, I get too hot. But if everyone turns on their air conditioner at the same time, electricity demand spikes, which can force power grid operators to activate some of the most expensive, and dirtiest, power plants. Sometimes those spikes can ask too much of the grid and lead to brownouts or blackouts.

    Research I recently published with a team of scholars makes me feel a little better, though. We have found that it is possible to coordinate the operation of large numbers of home air-conditioning units, balancing supply and demand on the power grid – and without making people endure high temperatures inside their homes.

    Studies along these lines, using remote control of air conditioners to support the grid, have for many years explored theoretical possibilities like this. However, few approaches have been demonstrated in practice, and never for such a high-value application at this scale. The system we developed not only demonstrated the ability to balance the grid on timescales of seconds, but also proved it was possible to do so without affecting residents’ comfort.

    The benefits include increasing the reliability of the power grid, which makes it easier for the grid to accept more renewable energy. Our goal is to turn air conditioners from a challenge for the power grid into an asset, supporting a shift away from fossil fuels toward cleaner energy.

    Adjustable equipment

    My research focuses on batteries, solar panels and electric equipment – such as electric vehicles, water heaters, air conditioners and heat pumps – that can adjust itself to consume different amounts of energy at different times.

    Originally, the U.S. electric grid was built to transport electricity from large power plants to customers’ homes and businesses. And originally, power plants were large, centralized operations that burned coal or natural gas, or harvested energy from nuclear reactions. These plants were typically always available and could adjust how much power they generated in response to customer demand, so the grid would be balanced between power coming in from producers and being used by consumers.

    But the grid has changed. There are more renewable energy sources, from which power isn’t always available – like solar panels at night or wind turbines on calm days. And there are the devices and equipment I study. These newer options, called “distributed energy resources,” generate or store energy near where consumers need it – or adjust how much energy they’re using in real time.

    One aspect of the grid hasn’t changed, though: There’s not much storage built into the system. So every time you turn on a light, for a moment there’s not enough electricity to supply everything that wants it right then: The grid needs a power producer to generate a little more power. And when you turn off a light, there’s a little too much: A power producer needs to ramp down.

    The way power plants know what real-time power adjustments are needed is by closely monitoring the grid frequency. The goal is to provide electricity at a constant frequency – 60 hertz – at all times. If more power is needed than is being produced, the frequency drops and a power plant boosts output. If there’s too much power being produced, the frequency rises and a power plant slows production a little. These actions, a process called “frequency regulation,” happen in a matter of seconds to keep the grid balanced.

    This output flexibility, primarily from power plants, is key to keeping the lights on for everyone.

    Finding new options

    I’m interested in how distributed energy resources can improve flexibility in the grid. They can release more energy, or consume less, to respond to the changing supply or demand, and help balance the grid, ensuring the frequency remains near 60 hertz.

    Some people fear that doing so might be invasive, giving someone outside your home the ability to control your battery or air conditioner. Therefore, we wanted to see if we could help balance the grid with frequency regulation using home air-conditioning units rather than power plants – without affecting how residents use their appliances or how comfortable they are in their homes.

    From 2019 to 2023, my group at the University of Michigan tried this approach, in collaboration with researchers at Pecan Street Inc., Los Alamos National Laboratory and the University of California, Berkeley, with funding from the U.S. Department of Energy Advanced Research Projects Agency-Energy.

    We recruited 100 homeowners in Austin, Texas, to do a real-world test of our system. All the homes had whole-house forced-air cooling systems, which we connected to custom control boards and sensors the owners allowed us to install in their homes. This equipment let us send instructions to the air-conditioning units based on the frequency of the grid.

    Before I explain how the system worked, I first need to explain how thermostats work. When people set thermostats, they pick a temperature, and the thermostat switches the air-conditioning compressor on and off to maintain the air temperature within a small range around that set point. If the temperature is set at 68 degrees, the thermostat turns the AC on when the temperature is, say, 70, and turns it off when it’s cooled down to, say, 66.

    Every few seconds, our system slightly changed the timing of air-conditioning compressor switching for some of the 100 air conditioners, causing the units’ aggregate power consumption to change. In this way, our small group of home air conditioners reacted to grid changes the way a power plant would – using more or less energy to balance the grid and keep the frequency near 60 hertz.

    Moreover, our system was designed to keep home temperatures within the same small temperature range around the set point.
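    To make that control idea concrete, here is a minimal, hypothetical Rust sketch (my illustration, not the controller used in the study) of a hysteresis thermostat whose on/off band is nudged slightly by the measured grid frequency, so that a fleet of such units draws a little more power when frequency is high and a little less when it is low, while the indoor temperature stays near the set point:

```rust
/// Toy model of one AC unit's on/off (hysteresis) control with a small
/// frequency-dependent nudge to the switching band. Illustrative only.

const SET_POINT_F: f64 = 68.0; // desired indoor temperature (deg F)
const DEADBAND_F: f64 = 2.0;   // normal band: on above 70, off below 66
const NOMINAL_HZ: f64 = 60.0;  // target grid frequency

struct AcUnit {
    indoor_temp_f: f64,
    compressor_on: bool,
}

impl AcUnit {
    /// Decide whether the compressor should run, given the current grid frequency.
    /// Low frequency (too little generation) shifts the band up, so units switch
    /// off sooner and on later, and the fleet draws less power; high frequency
    /// does the opposite. The shift is capped so the indoor temperature stays
    /// within roughly the normal comfort band around the set point.
    fn update(&mut self, grid_hz: f64) {
        let freq_error = grid_hz - NOMINAL_HZ;            // e.g. -0.03 Hz
        let shift = (freq_error * 10.0).clamp(-0.5, 0.5); // small, bounded nudge (deg F)
        let on_above = SET_POINT_F + DEADBAND_F - shift;
        let off_below = SET_POINT_F - DEADBAND_F - shift;

        if self.indoor_temp_f >= on_above {
            self.compressor_on = true;
        } else if self.indoor_temp_f <= off_below {
            self.compressor_on = false;
        }
        // Otherwise keep the current state (hysteresis).
    }
}

fn main() {
    let mut unit = AcUnit { indoor_temp_f: 69.9, compressor_on: false };
    // Grid frequency is slightly high, so this unit switches on a touch
    // earlier than it would at exactly 60 Hz, absorbing a bit of extra power.
    unit.update(60.02);
    println!("compressor on: {}", unit.compressor_on);
}
```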
    Testing the approach

    We ran our system in four tests, each lasting one hour. We found two encouraging results.

    First, the air conditioners were able to provide frequency regulation at least as accurately as a traditional power plant. Therefore, we showed that air conditioners could play a significant role in increasing grid flexibility. But perhaps more importantly – at least in terms of encouraging people to participate in these types of systems – we found that we were able to do so without affecting people’s comfort in their homes.

    We found that home temperatures did not deviate more than 1.6 degrees Fahrenheit from their set point. Homeowners were allowed to override the controls if they got uncomfortable, but most didn’t. For most tests, we received zero override requests. In the worst case, we received override requests from two of the 100 homes in our test.

    In practice, this sort of technology could be added to commercially available internet-connected thermostats. In exchange for credits on their energy bills, users could choose to join a service run by the thermostat company, their utility provider or some other third party.

    Then people could turn on the air conditioning in the summer heat without that pang of guilt, knowing they were helping to make the grid more reliable and more capable of accommodating renewable energy sources – without sacrificing their own comfort in the process.

    This article was originally published on The Conversation. Read the original article.
  • ‘Color Lim’ Changes Your Hue to Solve Platforming Puzzles

    Color Lim is a puzzle platformer where you need to use your slimy color-switching ability to solve puzzles and save a small village.

    This is a fantastic, smooth puzzle platformer that has you playing a colorless slime called Lim who is looking to find their true color. At first, you aren’t really colorless – you are blue, but this isn’t your true and only color, as you can change and adapt when needed. Why is that important? Because in this world, color determines everything.

    Being able to absorb and transform to different colors allows you to go through specific platforms or to spray slime and then slide into the walls of the world, finding a new way to navigate around. There isn’t a limit on your own slime, so you are able to blast out some goop and then inject yourself into the walls, moving quickly around levels and solving puzzles. There are enemies that are looking to harm you, too, but these can be easily avoided most of the time.
    Color Lim doesn’t just have endless platforms to enjoy, but also has a little story showcased through cute characters who seem quite helpful! In this world, something bad has happened, and now there is a small village that is rebuilding, needing your help. As someone with such a great ability, you can find yourself using your colors to help them.

    I got to play a short demo of Color Lim at Pocket Gamer Connects Barcelona, where I really liked how sleek and fast the movement felt. The cute characters and little hints of a story captivated me, especially when exploring the town. However, I did feel that sometimes it wasn’t obvious what to do next or where to go – especially in the town, where the platforms were hard to distinguish from the background. Hopefully, these minor issues will be fixed before release.

    Color Lim is currently in development, but in the meantime, you can add it to your Steam Wishlist.
  • Rewriting SymCrypt in Rust to modernize Microsoft’s cryptographic library 

    Outdated coding practices and memory-unsafe languages like C are putting software, including cryptographic libraries, at risk. Fortunately, memory-safe languages like Rust, along with formal verification tools, are now mature enough to be used at scale, helping prevent issues like crashes, data corruption, flawed implementation, and side-channel attacks.
    To address these vulnerabilities and improve memory safety, we’re rewriting SymCrypt—Microsoft’s open-source cryptographic library—in Rust. We’re also incorporating formal verification methods. SymCrypt is used in Windows, Azure Linux, Xbox, and other platforms.
    Currently, SymCrypt is primarily written in cross-platform C, with limited use of hardware-specific optimizations through intrinsics (compiler-provided low-level functions) and assembly language (direct processor instructions). It provides a wide range of algorithms, including AES-GCM, SHA, ECDSA, and the more recent post-quantum algorithms ML-KEM and ML-DSA.
    Formal verification will confirm that implementations behave as intended and don’t deviate from algorithm specifications, which is critical for preventing attacks. We’ll also analyze compiled code to detect side-channel leaks caused by timing or hardware-level behavior.
    Proving Rust program properties with Aeneas
    Program verification is the process of proving that a piece of code will always satisfy a given property, no matter the input. Rust’s type system profoundly improves the prospects for program verification by providing strong ownership guarantees, by construction, using a discipline known as “aliasing xor mutability”.
    For example, reasoning about C code often requires proving that two non-const pointers are live and non-overlapping, a property that can depend on external client code. In contrast, Rust’s type system guarantees this property for any two mutably borrowed references.
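    As a hedged illustration of that point (not code from SymCrypt): a C routine that xors one buffer into another typically needs restrict qualifiers, or an external argument that the pointers don’t overlap, before a verifier can reason about it. In the Rust sketch below, the combination of one mutable borrow and one shared borrow rules out aliasing in safe code, so non-overlap comes for free:

```rust
/// XOR `src` into `dst` in place. Because `dst` is a mutable borrow and `src`
/// is a shared borrow, the borrow checker guarantees at compile time that the
/// two slices cannot alias, so a verifier can assume non-overlap by construction.
fn xor_into(dst: &mut [u8], src: &[u8]) {
    for (d, s) in dst.iter_mut().zip(src.iter()) {
        *d ^= *s;
    }
}

fn main() {
    let mut state = [0xAAu8; 4];
    let keystream = [0x0Fu8; 4];
    xor_into(&mut state, &keystream);
    assert_eq!(state, [0xA5; 4]);

    // The aliasing case is rejected at compile time:
    // let mut buf = [0u8; 4];
    // xor_into(&mut buf, &buf); // error[E0502]: cannot borrow `buf` as immutable
    //                           // because it is also borrowed as mutable
}
```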
    As a result, new tools have emerged specifically for verifying Rust code. We chose Aeneas because it helps provide a clean separation between code and proofs.
    Developed by Microsoft Azure Research in partnership with Inria, the French National Institute for Research in Digital Science and Technology, Aeneas connects to proof assistants like Lean, allowing us to draw on a large body of mathematical proofs—especially valuable given the mathematical nature of cryptographic algorithms—and benefit from Lean’s active user community.
    Compiling Rust to C supports backward compatibility  
    We recognize that switching to Rust isn’t feasible for all use cases, so we’ll continue to support, extend, and certify C-based APIs as long as users need them. Users won’t see any changes, as Rust runs underneath the existing C APIs.
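    The general pattern of keeping a stable C surface over a Rust core can be sketched roughly as follows; the function name, padding operation, and error code here are invented for illustration and are not SymCrypt’s actual API:

```rust
/// Hypothetical example of exposing a Rust implementation behind a C-callable
/// API: existing C callers keep the same signature while the body is safe Rust.

/// Pure-Rust core: XOR a fixed pad into a buffer (a stand-in for a real primitive).
fn apply_pad(data: &mut [u8]) {
    for b in data.iter_mut() {
        *b ^= 0x5C;
    }
}

/// C-compatible wrapper: validates the raw pointer, builds a safe slice,
/// and delegates to the Rust core. Returns 0 on success, nonzero on error.
#[no_mangle]
pub extern "C" fn example_apply_pad(data: *mut u8, len: usize) -> i32 {
    if data.is_null() {
        return 1;
    }
    // SAFETY: the caller promises `data` points to `len` valid, writable bytes.
    let slice = unsafe { std::slice::from_raw_parts_mut(data, len) };
    apply_pad(slice);
    0
}

fn main() {
    // Exercise the wrapper from Rust the same way a C caller would use it.
    let mut buf = [0u8; 3];
    let rc = example_apply_pad(buf.as_mut_ptr(), buf.len());
    assert_eq!(rc, 0);
    assert_eq!(buf, [0x5C; 3]);
}
```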
    Some users compile our C code directly and may rely on specific toolchains or compiler features that complicate the adoption of Rust code. To address this, we will use Eurydice, a Rust-to-C compiler developed by Microsoft Azure Research, to replace handwritten C code with C generated from formally verified Rust. Eurydice compiles directly from Rust’s MIR intermediate language, and the resulting C code will be checked into the SymCrypt repository alongside the original Rust source code.
    As more users adopt Rust, we’ll continue supporting this compilation path for those who build SymCrypt from source code but aren’t ready to use the Rust compiler. In the long term, we hope to transition users to either use precompiled SymCrypt binaries, or compile from source code in Rust, at which point the Rust-to-C compilation path will no longer be needed.

    Timing analysis with Revizor 
    Even software that has been verified for functional correctness can remain vulnerable to low-level security threats, such as side channels caused by timing leaks or speculative execution. These threats operate at the hardware level and can leak private information, such as memory load addresses, branch targets, or division operands, even when the source code is provably correct. 
    To address this, we’re extending Revizor, a tool developed by Microsoft Azure Research, to more effectively analyze SymCrypt binaries. Revizor models microarchitectural leakage and uses fuzzing techniques to systematically uncover instructions that may expose private information through known hardware-level effects.  
    Earlier cryptographic libraries relied on constant-time programming, which avoids branches and memory accesses that depend on secret data. However, recent research has shown that this alone is insufficient on today’s CPUs, where every new optimization may open a new side channel.
    By analyzing binary code for specific compilers and platforms, our extended Revizor tool enables deeper scrutiny of vulnerabilities that aren’t visible in the source code.
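    For readers unfamiliar with the constant-time style mentioned above, it replaces secret-dependent branches with branch-free arithmetic, along the lines of this generic sketch (not SymCrypt code); the point of the Revizor work is that even code like this can still leak on modern CPUs through effects that are invisible at the source level:

```rust
/// Branch-free (constant-time style) comparison and selection on bytes.
/// Generic illustration only, not taken from SymCrypt.

/// Returns 0xFF if a == b, else 0x00, without branching on the values.
fn ct_eq_mask(a: u8, b: u8) -> u8 {
    let diff = a ^ b; // zero iff equal
    // Collapse any set bit of `diff` into the top bit, then shift it to bit 0:
    // `nonzero` is 1 if diff != 0, else 0.
    let nonzero = (diff | diff.wrapping_neg()) >> 7;
    (nonzero ^ 1).wrapping_mul(0xFF) // 0xFF if equal, 0x00 otherwise
}

/// Selects `x` if mask == 0xFF and `y` if mask == 0x00, with no data-dependent branch.
fn ct_select(mask: u8, x: u8, y: u8) -> u8 {
    (mask & x) | (!mask & y)
}

fn main() {
    let m = ct_eq_mask(0x3A, 0x3A);
    assert_eq!(m, 0xFF);
    assert_eq!(ct_select(m, 7, 9), 7);
    assert_eq!(ct_select(ct_eq_mask(1, 2), 7, 9), 9);
}
```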
    Verified Rust implementations begin with ML-KEM
    This long-term effort is in alignment with the Microsoft Secure Future Initiative and brings together experts across Microsoft, building on decades of Microsoft Research investment in program verification and security tooling.
    A preliminary version of ML-KEM in Rust is now available in the preview feature/verifiedcrypto branch of the SymCrypt repository. We encourage users to try the Rust build and share feedback. Looking ahead, we plan to support direct use of the same cryptographic library in Rust without requiring C bindings.
    Over the coming months, we plan to rewrite, verify, and ship several algorithms in Rust as part of SymCrypt. As our investment in Rust deepens, we expect to gain new insights into how to best leverage the language for high-assurance cryptographic implementations with low-level optimizations. 
    As performance is key to scalability and sustainability, we’re holding new implementations to a high bar, using our benchmarking tools to ensure they match or exceed existing implementations.
    Looking forward 
    This is a pivotal moment for high-assurance software. Microsoft’s investment in Rust and formal verification presents a rare opportunity to advance one of our key libraries. We’re excited to scale this work and ultimately deliver an industrial-grade, Rust-based, FIPS-certified cryptographic library.
    #rewriting #symcrypt #rust #modernize #microsofts
    Rewriting SymCrypt in Rust to modernize Microsoft’s cryptographic library 
    Outdated coding practices and memory-unsafe languages like C are putting software, including cryptographic libraries, at risk. Fortunately, memory-safe languages like Rust, along with formal verification tools, are now mature enough to be used at scale, helping prevent issues like crashes, data corruption, flawed implementation, and side-channel attacks. To address these vulnerabilities and improve memory safety, we’re rewriting SymCrypt—Microsoft’s open-source cryptographic library—in Rust. We’re also incorporating formal verification methods. SymCrypt is used in Windows, Azure Linux, Xbox, and other platforms. Currently, SymCrypt is primarily written in cross-platform C, with limited use of hardware-specific optimizations through intrinsicsand assembly language. It provides a wide range of algorithms, including AES-GCM, SHA, ECDSA, and the more recent post-quantum algorithms ML-KEM and ML-DSA.  Formal verification will confirm that implementations behave as intended and don’t deviate from algorithm specifications, critical for preventing attacks. We’ll also analyze compiled code to detect side-channel leaks caused by timing or hardware-level behavior. Proving Rust program properties with Aeneas Program verification is the process of proving that a piece of code will always satisfy a given property, no matter the input. Rust’s type system profoundly improves the prospects for program verification by providing strong ownership guarantees, by construction, using a discipline known as “aliasing xor mutability”. For example, reasoning about C code often requires proving that two non-const pointers are live and non-overlapping, a property that can depend on external client code. In contrast, Rust’s type system guarantees this property for any two mutably borrowed references. As a result, new tools have emerged specifically for verifying Rust code. We chose Aeneasbecause it helps provide a clean separation between code and proofs. Developed by Microsoft Azure Research in partnership with Inria, the French National Institute for Research in Digital Science and Technology, Aeneas connects to proof assistants like Lean, allowing us to draw on a large body of mathematical proofs—especially valuable given the mathematical nature of cryptographic algorithms—and benefit from Lean’s active user community. Compiling Rust to C supports backward compatibility   We recognize that switching to Rust isn’t feasible for all use cases, so we’ll continue to support, extend, and certify C-based APIs as long as users need them. Users won’t see any changes, as Rust runs underneath the existing C APIs. Some users compile our C code directly and may rely on specific toolchains or compiler features that complicate the adoption of Rust code. To address this, we will use Eurydice, a Rust-to-C compiler developed by Microsoft Azure Research, to replace handwritten C code with C generated from formally verified Rust. Eurydicecompiles directly from Rust’s MIR intermediate language, and the resulting C code will be checked into the SymCrypt repository alongside the original Rust source code. As more users adopt Rust, we’ll continue supporting this compilation path for those who build SymCrypt from source code but aren’t ready to use the Rust compiler. In the long term, we hope to transition users to either use precompiled SymCrypt binaries, or compile from source code in Rust, at which point the Rust-to-C compilation path will no longer be needed. 
    As a result, new tools have emerged specifically for verifying Rust code. We chose Aeneas because it helps provide a clean separation between code and proofs. Developed by Microsoft Azure Research in partnership with Inria, the French National Institute for Research in Digital Science and Technology, Aeneas connects to proof assistants like Lean, allowing us to draw on a large body of mathematical proofs—especially valuable given the mathematical nature of cryptographic algorithms—and benefit from Lean’s active user community.
    Compiling Rust to C supports backward compatibility
    We recognize that switching to Rust isn’t feasible for all use cases, so we’ll continue to support, extend, and certify C-based APIs as long as users need them. Users won’t see any changes, as Rust runs underneath the existing C APIs.
    Some users compile our C code directly and may rely on specific toolchains or compiler features that complicate the adoption of Rust code. To address this, we will use Eurydice, a Rust-to-C compiler developed by Microsoft Azure Research, to replace handwritten C code with C generated from formally verified Rust. Eurydice compiles directly from Rust’s MIR intermediate language, and the resulting C code will be checked into the SymCrypt repository alongside the original Rust source code.
    As more users adopt Rust, we’ll continue supporting this compilation path for those who build SymCrypt from source code but aren’t ready to use the Rust compiler. In the long term, we hope to transition users to either use precompiled SymCrypt binaries (via C or Rust APIs) or compile from source code in Rust, at which point the Rust-to-C compilation path will no longer be needed.
    Timing analysis with Revizor
    Even software that has been verified for functional correctness can remain vulnerable to low-level security threats, such as side channels caused by timing leaks or speculative execution. These threats operate at the hardware level and can leak private information, such as memory load addresses, branch targets, or division operands, even when the source code is provably correct.
    To address this, we’re extending Revizor, a tool developed by Microsoft Azure Research, to more effectively analyze SymCrypt binaries. Revizor models microarchitectural leakage and uses fuzzing techniques to systematically uncover instructions that may expose private information through known hardware-level effects.
    Earlier cryptographic libraries relied on constant-time programming to avoid operations on secret data. However, recent research has shown that this alone is insufficient with today’s CPUs, where every new optimization may open a new side channel. By analyzing binary code for specific compilers and platforms, our extended Revizor tool enables deeper scrutiny of vulnerabilities that aren’t visible in the source code.
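    The constant-time idea referred to above can be sketched in a few lines of Rust (an illustration of the general technique, not SymCrypt’s implementation; production libraries add further precautions such as optimization barriers): an early-exit comparison leaks, through timing, how many leading bytes matched, while the branch-free version touches every byte and does the same work regardless of the input.

```rust
/// Early-exit comparison: runtime depends on where the first mismatch occurs,
/// so an attacker measuring timing can learn prefixes of a secret value.
fn leaky_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    for i in 0..a.len() {
        if a[i] != b[i] {
            return false; // timing reveals the index of the first difference
        }
    }
    true
}

/// Constant-time-style comparison: no secret-dependent branches; byte
/// differences are OR-ed together and checked once at the end.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff = 0u8;
    for i in 0..a.len() {
        diff |= a[i] ^ b[i];
    }
    diff == 0
}

fn main() {
    let secret = b"correct horse battery";
    assert!(ct_eq(secret, b"correct horse battery"));
    assert!(!ct_eq(secret, b"incorrect pass phrase"));
    assert!(leaky_eq(secret, secret));
}
```

    As noted above, source-level discipline like this is no longer sufficient on its own: compilers and speculative execution can reintroduce leakage, which is exactly the kind of issue binary-level analysis with Revizor is meant to surface.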
    Verified Rust implementations begin with ML-KEM
    This long-term effort is in alignment with the Microsoft Secure Future Initiative and brings together experts across Microsoft, building on decades of Microsoft Research investment in program verification and security tooling. A preliminary version of ML-KEM in Rust is now available on the preview feature/verifiedcrypto branch of the SymCrypt repository. We encourage users to try the Rust build and share feedback. Looking ahead, we plan to support direct use of the same cryptographic library in Rust without requiring C bindings.
    Over the coming months, we plan to rewrite, verify, and ship several algorithms in Rust as part of SymCrypt. As our investment in Rust deepens, we expect to gain new insights into how to best leverage the language for high-assurance cryptographic implementations with low-level optimizations. As performance is key to scalability and sustainability, we’re holding new implementations to a high bar, using our benchmarking tools to match or exceed existing systems.
    Looking forward
    This is a pivotal moment for high-assurance software. Microsoft’s investment in Rust and formal verification presents a rare opportunity to advance one of our key libraries. We’re excited to scale this work and ultimately deliver an industrial-grade, Rust-based, FIPS-certified cryptographic library.
  • Mock up a website in five prompts

    “Wait, can users actually add products to the cart?” Every prototype faces that question or one like it. You start to explain it’s “just Figma,” “just dummy data,” but what if you didn’t need disclaimers? What if you could hand clients—or your team—a working, data-connected mock-up of their website, or new pages and components, in less time than it takes to wireframe? That’s the challenge we’ll tackle today. But first, we need to look at:
    The problem with today’s prototyping tools
    Pick two: speed, flexibility, or interactivity. The prototyping ecosystem, despite having amazing software that addresses a huge variety of needs, doesn’t really have one tool that gives you all three.
    Wireframing apps let you draw boxes in minutes, but every button is fake. Drag-and-drop builders animate scroll triggers until you ask for anything off-template. Custom code frees you… after you wave goodbye to a few afternoons.
    AI tools haven’t smashed the trade-off; they’ve just dressed it in flashier costumes. One prompt births a landing page, the next dumps a 2,000-line, worse-than-junior-level React file in your lap. The bottleneck is still there.
    Builder’s approach to website mockups
    We’ve been trying something a little different to maintain speed, flexibility, and interactivity while mocking full websites. Our AI-driven visual editor:
    Spins up a repo in seconds or connects to your existing one to use the code as design inspiration. React, Vue, Angular, and Svelte all work out of the box.
    Lets you shape components via plain English, visual edits, copy/pasted Figma frames, web inspos, MCP tools, and constant visual awareness of your entire website.
    Commits each change as a clean GitHub pull request your team can review like hand-written code. All your usual CI checks and lint rules apply.
    And if you need a tweak, you can comment to @builderio-bot right in the GitHub PR to make asynchronous changes without context switching. This results in a live site the café owner can interact with today, and a branch your devs can merge tomorrow. Stakeholders get to click actual buttons and trigger real state—no more “so, just imagine this works” demos. Let’s see it in action.
    From blank canvas to working mockup in five prompts
    Today, I’m going to mock up a fake business website. You’re welcome to create a real one. Before we fire off a single prompt, grab a note and write:
    Business name & vibe
    Core pages
    Primary goal
    Brand palette & tone
    That’s it. Don’t sweat the details—we can always iterate. For mine, I wrote:
    1. Sunny Trails Bakery — family-owned, feel-good, smells like warm cinnamon.
    2. Home, About, Pricing / Subscription Box, Menu (with daily specials).
    3. Drive online orders and foot traffic—every CTA should funnel toward “Order Now” or “Reserve a Table.”
    4. Warm yellow, chocolate brown, rounded typography, playful copy.
    We’re not trying to fit everything here. What matters is clarity on what we’re creating, so the AI has enough context to produce usable scaffolds, and so later tweaks stay aligned with the client’s vision. Builder will default to using React, Vite, and Tailwind. If you want a different JS framework, you can link an existing repo in that stack. In the near future, you won’t need to do this extra step to get non-React frameworks to function. (Free-tier Builder gives you 5 AI credits a day and 25 a month—plenty to follow along with today’s demo. Upgrade only when you need it.)
    An entire website from the first prompt
    Now, we’re ready to get going. Head over to Builder.io and paste in this prompt or your own:
    Create a cozy bakery website called “Sunny Trails Bakery” with pages for:
    • Home
    • About
    • Pricing
    • Menu
    Brand palette: warm yellow and chocolate brown. Tone: playful, inviting. The restaurant is family-owned, feel-good, and smells like cinnamon.
    The goal of this site is to drive online orders and foot traffic—every CTA should funnel toward "Order Now" or "Reserve a Table."
    Once you hit enter, Builder will spin up a new dev container, and then inside that container, the AI will build out the first version of your site. You can leave the page and come back when it’s done.
    Now, before we go further, let’s create our repo, so that we get version history right from the outset. Click “Create Repo” up in the top right, and link your GitHub account. Once the process is complete, you’ll have a brand new repo. If you need any help on this step, or any of the below, check out these docs.
    Making the mockup’s order system work
    From our one-shot prompt, we’ve already got a really nice start for our client. However, when we press the “Order Now” button, we just get a generic alert. Let’s fix this.
    The best part about connecting to GitHub is that we get version control. Head back to your dashboard and edit the settings of your new project. We can give it a better name, and then, in the “Advanced” section, we can change the “Commit Mode” to “Pull Requests.” Now, we have the ability to create new branches right within Builder, allowing us to make drastic changes without worrying about the main version. This is also helpful if you’d like to show your client or team a few different versions of the same prototype.
    On a new branch, I’ll write another short prompt:
    Can you make the "Order Now" button work, even if it's just with dummy JSON for now?
    As you can see in the GIF above, Builder creates an ordering system and a fully mobile-responsive cart and checkout flow. Now, we can click “Send PR” in the top right, and we have an ordinary GitHub PR that can be reviewed and merged as needed.
    This is what’s possible in two prompts. For our third, let’s gussy up the style. If you’re like me, you might spend a lot of time admiring other people’s cool designs and learning how to code up similar components in your own style. Luckily, Builder has this capability, too, with our Chrome extension. I found a “Featured Posts” section on OpenAI’s website, where I like how the layout and scrolling work. We can copy and paste it onto our “Featured Treats” section, retaining our cafe’s distinctive brand style. Don’t worry—OpenAI doesn’t mind a little web scraping.
    You can do this with any component on any website, so your own projects can very quickly become a “best of the web” if you know what you’re doing. Plus, you can use Figma designs in much the same way, with even better design fidelity. Copy and paste a Figma frame with our Figma plugin, and tell the AI to either use the component as inspiration or as a 1:1 reference for what the design should be. (You can grab our design-to-code guide for a lot more ideas of what this can help you accomplish.)
    Now, we’re ready to send our PR. This time, let’s take a closer look at the code the AI has created. As you can see, the code is neatly formatted into two reusable components. Scrolling down further, I find a CSS file and then the actual implementation on the homepage, with clean JSON to represent the dummy post data.
    Design tweaks to the mockup with visual edits
    One issue that cropped up when the AI brought in the OpenAI layout is that it changed my text from “Featured Treats” to “Featured Stories & Treats.” I’ve realized I don’t like either, and I want to replace that text with: “Fresh Out of the Bakery.” It would be silly, though, to prompt the AI just for this small tweak. Let’s switch into edit mode.
    Edit Mode lets you select any component and change any of its content or underlying CSS directly. You get a host of Webflow-like options to choose from, so that you can finesse the details as needed. Once you’ve made all the visual changes you want—maybe tweaking a button color or a border radius—you can click “Apply Edits,” and the AI will ensure the underlying code matches your repo’s style.
    Async fixes to the mockup with Builder Bot
    Now, our pull request is nearly ready to merge, but I found one issue with it: when we copied the OpenAI website layout earlier, one of the blog posts had a video as its featured graphic instead of just an image. This is cool for OpenAI, but for our bakery, I just wanted images in this section. Since I didn’t instruct Builder’s AI otherwise, it went ahead and followed the layout and created extra code for video capability.
    No problem. We can fix this inside GitHub with our final prompt. We just need to comment on the PR and tag builderio-bot. Within about a minute, Builder Bot has successfully removed the video functionality, leaving a minimal diff that affects only the code it needed to. Returning to my project in Builder, I can see that the bot’s changes are accounted for in the chat window as well, and I can use the live preview link to make sure my site works as expected.
    Now, if this were a real project, you could easily deploy this to the web for your client. After all, you’ve got a whole GitHub repo. This isn’t just a mockup; it’s actual code you can tweak—with Builder or Cursor or by hand—until you’re satisfied to run the site in production.
    So, why use Builder to mock up your website?
    Sure, this has been a somewhat contrived example. A real prototype is going to look prettier, because I’m going to spend more time on pieces of the design that I don’t like as much. But that’s the point of the best AI tools: they don’t take you, the human, out of the loop. You still get to make all the executive decisions, and it respects your hard work. Since you can constantly see all the code the AI creates, work in branches, and prompt with component-level precision, you can stop worrying about AI overwriting your opinions and start using it more as the tool it’s designed to be.
    You can copy in your team’s Figma designs, import web inspos, connect MCP servers to get Jira tickets in hand, and—most importantly—work with existing repos full of existing styles that Builder will understand and match, just like it matched OpenAI’s layout to our little cafe.
    So, we get speed, flexibility, and interactivity all the way from prompt to PR to production. Try Builder today.
  • Reclaiming Control: Digital Sovereignty in 2025

    Sovereignty has mattered since the invention of the nation state—defined by borders, laws, and taxes that apply within and without. While many have tried to define it, the core idea remains: nations or jurisdictions seek to stay in control, usually to the benefit of those within their borders.
    Digital sovereignty is a relatively new concept, also difficult to define but straightforward to understand. Data and applications don’t understand borders unless they are specified in policy terms, as coded into the infrastructure.
    The World Wide Web had no such restrictions at its inception. Communitarian groups such as the Electronic Frontier Foundation, service providers and hyperscalers, non-profits and businesses all embraced a model that suggested data would look after itself.
    But data won’t look after itself, for several reasons. First, data is massively out of control. We generate more of it all the time, and for at least two or three decades (according to historical surveys I’ve run), most organizations haven’t fully understood their data assets. This creates inefficiency and risk—not least, widespread vulnerability to cyberattack.
    Risk is probability times impact—and right now, the probabilities have shot up. Invasions, tariffs, political tensions, and more have brought new urgency. This time last year, the idea of switching off another country’s IT systems was not on the radar. Now we’re seeing it happen—including the U.S. government blocking access to services overseas.
    Digital sovereignty isn’t just a European concern, though it is often framed as such. In South America for example, I am told that sovereignty is leading conversations with hyperscalers; in African countries, it is being stipulated in supplier agreements. Many jurisdictions are watching, assessing, and reviewing their stance on digital sovereignty.
    As the adage goes: a crisis is a problem with no time left to solve it. Digital sovereignty was a problem in waiting—but now it’s urgent. It’s gone from being an abstract ‘right to sovereignty’ to becoming a clear and present issue, in government thinking, corporate risk and how we architect and operate our computer systems.
    What does the digital sovereignty landscape look like today?
    Much has changed since this time last year. Unknowns remain, but much of what was unclear a year ago is now starting to solidify. Terminology is clearer – for example, talking about classification and localisation rather than generic concepts.
    We’re seeing a shift from theory to practice. Governments and organizations are putting policies in place that simply didn’t exist before. For example, some countries are seeing “in-country” as a primary goal, whereas others (the UK included) are adopting a risk-based approach based on trusted locales.
    We’re also seeing a shift in risk priorities. From a risk standpoint, the classic triad of confidentiality, integrity, and availability is at the heart of the digital sovereignty conversation. Historically, the focus has been much more on confidentiality, driven by concerns about the US CLOUD Act: essentially, can foreign governments see my data?
    This year however, availability is rising in prominence, due to geopolitics and very real concerns about data accessibility in third countries. Integrity is being talked about less from a sovereignty perspective, but is no less important as a cybercrime target—ransomware and fraud being two clear and present risks.
    Thinking more broadly, digital sovereignty is not just about data, or even intellectual property, but also the brain drain. Countries don’t want all their brightest young technologists leaving university only to end up in California or some other, more attractive country. They want to keep talent at home and innovate locally, to the benefit of their own GDP.
    How Are Cloud Providers Responding?
    Hyperscalers are playing catch-up, still looking for ways to satisfy the letter of the law whilst ignoring (in the French sense) its spirit. It’s not enough for Microsoft or AWS to say they will do everything they can to protect a jurisdiction’s data, if they are already legally obliged to do the opposite. Legislation, in this case US legislation, calls the shots—and we all know just how fragile this is right now.
    We see hyperscaler progress where they offer technology to be locally managed by a third party, rather than themselves. For example, Google’s partnership with Thales, or Microsoft with Orange, both in France (Microsoft has a similar arrangement in Germany). However, these are point solutions, not part of a general standard. Meanwhile, AWS’ recent announcement about creating a local entity doesn’t solve for the problem of US over-reach, which remains a core issue.
    Non-hyperscaler providers and software vendors have an increasingly significant play: Oracle and HPE offer solutions that can be deployed and managed locally for example; Broadcom/VMware and Red Hat provide technologies that locally situated, private cloud providers can host. Digital sovereignty is thus a catalyst for a redistribution of “cloud spend” across a broader pool of players.
    What Can Enterprise Organizations Do About It?
    First, see digital sovereignty as a core element of data and application strategy. For a nation, sovereignty means having solid borders, control over IP, GDP, and so on. That’s the goal for corporations as well—control, self-determination, and resilience.
    If sovereignty isn’t seen as an element of strategy, it gets pushed down into the implementation layer, leading to inefficient architectures and duplicated effort. Far better to decide up front what data, applications and processes need to be treated as sovereign, and to define an architecture that supports that.
    This sets the scene for making informed provisioning decisions. Your organization may have made some big bets on key vendors or hyperscalers, but multi-platform thinking increasingly dominates: multiple public and private cloud providers, with integrated operations and management. Sovereign cloud becomes one element of a well-structured multi-platform architecture.
    It is not cost-neutral to deliver on sovereignty, but the overall business value should be tangible. A sovereignty initiative should bring clear advantages, not just for itself, but through the benefits that come with better control, visibility, and efficiency.
    Knowing where your data is, understanding which data matters, managing it efficiently so you’re not duplicating or fragmenting it across systems—these are valuable outcomes. In addition, ignoring these questions can lead to non-compliance or be outright illegal. Even if we don’t use terms like ‘sovereignty’, organizations need a handle on their information estate.
    Organizations shouldn’t assume that everything cloud-based needs to be sovereign; rather, they should build strategies and policies based on data classification, prioritization and risk. Build that picture and you can solve for the highest-priority items first—the data with the strongest classification and greatest risk. That process alone takes care of 80–90% of the problem space, rather than making sovereignty yet another problem whilst solving nothing.
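    As a purely illustrative sketch of that classify-and-prioritise step (the asset names, classifications, and numbers below are hypothetical, and this is not a recommendation of any particular scoring model), a first pass can be as simple as scoring each asset as probability times impact, weighted by classification, and working down the list from the top:

```rust
// Toy "classify and prioritise" pass over a data inventory.
// All names, classifications, and estimates are hypothetical.
struct DataAsset {
    name: &'static str,
    classification: u8,      // e.g. 1 = public .. 4 = restricted
    breach_probability: f64, // estimated likelihood of compromise (0.0..1.0)
    impact: f64,             // business impact if compromised (arbitrary units)
}

impl DataAsset {
    // Risk is probability times impact, weighted here by classification.
    fn risk_score(&self) -> f64 {
        self.breach_probability * self.impact * f64::from(self.classification)
    }
}

fn main() {
    let mut inventory = vec![
        DataAsset { name: "customer PII",      classification: 4, breach_probability: 0.30, impact: 9.0 },
        DataAsset { name: "public price list", classification: 1, breach_probability: 0.50, impact: 1.0 },
        DataAsset { name: "design IP",         classification: 3, breach_probability: 0.20, impact: 8.0 },
    ];

    // Solve for the highest-priority items first: sort by descending risk.
    inventory.sort_by(|a, b| b.risk_score().partial_cmp(&a.risk_score()).unwrap());

    for asset in &inventory {
        println!("{:<20} risk = {:.2}", asset.name, asset.risk_score());
    }
}
```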
    Where to start? Look after your own organization first
    Sovereignty and systems thinking go hand in hand: it’s all about scope. In enterprise architecture or business design, the biggest mistake is boiling the ocean—trying to solve everything at once.
    Instead, focus on your own sovereignty. Worry about your own organization, your own jurisdiction. Know where your own borders are. Understand who your customers are, and what their requirements are. For example, if you’re a manufacturer selling into specific countries—what do those countries require? Solve for that, not for everything else. Don’t try to plan for every possible future scenario.
    Focus on what you have, what you’re responsible for, and what you need to address right now. Classify and prioritise your data assets based on real-world risk. Do that, and you’re already more than halfway toward solving digital sovereignty—with all the efficiency, control, and compliance benefits that come with it.
    Digital sovereignty isn’t just regulatory, but strategic. Organizations that act now can reduce risk, improve operational clarity, and prepare for a future based on trust, compliance, and resilience.
    The post Reclaiming Control: Digital Sovereignty in 2025 appeared first on Gigaom.
    #reclaiming #control #digital #sovereignty
    Reclaiming Control: Digital Sovereignty in 2025
    Sovereignty has mattered since the invention of the nation state—defined by borders, laws, and taxes that apply within and without. While many have tried to define it, the core idea remains: nations or jurisdictions seek to stay in control, usually to the benefit of those within their borders. Digital sovereignty is a relatively new concept, also difficult to define but straightforward to understand. Data and applications don’t understand borders unless they are specified in policy terms, as coded into the infrastructure. The World Wide Web had no such restrictions at its inception. Communitarian groups such as the Electronic Frontier Foundation, service providers and hyperscalers, non-profits and businesses all embraced a model that suggested data would look after itself. But data won’t look after itself, for several reasons. First, data is massively out of control. We generate more of it all the time, and for at least two or three decades, most organizations haven’t fully understood their data assets. This creates inefficiency and risk—not least, widespread vulnerability to cyberattack. Risk is probability times impact—and right now, the probabilities have shot up. Invasions, tariffs, political tensions, and more have brought new urgency. This time last year, the idea of switching off another country’s IT systems was not on the radar. Now we’re seeing it happen—including the U.S. government blocking access to services overseas. Digital sovereignty isn’t just a European concern, though it is often framed as such. In South America for example, I am told that sovereignty is leading conversations with hyperscalers; in African countries, it is being stipulated in supplier agreements. Many jurisdictions are watching, assessing, and reviewing their stance on digital sovereignty. As the adage goes: a crisis is a problem with no time left to solve it. Digital sovereignty was a problem in waiting—but now it’s urgent. It’s gone from being an abstract ‘right to sovereignty’ to becoming a clear and present issue, in government thinking, corporate risk and how we architect and operate our computer systems. What does the digital sovereignty landscape look like today? Much has changed since this time last year. Unknowns remain, but much of what was unclear this time last year is now starting to solidify. Terminology is clearer – for example talking about classification and localisation rather than generic concepts. We’re seeing a shift from theory to practice. Governments and organizations are putting policies in place that simply didn’t exist before. For example, some countries are seeing “in-country” as a primary goal, whereas othersare adopting a risk-based approach based on trusted locales. We’re also seeing a shift in risk priorities. From a risk standpoint, the classic triad of confidentiality, integrity, and availability are at the heart of the digital sovereignty conversation. Historically, the focus has been much more on confidentiality, driven by concerns about the US Cloud Act: essentially, can foreign governments see my data? This year however, availability is rising in prominence, due to geopolitics and very real concerns about data accessibility in third countries. Integrity is being talked about less from a sovereignty perspective, but is no less important as a cybercrime target—ransomware and fraud being two clear and present risks. Thinking more broadly, digital sovereignty is not just about data, or even intellectual property, but also the brain drain. 
Countries don’t want all their brightest young technologists leaving university only to end up in California or some other, more attractive country. They want to keep talent at home and innovate locally, to the benefit of their own GDP. How Are Cloud Providers Responding? Hyperscalers are playing catch-up, still looking for ways to satisfy the letter of the law whilst ignoringits spirit. It’s not enough for Microsoft or AWS to say they will do everything they can to protect a jurisdiction’s data, if they are already legally obliged to do the opposite. Legislation, in this case US legislation, calls the shots—and we all know just how fragile this is right now. We see hyperscaler progress where they offer technology to be locally managed by a third party, rather than themselves. For example, Google’s partnership with Thales, or Microsoft with Orange, both in France. However, these are point solutions, not part of a general standard. Meanwhile, AWS’ recent announcement about creating a local entity doesn’t solve for the problem of US over-reach, which remains a core issue. Non-hyperscaler providers and software vendors have an increasingly significant play: Oracle and HPE offer solutions that can be deployed and managed locally for example; Broadcom/VMware and Red Hat provide technologies that locally situated, private cloud providers can host. Digital sovereignty is thus a catalyst for a redistribution of “cloud spend” across a broader pool of players. What Can Enterprise Organizations Do About It? First, see digital sovereignty as a core element of data and application strategy. For a nation, sovereignty means having solid borders, control over IP, GDP, and so on. That’s the goal for corporations as well—control, self-determination, and resilience. If sovereignty isn’t seen as an element of strategy, it gets pushed down into the implementation layer, leading to inefficient architectures and duplicated effort. Far better to decide up front what data, applications and processes need to be treated as sovereign, and defining an architecture to support that. This sets the scene for making informed provisioning decisions. Your organization may have made some big bets on key vendors or hyperscalers, but multi-platform thinking increasingly dominates: multiple public and private cloud providers, with integrated operations and management. Sovereign cloud becomes one element of a well-structured multi-platform architecture. It is not cost-neutral to deliver on sovereignty, but the overall business value should be tangible. A sovereignty initiative should bring clear advantages, not just for itself, but through the benefits that come with better control, visibility, and efficiency. Knowing where your data is, understanding which data matters, managing it efficiently so you’re not duplicating or fragmenting it across systems—these are valuable outcomes. In addition, ignoring these questions can lead to non-compliance or be outright illegal. Even if we don’t use terms like ‘sovereignty’, organizations need a handle on their information estate. Organizations shouldn’t be thinking everything cloud-based needs to be sovereign, but should be building strategies and policies based on data classification, prioritization and risk. Build that picture and you can solve for the highest-priority items first—the data with the strongest classification and greatest risk. That process alone takes care of 80–90% of the problem space, avoiding making sovereignty another problem whilst solving nothing. Where to start? 
Look after your own organization first Sovereignty and systems thinking go hand in hand: it’s all about scope. In enterprise architecture or business design, the biggest mistake is boiling the ocean—trying to solve everything at once. Instead, focus on your own sovereignty. Worry about your own organization, your own jurisdiction. Know where your own borders are. Understand who your customers are, and what their requirements are. For example, if you’re a manufacturer selling into specific countries—what do those countries require? Solve for that, not for everything else. Don’t try to plan for every possible future scenario. Focus on what you have, what you’re responsible for, and what you need to address right now. Classify and prioritise your data assets based on real-world risk. Do that, and you’re already more than halfway toward solving digital sovereignty—with all the efficiency, control, and compliance benefits that come with it. Digital sovereignty isn’t just regulatory, but strategic. Organizations that act now can reduce risk, improve operational clarity, and prepare for a future based on trust, compliance, and resilience. The post Reclaiming Control: Digital Sovereignty in 2025 appeared first on Gigaom. #reclaiming #control #digital #sovereignty
    GIGAOM.COM
    Reclaiming Control: Digital Sovereignty in 2025
    Sovereignty has mattered since the invention of the nation state—defined by borders, laws, and taxes that apply within and without. While many have tried to define it, the core idea remains: nations or jurisdictions seek to stay in control, usually to the benefit of those within their borders. Digital sovereignty is a relatively new concept, also difficult to define but straightforward to understand. Data and applications don’t understand borders unless they are specified in policy terms, as coded into the infrastructure. The World Wide Web had no such restrictions at its inception. Communitarian groups such as the Electronic Frontier Foundation, service providers and hyperscalers, non-profits and businesses all embraced a model that suggested data would look after itself. But data won’t look after itself, for several reasons. First, data is massively out of control. We generate more of it all the time, and for at least two or three decades (according to historical surveys I’ve run), most organizations haven’t fully understood their data assets. This creates inefficiency and risk—not least, widespread vulnerability to cyberattack. Risk is probability times impact—and right now, the probabilities have shot up. Invasions, tariffs, political tensions, and more have brought new urgency. This time last year, the idea of switching off another country’s IT systems was not on the radar. Now we’re seeing it happen—including the U.S. government blocking access to services overseas. Digital sovereignty isn’t just a European concern, though it is often framed as such. In South America for example, I am told that sovereignty is leading conversations with hyperscalers; in African countries, it is being stipulated in supplier agreements. Many jurisdictions are watching, assessing, and reviewing their stance on digital sovereignty. As the adage goes: a crisis is a problem with no time left to solve it. Digital sovereignty was a problem in waiting—but now it’s urgent. It’s gone from being an abstract ‘right to sovereignty’ to becoming a clear and present issue, in government thinking, corporate risk and how we architect and operate our computer systems. What does the digital sovereignty landscape look like today? Much has changed since this time last year. Unknowns remain, but much of what was unclear this time last year is now starting to solidify. Terminology is clearer – for example talking about classification and localisation rather than generic concepts. We’re seeing a shift from theory to practice. Governments and organizations are putting policies in place that simply didn’t exist before. For example, some countries are seeing “in-country” as a primary goal, whereas others (the UK included) are adopting a risk-based approach based on trusted locales. We’re also seeing a shift in risk priorities. From a risk standpoint, the classic triad of confidentiality, integrity, and availability are at the heart of the digital sovereignty conversation. Historically, the focus has been much more on confidentiality, driven by concerns about the US Cloud Act: essentially, can foreign governments see my data? This year however, availability is rising in prominence, due to geopolitics and very real concerns about data accessibility in third countries. Integrity is being talked about less from a sovereignty perspective, but is no less important as a cybercrime target—ransomware and fraud being two clear and present risks. 
    Thinking more broadly, digital sovereignty is not just about data, or even intellectual property, but also the brain drain. Countries don’t want all their brightest young technologists leaving university only to end up in California or some other, more attractive country. They want to keep talent at home and innovate locally, to the benefit of their own GDP.
    How Are Cloud Providers Responding?
    Hyperscalers are playing catch-up, still looking for ways to satisfy the letter of the law whilst ignoring (in the French sense) its spirit. It’s not enough for Microsoft or AWS to say they will do everything they can to protect a jurisdiction’s data if they are already legally obliged to do the opposite. Legislation, in this case US legislation, calls the shots—and we all know just how fragile this is right now.
    We see hyperscaler progress where they offer technology to be locally managed by a third party, rather than by themselves. For example, Google’s partnership with Thales, or Microsoft’s with Orange, both in France (Microsoft has a similar arrangement in Germany). However, these are point solutions, not part of a general standard. Meanwhile, AWS’ recent announcement about creating a local entity doesn’t solve the problem of US over-reach, which remains a core issue.
    Non-hyperscaler providers and software vendors have an increasingly significant role to play: Oracle and HPE offer solutions that can be deployed and managed locally, for example; Broadcom/VMware and Red Hat provide technologies that locally situated, private cloud providers can host. Digital sovereignty is thus a catalyst for a redistribution of “cloud spend” across a broader pool of players.
    What Can Enterprise Organizations Do About It?
    First, see digital sovereignty as a core element of data and application strategy. For a nation, sovereignty means having solid borders, control over IP, GDP, and so on. That’s the goal for corporations as well—control, self-determination, and resilience. If sovereignty isn’t seen as an element of strategy, it gets pushed down into the implementation layer, leading to inefficient architectures and duplicated effort. Far better to decide up front what data, applications and processes need to be treated as sovereign, and to define an architecture to support that.
    This sets the scene for making informed provisioning decisions. Your organization may have made some big bets on key vendors or hyperscalers, but multi-platform thinking increasingly dominates: multiple public and private cloud providers, with integrated operations and management. Sovereign cloud becomes one element of a well-structured multi-platform architecture. It is not cost-neutral to deliver on sovereignty, but the overall business value should be tangible. A sovereignty initiative should bring clear advantages, not just for itself, but through the benefits that come with better control, visibility, and efficiency. Knowing where your data is, understanding which data matters, managing it efficiently so you’re not duplicating or fragmenting it across systems—these are valuable outcomes. In addition, ignoring these questions can lead to non-compliance or be outright illegal. Even if we don’t use terms like ‘sovereignty’, organizations need a handle on their information estate. Organizations shouldn’t be thinking everything cloud-based needs to be sovereign; they should be building strategies and policies based on data classification, prioritization and risk.
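    As a purely illustrative sketch of the classification-driven, multi-platform approach described above, the policy table below maps hypothetical data classifications to permitted hosting options. The category names and rules are assumptions made for the example, not prescriptions from the article.

```python
# Hypothetical placement policy: data classification -> permitted hosting options.
# Classification names and rules are illustrative assumptions only.

PLACEMENT_POLICY = {
    "sovereign-critical": ["in-country private cloud", "locally operated partner cloud"],
    "regulated":          ["trusted-locale sovereign cloud", "in-country private cloud"],
    "internal":           ["any major public cloud", "private cloud"],
    "public":             ["any major public cloud"],
}

def permitted_platforms(classification: str) -> list[str]:
    """Return the hosting options this example policy allows for a classification."""
    if classification not in PLACEMENT_POLICY:
        raise ValueError(f"Unknown classification: {classification!r}")
    return PLACEMENT_POLICY[classification]

print(permitted_platforms("regulated"))
# ['trusted-locale sovereign cloud', 'in-country private cloud']
```

    In practice, a table like this would be agreed with legal and compliance teams per jurisdiction; the point is simply that placement decisions follow from classification rather than from habit.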
    Build that picture and you can solve for the highest-priority items first—the data with the strongest classification and greatest risk. That process alone takes care of 80–90% of the problem space, and avoids turning sovereignty into yet another problem that solves nothing.
    Where to start? Look after your own organization first.
    Sovereignty and systems thinking go hand in hand: it’s all about scope. In enterprise architecture or business design, the biggest mistake is boiling the ocean—trying to solve everything at once. Instead, focus on your own sovereignty. Worry about your own organization, your own jurisdiction. Know where your own borders are. Understand who your customers are, and what their requirements are. For example, if you’re a manufacturer selling into specific countries, what do those countries require? Solve for that, not for everything else.
    Don’t try to plan for every possible future scenario. Focus on what you have, what you’re responsible for, and what you need to address right now. Classify and prioritise your data assets based on real-world risk. Do that, and you’re already more than halfway toward solving digital sovereignty—with all the efficiency, control, and compliance benefits that come with it.
    Digital sovereignty isn’t just regulatory, but strategic. Organizations that act now can reduce risk, improve operational clarity, and prepare for a future based on trust, compliance, and resilience.
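    By way of illustration only, here is a small sketch of the “classify, prioritise, tackle the top items first” triage the article recommends. The classification weights and the 80% cut-off are assumptions chosen for the example, not figures from the article.

```python
# Illustrative triage: rank data assets by classification weight x risk,
# then work the list top-down until ~80% of the weighted risk is covered.
# Weights, risk scores, and the cut-off are assumptions for the sketch.

CLASSIFICATION_WEIGHT = {"sovereign-critical": 3, "regulated": 2, "internal": 1}

assets = [
    {"name": "customer_pii",    "classification": "sovereign-critical", "risk": 20},
    {"name": "hr_records",      "classification": "regulated",          "risk": 12},
    {"name": "build_artifacts", "classification": "internal",           "risk": 3},
]

def priority(asset: dict) -> int:
    return CLASSIFICATION_WEIGHT[asset["classification"]] * asset["risk"]

ranked = sorted(assets, key=priority, reverse=True)
total = sum(priority(a) for a in ranked)

covered, backlog = 0, []
for asset in ranked:
    backlog.append(asset["name"])
    covered += priority(asset)
    if covered / total >= 0.8:  # stop once the bulk of the weighted risk is addressed
        break

print("Tackle first:", backlog)
```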
  • Microsoft trolls Apple's new Liquid Glass UI for looking like Windows Vista

    In a nutshell: The OS updates coming to Apple devices later this year will institute the company's first major UI design shift in over a decade, but eagle-eyed observers noticed similarities with an old version of Windows – comparisons that haven't escaped Microsoft's notice. Thankfully, users concerned about Apple's upcoming interface will have options to change its visual presentation.
    Some of Microsoft's social media accounts recently poked fun at the upcoming "Liquid Glass" user interface design language Apple unveiled at WWDC this week. Although the Cupertino giant has hailed the update as a major innovation, many immediately began comparing it to Microsoft's nearly two-decade-old Windows Vista UI.
    A post shared by Windows (@windows) on Instagram
    Liquid Glass is Apple's name for the new visual style arriving in iOS 26, iPadOS 26, macOS 26 Tahoe, watchOS 26, and tvOS 26, which will launch this fall. Inspired by the Apple Vision Pro's visionOS, the design language favors rounded edges and transparent backgrounds for inputs and other UI functions.
    It is Apple's most significant design change since iOS 7 debuted almost 12 years ago, and the first to establish a unified language across all of the company's devices.
    On the left: a nice, minimalistic Liquid Glass UI look. On the right: Liquid Glass looking all kinds of wrong in the current beta.

    Apps, wallpapers, and other background content will be visible through app icons, notifications, and menu elements for a glass-like appearance. Apple claims that the effect will improve cohesion across the interface, but beta testers are concerned that text will become less readable.
    Others, including Microsoft, mocked the update's resemblance to Windows Vista's glass-like "Aero" aesthetic, which debuted in 2007. That OS also made UI elements partially transparent, but Microsoft eventually phased it out when it began moving toward its current design language.
    The official Windows Instagram account recently responded to Apple's presentation by posting a slideshow of Vista screenshots played over a nostalgic Windows boot tune. The Windows Twitter account also shared a picture recalling the Vista-era profile icons.
    Other social media users joined in on the fun. Some highlighted the unfortunate placement of the YouTube icon in Apple's Liquid Glass explainer video, which the company altered. Others compared the design language to the unique chassis for Apple's 2000 Power Mac G4 Cube and the main menu for Nintendo's 2012 Wii U game console.
    Fortunately, users can customize Liquid Glass by switching between transparent, light, and dark modes. They can also opt for a slightly more opaque presentation with a toggle located under Settings > Accessibility > Display & Text Size > Reduce Transparency.
    WWW.TECHSPOT.COM
  • Why Half Backsplashes Are Taking Over Kitchen Design, According to Experts

    Pictured Above: Designer Amber Lewis balances New England charm with old-world sophistication with a half Calacatta Vagli marble backsplash in the kitchen of this Martha's Vineyard home.
    To backsplash or not to backsplash? That is the question. Or is it? Because if anyone’s ever told you “you shouldn’t do anything halfway,” they clearly haven’t heard of the half backsplash. This twist on a design mainstay makes a compelling case for stopping short. So maybe the real question is: to backsplash or to half backsplash?
    Lately, we’ve seen more and more designers going for the latter. “A trend these days is to use 1/2 or 2/3 stone backsplashes with a six- to nine-inch ledge,” says designer Jennifer Gilmer. “This is typically used behind a range and adds interest as well as softening the overall look.” It’s not just aesthetic—it’s strategic functionality. “The ledge is useful for salt and pepper shakers, olive oil, and other items,” she adds. Ahead, we break down everything to know about half backsplashes and why this kitchen trend is gaining traction in the design world.
    What Is a Half Backsplash?
    Photo: Lisa Petrole. Magnolia’s director of styling, Ashley Maddox, enlisted the help of designer Hilary Walker to create her midcentury-modern dream home in Waco, Texas, complete with walnut kitchen cabinetry topped with a Topzstone countertop continued into a partial backsplash.
    “A half backsplash or 1/3 backsplash is when the material stops at a point on the wall determined by the design,” explains designer Isabella Patrick. This makes it distinct from a “built-out or existing element, such as upper cabinets, a ceiling, soffit, or some other inherent element of the space.” In other words, it’s intentional, not just the result of running out of tile.
    Photo: Courtesy of JN Interior Spaces. Taking the ceiling height into consideration, JN Interior Spaces decided a half backsplash would be suitable for this sleek, modern kitchen.
    While traditional backsplashes typically reach the bottom of upper cabinetry or span the entire wall, partial backsplashes usually stop somewhere around four to 25 inches up, depending on the look you’re going for. And while it may sound like a design compromise, it’s actually quite the opposite.
    Why Designers Are Loving the Half-Height Look
    Opting for a half backsplash is a clever way to balance proportion, budget, and visual interest. “If the design does not have upper cabinets, we would opt for a half backsplash to create visual interest,” Patrick says. “A full wall of the same tile or stone could overwhelm the space and seem like an afterthought.”
    Photo: Shannon Dupre/DD Reps. Isabella Patrick experimented with this concept in her own kitchen, mixing materials for a more layered half backsplash look.
    Instead, Patrick often mixes materials—like running Cambria quartzite up from the counter to a ledge, then switching to Fireclay tile above. “This is a great example of how a singular material would have overwhelmed the space but also may have felt like an afterthought,” she explains. “Mixing materials and adding in details and personal touches is what good design is.”
    Another bonus? It lets the rest of the kitchen sing. “In another design, we eliminated the upper cabinets in favor of a more open and airy look so that the windows were not blocked—and so you were not walking right into a side view of cabinetry,” Patrick says. “No upper cabinets also makes the kitchen feel more of a transitional space and decorative, especially since it opens right into a dining room.”
    Photo: krafty_photos, copyright 2021. This kitchen from JN Interior Spaces proves that a partial backsplash can still make a big impact. They chose to use an iridescent, almost-patina tile in this Wyoming kitchen.
    For Jill Najinigier of JN Interior Spaces, the choice is just as much about form as it is function. “It's all about how the backsplash interacts with the architecture,” she explains. “Wall height, windows, the shape of the hood, upper cabinets, or open shelves—where do they start and terminate?”
    In one standout project, Najinigier used a luminous tile just tall enough to tuck under a tapered plaster hood, topped with a narrow stone ledge carved from the same slab as the counter. The result? “Clean lines that make a stunning statement.”
    It’s Decorative and Functional
    Photo: Heather Talbert. Designer Kate Pearce installed a statement-making marble backsplash. Bringing it only halfway up allows its beauty to be appreciated while giving the other aesthetic elements in the space room to breathe.
    Don’t underestimate what that ledge can do. Designer Kate Pearce swears by hers: “I love my little five-inch-deep marble shelf that allows me to style some vintage kitchenware in the space,” she says. “And I think the shelf (and the pieces styled on it) is exactly what gives the kitchen an approachable feel—versus having a full backsplash of marble, which would have given the space a more serious vibe.”
    Photo: Stylish Productions. Prioritizing visual continuity, Italian designer Federica Asack of Masseria Chic used the same leathered sandstone, a natural material that will develop a wonderful patina, for both the counters and the backsplash.
    Designer Federica Asack of Masseria Chic used a leathered sandstone for both her countertop and half backsplash, adding a ledge that’s just deep enough to style. “It allows for a splash-free decorating opportunity to layer artwork and favorite objects,” she says.
    Designer Molly Watson agrees: “The simple shelf is just deep enough for some special items to be on display,” she notes of a project where carrying the countertop stone up the wall helped keep things visually calm and scaled to the space.
    The Verdict on Half Backsplashes
    Photo: Erin Kelly. “Keeping materials simple in this kitchen was important for scale,” says designer Molly Watson. “Carrying the countertop up the wall as a backsplash allowed the space to feel larger.”
    Half backsplashes are having a major design moment, but not just because they’re practical. They’re a blank canvas for creativity. From floating ledges and mixed materials to budget-conscious decisions that don’t skimp on style, they’re a smart (and stylish) way to make your kitchen feel lighter, livelier, and totally considered.
    So, go ahead—do it halfway.
    WWW.HOUSEBEAUTIFUL.COM
  • OThink-R1: A Dual-Mode Reasoning Framework to Cut Redundant Computation in LLMs

    The Inefficiency of Static Chain-of-Thought Reasoning in LRMs
    Recent large reasoning models (LRMs) achieve top performance by using detailed chain-of-thought (CoT) reasoning to solve complex tasks. However, many of the simple tasks they handle could be solved by smaller models with fewer tokens, making such elaborate reasoning unnecessary. This echoes human thinking: we use fast, intuitive responses for easy problems and slower, analytical thinking for complex ones. While LRMs mimic slow, logical reasoning, they generate significantly longer outputs, increasing computational cost. Current methods for reducing reasoning steps lack flexibility, limiting models to a single fixed reasoning style. There is a growing need for adaptive reasoning that adjusts effort according to task difficulty.
    Limitations of Existing Training-Based and Training-Free Approaches
    Recent research on improving reasoning efficiency in LRMs can be categorized into two main areas: training-based and training-free methods. Training strategies often use reinforcement learning or fine-tuning to limit token usage or adjust reasoning depth, but they tend to follow fixed patterns without flexibility. Training-free approaches utilize prompt engineering or pattern detection to shorten outputs during inference; however, they also lack adaptability. More recent work focuses on variable-length reasoning, where models adjust reasoning depth based on task complexity. Others study “overthinking,” where models over-reason unnecessarily. However, few methods enable dynamic switching between quick and thorough reasoning—something this paper addresses directly. 
    Introducing OThink-R1: Dynamic Fast/Slow Reasoning Framework
    Researchers from Zhejiang University and OPPO have developed OThink-R1, a new approach that enables LRMs to switch between fast and slow thinking intelligently, much like humans do. By analyzing reasoning patterns, they identified which steps are essential and which are redundant. With help from another model acting as a judge, they trained LRMs to adapt their reasoning style based on task complexity. Their method reduces unnecessary reasoning by over 23% without losing accuracy. Using a dual-reference loss function and curated fine-tuning datasets, OThink-R1 outperforms previous models in both efficiency and performance on various math and question-answering tasks.
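    To make the judge-based pruning idea more concrete, here is a minimal, hypothetical sketch. The `judge` callable, the step format, and the toy redundancy markers are assumptions made for illustration; the paper's actual pipeline and prompts may differ.

```python
# Hypothetical sketch of judge-based pruning of a reasoning trace.
# The `judge` callable stands in for an external LLM acting as a judge;
# its decision rule here is a toy heuristic, not the paper's exact method.
from typing import Callable, List

def prune_trace(question: str, steps: List[str],
                judge: Callable[[str, str], bool]) -> List[str]:
    """Keep only the reasoning steps the judge marks as essential."""
    return [step for step in steps if judge(question, step)]

def toy_judge(question: str, step: str) -> bool:
    # Treat steps that merely re-check or restate as redundant.
    redundant_markers = ("double-check", "let me verify again", "restating")
    return not any(marker in step.lower() for marker in redundant_markers)

steps = [
    "Add 17 and 25 to get 42.",
    "Double-check: 17 + 25 is indeed 42.",
    "Therefore the answer is 42.",
]
print(prune_trace("What is 17 + 25?", steps, toy_judge))
# ['Add 17 and 25 to get 42.', 'Therefore the answer is 42.']
```

    A pruned trace like this, paired with the original question and answer, is the kind of curated example the article says is used for fine-tuning.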
    System Architecture: Reasoning Pruning and Dual-Reference Optimization
    The OThink-R1 framework helps LRMs dynamically switch between fast and slow thinking. First, it identifies when LRMs include unnecessary reasoning, like overexplaining or double-checking, versus when detailed steps are truly essential. Using this, it builds a curated training dataset by pruning redundant reasoning and retaining valuable logic. Then, during fine-tuning, a special loss function balances both reasoning styles. This dual-reference loss compares the model’s outputs with both fast and slow thinking variants, encouraging flexibility. As a result, OThink-R1 can adaptively choose the most efficient reasoning path for each problem while preserving accuracy and logical depth. 
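    The sketch below shows one plausible shape for a dual-reference objective: a standard cross-entropy term plus KL-divergence terms against frozen "fast" and "slow" reference distributions. The weights, tensor shapes, and exact formulation are assumptions for illustration; the paper's actual loss may differ.

```python
# Plausible shape of a dual-reference loss (illustrative only):
# cross-entropy on targets plus KL terms against frozen fast/slow references.
import torch
import torch.nn.functional as F

def dual_reference_loss(student_logits, target_ids, fast_ref_logits,
                        slow_ref_logits, alpha=0.5, beta=0.5):
    """student_logits: (batch, seq, vocab); target_ids: (batch, seq)."""
    vocab = student_logits.size(-1)
    ce = F.cross_entropy(student_logits.reshape(-1, vocab), target_ids.reshape(-1))

    log_p_student = F.log_softmax(student_logits, dim=-1)
    p_fast = F.softmax(fast_ref_logits, dim=-1)
    p_slow = F.softmax(slow_ref_logits, dim=-1)

    kl_fast = F.kl_div(log_p_student, p_fast, reduction="batchmean")
    kl_slow = F.kl_div(log_p_student, p_slow, reduction="batchmean")
    return ce + alpha * kl_fast + beta * kl_slow

# Toy usage with random tensors, just to show the expected shapes.
B, T, V = 2, 4, 10
loss = dual_reference_loss(torch.randn(B, T, V), torch.randint(0, V, (B, T)),
                           torch.randn(B, T, V), torch.randn(B, T, V))
print(loss.item())
```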

    Empirical Evaluation and Comparative Performance
    The OThink-R1 model was tested on simpler QA and math tasks to evaluate its ability to switch between fast and slow reasoning. Using datasets like OpenBookQA, CommonsenseQA, ASDiv, and GSM8K, the model demonstrated strong performance, generating fewer tokens while maintaining or improving accuracy. Compared to baselines such as NoThinking and DualFormer, OThink-R1 showed a better balance between efficiency and effectiveness. Ablation studies confirmed the importance of pruning, KL constraints, and the LLM judge in achieving optimal results. A case study illustrated that unnecessary reasoning can lead to overthinking and reduced accuracy, highlighting OThink-R1’s strength in adaptive reasoning.

    Conclusion: Towards Scalable and Efficient Hybrid Reasoning Systems
    In conclusion, OThink-R1 is a large reasoning model that adaptively switches between fast and slow thinking modes to improve both efficiency and performance. It addresses the issue of unnecessarily complex reasoning in large models by analyzing and classifying reasoning steps as either essential or redundant. By pruning the redundant ones while maintaining logical accuracy, OThink-R1 reduces unnecessary computation. It also introduces a dual-reference KL-divergence loss to strengthen hybrid reasoning. Tested on math and QA tasks, it cuts down reasoning redundancy by 23% without sacrificing accuracy, showing promise for building more adaptive, scalable, and efficient AI reasoning systems in the future. 

    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.
    WWW.MARKTECHPOST.COM
  • Need to run Windows on your Mac? A Parallels Pro subscription just went on sale

    Macworld

    If you have a Mac but still need to run Windows apps, Parallels Pro makes it surprisingly easy. Instead of switching computers or rebooting into a different system, Parallels lets you run Windows, Linux, or other operating systems right alongside macOS. Whether you are moving from PC to Mac or just need access to certain programs that do not have Mac versions, this tool creates a virtual machine that works like a second computer within your Mac. It’s usually $119.99 for a 1-year Parallels Desktop Subscription for Macs, but right now it’s only $74.99.
    What can Parallels do?

    Parallels Pro lets you run Windows 10, Windows 11, or even multiple operating systems simultaneously. It also includes Parallels Toolbox, which gives you over 30 one-touch tools for both Mac and PC, helping you handle tasks like freeing up disk space or taking screenshots with just a click.

    The Parallels AI Package includes a Linux-based virtual machine built for machine learning and AI development. With GitHub integration and natural language VM control, managing virtual machines becomes much simpler. There’s even an enhanced Packer plugin that automates setup for Apple Silicon Macs.

    If you want to run something other than macOS, then get a 1-Year Parallels Pro Subscription while it’s on sale for $74.99.
    Parallels Pro for Mac: 1-Year Subscription (See Deal)

    StackSocial prices subject to change.
    WWW.MACWORLD.COM