• PlayStation finally removes regional restrictions from Helldivers 2 and more after infuriating gamers everywhere 


    PlayStation’s regional restrictions on its PC games have proved infuriating for players around the world. While most were unaffected, those in certain regions found themselves unable to play Helldivers 2 and other titles because of the restrictions.
    Thankfully, after months of complaints, it appears that PlayStation is finally removing the regional restrictions from its PC releases in a large number of countries. However, not every title has been updated at the time of writing.
    Regional restrictions removed from Helldivers 2 and more 
    As spotted by players online (thanks, Wario64), a number of Steam database updates have changed the regional restrictions of PlayStation games on PC.
    Games such as Helldivers 2, Spider-Man 2, God of War: Ragnarok and The Last of Us: Part 2 are now available to purchase in a large number of additional countries. The change appears to be rolling out to PlayStation PC releases at the time of writing.
    It’s been a long time coming, and the introduction of the restrictions last year was a huge controversy for the company. Since the restrictions were put in place, players who previously purchased Helldivers 2 were unable to play the title online without a VPN. 
    Additionally, Ghost of Tsushima could be played in a number of countries, but its Legends multiplayer mode was inaccessible due to the regional issues. 
    Honestly, PlayStation should have removed these restrictions far sooner. Still, the phrase “better late than never” exists for a reason, and we’re happy that more gamers around the world are no longer punished simply for being born in a different country.
    For more PlayStation news, read the company’s recent comments about the next generation PlayStation 6 console. Additionally, read about potential PS Plus price increases that could be on the way as the company aims to “maximise profitability”. 

    Helldivers 2
    Platform: PC, PlayStation 5
    Genre: Action, Shooter, Third Person
    VideoGamer score: 8

  • Air-Conditioning Can Help the Power Grid instead of Overloading It

    June 13, 2025 | 6 min read
    Air-Conditioning Can Surprisingly Help the Power Grid during Extreme Heat
    Switching on air-conditioning during extreme heat doesn’t have to make us feel guilty—it can actually boost power grid reliability and help bring more renewable energy online.
    By Johanna Mathieu & The Conversation US
    Image: depotpro/Getty Images

    The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.

    As summer arrives, people are turning on air conditioners in most of the U.S. But if you’re like me, you always feel a little guilty about that. Past generations managed without air conditioning – do I really need it? And how bad is it to use all this electricity for cooling in a warming world?

    If I leave my air conditioner off, I get too hot. But if everyone turns on their air conditioner at the same time, electricity demand spikes, which can force power grid operators to activate some of the most expensive, and dirtiest, power plants. Sometimes those spikes can ask too much of the grid and lead to brownouts or blackouts.

    Research I recently published with a team of scholars makes me feel a little better, though. We have found that it is possible to coordinate the operation of large numbers of home air-conditioning units, balancing supply and demand on the power grid – and without making people endure high temperatures inside their homes.

    Studies along these lines, using remote control of air conditioners to support the grid, have for many years explored theoretical possibilities like this. However, few approaches have been demonstrated in practice, and never for such a high-value application at this scale. The system we developed not only demonstrated the ability to balance the grid on timescales of seconds, but also proved it was possible to do so without affecting residents’ comfort.

    The benefits include increasing the reliability of the power grid, which makes it easier for the grid to accept more renewable energy. Our goal is to turn air conditioners from a challenge for the power grid into an asset, supporting a shift away from fossil fuels toward cleaner energy.

    Adjustable equipment

    My research focuses on batteries, solar panels and electric equipment – such as electric vehicles, water heaters, air conditioners and heat pumps – that can adjust itself to consume different amounts of energy at different times.

    Originally, the U.S. electric grid was built to transport electricity from large power plants to customers’ homes and businesses. And originally, power plants were large, centralized operations that burned coal or natural gas, or harvested energy from nuclear reactions. These plants were typically always available and could adjust how much power they generated in response to customer demand, so the grid would be balanced between power coming in from producers and being used by consumers.

    But the grid has changed. There are more renewable energy sources, from which power isn’t always available – like solar panels at night or wind turbines on calm days. And there are the devices and equipment I study. These newer options, called “distributed energy resources,” generate or store energy near where consumers need it – or adjust how much energy they’re using in real time.

    One aspect of the grid hasn’t changed, though: There’s not much storage built into the system. So every time you turn on a light, for a moment there’s not enough electricity to supply everything that wants it right then: The grid needs a power producer to generate a little more power. And when you turn off a light, there’s a little too much: A power producer needs to ramp down.

    The way power plants know what real-time power adjustments are needed is by closely monitoring the grid frequency. The goal is to provide electricity at a constant frequency – 60 hertz – at all times. If more power is needed than is being produced, the frequency drops and a power plant boosts output. If there’s too much power being produced, the frequency rises and a power plant slows production a little. These actions, a process called “frequency regulation,” happen in a matter of seconds to keep the grid balanced.

    This output flexibility, primarily from power plants, is key to keeping the lights on for everyone.

    Finding new options

    I’m interested in how distributed energy resources can improve flexibility in the grid. They can release more energy, or consume less, to respond to the changing supply or demand, and help balance the grid, ensuring the frequency remains near 60 hertz.

    Some people fear that doing so might be invasive, giving someone outside your home the ability to control your battery or air conditioner. Therefore, we wanted to see if we could help balance the grid with frequency regulation using home air-conditioning units rather than power plants – without affecting how residents use their appliances or how comfortable they are in their homes.

    From 2019 to 2023, my group at the University of Michigan tried this approach, in collaboration with researchers at Pecan Street Inc., Los Alamos National Laboratory and the University of California, Berkeley, with funding from the U.S. Department of Energy Advanced Research Projects Agency-Energy.

    We recruited 100 homeowners in Austin, Texas, to do a real-world test of our system. All the homes had whole-house forced-air cooling systems, which we connected to custom control boards and sensors the owners allowed us to install in their homes. This equipment let us send instructions to the air-conditioning units based on the frequency of the grid.

    Before I explain how the system worked, I first need to explain how thermostats work. When people set thermostats, they pick a temperature, and the thermostat switches the air-conditioning compressor on and off to maintain the air temperature within a small range around that set point. If the temperature is set at 68 degrees, the thermostat turns the AC on when the temperature is, say, 70, and turns it off when it’s cooled down to, say, 66.

    Every few seconds, our system slightly changed the timing of air-conditioning compressor switching for some of the 100 air conditioners, causing the units’ aggregate power consumption to change. In this way, our small group of home air conditioners reacted to grid changes the way a power plant would – using more or less energy to balance the grid and keep the frequency near 60 hertz. Moreover, our system was designed to keep home temperatures within the same small temperature range around the set point.
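    To make the control idea concrete, here is a minimal, hypothetical sketch of a thermostat rule that nudges the compressor's switching threshold based on the measured grid frequency while never letting the indoor temperature leave the normal deadband around the set point. This is not the controller used in the study; the function name, gain and thresholds are illustrative assumptions.

```python
# Hypothetical sketch only, not the published controller from this study.
# A hysteresis thermostat whose turn-on threshold is shifted slightly by the
# grid-frequency deviation; the shift is clamped so the indoor temperature
# never leaves the usual deadband around the set point.

NOMINAL_HZ = 60.0

def compressor_command(temp_f, set_point_f, grid_hz, currently_on,
                       deadband_f=4.0, gain_f_per_hz=10.0):
    """Return True if the AC compressor should run during this control step."""
    deviation_hz = grid_hz - NOMINAL_HZ          # > 0 means surplus generation
    half_band = deadband_f / 2.0
    # Surplus power lowers the turn-on threshold (consume more, sooner);
    # a shortage raises it (shed load). Clamp so comfort is unaffected.
    shift = max(-half_band, min(half_band, gain_f_per_hz * deviation_hz))

    turn_on_at = set_point_f + half_band - shift
    turn_off_at = set_point_f - half_band

    if temp_f >= turn_on_at:
        return True                              # too warm: run the compressor
    if temp_f <= turn_off_at:
        return False                             # cool enough: stop
    return currently_on                          # inside the band: no change

# With a 68 F set point the normal band is 66-70 F, as in the example above.
# A slightly high grid frequency makes this unit start a little earlier:
print(compressor_command(69.8, 68.0, 60.03, currently_on=False))  # True
```

    Aggregated across many homes, small shifts like this change total power consumption enough to track a regulation signal within seconds, which is the behavior the field test measured.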
    Testing the approach

    We ran our system in four tests, each lasting one hour. We found two encouraging results.

    First, the air conditioners were able to provide frequency regulation at least as accurately as a traditional power plant, which shows that air conditioners could play a significant role in increasing grid flexibility. But perhaps more importantly – at least in terms of encouraging people to participate in these types of systems – we found that we were able to do so without affecting people’s comfort in their homes.

    Home temperatures did not deviate more than 1.6 degrees Fahrenheit from their set point. Homeowners were allowed to override the controls if they got uncomfortable, but most didn’t. For most tests, we received zero override requests. In the worst case, we received override requests from two of the 100 homes in our test.

    In practice, this sort of technology could be added to commercially available internet-connected thermostats. In exchange for credits on their energy bills, users could choose to join a service run by the thermostat company, their utility provider or some other third party.

    Then people could turn on the air conditioning in the summer heat without that pang of guilt, knowing they were helping to make the grid more reliable and more capable of accommodating renewable energy sources – without sacrificing their own comfort in the process.

    This article was originally published on The Conversation. Read the original article.
  • NVIDIA TensorRT Boosts Stable Diffusion 3.5 Performance on NVIDIA GeForce RTX and RTX PRO GPUs

    Generative AI has reshaped how people create, imagine and interact with digital content.
    As AI models continue to grow in capability and complexity, they require more VRAM, or video random access memory. The base Stable Diffusion 3.5 Large model, for example, uses over 18GB of VRAM — limiting the number of systems that can run it well.
    By applying quantization to the model, noncritical layers can be removed or run with lower precision. NVIDIA GeForce RTX 40 Series and the Ada Lovelace generation of NVIDIA RTX PRO GPUs support FP8 quantization to help run these quantized models, and the latest-generation NVIDIA Blackwell GPUs also add support for FP4.
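    As a rough illustration of why lower precision saves memory, the toy PyTorch snippet below quantizes a single weight tensor to FP8 with per-tensor scaling. It is not NVIDIA's or Stability AI's pipeline (which also calibrates activations and keeps sensitive layers at higher precision) and assumes a PyTorch build with float8 dtypes.

```python
# Toy per-tensor FP8 (e4m3) weight quantization, to show the memory effect of
# dropping from 16-bit to 8-bit weights. Not the TensorRT/Stability AI
# pipeline; assumes PyTorch 2.1+ with float8 support.
import torch

def quantize_to_fp8(weight: torch.Tensor):
    """Scale a tensor into the representable FP8 e4m3 range and cast it."""
    fp8_max = torch.finfo(torch.float8_e4m3fn).max        # 448 for e4m3
    scale = weight.float().abs().max() / fp8_max          # per-tensor scale
    weight_fp8 = (weight.float() / scale).to(torch.float8_e4m3fn)
    return weight_fp8, scale                              # keep scale to dequantize

w16 = torch.randn(4096, 4096, dtype=torch.float16)
w8, scale = quantize_to_fp8(w16)

mib = lambda t: t.numel() * t.element_size() / 2**20
print(f"FP16 weights: {mib(w16):.0f} MiB, FP8 weights: {mib(w8):.0f} MiB")  # 32 vs 16

# Round-trip error check: dequantize and compare against the original.
err = (w8.to(torch.float32) * scale - w16.float()).abs().max()
print(f"max abs round-trip error: {err.item():.4f}")
```

    Weight storage halves, while the reported 40% end-to-end VRAM saving is smaller than a full halving, presumably because activations and the layers kept at higher precision still take their usual space.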
    NVIDIA collaborated with Stability AI to quantize its latest model, Stable Diffusion (SD) 3.5 Large, to FP8 — reducing VRAM consumption by 40%. Further optimizations to SD3.5 Large and Medium with the NVIDIA TensorRT software development kit (SDK) double performance.
    In addition, TensorRT has been reimagined for RTX AI PCs, combining its industry-leading performance with just-in-time (JIT), on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. TensorRT for RTX is now available as a standalone SDK for developers.
    RTX-Accelerated AI
    NVIDIA and Stability AI are boosting the performance and reducing the VRAM requirements of Stable Diffusion 3.5, one of the world’s most popular AI image models. With NVIDIA TensorRT acceleration and quantization, users can now generate and edit images faster and more efficiently on NVIDIA RTX GPUs.
    Stable Diffusion 3.5 quantized to FP8 generates images in half the time with similar quality as FP16. Prompt: A serene mountain lake at sunrise, crystal clear water reflecting snow-capped peaks, lush pine trees along the shore, soft morning mist, photorealistic, vibrant colors, high resolution.
    To address the VRAM limitations of SD3.5 Large, the model was quantized with TensorRT to FP8, reducing the VRAM requirement by 40% to 11GB. This means five GeForce RTX 50 Series GPU models can run the model entirely from memory, instead of just one.
    SD3.5 Large and Medium models were also optimized with TensorRT, an AI backend for taking full advantage of Tensor Cores. TensorRT optimizes a model’s weights and graph — the instructions on how to run a model — specifically for RTX GPUs.
    FP8 TensorRT boosts SD3.5 Large performance by 2.3x vs. BF16 PyTorch, with 40% less memory use. For SD3.5 Medium, BF16 TensorRT delivers a 1.7x speedup.
    Combined, FP8 TensorRT delivers a 2.3x performance boost on SD3.5 Large compared with running the original models in BF16 PyTorch, while using 40% less memory. And in SD3.5 Medium, BF16 TensorRT provides a 1.7x performance increase compared with BF16 PyTorch.
    The optimized models are now available on Stability AI’s Hugging Face page.
    NVIDIA and Stability AI are also collaborating to release SD3.5 as an NVIDIA NIM microservice, making it easier for creators and developers to access and deploy the model for a wide range of applications. The NIM microservice is expected to be released in July.
    TensorRT for RTX SDK Released
    Announced at Microsoft Build — and already available as part of the new Windows ML framework in preview — TensorRT for RTX is now available as a standalone SDK for developers.
    Previously, developers needed to pre-generate and package TensorRT engines for each class of GPU — a process that would yield GPU-specific optimizations but required significant time.
    With the new version of TensorRT, developers can create a generic TensorRT engine that’s optimized on device in seconds. This JIT compilation approach can be done in the background during installation or when they first use the feature.
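    The sketch below illustrates the general build-on-first-use-then-cache pattern using the long-standing standard TensorRT Python API. TensorRT for RTX is a new, slimmer SDK and its exact interface may differ, so treat this as an assumption-laden illustration; the file names are placeholders.

```python
# Illustration of on-device engine building with caching, using the standard
# TensorRT Python API. TensorRT for RTX performs this kind of just-in-time
# build with a much smaller runtime; its actual API may differ. File names
# are placeholders.
import os
import tensorrt as trt

LOGGER = trt.Logger(trt.Logger.WARNING)

def get_or_build_engine(onnx_path, cache_path):
    """Return a serialized engine, building it for this GPU on first use."""
    if os.path.exists(cache_path):
        with open(cache_path, "rb") as f:        # reuse the cached engine
            return f.read()

    builder = trt.Builder(LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):           # import the exported model
            raise RuntimeError(str(parser.get_error(0)))

    config = builder.create_builder_config()
    serialized = builder.build_serialized_network(network, config)  # tuned for this GPU

    with open(cache_path, "wb") as f:            # cache for later launches
        f.write(serialized)
    return serialized

# First call builds and caches (the slow, one-time step); later calls load instantly.
# engine = get_or_build_engine("model.onnx", "model.plan")
```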
    The easy-to-integrate SDK is now 8x smaller and can be invoked through Windows ML — Microsoft’s new AI inference backend in Windows. Developers can download the new standalone SDK from the NVIDIA Developer page or test it in the Windows ML preview.
    For more details, read this NVIDIA technical blog and this Microsoft Build recap.
    Join NVIDIA at GTC Paris
    At NVIDIA GTC Paris at VivaTech — Europe’s biggest startup and tech event — NVIDIA founder and CEO Jensen Huang yesterday delivered a keynote address on the latest breakthroughs in cloud AI infrastructure, agentic AI and physical AI. Watch a replay.
    GTC Paris runs through Thursday, June 12, with hands-on demos and sessions led by industry leaders. Whether attending in person or joining online, there’s still plenty to explore at the event.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
  • Inside Mark Zuckerberg’s AI hiring spree

    AI researchers have recently been asking themselves a version of the question, “Is that really Zuck?”

    As first reported by Bloomberg, the Meta CEO has been personally asking top AI talent to join his new “superintelligence” AI lab and reboot Llama. His recruiting process typically goes like this: a cold outreach via email or WhatsApp that cites the recruit’s work history and requests a 15-minute chat. Dozens of researchers have gotten these kinds of messages at Google alone.

    For those who do agree to hear his pitch, Zuckerberg highlights the latitude they’ll have to make risky bets, the scale of Meta’s products, and the money he’s prepared to invest in the infrastructure to support them. He makes clear that this new team will be empowered and sit with him at Meta’s headquarters, where I’m told the desks have already been rearranged for the incoming team.

    Most of the headlines so far have focused on the eye-popping compensation packages Zuckerberg is offering, some of which are well into the eight-figure range. As I’ve covered before, hiring the best AI researcher is like hiring a star basketball player: there are very few of them, and you have to pay up. Case in point: Zuckerberg basically just paid 14 Instagrams to hire away Scale AI CEO Alexandr Wang. It’s easily the most expensive hire of all time, dwarfing the billions that Google spent to rehire Noam Shazeer and his core team from Character.AI. “Opportunities of this magnitude often come at a cost,” Wang wrote in his note to employees this week. “In this instance, that cost is my departure.”

    Zuckerberg’s recruiting spree is already starting to rattle his competitors. The day before his offer deadline for some senior OpenAI employees, Sam Altman dropped an essay proclaiming that “before anything else, we are a superintelligence research company.” And after Zuckerberg tried to hire DeepMind CTO Koray Kavukcuoglu, he was given a larger SVP title and now reports directly to Google CEO Sundar Pichai.

    I expect Wang to have the title of “chief AI officer” at Meta when the new lab is announced. Jack Rae, a principal researcher from DeepMind who has signed on, will lead pre-training. Meta certainly needs a reset. According to my sources, Llama has fallen so far behind that Meta’s product teams have recently discussed using AI models from other companies. Meta’s internal coding tool for engineers, however, is already using Claude.

    While Meta’s existing AI researchers have good reason to be looking over their shoulders, Zuckerberg’s multibillion-dollar investment in Scale is making many longtime employees, or Scaliens, quite wealthy. They were popping champagne in the office this morning. Then, Wang held his last all-hands meeting to say goodbye and cried. He didn’t mention what he would be doing at Meta. I expect his new team will be unveiled within the next few weeks, after Zuckerberg gets a critical number of members to officially sign on.

    Tim Cook. Getty Images / The Verge

    Apple’s AI problem

    Apple is accustomed to being on top of the tech industry, and for good reason: the company has enjoyed a nearly unrivaled run of dominance. After spending time at Apple HQ this week for WWDC, I’m not sure that its leaders appreciate the meteorite that is heading their way. The hubris they display suggests they don’t understand how AI is fundamentally changing how people use and build software.

    Heading into the keynote on Monday, everyone knew not to expect the revamped Siri that had been promised the previous year.
    Apple, to its credit, acknowledged that it dropped the ball there, and it sounds like a large language model rebuild of Siri is very much underway and coming in 2026.

    The AI industry moves much faster than Apple’s release schedule, though. By the time Siri is perhaps good enough to keep pace, it will have to contend with the lock-in that OpenAI and others are building through their memory features. Apple and OpenAI are currently partners, but both companies want to ultimately control the interface for interacting with AI, which puts them on a collision course.

    Apple’s decision to let developers use its own, on-device foundational models for free in their apps sounds strategically smart, but unfortunately, the models look far from leading. Apple ran its own benchmarks, which aren’t impressive, and has confirmed a measly context window of 4,096 tokens. It’s also saying that the models will be updated alongside its operating systems — a snail’s pace compared to how quickly AI companies move. I’d be surprised if any serious developers use these Apple models, although I can see them being helpful to indie devs who are just getting started and don’t want to spend on the leading cloud models. I don’t think most people care about the privacy angle that Apple is claiming as a differentiator; they are already sharing their darkest secrets with ChatGPT and other assistants.

    Some of the new Apple Intelligence features I demoed this week were impressive, such as live language translation for calls. Mostly, I came away with the impression that the company is heavily leaning on its ChatGPT partnership as a stopgap until Apple Intelligence and Siri are both where they need to be.

    AI probably isn’t a near-term risk to Apple’s business. No one has shipped anything close to the contextually aware Siri that was demoed at last year’s WWDC. People will continue to buy Apple hardware for a long time, even after Sam Altman and Jony Ive announce their first AI device for ChatGPT next year. AR glasses aren’t going mainstream anytime soon either, although we can expect to see more eyewear from Meta, Google, and Snap over the coming year. In aggregate, these AI-powered devices could begin to siphon away engagement from the iPhone, but I don’t see people fully replacing their smartphones for a long time.

    The bigger question after this week is whether Apple has what it takes to rise to the occasion and culturally reset itself for the AI era. I would have loved to hear Tim Cook address this issue directly, but the only interview he did for WWDC was a cover story in Variety about the company’s new F1 movie.

    Elsewhere

    AI agents are coming. I recently caught up with Databricks CEO Ali Ghodsi ahead of his company’s annual developer conference this week in San Francisco. Given Databricks’ position, he has a unique, bird’s-eye view of where things are headed for AI. He doesn’t envision a near-term future where AI agents completely automate real-world tasks, but he does predict a wave of startups over the next year that will come close to completing actions in areas such as travel booking. He thinks humans will need to approve what an agent does before it goes off and completes a task. “We have most of the airplanes flying automated, and we still want pilots in there.”

    Buyouts are the new normal at Google. That much is clear after this week’s rollout of the “voluntary exit program” in core engineering, the Search organization, and some other divisions.
    In his internal memo, Search SVP Nick Fox was clear that management thinks buyouts have been successful in other parts of the company that have tried them. In a separate memo I saw, engineering exec Jen Fitzpatrick called the buyouts an “opportunity to create internal mobility and fresh growth opportunities.” Google appears to be attempting a cultural reset, which will be a challenging task for a company of its size. We’ll see if it can pull it off.

    Evan Spiegel wants help with AR glasses. I doubt that his announcement that consumer glasses are coming next year was solely aimed at AR developers. Telegraphing the plan and announcing that Snap has spent billions of dollars on hardware to date feels more aimed at potential partners that want to make a bigger glasses play, such as Google. A strategic investment could help insulate Snap from the pain of the stock market. A full acquisition may not be off the table, either. When he was recently asked if he’d be open to a sale, Spiegel didn’t shut it down like he always has, but instead said he’d “consider anything” that helps the company “create the next computing platform.”

    Link list

    More to click on:

    If you haven’t already, don’t forget to subscribe to The Verge, which includes unlimited access to Command Line and all of our reporting.

    As always, I welcome your feedback, especially if you’re an AI researcher fielding a juicy job offer. You can respond here or ping me securely on Signal.

    Thanks for subscribing.
  • Fortnite x Squid Game skins, release date, leaks, and more

    You can trust VideoGamer. Our team of gaming experts spend hours testing and reviewing the latest games, to ensure you're reading the most comprehensive guide possible. Rest assured, all imagery and advice is unique and original. Check out how we test and review games here

    The Squid Game collaboration is the next big thing for Fortnite. Its release date is right around the corner, and Epic Games has already revealed some of the things that will come with it. The iconic South Korean series will bring much more than just cosmetics, and here’s everything you need to know about it.
    In this article, we will reveal everything we know about Squid Game skins in Fortnite, as well as Creative tools. Furthermore, we will reveal the release date and take a look at several leaks that have come out since the collab announcement.
    Will the Squid Game collaboration bring Fortnite skins?
    According to trusted Fortnite leakers, the Squid Game partnership will introduce new cosmetics, including character skins. At the moment, it’s unknown what these skins will look like, but we should find out more details soon. The Squid Game collab is set to release on Friday, June 27, the same day the final season of the series premieres.
    Earlier this month, Epic Games confirmed that the collaboration will bring new UEFN (Unreal Editor for Fortnite) tools. Thanks to this, creators will be able to make Squid Game-themed maps with new items and mechanics.
    The Squid Game collaboration will bring new Fortnite cosmetics. Image by VideoGamer
    Epic has already released a cryptic teaser for the collab which reveals the following: “Red Greens, Square Meals, Affluent Arrivals, June 27th.” With less than two weeks to go until the big update, we expect even more teasers and possibly skin leaks. Considering how popular Squid Game is, this could become one of Fortnite’s most iconic collaborations.
    The next Fortnite update is set to come out on Tuesday, June 17. Since this update will contain Squid Game data, we could see more early leaks in just a few more days.

    Fortnite

    Platform:
    Android, iOS, macOS, Nintendo Switch, PC, PlayStation 4, PlayStation 5, Xbox One, Xbox Series S/X

    Genre:
    Action, Massively Multiplayer, Shooter

    9
    VideoGamer

    Subscribe to our newsletters!

    By subscribing, you agree to our Privacy Policy and may receive occasional deal communications; you can unsubscribe anytime.

    Share
  • Five Climate Issues to Watch When Trump Goes to Canada

    June 13, 2025 | 5 min read

    Five Climate Issues to Watch When Trump Goes to Canada

    President Trump will attend the G7 summit on Sunday in a nation he threatened to annex. He will also be an outlier on climate issues.

    By Sara Schonhardt & E&E News | Saul Loeb/AFP via Getty Images

    CLIMATEWIRE | The world’s richest nations are gathering Sunday in the Canadian Rockies for a summit that could reveal whether President Donald Trump's policies are shaking global climate efforts.

    The Group of Seven meeting comes at a challenging time for international climate policy. Trump’s tariff seesaw has cast a shadow over the global economy, and his domestic policies have threatened billions of dollars in funding for clean energy programs. Those pressures are colliding with record-breaking temperatures worldwide and explosive demand for energy, driven by power-hungry data centers linked to artificial intelligence technologies.

    On top of that, Trump has threatened to annex the host of the meeting — Canada — and members of his Cabinet have taken swipes at Europe’s use of renewable energy. Rather than being aligned with much of the world's assertion that fossil fuels should be tempered, Trump embraces the opposite position — drill for more oil and gas and keep burning coal, while repealing environmental regulations on the biggest sources of U.S. carbon pollution.

    Those moves illustrate his rejection of climate science and underscore his outlying positions on global warming in the G7. Here are five things to know about the summit.

    Who will be there?

    The group comprises Canada, France, Germany, Italy, Japan, the United Kingdom and the United States — plus the European Union. Together they account for more than 40 percent of gross domestic product globally and around a quarter of all energy-related carbon dioxide pollution, according to the International Energy Agency. The U.S. is the only one among them that is not trying to hit a carbon reduction goal.

    Some emerging economies have also been invited, including Mexico, India, South Africa and Brazil, the host of this year’s COP30 climate talks in November.

    Ahead of the meeting, the office of Canada's prime minister, Mark Carney, said he and Brazilian President Luiz Inácio Lula da Silva agreed to strengthen cooperation on energy security and critical minerals. White House press secretary Karoline Leavitt said Trump would be having "quite a few" bilateral meetings but that his schedule was in flux.

    The G7 first came together 50 years ago following the Arab oil embargo. Since then, its seven members have all joined the United Nations Framework Convention on Climate Change and the Paris Agreement. The U.S. is the only nation in the group that has withdrawn from the Paris Agreement, which counts almost every country in the world as a signatory.

    What’s on the table?

    Among Canada’s top priorities as host are strengthening energy security and fortifying critical mineral supply chains. Carney would also like to see some agreement on joint wildfire action.

    Expanding supply chains for critical minerals — and competing more aggressively with China over those resources — could be areas of common ground among the leaders. Climate change is expected to remain divisive.
    Looming over the discussions will be tariffs — which Trump has applied across the board — because they will have an impact on the clean energy transition.

    “I think probably the majority of the conversation will be less about climate per se, or certainly not using climate action as the frame, but more about energy transition and infrastructure as a way of kind of bridging the known gaps between most of the G7 and where the United States is right now,” said Dan Baer, director of the Europe program at the Carnegie Endowment for International Peace.

    What are the possible outcomes?

    The leaders could issue a communique at the end of their meeting, but those statements are based on consensus, something that would be difficult to reach without other G7 countries capitulating to Trump. Bloomberg reported Wednesday that nations won’t try to reach a joint agreement, in part because bridging gaps on climate change could be too hard. Instead, Carney could issue a chair’s summary or joint statements based on certain issues.

    The question is how far Canada will go to accommodate the U.S., which could try to roll back past statements on advancing clean energy, said Andrew Light, former assistant secretary of Energy for international affairs, who led ministerial-level negotiations for the G7.

    “They might say, rather than watering everything down that we accomplished in the last four years, we just do a chair's statement, which summarizes the debate,” Light said. “That will show you that you didn't get consensus, but you also didn't get capitulation.”

    What to watch for

    If there is a communique, Light says he’ll be looking for whether there is tougher language on China and any signal of support for science and the Paris Agreement. During his first term, Trump refused to support the Paris accord in the G7 and G20 declarations.

    The statement could avoid climate and energy issues entirely. But if it backtracks on those issues, that could be a sign that countries made a deal by trading climate-related language for something else, Light said.

    Baer of Carnegie said a statement framed around energy security and infrastructure could be seen as a “pragmatic adaptation” to the U.S. administration, rather than an indication that other leaders aren’t concerned about climate change.

    Climate activists have lower expectations. “Realistically, we can expect very little, if any, mention of climate change,” said Caroline Brouillette, executive director of Climate Action Network Canada.

    “The message we should be expecting from those leaders is that climate action remains a priority for the rest of the G7 … whether it's on the transition away from fossil fuels and supporting developing countries through climate finance,” she said. “Especially now that the U.S. is stepping back, we need countries, including Canada, to be stepping up.”

    Best- and worst-case scenarios

    The challenge for Carney will be preventing any further rupture with Trump, analysts said. In 2018, Trump made a hasty exit from the G7 summit, also in Canada that year, due largely to trade disagreements. He retracted his support for the joint statement.

    “The best, [most] realistic case outcome is that things don't get worse,” said Baer. The worst-case scenario? Some kind of “highly personalized spat” that could add to the sense of disorder, he added.

    “I think the G7 on the one hand has the potential to be more important than ever, as fewer and fewer platforms for international cooperation seem to be able to take action,” Baer said. “So it's both very important and also I don't have super-high expectations.”

    Reprinted from E&E News with permission from POLITICO, LLC. Copyright 2025. E&E News provides essential news for energy and environment professionals.
  • This Marvel Rivals hero became unstoppable after the controversial change

    You can trust VideoGamer. Our team of gaming experts spend hours testing and reviewing the latest games, to ensure you're reading the most comprehensive guide possible. Rest assured, all imagery and advice is unique and original. Check out how we test and review games here

    Season 2.5 of Marvel Rivals has completely shaken up the competitive meta. Ultron, a new Strategist, has performed much better than the community had expected him to, while Peni Parker has had the highest win rate ever since the season was released. Many other heroes have been changed, but one in particular has surpassed all expectations.
    In this article, we will dive deeper into Marvel Rivals win rates in Season 2.5. The situation has drastically changed over the past month, and some of the balance changes have turned out to be the right move.
    Marvel Rivals Season 2.5 has skyrocketed Jeff’s win rate
    With the release of Season 2.5, NetEase introduced a major rework for Jeff the Land Shark. The fan-favorite Strategist went through ground-breaking changes, which forced players to change their playstyle. Initially, the community believed that Jeff had been nerfed into the ground. However, this Marvel Rivals rework has turned out to be the right move.
    So far this season, Jeff the Land Shark holds a 46.39% win rate. While this is certainly not impressive, it actually surpasses the win rates of Invisible Woman (45.54%) and Luna Snow (45.28%), both of whom are considered amazing healers.
    Jeff’s win rate has been much better in Marvel Rivals Season 2.5. Image by VideoGamer
    In addition, Jeff is the hero with the largest win rate increase compared to Season 2, up by 4.35%. The only other hero with a comparable change is Peni Parker, whose win rate has jumped by 4.09%.
    Jeff is still far from a top-tier character in the game. However, there is no denying that he’s now a viable pick who can keep his teammates alive while also eliminating the enemy team. With Marvel Rivals Season 3 dropping in a few weeks, we expect even more balance changes to come out. While NetEase may buff Jeff even more, he seems to have reached a balanced “sweet spot” that doesn’t require immediate adjustments.

    Marvel Rivals

    Platform:
    macOS, PC, PlayStation 5, Xbox Series S, Xbox Series X

    Genre:
    Fighting, Shooter

    Subscribe to our newsletters!

    By subscribing, you agree to our Privacy Policy and may receive occasional deal communications; you can unsubscribe anytime.

    Share
  • Could Iran Have Been Close to Making a Nuclear Weapon? Uranium Enrichment Explained

    June 13, 2025 | 3 min read

    Could Iran Have Been Close to Making a Nuclear Weapon? Uranium Enrichment Explained

    When Israeli aircraft recently struck a uranium-enrichment complex in the nation, Iran could have been days away from achieving “breakout,” the ability to quickly turn “yellowcake” uranium into bomb-grade fuel, with its new high-speed centrifuges.

    By Deni Ellis Béchard, edited by Dean Visser

    Men work inside of a uranium conversion facility just outside the city of Isfahan, Iran, on March 30, 2005. The facility in Isfahan made hexafluoride gas, which was then enriched by feeding it into centrifuges at a facility in Natanz, Iran. Getty Images

    In the predawn darkness on Friday local time, Israeli military aircraft struck one of Iran’s uranium-enrichment complexes near the city of Natanz. The warheads aimed to do more than shatter concrete; they were meant to buy time, according to news reports. For months, Iran had seemed to be edging ever closer to “breakout,” the point at which its growing stockpile of partially enriched uranium could be converted into fuel for a nuclear bomb.

    But why did the strike occur now? One consideration could involve the way enrichment complexes work. Natural uranium is composed almost entirely of uranium 238, or U-238, an isotope that is relatively “heavy.” Only about 0.7 percent is uranium 235, a lighter isotope that is capable of sustaining a nuclear chain reaction. That means that in natural uranium, only seven atoms in 1,000 are the lighter, fission-ready U-235; “enrichment” simply means raising the percentage of U-235.

    U-235 can be used in warheads because its nucleus can easily be split. The International Atomic Energy Agency uses 25 kilograms of contained U-235 as the benchmark amount deemed sufficient for a first-generation implosion bomb. In such a weapon, the U-235 is surrounded by conventional explosives that, when detonated, compress the isotope. A separate device releases a neutron stream.

    Each time a neutron strikes a U-235 atom, the atom fissions; it divides and spits out, on average, two or three fresh neutrons—plus a burst of energy in the form of heat and gamma radiation. And the emitted neutrons in turn strike other U-235 nuclei, creating a self-sustaining chain reaction among the U-235 atoms that have been packed together into a critical mass. The result is a nuclear explosion. By contrast, the more common isotope, U-238, usually absorbs slow neutrons without splitting and cannot drive such a devastating chain reaction.

    To enrich uranium so that it contains enough U-235, the “yellowcake” uranium powder that comes out of a mine must go through a lengthy process of conversions to transform it from a solid into the gas uranium hexafluoride. First, a series of chemical processes refine the uranium and then, at high temperatures, each uranium atom is bound to six fluorine atoms. The result, uranium hexafluoride, is unusual: below 56 degrees Celsius it is a white, waxy solid, but just above that temperature, it sublimates into a dense, invisible gas.

    During enrichment, this uranium hexafluoride is loaded into a centrifuge: a metal cylinder that spins at tens of thousands of revolutions per minute—faster than the blades of a jet engine.
As the heavier U-238 molecules drift toward the cylinder wall, the lighter U-235 molecules remain closer to the center and are siphoned off. This new, slightly U-235-richer gas is then put into the next centrifuge. The process is repeated 10 to 20 times as ever more enriched gas is sent through a series of centrifuges.Enrichment is a slow process, but the Iranian government has been working on this for years and already holds roughly 400 kilograms of uranium enriched to 60 percent U-235. This falls short of the 90 percent required for nuclear weapons. But whereas Iran’s first-generation IR-1 centrifuges whirl at about 63,000 revolutions per minute and do relatively modest work, its newer IR-6 models, built from high-strength carbon fiber, spin faster and produce enriched uranium far more quickly.Iran has been installing thousands of these units, especially at Fordow, an underground enrichment facility built beneath 80 to 90 meters of rock. According to a report released on Monday by the Institute for Science and International Security, the new centrifuges could produce enough 90 percent U-235 uranium for a warhead “in as little as two to three days” and enough for nine nuclear weapons in three weeks—or 19 by the end of the third month.
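    To put those figures in perspective, here is a rough back-of-the-envelope sketch in Python. It treats each pass through a centrifuge as multiplying the U-235 abundance ratio x/(1-x) by a fixed separation factor, and it applies a simple mass balance to the reported 400-kilogram, 60 percent stockpile. The separation factor (1.4) and the 20 percent tails assay are illustrative assumptions, not figures from the article or the report it cites.
```python
# Back-of-the-envelope enrichment math. Assumptions (not from the article):
# each centrifuge pass multiplies the U-235 abundance ratio by ALPHA, and the
# depleted "tails" stream left behind is at TAILS_ASSAY.
import math

ALPHA = 1.4          # assumed effective separation factor per pass
TAILS_ASSAY = 0.20   # assumed U-235 fraction remaining in the tails

def passes_needed(x_feed, x_product, alpha=ALPHA):
    """Idealized number of passes to raise the U-235 fraction from x_feed to
    x_product, if each pass multiplies the abundance ratio x/(1-x) by alpha."""
    ratio = lambda x: x / (1.0 - x)
    return math.log(ratio(x_product) / ratio(x_feed)) / math.log(alpha)

def product_mass(feed_kg, x_feed, x_product, x_tails=TAILS_ASSAY):
    """Mass balance: product drawn from feed_kg of material at x_feed when the
    product assay is x_product and the tails assay is x_tails."""
    return feed_kg * (x_feed - x_tails) / (x_product - x_tails)

print(f"0.7% -> 60%: ~{passes_needed(0.007, 0.60):.0f} passes")
print(f"60% -> 90%: ~{passes_needed(0.60, 0.90):.0f} more passes")

heu_kg = product_mass(400, 0.60, 0.90)   # from the reported 400 kg at 60%
u235_kg = 0.90 * heu_kg
print(f"~{heu_kg:.0f} kg of 90% material, ~{u235_kg:.0f} kg contained U-235")
print(f"~{u235_kg / 25:.1f} x the 25 kg IAEA benchmark per implosion device")
```
    With these assumed numbers, reaching 60 percent takes roughly the 10 to 20 passes described above, and the reported stockpile would yield on the order of 200 kilograms of 90 percent material, containing about eight times the IAEA benchmark quantity of U-235; that is in the same ballpark as the report's estimate of enough for nine weapons.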
    WWW.SCIENTIFICAMERICAN.COM
  • Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm


    When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development.
    What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute. 
    As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention.
    Engineering around constraints
    DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement.
    While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well.
    This efficiency-first mindset yielded incredibly impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. According to reports, the final training run for DeepSeek’s V3 predecessor cost a mere $6 million — which was described by former Tesla AI scientist Andrej Karpathy as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent $500 million training its recent “Orion” model, DeepSeek achieved superior benchmark results for just $5.6 million — less than 1.2% of OpenAI’s investment.
    If you get starry-eyed believing these incredible results were achieved even as DeepSeek was at a severe disadvantage based on its inability to access advanced AI chips, I hate to tell you, but that narrative isn’t entirely accurate (even though it makes a good story). Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development.
    That means that the chips DeepSeek had access to were not poor quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running their large model efficiently.
    This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing.
    Pragmatism over process
    Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process.
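    For readers unfamiliar with the term, the snippet below is a minimal, generic sketch of knowledge distillation in PyTorch: a small “student” network is trained to match a larger “teacher” network’s output distribution instead of human labels. The model sizes, temperature and training loop are arbitrary illustrative choices, not DeepSeek’s actual pipeline.
```python
# Minimal, generic knowledge-distillation sketch (illustrative; not DeepSeek's
# actual setup). A small student learns to imitate a larger teacher's outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10)).eval()
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution

for step in range(100):
    x = torch.randn(32, 128)            # stand-in for real or synthetic inputs
    with torch.no_grad():
        teacher_logits = teacher(x)     # the "labels" come from a model, not people
    student_logits = student(x)
    # Classic distillation loss: KL divergence between softened distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```
    In DeepSeek’s reported case, the “teacher” signal would come from the outputs of stronger proprietary models rather than from a network it controls, which is exactly what raises the governance questions noted above.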
    The effective use of synthetic data is a key differentiator. Synthetic data can be very effective when it comes to training large models, but you have to be careful; some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture of experts (MoE) architectures like DeepSeek’s tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content.
    This architectural sensitivity matters because synthetic data introduces different patterns and distributions compared to real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge. This can lead to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations. 
    Still, DeepSeek’s engineering teams reportedly designed their model architecture specifically with synthetic data integration in mind from the earliest planning stages. This allowed the company to leverage the cost benefits of synthetic data without sacrificing performance.
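    For context on the architecture family mentioned above: a mixture-of-experts layer routes each token to a small subset of specialized feed-forward “experts” rather than through one large block. The sketch below is a deliberately simplified top-2 router in PyTorch; the dimensions, expert count and routing scheme are illustrative and say nothing about DeepSeek’s actual design.
```python
# Simplified mixture-of-experts (MoE) layer with top-k routing (illustrative
# only; sizes and routing are not DeepSeek's configuration).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=256, d_hidden=512, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)   # router scores every expert
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                           # x: (n_tokens, d_model)
        scores = self.gate(x)                       # (n_tokens, n_experts)
        weights, expert_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)        # mixing weights for the top-k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = expert_idx[:, slot] == e     # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

print(TopKMoE()(torch.randn(16, 256)).shape)        # torch.Size([16, 256])
```
    Production MoE systems add load-balancing losses and far larger expert counts; the point here is only the routing idea, in which each token activates a fraction of the network’s parameters.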
    Market reverberations
    Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders.
    Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019. This is a pretty notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard. Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI. 
    With OpenAI reportedly spending $7 billion to $8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending $7 billion or $8 billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change.
    This economic reality prompted OpenAI to pursue a massive $40 billion funding round that valued the company at an unprecedented $300 billion. But even with a war chest of funds at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s.
    Beyond model training
    Another significant trend accelerated by DeepSeek is the shift toward “test-time compute” (TTC). As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training.
    To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning” (SPCT). This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real time, comparing responses against core rules and quality standards.
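    To make the loop concrete, here is a minimal sketch of a “generate, judge, revise” cycle in the same spirit. The llm callable, the principles and the prompts are hypothetical placeholders for any text-generation backend; this illustrates self-critique in general, not DeepSeek-GRM or its actual prompts and training procedure.
```python
# Illustrative self-critique loop (hypothetical prompts and backend; not
# DeepSeek-GRM). `llm` is any callable that maps a prompt string to text.
from typing import Callable

PRINCIPLES = [
    "Answer the question directly.",
    "State uncertainty instead of guessing.",
    "Do not introduce claims unsupported by the prompt.",
]

def critique_and_revise(llm: Callable[[str], str], question: str, rounds: int = 2) -> str:
    answer = llm(f"Question: {question}\nAnswer:")
    for _ in range(rounds):
        # The built-in "judge": score the draft against the principles.
        critique = llm(
            "Act as a judge. Evaluate the answer against these principles and "
            f"name its biggest problem.\nPrinciples: {PRINCIPLES}\n"
            f"Question: {question}\nAnswer: {answer}\nCritique:"
        )
        # Revise the draft using the judge's feedback.
        answer = llm(
            f"Rewrite the answer to fix the critique.\nQuestion: {question}\n"
            f"Answer: {answer}\nCritique: {critique}\nImproved answer:"
        )
    return answer

# Works with any backend; a trivial stub is enough to exercise the control flow.
stub = lambda prompt: "draft response"
print(critique_and_revise(stub, "Why does test-time compute matter?"))
```
    Even in this toy form, the risks discussed below are visible: the judge and the writer are the same model, so any blind spot is shared by both.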
    The development is part of a movement toward autonomous self-evaluation and improvement in AI systems, in which models use inference time to improve results rather than simply growing larger during training. DeepSeek calls its system “DeepSeek-GRM” (generalist reward modeling). But, as with its model distillation approach, this could be considered a mix of promise and risk.
    For example, if the AI develops its own judging criteria, there’s a risk that those principles diverge from human values, ethics or context. The rules could end up being overly rigid or biased, optimizing for style over substance and/or reinforcing incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned; it becomes a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion — which feeds into a bigger concern: should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted.
    At the same time, this approach is gaining traction, as DeepSeek again builds on the body of work of others (think OpenAI’s “critique and revise” methods, Anthropic’s constitutional AI or research on self-rewarding agents) to create what is likely the first full-stack application of SPCT in a commercial effort.
    This could mark a powerful shift in AI autonomy, but there still is a need for rigorous auditing, transparency and safeguards. It’s not just about models getting smarter, but that they remain aligned, interpretable, and trustworthy as they begin critiquing themselves without human guardrails.
    Moving into the future
    So, taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power generation capacity. 
    Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately $80 billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market.
    Meta has also responded.
    With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to materials, the restrictions forced DeepSeek to blaze a new trail.
    Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules — whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. It’s what we learn from one another and how we respond that will be worth watching.
    Jae Lee is CEO and co-founder of TwelveLabs.

    VENTUREBEAT.COM
CGShares https://cgshares.com