• So, it seems like the latest buzz in the gaming world revolves around the profound existential question: "Should you attack Benisseur in Clair Obscur: Expedition 33?" I mean, what a dilemma! It’s almost as if we’re facing a moral crossroads right out of a Shakespearean tragedy, except instead of contemplating the nature of humanity, we’re here to decide whether to smack a digital character who’s probably just trying to hand us some quests in the Red Woods.

    Let’s break this down, shall we? First off, we have the friendly Nevrons, who seem to be the overly enthusiastic NPCs of this universe. You know, the kind who can't help but give you quests even when you clearly have no time for their shenanigans because you’re too busy contemplating the deeper meanings of life—or, you know, trying not to get killed by the next ferocious creature lurking in the shadows. And what do they come up with? "Hey, why not take on Benisseur?" Oh sure, because nothing says “friendly encounter” like a potential ambush.

    Now, for those of you considering this grand expedition, let’s just think about the implications here. Attacking Benisseur? Really? Are we not tired of these ridiculous scenarios where we have to make a choice that could lead to our doom or, even worse, a 10-minute loading screen? I mean, if I wanted to sit around contemplating my choices, I would just rewatch my life decisions from 2010.

    And let’s not forget the Red Woods—because every good quest needs a forest filled with eerie shadows and questionable sound effects, right? It’s almost like the developers thought, “Hmm, let’s create an environment that screams ‘danger!’ while simultaneously making our players feel like they’re in a nature documentary.” Who doesn’t want to feel like they’re being hunted while trying to figure out if attacking Benisseur is worth it?

    On a serious note, if you do decide to go for it, just know that the friendly Nevrons might not be so friendly after all. After all, what’s a little betrayal between friends? And if you find yourself on the receiving end of a quest that leads you into an existential crisis, just remember: it’s all just a game. Or is it?

    So here’s to you, brave adventurers! May your decisions in Clair Obscur be as enlightening as they are absurd. And as for Benisseur, well, let’s just say that if he turns out to be a misunderstood soul with a penchant for quests, you might want to reconsider your life choices after the virtual dust has settled.

    #ClairObscur #Expedition33 #GamingHumor #Benisseur #RedWoods
    Should You Attack Benisseur In Clair Obscur: Expedition 33?
    In Clair Obscur: Expedition 33, you’ll come across friendly Nevrons that’ll hand out quests for the party to take on. Some are easier than others, including this one located in the Red Woods.
  • One of the most versatile action cameras I've tested isn't from GoPro - and it's on sale

    DJI Osmo Action 4. Adrian Kingsley-Hughes/ZDNET

    Multiple DJI Osmo Action 4 packages are on sale at Amazon. Both the Essential and Standard Combos have been discounted to $249, while the Adventure Combo has dropped to $349.

    DJI might not be the first name on people's lips when it comes to action cameras, but the company that's better known for its drones also has a really solid line of action cameras. And its latest device, the Osmo Action 4 camera, has some very impressive tricks up its sleeve.

    Also: One of the most versatile cameras I've used is not from Sony or Canon and it's on sale

    So, what sets this action camera apart from the competition? Let's take a look.

    First off, this is not just an action camera -- it's a pro-grade action camera.

    From a hardware point of view, the Osmo Action 4 features a 1/1.3-inch image sensor that can record 4K at up to 120 frames per second (fps). This sensor is combined with a wide-angle f/2.8 aperture lens that provides an ultra-wide field of view of up to 155°. And that's wide.

    Build quality and fit and finish are second to none. Adrian Kingsley-Hughes/ZDNET

    For when the going gets rough, the Osmo Action 4 offers 360° HorizonSteady stabilization modes, including RockSteady 3.0/3.0+ for first-person video footage and HorizonBalancing/HorizonSteady modes for horizontal shots. That's pro-grade hardware right there.

    Also: This new AI video editor is an all-in-one production service for filmmakers - how to try it

    The Osmo Action 4 also features a 10-bit D-Log M color mode. This mode allows the sensor to record over one billion colors and offers a wider dynamic range, giving you a video that is more vivid and that offers greater detail in the highlights and shadows. This mode, combined with an advanced color temperature sensor, means that the colors have a true-to-life feel regardless of whether you're shooting outdoors, indoors, or even underwater.

    The DJI Osmo Action 4 ready for action. Adrian Kingsley-Hughes/ZDNET

    I've added some video output from the Osmo Action 4 below. There are examples in both 1080p and 4K. To test the stabilization, I attached the camera to the truck and took it on some roads, some of which are pretty rough. The Osmo Action 4 had no problem with that terrain. I also popped the camera into the sea, just because. And again, no problem.

    I've also captured a few time-lapses with the camera -- not because I like clouds (well, actually, I do like clouds), but pointing a camera at a sky can be a good test of how it handles changing light.

    Also: I recommend this action camera to beginners and professional creators. Here's why

    Timelapses with action cameras can suffer from unsightly exposure changes that cause the image to pulse, a condition known as exposure pumping. This issue can also cause the white balance to change noticeably in a video, but the Osmo Action 4 handled this test well.

    All the footage I've shot is what I've come to expect from a DJI camera, whether it's from an action camera or drone -- crisp, clear, vivid, and also nice and stable.

    The Osmo Action 4 is packed with various electronic image-stabilization (EIS) tech to ensure that your footage is smooth and on the horizon. It's worth noting the limitations of EIS -- it's not supported in slow-motion and timelapse modes, and the HorizonSteady and HorizonBalancing features are only available for video recorded at 1080p (16:9) or 2.7K (16:9) with a frame rate of 60fps or below.

    On the durability front, I've no concerns. I've subjected the Osmo Action 4 to a hard few days of testing, and it's not let me down or complained once. It takes impacts like a champ, and being underwater or in dirt and sand is no problem at all.

    Also: I'm a full-time Canon photographer, but this Nikon camera made me wonder if I'm missing out

    You might think that this heavy-duty testing would be hard on the camera's tiny batteries, but you'd be wrong. Remember I said the Osmo Action 4 offered hours of battery life? Well, I wasn't kidding.

    The Osmo Action 4's ultra-long life batteries are incredible. Adrian Kingsley-Hughes/ZDNET

    DJI says that a single battery can deliver up to 160 minutes of 1080p/24fps video recording (at room temperature, with RockSteady on, Wi-Fi off, and screen off). That's over two and a half hours of recording time.

    In the real world, I was blown away by how much a single battery can deliver. I shot video and timelapse, messed around with a load of camera settings, and then transferred that footage to my iPhone, and still had 16% battery left. No action camera has delivered so much for me on one battery.

    The two extra batteries and the multifunction case that come as part of the Adventure Combo are worth the extra $100. Adrian Kingsley-Hughes/ZDNET

    And when you're ready to recharge, a 30W USB-C charger can take a battery from zero to 80% in 18 minutes. That's also impressive.

    What's more, the batteries are resistant to cold, offering up to 150 minutes of 1080p/24fps recording in temperatures as low as -20°C (-4°F). This resistance also blows the competition away.

    Even taking into account all these strong points, the Osmo Action 4 offers even more.

    The camera has 2x digital zoom for better composition, Voice Prompts that let you know what the camera is doing without looking, and Voice Control that lets you operate the device without touching the screen or using the app. The Osmo Action 4 also digitally hides the selfie stick from a variety of different shots, and you can even connect the DJI Mic to the camera via the USB-C port for better audio capture.

    Also: Yes, an Android tablet finally made me reconsider my iPad Pro loyalty

    As for price, the Osmo Action 4 Standard Combo bundle comes in at $399, while the Osmo Action 4 Adventure Combo, which comes with two extra Osmo Action Extreme batteries, an additional mini Osmo Action quick-release adapter mount, a battery case that acts as a power bank, and a 1.5-meter selfie stick, is $499.

    I'm in love with the Osmo Action 4. It's hands down the best, most versatile, most powerful action camera on the market today, offering pro-grade features at a price that definitely isn't pro-grade.

    Everything included in the Action Combo bundle. DJI

    DJI Osmo Action 4 tech specs
    Dimensions: 70.5×44.2×32.8mm
    Weight: 145g
    Waterproof: 18m, up to 60m with the optional waterproof case
    Microphones: 3
    Sensor: 1/1.3-inch CMOS
    Lens: FOV 155°, aperture f/2.8, focus distance 0.4m to ∞
    Max Photo Resolution: 3648×2736
    Max Video Resolution: 4K (4:3): 3840×2880 @ 24/25/30/48/50/60fps; 4K (16:9): 3840×2160 @ 24/25/30/48/50/60/100/120fps
    ISO Range: 100-12800
    Front Screen: 1.4-inch, 323ppi, 320×320
    Rear Screen: 2.25-inch, 326ppi, 360×640
    Front/Rear Screen Brightness: 750±50 cd/m²
    Storage: microSD (up to 512GB)
    Battery: 1770mAh, lab tested to offer up to 160 minutes of runtime (tested at room temperature of 25°C/77°F at 1080p/24fps, with RockSteady on, Wi-Fi off, and screen off)
    Operating Temperature: -20° to 45°C (-4° to 113°F)

    This article was originally published in August of 2023 and updated in March 2025.
  • Biofuels policy has been a failure for the climate, new report claims

    Fewer food crops

    Report: An expansion of biofuels policy under Trump would lead to more greenhouse gas emissions.

    Georgina Gustin, Inside Climate News



    Jun 14, 2025 7:10 am

    An ethanol production plant on March 20, 2024 near Ravenna, Nebraska. Credit: David Madison/Getty Images


    This article originally appeared on Inside Climate News, a nonprofit, non-partisan news organization that covers climate, energy, and the environment. Sign up for their newsletter here.
    The American Midwest is home to some of the richest, most productive farmland in the world, enabling its transformation into a vast corn- and soy-producing machine—a conversion spurred largely by decades-long policies that support the production of biofuels.
    But a new report takes a big swing at the ethanol orthodoxy of American agriculture, criticizing the industry for causing economic and social imbalances across rural communities and saying that the expansion of biofuels will increase greenhouse gas emissions, despite their purported climate benefits.
    The report, from the World Resources Institute, which has been critical of US biofuel policy in the past, draws from 100 academic studies on biofuel impacts. It concludes that ethanol policy has been largely a failure and ought to be reconsidered, especially as the world needs more land to produce food to meet growing demand.
    “Multiple studies show that US biofuel policies have reshaped crop production, displacing food crops and driving up emissions from land conversion, tillage, and fertilizer use,” said the report’s lead author, Haley Leslie-Bole. “Corn-based ethanol, in particular, has contributed to nutrient runoff, degraded water quality and harmed wildlife habitat. As climate pressures grow, increasing irrigation and refining for first-gen biofuels could deepen water scarcity in already drought-prone parts of the Midwest.”
    The conversion of Midwestern agricultural land has been sweeping. Between 2004 and 2024, ethanol production increased by nearly 500 percent. Corn and soybeans are now grown on 92 and 86 million acres of land respectively—and roughly a third of those crops go to produce ethanol. That means about 30 million acres of land that could be used to grow food crops are instead being used to produce ethanol, despite ethanol only accounting for 6 percent of the country’s transportation fuel.

    The biofuels industry—which includes refiners, corn and soy growers and the influential agriculture lobby writ large—has long insisted that corn- and soy-based biofuels provide an energy-efficient alternative to fossil-based fuels. Congress and the US Department of Agriculture have agreed.
    The country’s primary biofuels policy, the Renewable Fuel Standard, requires that biofuels provide a greenhouse gas reduction over fossil fuels: The law says that ethanol from new plants must deliver a 20 percent reduction in greenhouse gas emissions compared to gasoline.
    In addition to greenhouse gas reductions, the industry and its allies in Congress have also continued to say that ethanol is a primary mainstay of the rural economy, benefiting communities across the Midwest.
    But a growing body of research—much of which the industry has tried to debunk and deride—suggests that ethanol actually may not provide the benefits that policies require. It may, in fact, produce more greenhouse gases than the fossil fuels it was intended to replace. Recent research says that biofuel refiners also emit significant amounts of carcinogenic and dangerous substances, including hexane and formaldehyde, in greater amounts than petroleum refineries.
    The new report points to research saying that increased production of biofuels from corn and soy could actually raise greenhouse gas emissions, largely from carbon emissions linked to clearing land in other countries to compensate for the use of land in the Midwest.
    On top of that, corn is an especially fertilizer-hungry crop requiring large amounts of nitrogen-based fertilizer, which releases huge amounts of nitrous oxide when it interacts with the soil. American farming is, by far, the largest source of domestic nitrous oxide emissions already—about 50 percent. If biofuel policies lead to expanded production, emissions of this enormously powerful greenhouse gas will likely increase, too.

    The new report concludes that not only will the expansion of ethanol increase greenhouse gas emissions, but it has also failed to provide the social and financial benefits to Midwestern communities that lawmakers and the industry say it has. (The report defines the Midwest as Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin.)
    “The benefits from biofuels remain concentrated in the hands of a few,” Leslie-Bole said. “As subsidies flow, so may the trend of farmland consolidation, increasing inaccessibility of farmland in the Midwest, and locking out emerging or low-resource farmers. This means the benefits of biofuels production are flowing to fewer people, while more are left bearing the costs.”
    New policies being considered in state legislatures and Congress, including additional tax credits and support for biofuel-based aviation fuel, could expand production, potentially causing more land conversion and greenhouse gas emissions, widening the gap between the rural communities and rich agribusinesses at a time when food demand is climbing and, critics say, land should be used to grow food instead.
    President Donald Trump’s tax cut bill, passed by the House and currently being negotiated in the Senate, would not only extend tax credits for biofuels producers, it specifically excludes calculations of emissions from land conversion when determining what qualifies as a low-emission fuel.
    The primary biofuels industry trade groups, including Growth Energy and the Renewable Fuels Association, did not respond to Inside Climate News requests for comment or interviews.
    An employee with the Clean Fuels Alliance America, which represents biodiesel and sustainable aviation fuel producers, not ethanol, said the report vastly overstates the carbon emissions from crop-based fuels by comparing the farmed land to natural landscapes, which no longer exist.
    They also noted that the impact of soy-based fuels in 2024 was more than $42 billion, providing over 100,000 jobs.
    “Ten percent of the value of every bushel of soybeans is linked to biomass-based fuel,” they said.

    Georgina Gustin, Inside Climate News

  • Rethinking AI: DeepSeek’s playbook shakes up the high-spend, high-compute paradigm

    When DeepSeek released its R1 model this January, it wasn’t just another AI announcement. It was a watershed moment that sent shockwaves through the tech industry, forcing industry leaders to reconsider their fundamental approaches to AI development.
    What makes DeepSeek’s accomplishment remarkable isn’t that the company developed novel capabilities; rather, it was how it achieved comparable results to those delivered by tech heavyweights at a fraction of the cost. In reality, DeepSeek didn’t do anything that hadn’t been done before; its innovation stemmed from pursuing different priorities. As a result, we are now experiencing rapid-fire development along two parallel tracks: efficiency and compute. 
    As DeepSeek prepares to release its R2 model, and as it concurrently faces the potential of even greater chip restrictions from the U.S., it’s important to look at how it captured so much attention.
    Engineering around constraints
    DeepSeek’s arrival, as sudden and dramatic as it was, captivated us all because it showcased the capacity for innovation to thrive even under significant constraints. Faced with U.S. export controls limiting access to cutting-edge AI chips, DeepSeek was forced to find alternative pathways to AI advancement.
    While U.S. companies pursued performance gains through more powerful hardware, bigger models and better data, DeepSeek focused on optimizing what was available. It implemented known ideas with remarkable execution — and there is novelty in executing what’s known and doing it well.
    This efficiency-first mindset yielded incredibly impressive results. DeepSeek’s R1 model reportedly matches OpenAI’s capabilities at just 5 to 10% of the operating cost. According to reports, the final training run for DeepSeek’s V3 predecessor cost a mere million — which was described by former Tesla AI scientist Andrej Karpathy as “a joke of a budget” compared to the tens or hundreds of millions spent by U.S. competitors. More strikingly, while OpenAI reportedly spent million training its recent “Orion” model, DeepSeek achieved superior benchmark results for just million — less than 1.2% of OpenAI’s investment.
    If you get starry eyed believing these incredible results were achieved even as DeepSeek was at a severe disadvantage based on its inability to access advanced AI chips, I hate to tell you, but that narrative isn’t entirely accurate. Initial U.S. export controls focused primarily on compute capabilities, not on memory and networking — two crucial components for AI development.
    That means that the chips DeepSeek had access to were not poor quality chips; their networking and memory capabilities allowed DeepSeek to parallelize operations across many units, a key strategy for running their large model efficiently.
    This, combined with China’s national push toward controlling the entire vertical stack of AI infrastructure, resulted in accelerated innovation that many Western observers didn’t anticipate. DeepSeek’s advancements were an inevitable part of AI development, but they brought known advancements forward a few years earlier than would have been possible otherwise, and that’s pretty amazing.
    Pragmatism over process
    Beyond hardware optimization, DeepSeek’s approach to training data represents another departure from conventional Western practices. Rather than relying solely on web-scraped content, DeepSeek reportedly leveraged significant amounts of synthetic data and outputs from other proprietary models. This is a classic example of model distillation, or the ability to learn from really powerful models. Such an approach, however, raises questions about data privacy and governance that might concern Western enterprise customers. Still, it underscores DeepSeek’s overall pragmatic focus on results over process.
    The effective use of synthetic data is a key differentiator. Synthetic data can be very effective when it comes to training large models, but you have to be careful; some model architectures handle synthetic data better than others. For instance, transformer-based models with mixture of experts (MoE) architectures like DeepSeek’s tend to be more robust when incorporating synthetic data, while more traditional dense architectures like those used in early Llama models can experience performance degradation or even “model collapse” when trained on too much synthetic content.
    This architectural sensitivity matters because synthetic data introduces different patterns and distributions compared to real-world data. When a model architecture doesn’t handle synthetic data well, it may learn shortcuts or biases present in the synthetic data generation process rather than generalizable knowledge. This can lead to reduced performance on real-world tasks, increased hallucinations or brittleness when facing novel situations. 
    Still, DeepSeek’s engineering teams reportedly designed their model architecture specifically with synthetic data integration in mind from the earliest planning stages. This allowed the company to leverage the cost benefits of synthetic data without sacrificing performance.
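    To make the synthetic-data point concrete, here is a minimal, hypothetical sketch of the kind of guardrail teams often apply when folding model-generated (distilled) examples into a training corpus: cap the synthetic share and tag provenance so a collapse-prone architecture isn't trained mostly on self-generated text. This is not DeepSeek's pipeline; the function name, the 30% cap, and the data format are illustrative assumptions.

```python
import random

def mix_training_corpus(real_examples, synthetic_examples,
                        max_synthetic_fraction=0.3, seed=0):
    """Illustrative only: blend web-scraped and model-generated examples,
    capping the synthetic share to reduce the risk of "model collapse".
    The 0.3 cap is an arbitrary placeholder, not a published DeepSeek value."""
    rng = random.Random(seed)

    # Keep every real example; provenance tags let later filtering or
    # loss-weighting treat synthetic text differently if needed.
    corpus = [{"text": t, "source": "real"} for t in real_examples]

    # Cap synthetic examples relative to the real pool so that the
    # synthetic share of the final mix stays at or below the fraction.
    budget = int(len(real_examples) * max_synthetic_fraction
                 / (1 - max_synthetic_fraction))
    sampled = rng.sample(synthetic_examples,
                         min(budget, len(synthetic_examples)))
    corpus += [{"text": t, "source": "synthetic"} for t in sampled]

    rng.shuffle(corpus)
    return corpus

if __name__ == "__main__":
    real = [f"real doc {i}" for i in range(100)]
    synthetic = [f"distilled output {i}" for i in range(500)]
    mixed = mix_training_corpus(real, synthetic)
    share = sum(x["source"] == "synthetic" for x in mixed) / len(mixed)
    print(f"{len(mixed)} examples, synthetic share ~ {share:.0%}")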
    Market reverberations
    Why does all of this matter? Stock market aside, DeepSeek’s emergence has triggered substantive strategic shifts among industry leaders.
    Case in point: OpenAI. Sam Altman recently announced plans to release the company’s first “open-weight” language model since 2019. This is a pretty notable pivot for a company that built its business on proprietary systems. It seems DeepSeek’s rise, on top of Llama’s success, has hit OpenAI’s leader hard. Just a month after DeepSeek arrived on the scene, Altman admitted that OpenAI had been “on the wrong side of history” regarding open-source AI. 
    With OpenAI reportedly spending to 8 billion annually on operations, the economic pressure from efficient alternatives like DeepSeek has become impossible to ignore. As AI scholar Kai-Fu Lee bluntly put it: “You’re spending billion or billion a year, making a massive loss, and here you have a competitor coming in with an open-source model that’s for free.” This necessitates change.
    This economic reality prompted OpenAI to pursue a massive billion funding round that valued the company at an unprecedented billion. But even with a war chest of funds at its disposal, the fundamental challenge remains: OpenAI’s approach is dramatically more resource-intensive than DeepSeek’s.
    Beyond model training
    Another significant trend accelerated by DeepSeek is the shift toward “test-time compute”. As major AI labs have now trained their models on much of the available public data on the internet, data scarcity is slowing further improvements in pre-training.
    To get around this, DeepSeek announced a collaboration with Tsinghua University to enable “self-principled critique tuning” (SPCT). This approach trains AI to develop its own rules for judging content and then uses those rules to provide detailed critiques. The system includes a built-in “judge” that evaluates the AI’s answers in real time, comparing responses against core rules and quality standards.
    The development is part of a movement towards autonomous self-evaluation and improvement in AI systems in which models use inference time to improve results, rather than simply making models larger during training. DeepSeek calls its system “DeepSeek-GRM”. But, as with its model distillation approach, this could be considered a mix of promise and risk.
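    For readers who want a feel for the mechanics, the sketch below shows the general shape of a generate-critique-revise loop with a built-in "judge", as described above. It is a hedged illustration only: generate() stands in for any LLM call, the 0-10 scoring convention and parsing are invented for the example, and nothing here is taken from DeepSeek-GRM's actual implementation.

```python
from typing import Callable, List, Tuple

# Placeholder for an LLM call; in practice this would hit a model API.
Generator = Callable[[str], str]

def derive_principles(generate: Generator, question: str) -> List[str]:
    """Ask the model to write its own grading rules for this question
    (the "self-principled" step). Prompt and output format are assumptions."""
    raw = generate(f"List 3 short principles for judging an answer to: {question}")
    return [line.strip("- ").strip() for line in raw.splitlines() if line.strip()]

def judge(generate: Generator, question: str, answer: str,
          principles: List[str]) -> Tuple[float, str]:
    """Built-in "judge": score the answer against the derived principles.
    The 0-10 scale and the naive score parsing are illustrative conventions."""
    rubric = "\n".join(principles)
    verdict = generate(
        f"Question: {question}\nAnswer: {answer}\n"
        f"Score 0-10 against these principles, then give one critique:\n{rubric}"
    )
    first_token = verdict.split()[0] if verdict.split() else "0"
    score = float(first_token) if first_token.replace(".", "", 1).isdigit() else 0.0
    return score, verdict

def answer_with_self_critique(generate: Generator, question: str,
                              rounds: int = 2, target: float = 8.0) -> str:
    """Generate, judge, and revise at inference time instead of relying on
    a larger pre-trained model -- the "test-time compute" idea in miniature."""
    principles = derive_principles(generate, question)
    answer = generate(question)
    for _ in range(rounds):
        score, critique = judge(generate, question, answer, principles)
        if score >= target:
            break
        answer = generate(
            f"Revise this answer to: {question}\n"
            f"Current answer: {answer}\nCritique: {critique}"
        )
    return answer
```

    Note that everything here depends on the judge being sound; if its principles or scores are skewed, the loop amplifies that skew, which is exactly the risk discussed next.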
    For example, if the AI develops its own judging criteria, there’s a risk those principles diverge from human values, ethics or context. The rules could end up being overly rigid or biased, optimizing for style over substance, and/or reinforce incorrect assumptions or hallucinations. Additionally, without a human in the loop, issues could arise if the “judge” is flawed or misaligned. It’s a kind of AI talking to itself, without robust external grounding. On top of this, users and developers may not understand why the AI reached a certain conclusion — which feeds into a bigger concern: Should an AI be allowed to decide what is “good” or “correct” based solely on its own logic? These risks shouldn’t be discounted.
    At the same time, this approach is gaining traction, as DeepSeek again builds on the body of work of others to create what is likely the first full-stack application of SPCT in a commercial effort.
    This could mark a powerful shift in AI autonomy, but there still is a need for rigorous auditing, transparency and safeguards. It’s not just about models getting smarter, but that they remain aligned, interpretable, and trustworthy as they begin critiquing themselves without human guardrails.
    Moving into the future
    So, taking all of this into account, the rise of DeepSeek signals a broader shift in the AI industry toward parallel innovation tracks. While companies continue building more powerful compute clusters for next-generation capabilities, there will also be intense focus on finding efficiency gains through software engineering and model architecture improvements to offset the challenges of AI energy consumption, which far outpaces power generation capacity. 
    Companies are taking note. Microsoft, for example, has halted data center development in multiple regions globally, recalibrating toward a more distributed, efficient infrastructure approach. While still planning to invest approximately billion in AI infrastructure this fiscal year, the company is reallocating resources in response to the efficiency gains DeepSeek introduced to the market.
    Meta has also responded.
    With so much movement in such a short time, it becomes somewhat ironic that the U.S. sanctions designed to maintain American AI dominance may have instead accelerated the very innovation they sought to contain. By constraining access to advanced chips, the restrictions forced DeepSeek to blaze a new trail.
    Moving forward, as the industry continues to evolve globally, adaptability for all players will be key. Policies, people and market reactions will continue to shift the ground rules — whether it’s eliminating the AI diffusion rule, a new ban on technology purchases or something else entirely. It’s what we learn from one another and how we respond that will be worth watching.
    Jae Lee is CEO and co-founder of TwelveLabs.
